filename | text |
---|---|
2402.11960v1.pdf | DB-LLM: Accurate Dual-Binarization for Efficient LLMs
Hong Chen1*, Chengtao Lv1*, Liang Ding2, Haotong Qin1, Xiabin Zhou4,
Yifu Ding1, Xuebo Liu3, Min Zhang3, Jinyang Guo1, Xianglong Liu1†, Dacheng Tao2
1Beihang University, 2The University of Sydney,
3Harbin Institute of Technology, Shenzhen, 4Jiangsu University
{18373205, lvchengtao, qinhaotong, xlliu}@buaa.edu.cn, liangding.liam@gmail.com
Abstract
Large language models (LLMs) have signifi-
cantly advanced the field of natural language
processing, while the expensive memory and
computation consumption impede their practi-
cal deployment. Quantization emerges as one
of the most effective methods for improving
the computational efficiency of LLMs. How-
ever, existing ultra-low-bit quantization always
causes severe accuracy drops. In this paper,
we empirically reveal the micro and macro
characteristics of ultra-low-bit quantization and
present a novel Dual-Binarization method for
LLMs, namely DB-LLM. At the micro level,
we take both the accuracy advantage of 2-bit-
width and the efficiency advantage of binariza-
tion into account, introducing Flexible Dual Bi-
narization (FDB). By splitting 2-bit quantized
weights into two independent sets of binaries,
FDB ensures the accuracy of representations
and introduces flexibility, utilizing the efficient
bitwise operations of binarization while retain-
ing the inherent high sparsity of ultra-low-bit
quantization. At the macro level, we identify
a distortion in the predictions of LLMs after
quantization, which manifests as deviations re-
lated to the ambiguity of samples. We propose
the Deviation-Aware Distillation (DAD) method,
enabling the model to focus
differently on various samples. Comprehensive
experiments show that our DB-LLM not only
significantly surpasses the current state-of-the-
art (SoTA) in ultra-low-bit quantization (e.g.,
perplexity decreased from 9.64 to 7.23), but
also achieves an additional 20% reduction in
computational consumption compared to the
SoTA method under the same bit-width. Our
code will be released soon.
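As a rough illustration of the dual-binarization idea above, the sketch below approximates a weight matrix with two scaled binary matrices. The residual-based splitting rule and the scale factors are illustrative assumptions, not the paper's exact FDB formulation.
```python
import numpy as np

def dual_binarize(W):
    # First binary basis: sign of the weights, with a single scale.
    B1 = np.where(W >= 0, 1.0, -1.0)
    alpha1 = np.abs(W).mean()
    # Second binary basis: binarize the residual left by the first basis.
    R = W - alpha1 * B1
    B2 = np.where(R >= 0, 1.0, -1.0)
    alpha2 = np.abs(R).mean()
    return alpha1, B1, alpha2, B2

W = np.random.randn(4, 8).astype(np.float32)
a1, B1, a2, B2 = dual_binarize(W)
W_hat = a1 * B1 + a2 * B2   # 2-bit-style approximation built from two binary sets
```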
1 Introduction
Recently, Large Language Models (LLMs), such as
ChatGPT (Brown et al., 2020) and LLaMA (Tou-
vron et al., 2023a) have catalyzed a paradigm shift
*Equal contribution.
†Corresponding author.
Figure 1: The perplexity on WikiText2 for LLaMA family models, plotted against model size (GB, log scale) for FP16, AWQ 3-bit, GPTQ 2-bit, and DB-LLM (ours). 2-bit DB-LLM is close to FP results
and surpasses 3-bit AWQ by a large margin.
in Natural Language Processing (NLP), marking
a significant milestone in the AI revolution. Their
unprecedented capabilities come with a mas-
sive memory footprint (e.g., billion-scale parame-
ters), which constrains the widespread application
of LLMs on resource-limited devices. Several com-
pression schemes are thus proposed to reduce the
memory demands of LLMs, which can be roughly
categorized into weight quantization (Frantar et al.,
2022; Lin et al., 2023), network pruning (Sun et al.,
2023; Ma et al., 2023; He et al., 2022), knowledge
distillation (Gu et al., 2023; Zhong et al., 2024) and
low-rank factorization (Xu et al., 2023; Yuan et al.,
2023). Among these methods, weight quantization
is highly effective and practical since it achieves the
best trade-off between the performance and the cost
of the compression process. Nevertheless, although
many works (Shao et al., 2023; Shang et al., 2023)
attempt to quantize LLMs to ultra-low-bit (e.g., 2-
bit), their performance is unsatisfactory and falls
far short of industrial application requirements.
Ultra-low-bit quantization (≤4 bits), as an ex-
tremely efficient form of quantization, enjoys over
an 8× memory compression ratio. Despite these spe-
cialized weight-only quantization schemes achiev-
ing savings in storage consumption, they still can- |
2210.13382.pdf | Published as a conference paper at ICLR 2023
EMERGENT WORLD REPRESENTATIONS: EXPLORING A
SEQUENCE MODEL TRAINED ON A SYNTHETIC TASK
Kenneth Li∗ (Harvard University), Aspen K. Hopkins (Massachusetts Institute of Technology),
David Bau (Northeastern University), Fernanda Viégas (Harvard University),
Hanspeter Pfister (Harvard University), Martin Wattenberg (Harvard University)
ABSTRACT
Language models show a surprising range of capabilities, but the source of their
apparent competence is unclear. Do these networks just memorize a collection
of surface statistics, or do they rely on internal representations of the process
that generates the sequences they see? We investigate this question in a synthetic
setting by applying a variant of the GPT model to the task of predicting legal
moves in a simple board game, Othello. Although the network has no a priori
knowledge of the game or its rules, we uncover evidence of an emergent nonlinear
internal representation of the board state. Interventional experiments indicate this
representation can be used to control the output of the network. By leveraging
these intervention techniques, we produce “latent saliency maps” that help explain
predictions.1
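To make the probing setup concrete, here is a minimal sketch of the kind of nonlinear probe such an analysis might use: a small MLP mapping a GPT hidden state to a per-square board state. The dimensions and architecture are illustrative assumptions, not the paper's exact configuration.
```python
import torch
import torch.nn as nn

class BoardProbe(nn.Module):
    """Nonlinear probe: hidden state -> (empty / black / white) for each square."""
    def __init__(self, d_model=512, hidden=256, n_squares=64, n_states=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_squares * n_states),
        )
        self.n_squares, self.n_states = n_squares, n_states

    def forward(self, h):                      # h: (batch, d_model)
        logits = self.net(h)
        return logits.view(-1, self.n_squares, self.n_states)

probe = BoardProbe()
h = torch.randn(16, 512)                       # activations from some layer
board_logits = probe(h)                        # (16, 64, 3)
```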
1 INTRODUCTION
Recent language models have shown an intriguing range of capabilities. Networks trained on a simple
“next-word” prediction task are apparently capable of many other things, such as solving logic puzzles
or writing basic code.2Yet how this type of performance emerges from sequence predictions remains
a subject of current debate.
Some have suggested that training on a sequence modeling task is inherently limiting. The arguments
range from philosophical (Bender & Koller, 2020) to mathematical (Merrill et al., 2021). A common
theme is that seemingly good performance might result from memorizing “surface statistics,” i.e.,
a long list of correlations that do not reflect a causal model of the process generating the sequence.
This issue is of practical concern, since relying on spurious correlations may lead to problems on
out-of-distribution data (Bender et al., 2021; Floridi & Chiriatti, 2020).
On the other hand, some tantalizing clues suggest language models may do more than collect spurious
correlations, instead building interpretable world models —that is, understandable models of the
process producing the sequences they are trained on. Recent evidence suggests language models
can develop internal representations for very simple concepts, such as color and direction (Abdou et al.,
2021; Patel & Pavlick, 2022), or tracking boolean states during synthetic tasks (Li et al., 2021) (see
Related Work (section 6) for more detail).
A promising approach to studying the emergence of world models is used by Toshniwal et al. (2021),
which explores language models trained on chess move sequences. The idea is to analyze the
behavior of a standard language modeling architecture in a well-understood, constrained setting. The
paper finds that these models learn to predict legal chess moves with high accuracy. Furthermore,
by analyzing predicted moves, the paper shows that the model appears to track the board state.
The authors stop short, however, of exploring the form of any internal representations. Such an
∗Correspondence to keli@g.harvard.edu
1Codes at https://github.com/likenneth/othello_world
2See Srivastava et al. (2022) for an encyclopedic list of examples.
|
1809.04281.pdf | MUSIC TRANSFORMER :
GENERATING MUSIC WITH LONG -TERM STRUCTURE
Cheng-Zhi Anna Huang∗Ashish Vaswani Jakob Uszkoreit Noam Shazeer
Ian Simon Curtis Hawthorne Andrew M. Dai Matthew D. Hoffman
Monica Dinculescu Douglas Eck
Google Brain
ABSTRACT
Music relies heavily on repetition to build structure and meaning. Self-reference
occurs on multiple timescales, from motifs to phrases to reusing of entire sections
of music, such as in pieces with ABA structure. The Transformer (Vaswani
et al., 2017), a sequence model based on self-attention, has achieved compelling
results in many generation tasks that require maintaining long-range coherence.
This suggests that self-attention might also be well-suited to modeling music.
In musical composition and performance, however, relative timing is critically
important. Existing approaches for representing relative positional information
in the Transformer modulate attention based on pairwise distance (Shaw et al.,
2018). This is impractical for long sequences such as musical compositions since
their memory complexity for intermediate relative information is quadratic in the
sequence length. We propose an algorithm that reduces their intermediate memory
requirement to linear in the sequence length. This enables us to demonstrate that a
Transformer with our modified relative attention mechanism can generate minute-
long compositions (thousands of steps, four times the length modeled in Oore et al.
(2018)) with compelling structure, generate continuations that coherently elaborate
on a given motif, and in a seq2seq setup generate accompaniments conditioned on
melodies1. We evaluate the Transformer with our relative attention mechanism on
two datasets, JSB Chorales and Piano-e-Competition, and obtain state-of-the-art
results on the latter.
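The memory reduction described above comes from never materializing the (L, L, D) tensor of pairwise relative embeddings; the sketch below shows the "skewing" rearrangement in the causal case, where the (L, L) product of queries with L relative-position embeddings is realigned so that entry (i, j) holds the term for relative distance j - i. This is a sketch of the published trick, not the authors' code; entries above the diagonal are meaningless and are assumed to be removed by the causal mask.
```python
import torch
import torch.nn.functional as F

def skew(qe):
    """Rearrange qe[i, r] (r indexes relative distances -(L-1)..0) so that the
    output's entry (i, j) equals q_i . e_{j-i} for j <= i."""
    L = qe.size(0)
    padded = F.pad(qe, (1, 0))            # (L, L+1): dummy column on the left
    reshaped = padded.reshape(L + 1, L)   # re-read the buffer row-major with width L
    return reshaped[1:, :]                # drop the first row -> (L, L)

L, d = 6, 8
q = torch.randn(L, d)                     # queries
e = torch.randn(L, d)                     # relative-position embeddings
s_rel = skew(q @ e.t())                   # add to q @ k.t() before the softmax
```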
1 INTRODUCTION
A musical piece often consists of recurring elements at various levels, from motifs to phrases to
sections such as verse-chorus. To generate a coherent piece, a model needs to reference elements
that came before, sometimes in the distant past, repeating, varying, and further developing them to
create contrast and surprise. Intuitively, self-attention (Parikh et al., 2016) appears to be a good match
for this task. Self-attention over its own previous outputs allows an autoregressive model to access
any part of the previously generated output at every step of generation. By contrast, recurrent neural
networks have to learn to proactively store elements to be referenced in a fixed size state or memory,
potentially making training much more difficult. We believe that repeating self-attention in multiple,
successive layers of a Transformer decoder (Vaswani et al., 2017) helps capture the multiple levels at
which self-referential phenomena exist in music.
In its original formulation, the Transformer relies on absolute position representations, using either
positional sinusoids or learned position embeddings that are added to the per-position input repre-
sentations. Recurrent and convolutional neural networks instead model position in relative terms:
RNNs through their recurrence over the positions in their input, and CNNs by applying kernels that
effectively choose which parameters to apply based on the relative position of the covered input
representations.
∗Google AI Resident. Correspondence to: Cheng-Zhi Anna Huang <annahuang@google.com>
1Samples are available for listening at
https://storage.googleapis.com/music-transformer/index.html
|
NeurIPS-2022-training-language-models-to-follow-instructions-with-human-feedback-Paper-Conference.pdf | Training language models to follow instructions
with human feedback
Long Ouyang∗Jeff Wu∗Xu Jiang∗Diogo Almeida∗Carroll L. Wainwright∗
Pamela Mishkin∗Chong Zhang Sandhini Agarwal Katarina Slama Alex Ray
John Schulman Jacob Hilton Fraser Kelton Luke Miller Maddie Simens
Amanda Askell†Peter Welinder Paul Christiano∗†
Jan Leike∗Ryan Lowe∗
OpenAI
Abstract
Making language models bigger does not inherently make them better at following
a user’s intent. For example, large language models can generate outputs that are
untruthful, toxic, or simply not helpful to the user. In other words, these models are
not aligned with their users. In this paper, we show an avenue for aligning language
models with user intent on a wide range of tasks by fine-tuning with human
feedback. Starting with a set of labeler-written prompts and prompts submitted
through a language model API, we collect a dataset of labeler demonstrations of
the desired model behavior, which we use to fine-tune GPT-3 using supervised
learning. We then collect a dataset of rankings of model outputs, which we use to
further fine-tune this supervised model using reinforcement learning from human
feedback. We call the resulting models InstructGPT. In human evaluations on
our prompt distribution, outputs from the 1.3B parameter InstructGPT model are
preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters.
Moreover, InstructGPT models show improvements in truthfulness and reductions
in toxic output generation while having minimal performance regressions on public
NLP datasets. Even though InstructGPT still makes simple mistakes, our results
show that fine-tuning with human feedback is a promising direction for aligning
language models with human intent.
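For context, the second stage described above trains a reward model on human rankings of output pairs; a common form of the pairwise loss is sketched below. Whether this matches InstructGPT's exact loss and normalization is an assumption on my part.
```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(r_chosen, r_rejected):
    """-log sigmoid(r_chosen - r_rejected), averaged over comparison pairs."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# toy usage: scalar rewards for four preferred / dispreferred completions
loss = pairwise_reward_loss(torch.tensor([1.2, 0.3, 0.8, 2.0]),
                            torch.tensor([0.9, 0.5, -0.1, 1.1]))
```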
1 Introduction
Large language models (LMs) can be prompted to perform a range of natural language process-
ing (NLP) tasks, given some examples of the task as input. However, these models often express
unintended behaviors such as making up facts, generating biased or toxic text, or simply not following
user instructions (Bender et al., 2021; Bommasani et al., 2021; Kenton et al., 2021; Weidinger et al.,
2021; Tamkin et al., 2021; Gehman et al., 2020). This is because the language modeling objective
∗Primary authors. This was a joint project of the OpenAI Alignment team. RL and JL are the team leads.
Corresponding author: lowe@openai.com .
†Work done while at OpenAI. Current affiliations: AA: Anthropic; PC: Alignment Research Center.
36th Conference on Neural Information Processing Systems (NeurIPS 2022). |
2305.12132.pdf | Can Public Large Language Models Help Private Cross-device
Federated Learning?
Boxin Wang3∗, Yibo Jacky Zhang4, Yuan Cao2, Bo Li3, H. Brendan McMahan1,
Sewoong Oh1, Zheng Xu1, Manzil Zaheer2
1Google Research,2Google Deepmind,3UIUC,4Stanford
Abstract
We study (differentially) private federated
learning (FL) of language models. The lan-
guage models in cross-device FL are relatively
small, which can be trained with meaning-
ful formal user-level differential privacy (DP)
guarantees when massive parallelism in train-
ing is enabled by the participation of a mod-
erate size of users. Recently, public data has
been used to improve privacy-utility trade-offs
for both large and small language models. In
this work, we provide a systematic study of us-
ing large-scale public data and LLMs to help
differentially private training of on-device FL
models, and further improve the privacy-utility
tradeoff by techniques of distillation. More-
over, we propose a novel distribution match-
ing algorithm with theoretical grounding to
sample public data close to private data distri-
bution, which significantly improves the sam-
ple efficiency of (pre-)training on public data.
The proposed method is efficient and effective
for training private model by taking advantage
of public data, especially for customized on-
device architectures that do not have ready-to-
use pre-trained models.
1 Introduction
Federated Learning (FL) (McMahan et al., 2017,
2018; Kairouz et al., 2019) is designed to collabo-
ratively train a global model on decentralized data
across user clients while protecting data privacy.
FL emerged as an effective privacy-preserving so-
lution of training (language) models, as rich text
data are generated by users, which may contain sen-
sitive and personal information. After McMahan
et al. (2017) proposed to train on-device recurrent
neural network models, FL has been widely used
in various natural language processing applications
and products, including next-word prediction (Hard
et al., 2018), keyword spotting (Hard et al., 2020),
and out-of-vocabulary word discovery (Chen et al.,
2019).
∗Part of the work was done while Boxin Wang
was an intern at Google. Correspondence to: Boxin
Wang boxinw2@illinois.edu and Zheng Xu
xuzheng@google.com.
To further protect user privacy, Differential Pri-
vacy (DP) (Dwork et al., 2006; Dwork, 2011;
Dwork and Roth, 2014; McMahan et al., 2018)
is introduced to provide formal privacy guarantees
of models trained by federated learning. DP for
deep learning explicitly adds random noise with
bounded sensitivity to a training process ( e.g., DP-
SGD (Abadi et al., 2016)), ensuring a quantifiable
similarity in output model distributions when the
training dataset changes. When combining DP
with FL, a variant of DP-SGD called DP-FedAvg
(McMahan et al., 2018) is applied to guarantee
user-level DP (Dwork, 2010). Current research pri-
marily focuses on applying user-level DP to small
on-device models with fewer than 10 million pa-
rameters (McMahan et al., 2018; Kairouz et al.,
2021; Ramaswamy et al., 2020). The model size
is limited due to challenges such as significant DP
noise required to preserve privacy (Li et al., 2021)
and the communication costs in cross-device FL.
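As a concrete illustration of the DP-SGD mechanism mentioned above (per-example clipping plus calibrated Gaussian noise), here is a minimal numpy sketch; the hyperparameter values and the flat-vector gradient representation are illustrative assumptions.
```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """One DP-SGD update: clip each per-example gradient to L2 norm <= clip_norm,
    average, then add Gaussian noise scaled to the clipping bound."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    g_avg = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(per_example_grads)
    noise = np.random.normal(0.0, sigma, size=g_avg.shape)
    return params - lr * (g_avg + noise)

params = np.zeros(10)
grads = [np.random.randn(10) for _ in range(32)]   # one gradient per example
params = dp_sgd_step(params, grads)
```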
Recent advances in large language models
(LLMs) (Thoppilan et al., 2022; Radford et al.,
2019; Brown et al., 2020; Devlin et al., 2019; Raffel
et al., 2020) have revolutionized natural language
processing (NLP) and achieved unprecedented per-
formance on various tasks such as text generation,
machine translation, and sentiment analysis. How-
ever, their success comes at a cost of requiring mas-
sive amounts of computational resources, making
them difficult to deploy on resource-constrained
devices such as smartphones, tablets, or other edge
devices. Additionally, there are concerns regarding
the user privacy in various aspects such as memoriz-
ing personal information in training, and exposing
private query in inference.
Recent work explores incorporating public infor-
mation to improve privacy-utility trade-off in ap-
plying DP for (large) LMs (Yu et al., 2022; Li et al., |
2201.02867v3.pdf | Deep Generative Modeling for Volume
Reconstruction in Cryo-Electron Microscopy
Claire Donnat1+, Axel Levy2,3, Frédéric Poitevin3, Ellen Zhong4, and Nina Miolane5*+
1University of Chicago, Department of Statistics, Chicago, Illinois, USA
2Stanford University, Department of Electrical Engineering, Stanford, CA, USA
3LCLS, SLAC National Accelerator Laboratory, Menlo Park, CA, USA
4Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Lab, Boston, MA, USA
5University of California Santa Barbara, Department of Electrical & Computer Engineering, Santa Barbara, CA, USA
*ninamiolane@ucsb.edu
+these authors contributed equally to this work
ABSTRACT
Recent breakthroughs in high-resolution imaging of biomolecules in solution with cryo-electron microscopy (cryo-EM) have
unlocked new doors for the reconstruction of molecular volumes, thereby promising further advances in biology, chemistry,
and pharmacological research. Recent next-generation volume reconstruction algorithms that combine generative modeling
with end-to-end unsupervised deep learning techniques have shown promising preliminary results, but still face considerable
technical and theoretical hurdles when applied to experimental cryo-EM images. In light of the proliferation of such methods, we
propose here a critical review of recent advances in the field of deep generative modeling for cryo-EM volume reconstruction .
The present review aims to (i) unify and compare these new methods using a consistent statistical framework, (ii) present
them using a terminology familiar to machine learning researchers and computational biologists with no specific background in
cryo-EM, and (iii) provide the necessary perspective on current advances to highlight their relative strengths and weaknesses,
along with outstanding bottlenecks and avenues for improvements in the field. This review might also raise the interest of
computer vision practitioners, as it highlights significant limits of deep generative models in low signal-to-noise regimes —
therefore emphasizing a need for new theoretical and methodological developments.
Introduction
Figure 1. Acquisition of 2D cryo-EM images (2D projections) from 3D biomolecular volumes. (Panel labels: electron beam; particles: biomolecules “flash frozen” in solution; 2D projections.)
High-resolution reconstruction of molecular volumes from single par-
ticle images has the potential to facilitate new breakthroughs in our
ability to understand fundamental biological mechanisms and engineer
macromolecular function7, 52. In this context, cryo-electron microscopy
(cryo-EM) has fostered a revolution in structural biology by allowing the
imaging of biomolecules in solution at atomic resolution8, 9. However,
the estimation of these molecules’ 3-dimensional (3D) volume from
cryo-EM data continues to pose a formidable challenge. In this setting,
observations are limited to the raw 2D projections of molecules (also
called particles) relative to an incoming electron beam, while their 3D ori-
entation and position (jointly called poses) are unknown — see Figure 1.
Reconstructing molecular volumes therefore also requires recovering a
number of hidden variables such as each particle’s 3D orientation. The
difficulty of this task is further compounded by a combination of factors, including the variability in the shape of any given
molecule (also referred to as structural “heterogeneity”), the non-linear physics of the data acquisition process, as well as
extremely low signal-to-noise ratios — concepts formalized in the image formation model below.
Image Formation Model. The process of image formation in cryo-EM involves several physical phenomena, including
pairwise interactions between atoms, interactions between the electron beam and the molecule’s electrostatic potential, or
microscope effects. We refer the reader to Dill et al.10, Kohl and Reimer11, and Vulovic et al.12 for in-depth descriptions of
these phenomena. Nonetheless, in most cases12, 13, each image X_i in a dataset of n images of single particles can be modeled as
a random sample from the following generative model:
X_i = PSF_i ∗ (t_i ◦ Π_2D ◦ R_i)(V^(i)) + ε_i,   with i = 1, ..., n.   (1)
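A minimal numerical sketch of the generative model in Eq. (1) follows: rotate the volume, project along the beam axis, translate in-plane, convolve with the point spread function, and add Gaussian noise. The interpolation order, axis conventions, and noise model are illustrative assumptions rather than the exact operators of any particular reconstruction package.
```python
import numpy as np
from scipy.ndimage import affine_transform, shift
from scipy.signal import fftconvolve

def cryoem_forward(volume, R, t, psf, noise_std):
    """Simulate one projection image X_i from a 3D volume V^(i) as in Eq. (1)."""
    center = (np.array(volume.shape) - 1) / 2.0
    # R_i: rotate the volume about its center (affine_transform maps output to input coords).
    rotated = affine_transform(volume, R.T, offset=center - R.T @ center, order=1)
    projection = rotated.sum(axis=0)                    # Pi_2D: integrate along the beam axis
    translated = shift(projection, t, order=1)          # t_i: in-plane translation
    blurred = fftconvolve(translated, psf, mode="same")  # PSF_i * (...)
    return blurred + np.random.normal(0.0, noise_std, blurred.shape)
```
|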
2304.02034.pdf | Effective Theory of Transformers at Initialization
Emily Dinan,∗Sho Yaida,†and Susan Zhang‡
Meta AI
Meta Platforms, Inc.§
We perform an effective-theory analysis of forward–backward signal propagation in wide
and deep Transformers, i.e., residual neural networks with multi-head self-attention blocks
and multilayer perceptron blocks. This analysis suggests particular width scalings of initial-
ization and training hyperparameters for these models. We then take up such suggestions,
training Vision and Language Transformers in practical setups.
∗Electronic address: edinan@meta.com
†Electronic address: shoyaida@meta.com
‡Electronic address: susanz@meta.com
§The author ordering was determined by the hypothetical coin toss that 100%-respects the alphabetical ordering. |
2307.12950.pdf | RLCD: R EINFORCEMENT LEARNING FROM CONTRAST
DISTILLATION FOR LANGUAGE MODEL ALIGNMENT
Kevin Yang1,2Dan Klein1Asli Celikyilmaz2Nanyun Peng3Yuandong Tian2
1UC Berkeley,2Meta AI,3UCLA
{yangk,klein}@berkeley.edu,{aslic,yuandong}@meta.com,violetpeng@cs.ucla.edu
ABSTRACT
We propose Reinforcement Learning from Contrast Distillation (RLCD), a method
for aligning language models to follow natural language principles without using
human feedback. RLCD trains a preference model using simulated preference
pairs that contain both a high-quality and low-quality example, generated using
contrasting positive and negative prompts. The preference model is then used to
improve a base unaligned language model via reinforcement learning. Empirically,
RLCD outperforms RLAIF (Bai et al., 2022b) and context distillation (Huang et al.,
2022) baselines across three diverse alignment tasks—harmlessness, helpfulness,
and story outline generation—and on both 7B and 30B model scales for preference
data simulation.
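The core data-generation step described above can be sketched in a few lines; `generate` is a hypothetical stand-in for sampling from the unaligned base LLM, and the prompt wording is illustrative rather than the paper's exact templates.
```python
def simulate_preference_pair(generate, prompt, attribute="harmless and helpful"):
    """RLCD-style pair: one output from a positively framed prompt (labeled chosen),
    one from a negatively framed prompt (labeled rejected), with no human labels."""
    positive_prompt = f"(Give a response that is {attribute}.) {prompt}"
    negative_prompt = f"(Give a response that is not {attribute}.) {prompt}"
    return {
        "prompt": prompt,
        "chosen": generate(positive_prompt),
        "rejected": generate(negative_prompt),
    }
```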
1 INTRODUCTION
Reinforcement Learning from Human Feedback (RLHF) has recently been used to great effect to
align pretrained large language models (LLMs) to human preferences, optimizing for desirable
qualities like harmlessness and helpfulness (Bai et al., 2022a) and achieving state-of-the-art results
across a variety of natural language tasks (OpenAI, 2023).
A standard RLHF procedure fine-tunes an initial unaligned LLM using an RL algorithm such as
PPO (Schulman et al., 2017), optimizing the LLM to align with human preferences. RLHF is thus
critically dependent on a reward model derived from human-labeled preferences, typically pairwise
preferences on LLM outputs (o1, o2) generated from a shared prompt p.
However, collecting human pairwise preference data, especially high-quality data, may be expensive
and time consuming at scale. To address this problem, approaches have been proposed to obtain
labels without human annotation, such as Reinforcement Learning from AI Feedback (RLAIF) and
context distillation.
RLAIF approaches (e.g., Bai et al. (2022b)) simulate human pairwise preferences by scoring o1 and
o2 with an LLM (Figure 1 center); the scoring LLM is often the same as the one used to generate
the original pairs (o1, o2). Of course, the resulting LLM pairwise preferences will be somewhat
noisier compared to human labels. However, this problem is exacerbated by using the same prompt
p to generate both o1 and o2, causing o1 and o2 to often be of very similar quality and thus hard
to differentiate (e.g., Table 1). Consequently, training signal can be overwhelmed by label noise,
yielding lower-quality preference data.
Meanwhile, context distillation methods (e.g., Sun et al. (2023)) create more training signal by
modifying the initial prompt p. The modified prompt p+ typically contains additional context
encouraging a directional attribute change in the output o+ (Figure 1 right). However, context
distillation methods only generate a single output o+ per prompt p+, which is then used for supervised
fine-tuning, losing the pairwise preferences which help RLHF-style approaches to derive signal from
the contrast between outputs. Multiple works have observed that RL approaches using preference
models for pairwise preferences can substantially improve over supervised fine-tuning by itself when
aligning LLMs (Ouyang et al., 2022; Dubois et al., 2023).
Therefore, while both RLAIF and context distillation approaches have already been successfully
applied in practice to align language models, we posit that it may be even more effective to combine
|
2206.14486.pdf | Beyond neural scaling laws:
beating power law scaling via data pruning
Ben Sorscher∗1, Robert Geirhos∗2, Shashank Shekhar3,
Surya Ganguli1,3§, Ari S. Morcos3§
∗equal contribution
1Department of Applied Physics, Stanford University
2University of Tübingen
3Meta AI (FAIR)
§Joint senior authors
Abstract
Widely observed neural scaling laws, in which error falls off as a power of the
training set size, model size, or both, have driven substantial performance im-
provements in deep learning. However, these improvements through scaling alone
require considerable costs in compute and energy. Here we focus on the scaling of
error with dataset size and show how in theory we can break beyond power law
scaling and potentially even reduce it to exponential scaling instead if we have
access to a high-quality data pruning metric that ranks the order in which training
examples should be discarded to achieve any pruned dataset size. We then test
this improved scaling prediction with pruned dataset size empirically, and indeed
observe better than power law scaling in practice on ResNets trained on CIFAR-10,
SVHN, and ImageNet. Next, given the importance of finding high-quality pruning
metrics, we perform the first large-scale benchmarking study of ten different data
pruning metrics on ImageNet. We find most existing high performing metrics
scale poorly to ImageNet, while the best are computationally intensive and require
labels for every image. We therefore developed a new simple, cheap and scalable
self-supervised pruning metric that demonstrates comparable performance to the
best supervised metrics. Overall, our work suggests that the discovery of good
data-pruning metrics may provide a viable path forward to substantially improved
neural scaling laws, thereby reducing the resource costs of modern deep learning.
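One way to realize the kind of self-supervised pruning metric described above is to cluster embeddings and score each example by its distance to the nearest cluster centroid (more prototypical examples score lower). The sketch below is a plausible reading of that idea; the embedding source, cluster count, and keep/discard rule are assumptions, not the paper's exact recipe.
```python
import numpy as np
from sklearn.cluster import KMeans

def prototypicality_scores(embeddings, n_clusters=100, seed=0):
    """Score = distance to the nearest k-means centroid of the embedding space."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(embeddings)
    centers = km.cluster_centers_[km.labels_]
    return np.linalg.norm(embeddings - centers, axis=1)

emb = np.random.randn(1000, 64)          # e.g., self-supervised image embeddings
scores = prototypicality_scores(emb)
keep = np.argsort(scores)[-800:]         # keep the "hardest" 80% (one possible rule)
```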
1 Introduction
Empirically observed neural scaling laws [ 1,2,3,4,5,6,7,8] in many domains of machine learning,
including vision, language, and speech, demonstrate that test error often falls off as a power law with
either the amount of training data, model size, or compute. Such power law scaling has motivated
significant societal investments in data collection, compute, and associated energy consumption.
However, power law scaling is extremely weak and unsustainable. For example, a drop in error
∗work done during an internship at Meta AI (FAIR)
36th Conference on Neural Information Processing Systems (NeurIPS 2022). |
2305.16381.pdf | DPOK: Reinforcement Learning for
Fine-tuning Text-to-Image Diffusion Models
Ying Fan∗,1,2, Olivia Watkins3, Yuqing Du3, Hao Liu3, Moonkyung Ryu1, Craig Boutilier1,
Pieter Abbeel3, Mohammad Ghavamzadeh1, Kangwook Lee2, Kimin Lee∗,1
∗Equal technical contribution
1Google Research, 2University of Wisconsin-Madison, 3UC Berkeley
Abstract
Learning from human feedback has been shown to improve text-to-image models.
These techniques first learn a reward function that captures what humans care about
in the task and then improve the models based on the learned reward function.
Even though relatively simple approaches (e.g., rejection sampling based on reward
scores) have been investigated, fine-tuning text-to-image models with the reward
function remains challenging. In this work, we propose using online reinforcement
learning (RL) to fine-tune text-to-image models. We focus on diffusion models ,
defining the fine-tuning task as an RL problem, and updating the pre-trained
text-to-image diffusion models using policy gradient to maximize the feedback-
trained reward. Our approach, coined DPOK, integrates policy optimization with
KL regularization. We conduct an analysis of KL regularization for both RL
fine-tuning and supervised fine-tuning. In our experiments, we show that DPOK
is generally superior to supervised fine-tuning with respect to both image-text
alignment and image quality.
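A minimal sketch of the KL-regularized objective mentioned above follows; the per-sample KL term is approximated by a log-probability ratio, and the coefficient value is an illustrative assumption rather than the paper's setting.
```python
import torch

def kl_regularized_reward(rewards, logp_current, logp_pretrained, kl_coef=0.01):
    """Effective reward = learned reward minus a penalty for drifting away from
    the pre-trained model (KL approximated by the per-sample log-prob ratio)."""
    kl_est = logp_current - logp_pretrained
    return rewards - kl_coef * kl_est

r = torch.tensor([0.8, 0.2, 0.5])
shaped = kl_regularized_reward(r, torch.tensor([-3.0, -2.5, -4.0]),
                               torch.tensor([-3.2, -2.0, -4.1]))
```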
1 Introduction
Recent advances in diffusion models [10,36,37], together with pre-trained text encoders (e.g.,
CLIP [ 28], T5 [ 29]) have led to impressive results in text-to-image generation. Large-scale text-to-
image models, such as Imagen [ 33], Dalle-2 [ 30] and Stable Diffusion [ 31], generate high-quality,
creative images given novel text prompts. However, despite these advances, current models have
systematic weaknesses. For example, current models have a limited ability to compose multiple
objects [ 6,7,26]. They also frequently encounter difficulties when generating objects with specified
colors and counts [12, 18].
Learning from human feedback (LHF) has proven to be an effective means to overcome these
limitations [ 14,18,41,43]. Lee et al. [18] demonstrate that certain properties, such as generating
objects with specific colors, counts, and backgrounds, can be improved by learning a reward function
from human feedback, followed by fine-tuning the text-to-image model using supervised learning.
They show that simple supervised fine-tuning based on reward-weighted loss can improve the reward
scores, leading to better image-text alignment. However, supervised fine-tuning often induces a
deterioration in image quality (e.g., over-saturated or non-photorealistic images). This is likely due to
the model being fine-tuned on a fixed dataset that is generated by a pre-trained model (Figure 1(a)).
In this work, we explore using online reinforcement learning (RL) for fine-tuning text-to-image
diffusion models (Figure 1(b)). We show that optimizing the expected reward of a diffusion model’s
image output is equivalent to performing policy gradient on a multi-step diffusion model under certain
regularity assumptions. We also incorporate Kullback–Leibler (KL) divergence with respect to the
pre-trained model as regularization in an online manner, treating this as an implicit reward.
Preprint. Under review. |
10.1016.j.cell.2024.01.036.pdf | Article
Structure of the plant plastid-encoded RNA
polymerase
Graphical abstract
Highlights
• Structure of the chloroplast transcription complex
• Fifteen nuclear-encoded subunits encase the plastid-encoded polymerase
• Subunits PAP1 and PAP2 interact with the DNA and the mRNA, respectively
• Structure-guided insights into enzymatic activities of subunits
Authors
Ángel Vergara-Cruces, Ishika Pramanick, David Pearce, Vinod K. Vogirala, Matthew J. Byrne, Jason K.K. Low, Michael W. Webster
Correspondence
michael.webster@jic.ac.uk
In brief
Structural characterization of the chloroplast RNA polymerase that transcribes photosynthetic genes provides insight into its composition, assembly, and evolution.
Vergara-Cruces et al., 2024, Cell 187, 1145–1159
February 29, 2024 Crown Copyright © 2024 Published by Elsevier Inc.
https://doi.org/10.1016/j.cell.2024.01.036
|
99_on_recovering_higher_order_int.pdf | ONRECOVERING HIGHER -ORDER INTERACTIONS
FROM PROTEIN LANGUAGE MODELS
Darin Tsui & Amirali Aghazadeh
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA 30332, USA
{darint, amiralia}@gatech.edu
ABSTRACT
Protein language models leverage evolutionary information to perform state-of-
the-art 3D structure and zero-shot variant prediction. Yet, extracting and explain-
ing all the mutational interactions that govern model predictions remains diffi-
cult as it requires querying the entire amino acid space for n sites using 20^n se-
quences, which is computationally expensive even for moderate values of n (e.g.,
n ∼ 10). Although approaches to lower the sample complexity exist, they of-
ten limit the interpretability of the model to just single and pairwise interactions.
Recently, computationally scalable algorithms relying on the assumption of spar-
sity in the Fourier domain have emerged to learn interactions from experimental
data. However, extracting interactions from language models poses unique chal-
lenges: it’s unclear if sparsity is always present or if it is the only metric needed
to assess the utility of Fourier algorithms. Herein, we develop a framework to
do a systematic Fourier analysis of the protein language model ESM2 applied on
three proteins—green fluorescent protein (GFP), tumor protein P53 (TP53), and
G domain B1 (GB1)—across various sites for 228 experiments. We demonstrate
that ESM2 is dominated by three regions in the sparsity-ruggedness plane, two
of which are better suited for sparse Fourier transforms. Validations on two sam-
ple proteins demonstrate recovery of all interactions with R² = 0.72 in the more
sparse region and R² = 0.66 in the more dense region, using only 7 million out
of 20^10 ∼ 10^13 ESM2 samples, reducing the computational time by a stagger-
ing factor of 15,000. All codes and data are available on our GitHub repository
https://github.com/amirgroup-codes/InteractionRecovery.
1 INTRODUCTION
Recent advances in transformer-based deep learning models have leveraged evolutionary informa-
tion to learn biological patterns in protein sequences. These models, encompassing up to 15 billion
learnable parameters, are trained on amino acid sequences stored in databases such as UniProt (Lin
et al., 2023; Consortium, 2015). In particular, masked language models have been demonstrated
to achieve state-of-the-art performance in zero-shot variant effect and protein structure prediction
without the need for explicit training (Meier et al., 2021; Brandes et al., 2023). Hence, it’s widely
believed that protein language models encapsulate representations that reflect the fundamental rules
of biology and physics (Rives et al., 2021; Rao et al., 2020). However, further applications of protein
language models, e.g., for knowledge discovery, are hindered due to the challenge of interpreting
the biological interactions that underlie their predictions.
In principle, if we wanted to learn the structural impact of variants underlying n mutational sites in a
protein, referred to as the region's landscape, we could query these language models on all possible
20^n mutational combinations (for all 20 standard amino acids). However, computational challenges
would make such an endeavor nearly unrunnable at a large scale. For instance, on four NVIDIA
RTX A6000s, each sample takes about 0.01 seconds to compute. It would take 20^5 × 0.01 = 32,000
seconds, or around nine hours, to compute all possible combinations for n = 5. However, even just
increasing the length to n = 8 would make the entire space take 194 years to complete.
|
langegabelriedmiller2011chapter.pdf | Batch Reinforcement Learning
Sascha Lange, Thomas Gabel, and Martin Riedmiller
Abstract Batch reinforcement learning is a subfield of dynamic programming-based
reinforcement learning. Originally defined as the task of learning the best possible
policy from a fixed set of a priori-known transition samples, the (batch) algorithms
developed in this field can be easily adapted to the classical online case, where the
agent interacts with the environment while learning. Due to the efficient use of col-
lected data and the stability of the learning process, this research area has attracted
a lot of attention recently. In this chapter, we introduce the basic principles and the
theory behind batch reinforcement learning, describe the most important algorithms,
exemplarily discuss ongoing research within this field, and briefly survey real-world
applications of batch reinforcement learning.
1 Introduction
Batch reinforcement learning is a subfield of dynamic programming (DP) based re-
inforcement learning (RL) that has vastly grown in importance during the last years.
Historically, the term ‘batch RL’ is used to describe a reinforcement learning setting,
where the complete amount of learning experience—usually a set of transitions sam-
pled from the system—is fixed and given a priori (Ernst et al, 2005a). The task of
the learning system then is to derive a solution—usually an optimal policy—out of
this given batch of samples.
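To make the setting concrete, the sketch below derives a greedy policy from a fixed, a priori given batch of transitions via synchronous tabular Q-iteration. The deterministic toy environment (one transition per state-action pair) is an assumption for illustration; this is not one of the chapter's algorithms.
```python
import numpy as np

def batch_q_iteration(transitions, n_states, n_actions, gamma=0.99, sweeps=200):
    """transitions: list of (s, a, r, s_next) collected before learning starts."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(sweeps):
        Q_new = Q.copy()
        for s, a, r, s_next in transitions:   # synchronous update over the whole batch
            Q_new[s, a] = r + gamma * Q[s_next].max()
        Q = Q_new
    return Q

batch = [(0, 0, 0.0, 1), (0, 1, 1.0, 0), (1, 0, 0.0, 0), (1, 1, 5.0, 1)]
Q = batch_q_iteration(batch, n_states=2, n_actions=2)
policy = Q.argmax(axis=1)                      # greedy policy derived from the batch
```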
In the following, we will relax this assumption of an a priori fixed set of training
experience. The crucial benefit of batch algorithms lies in the way they handle a
batch of transitions and get the best out of it, rather than in the fact that this set is
fixed. From this perspective, batch RL algorithms are characterized by two basic
constituents: all observed transitions are stored and updates occur synchronously on
Sascha Lange, Thomas Gabel, Martin Riedmiller
Albert-Ludwigs-Universität Freiburg, Faculty of Engineering, Georges-Köhler-Allee 079, D-
79110 Freiburg, Germany, e-mail: [slange,tgabel,riedmiller]@informatik.uni-freiburg.de
|
2210.15097.pdf | Contrastive Decoding: Open-ended Text Generation as Optimization
Xiang Lisa Li1, Ari Holtzman2, Daniel Fried3, Percy Liang1, Jason Eisner4,
Tatsunori Hashimoto1, Luke Zettlemoyer2,5, Mike Lewis5
Stanford University1, University of Washington2, Carnegie Mellon University3,
Johns Hopkins University4, FAIR5
xlisali@stanford.edu, ahai@cs.washington.edu, dfried@cs.cmu.edu,
pliang@stanford.edu, jason@cs.jhu.edu, thashim@stanford.edu,
lsz@cs.washington.edu, mikelewis@meta.com
Abstract
Given a language model (LM), maximum
probability is a poor decoding objective for
open-ended generation, because it produces
short and repetitive text. On the other hand,
sampling can often produce incoherent text
that drifts from the original topics. We propose
contrastive decoding (CD), a reliable decoding
approach that optimizes a contrastive objective
subject to a plausibility constraint. The
contrastive objective returns the difference
between the likelihood under a large LM
(called the expert, e.g. OPT-13B) and a small
LM (called the amateur, e.g. OPT-125M),
and the constraint ensures that the outputs are
plausible. CD is inspired by the fact that the
failures of larger LMs (e.g., repetition, inco-
herence) are even more prevalent in smaller
LMs, and that this difference signals which
texts should be preferred. CD requires zero
additional training, and produces higher quality
text than decoding from the larger LM alone.
It also works across model scales (OPT-13B
and GPT2-1.5B) and significantly outperforms
four strong decoding algorithms (e.g., nucleus,
top-k) in automatic and human evaluations
across Wikipedia, news and story domains.1
1 Introduction
Open-ended text generation aims to craft fluent and
coherent textual continuations of given prompts,
laying foundations for various downstream applic-
ations such as writing assistance and story gen-
eration (Brown et al., 2020). The canonical ap-
proaches often sample from large pre-trained lan-
guage models (Holtzman et al., 2020; Fan et al.,
2018; Radford et al., 2019), but the generated text
is prone to incoherence and topic drift as unlucky
sampling choices compound over long sequences
(Eikema and Aziz, 2020; Maynez et al., 2020). On
the other hand, searching for the most likely se-
1Code is available at https://github.com/
XiangLi1999/ContrastiveDecoding.git
Figure 1: Contrastive decoding exploits the contrasts
between expert and amateur LM of different sizes by
choosing tokens that maximize their log-likelihood
difference. CD produces high-quality text that amplifies
the good expert behavior and diminishes the undesired
amateur behavior.
quences often results in short, repetitive and tedi-
ous text (Holtzman et al., 2020), indicating that
maximizing probability is a wrong decoding ob-
jective.
We propose a new search-based approach,
contrastive decoding (CD), that can generate fluent
and lexically diverse text without compromising
coherence. As shown in Figure 1, contrastive
decoding takes an off-the-shelf large language
model such as OPT-13B (that we call the expert)
and an off-the-shelf smaller language model such
as OPT-125M (that we call the amateur). CD
searches for text that maximizes the difference
between expert log-probabilities and amateur
log-probabilities, subject to plausibility constraints
which restrict the search space to tokens with
sufficiently high probability under the expert LM.
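A single greedy step of this search can be sketched as follows. The plausibility threshold alpha is set to 0.1, which I believe matches the paper's default but should be treated as an assumption; the greedy (rather than beam) selection is a simplification.
```python
import torch
import torch.nn.functional as F

def contrastive_decoding_step(expert_logits, amateur_logits, alpha=0.1):
    """Pick the token maximizing expert minus amateur log-probability, restricted to
    tokens whose expert probability is at least alpha * max expert probability."""
    logp_exp = F.log_softmax(expert_logits, dim=-1)
    logp_ama = F.log_softmax(amateur_logits, dim=-1)
    cutoff = logp_exp.max(dim=-1, keepdim=True).values + torch.log(torch.tensor(alpha))
    score = logp_exp - logp_ama
    score = score.masked_fill(logp_exp < cutoff, float("-inf"))  # plausibility constraint
    return score.argmax(dim=-1)

vocab = 50000
next_token = contrastive_decoding_step(torch.randn(1, vocab), torch.randn(1, vocab))
```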
Contrastive Decoding works because many fail-
ure modes of language models (short, repetitive, ir-
relevant or uninteresting strings) are more common |
3639-the-effects-of-reward-misspeci.pdf | THEEFFECTS OF REWARD MISSPECIFICATION :
MAPPING AND MITIGATING MISALIGNED MODELS
Alexander Pan (Caltech), Kush Bhatia (UC Berkeley), Jacob Steinhardt (UC Berkeley)
ABSTRACT
Reward hacking—where RL agents exploit gaps in misspecified reward
functions—has been widely observed, but not yet systematically studied. To un-
derstand how reward hacking arises, we construct four RL environments with
misspecified rewards. We investigate reward hacking as a function of agent ca-
pabilities: model capacity, action space resolution, observation space noise, and
training time. More capable agents often exploit reward misspecifications, achiev-
ing higher proxy reward and lower true reward than less capable agents. Moreover,
we find instances of phase transitions : capability thresholds at which the agent’s
behavior qualitatively shifts, leading to a sharp decrease in the true reward. Such
phase transitions pose challenges to monitoring the safety of ML systems. To ad-
dress this, we propose an anomaly detection task for aberrant policies and offer
several baseline detectors.
1 INTRODUCTION
As reinforcement learning agents are trained with better algorithms, more data, and larger policy
models, they are at increased risk of overfitting their objectives (Russell, 2019). Reward hacking ,
or the gaming of misspecified reward functions by RL agents, has appeared in a variety of con-
texts, such as game playing (Ibarz et al., 2018), text summarization (Paulus et al., 2018), and au-
tonomous driving (Knox et al., 2021). These examples show that better algorithms and models are
not enough; for human-centered applications such as healthcare (Yu et al., 2019), economics (Trott
et al., 2021) and robotics (Kober et al., 2013), RL algorithms must be safe and aligned with human
objectives (Bommasani et al., 2021; Hubinger et al., 2019).
Reward misspecifications occur because real-world tasks have numerous, often conflicting desider-
ata. In practice, reward designers resort to optimizing a proxy reward that is either more readily
measured or more easily optimized than the true reward. For example, consider a recommender
system optimizing for users’ subjective well-being (SWB). Because SWB is difficult to measure,
engineers rely on more tangible metrics such as click-through rates or watch-time. Optimizing for
misspecified proxies led YouTube to overemphasize watch-time and harm user satisfaction (Stray,
2020), as well as to recommend extreme political content to users (Ribeiro et al., 2020).
Addressing reward hacking is a first step towards developing human-aligned RL agents and one goal
of ML safety (Hendrycks et al., 2021a). However, there has been little systematic work investigating
when or how it tends to occur, or how to detect it before it runs awry. To remedy this, we study
the problem of reward hacking across four diverse environments: traffic control (Wu et al., 2021),
COVID response (Kompella et al., 2020), blood glucose monitoring (Fox et al., 2020), and the Atari
game Riverraid (Brockman et al., 2016). Within these environments, we construct nine misspecified
proxy reward functions (Section 3).
Using our environments, we study how increasing optimization power affects reward hacking, by
training RL agents with varying resources such as model size, training time, action space resolution,
and observation space noise (Section 4). We find that more powerful agents often attain higher proxy
reward but lower true reward, as illustrated in Figure 1. Since the trend in ML is to increase resources
exponentially each year (Littman et al., 2021), this suggests that reward hacking will become more
pronounced in the future in the absence of countermeasures.
|
2401.12187.pdf | WARM: On the Benefits of Weight Averaged
Reward Models
Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, Johan Ferret
Google DeepMind
Aligning large language models (LLMs) with human preferences through reinforcement learning (RLHF)
can lead to reward hacking, where LLMs exploit failures in the reward model (RM) to achieve seemingly
high rewards without meeting the underlying objectives. We identify two primary challenges when
designing RMs to mitigate reward hacking: distribution shifts during the RL process and inconsistencies
in human preferences. As a solution, we propose Weight Averaged Reward Models (WARM), first fine-
tuning multiple RMs, then averaging them in the weight space. This strategy follows the observation
that fine-tuned weights remain linearly mode connected when sharing the same pre-training. By
averaging weights, WARM improves efficiency compared to the traditional ensembling of predictions,
while improving reliability under distribution shifts and robustness to preference inconsistencies. Our
experiments on summarization tasks, using best-of-N and RL methods, show that WARM improves the
overall quality and alignment of LLM predictions; for example, a policy RL fine-tuned with WARM has a
79.4% win rate against a policy RL fine-tuned with a single RM.
Keywords: Alignment, RLHF, Reward Modeling, Model Merging
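The central operation is simple enough to sketch directly: average the parameters of several reward models fine-tuned from a shared pre-trained initialization. The snippet below assumes the models share an identical architecture; it is an illustration, not the authors' implementation.
```python
import copy
import torch

def weight_average(models):
    """Return a model whose parameters are the uniform average of the input models'."""
    avg = copy.deepcopy(models[0])
    states = [m.state_dict() for m in models]
    avg_state = {}
    for key, ref in states[0].items():
        if ref.is_floating_point():
            avg_state[key] = torch.stack([s[key] for s in states]).mean(dim=0)
        else:
            avg_state[key] = ref            # copy integer buffers (e.g., counters) as-is
    avg.load_state_dict(avg_state)
    return avg
```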
1. Introduction
Reward modeling. Conversational assistants such as Gemini [ 1] or GPT-4 [ 2] have revolutionized the
AI community and beyond. These LLMs are capable of completing novel and intricate tasks, including
mathematics, coding, and tool use [3]. These advancements are underpinned by a systematic three
stage training procedure: pre-training by next token prediction [ 4,5,6], supervised fine-tuning (SFT)
to learn to follow instructions [ 7,8,9], and ultimately, reinforcement learning (RL) to maximize a
reward encapsulating the desired behaviors [ 10]. However, defining such rewards for real-world tasks
is non-trivial [ 11]. In reinforcement learning from human feedback (RLHF) [ 12,13,14,15], rewards
are reward models (RMs), trained on binary preference datasets to emulate human judgment. The
enhancement of LLM capabilities from RL is strongly tied to the quality of the RMs [16].
Reward hacking. Particularly insidious in RLHF [17, 18] is the reward hacking issue [19, 20, 21, 22]
(a.k.a. reward overoptimization), arising from reward misspecification [23,24] between the proxy
RM and actual human preferences. While optimizing for the RM initially provides improvements, in
later stages the policy (i.e., the LLM being trained) usually learns to exploit loopholes in the RM and
achieves high rewards without truly fulfilling the intended objectives, as illustrated in Figure 1(b).
This reward hacking phenomenon poses numerous issues. First, it degrades performance, manifesting
as linguistically flawed [25] or unnecessarily verbose [26] outputs, which do not reflect true human
preferences. Second, it complicates checkpoint selection due to the unreliability of the proxy RM,
echoing Goodhart’s Law [ 27]: “when a measure becomes a target, it ceases to be a good measure”.
Third, it can engender sycophancy [ 28,29] or amplify social biases, reflecting the limited and skewed
demographics of feedback providers [ 30,31]. Lastly and most critically, misalignment [ 32,33] due
to reward hacking can escalate into safety risks [ 19,34,35], in particular given the rapid integration
of LLMs in everyday life and critical decision-making. Such concerns underscore the need to mitigate
reward hacking to ensure the beneficial and safe deployment of LLMs.
Corresponding author: alexandrerame@google.com |
2305.16183.pdf | Passive learning of active causal strategies in agents
and language models
Andrew K. Lampinen (Google DeepMind, London, UK) lampinen@deepmind.com
Stephanie C. Y. Chan (Google DeepMind, London, UK) scychan@deepmind.com
Ishita Dasgupta (Google DeepMind, London, UK) idg@deepmind.com
Andrew J. Nam (Stanford University, Stanford, CA) ajhnam@stanford.edu
Jane X. Wang (Google DeepMind, London, UK) wangjane@deepmind.com
Abstract
What can be learned about causality and experimentation from passive data? This
question is salient given recent successes of passively-trained language models
in interactive domains such as tool use. Passive learning is inherently limited.
However, we show that purely passive learning can in fact allow an agent to learn
generalizable strategies for determining and using causal structures, as long as the
agent can intervene at test time. We formally illustrate that learning a strategy
of first experimenting, then seeking goals, can allow generalization from passive
learning in principle. We then show empirically that agents trained via imitation
on expert data can indeed generalize at test time to infer and use causal links
which are never present in the training data; these agents can also generalize
experimentation strategies to novel variable sets never observed in training. We
then show that strategies for causal intervention and exploitation can be generalized
from passive data even in a more complex environment with high-dimensional
observations, with the support of natural language explanations. Explanations
can even allow passive learners to generalize out-of-distribution from perfectly-
confounded training data. Finally, we show that language models, trained only
on passive next-word prediction, can generalize causal intervention strategies
from a few-shot prompt containing examples of experimentation, together with
explanations and reasoning. These results highlight the surprising power of passive
learning of active causal strategies, and may help to understand the behaviors and
capabilities of language models.
1 Introduction
Learning from passive observational data only allows learning correlational, not causal, structure.
This observation is sometimes cited as a fundamental limitation of current machine learning research
[52,53,34]. However, reinforcement learning (RL) agents can intervene on their environment, and
are therefore not entirely limited. Indeed, various works have shown that RL agents can (meta-)learn
to intervene on the environment to discover and exploit its causal structure [43, 13, 36, 15, 25].
However, these prior works leave open the possibility that an agent could passively learn a generaliz-
able strategy for discovering and exploiting causal structure. While it is certainly necessary for an
agent to intervene on the world at test time to discover causal structure, it may be possible for the
agent to learn such a strategy from purely passive, offline data . Metaphorically, we ask “could an
Preprint. Under review. |
2001.08361.pdf | Scaling Laws for Neural Language Models
Jared Kaplan∗ (Johns Hopkins University, OpenAI) jaredk@jhu.edu
Sam McCandlish∗ (OpenAI) sam@openai.com
Tom Henighan (OpenAI) henighan@openai.com
Tom B. Brown (OpenAI) tom@openai.com
Benjamin Chess (OpenAI) bchess@openai.com
Rewon Child (OpenAI) rewon@openai.com
Scott Gray (OpenAI) scott@openai.com
Alec Radford (OpenAI) alec@openai.com
Jeffrey Wu (OpenAI) jeffwu@openai.com
Dario Amodei (OpenAI) damodei@openai.com
Abstract
We study empirical scaling laws for language model performance on the cross-entropy loss.
The loss scales as a power-law with model size, dataset size, and the amount of compute
used for training, with some trends spanning more than seven orders of magnitude. Other
architectural details such as network width or depth have minimal effects within a wide
range. Simple equations govern the dependence of overfitting on model/dataset size and the
dependence of training speed on model size. These relationships allow us to determine the
optimal allocation of a fixed compute budget. Larger models are significantly more sample-
efficient, such that optimally compute-efficient training involves training very large models
on a relatively modest amount of data and stopping significantly before convergence.
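For reference, the separate power-law fits the paper reports for loss versus model size, dataset size, and compute take the form below; the exponent values are the approximate fitted values quoted in the paper and should be treated as indicative.
```latex
% Approximate functional forms of the fitted scaling laws.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C_{\min}) \approx \left(\frac{C_c^{\min}}{C_{\min}}\right)^{\alpha_C^{\min}},
\quad \text{with } \alpha_N \approx 0.076,\; \alpha_D \approx 0.095,\; \alpha_C^{\min} \approx 0.050.
```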
∗Equal contribution.
Contributions: Jared Kaplan and Sam McCandlish led the research. Tom Henighan contributed the LSTM ex-
periments. Tom Brown, Rewon Child, Scott Gray, and Alec Radford developed the optimized Transformer
implementation. Jeff Wu, Benjamin Chess, and Alec Radford developed the text datasets. Dario Amodei provided
guidance throughout the project. |
10.1038.s41467-023-38539-w.pdf | Article https://doi.org/10.1038/s41467-023-38539-w
A method for restoring signals and revealing
individual macromolecule states in cryo-ET, REST
Haonan Zhang1,2,3, Yan Li1,3, Yanan Liu1,2, Dongyu Li1,2, Lin Wang1, Kai Song1,
Keyan Bao1 & Ping Zhu1,2
Cryo-electron tomography (cryo-ET) is widely used to explore the 3D density
of biomacromolecules. However, the heavy noise and missing wedge effect prevent directly visualizing and analyzing the 3D reconstructions. Here, we introduced REST, a deep learning strategy-based method to establish the
relationship between low-quality and high-quality density and transfer the knowledge to restore signals in cryo-ET. Test results on the simulated and real cryo-ET datasets show that REST performs well in denoising and compensating
the missing wedge information. The application in dynamic nucleosomes, presenting either in the form of individual particles or in the context of cryo-
FIB nuclei section, indicates that REST has the capability to reveal different
conformations of target macromolecules without subtomogram averaging.
Moreover, REST noticeably improves the reliability of particle picking. These advantages enable REST to be a powerful tool for the straightforward inter-
pretation of target macromolecules by visual inspection of the density and of a
broad range of other applications in cryo-ET, such as segmentation, particle picking, and subtomogram averaging.
Cryo-ET has emerged as a powerful method which could record the 3D information of the biological macromolecules; however, many challenges still remain to be addressed1,2. First, the noise level of the tomogram is very high due to the radiation sensitivity of the samples, hence the low-dose electron tomography hinders human eyes to identify the features in it3. Second, during the data collection, tilt-series images can only be collected within a tilt angular range of approximately ±70° because of the limitation of the specimen holder. This could lead to incomplete 3D information in the Fourier space, resulting in a so-called missing wedge in the tomogram. The effect of the missing wedge is clearly visible in the 3D Fourier transform of the beam direction. The most obvious artefact caused by a missing wedge is the anisotropic resolution, in which objects appear elongated in the direction of the beam axis, i.e., in the Z direction4. The EM density in the 3D and 2D slices related to the Z-plane are distorted as a result of this elongation. Therefore, most 3D segmentation was unable to extend in the Z direction and rendered a highly elongated structure.
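A rough numerical sketch of the missing-wedge artefact described above (this illustrates the problem, not the REST method): zeroing Fourier components outside a ±70° tilt range elongates a simple test object along one axis, mimicking the anisotropic resolution along the beam direction. The 2D geometry and the test disc are simplifying assumptions.

```python
import numpy as np

n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
obj = (x**2 + y**2 < 20**2).astype(float)            # simple disc as a test object

# Keep only Fourier components within +/-70 degrees of the x axis; the remaining
# 2 x 20 degree region around the y axis plays the role of the missing wedge.
F = np.fft.fftshift(np.fft.fft2(obj))
angle = np.degrees(np.arctan2(np.abs(y), np.abs(x) + 1e-9))
recon = np.real(np.fft.ifft2(np.fft.ifftshift(F * (angle <= 70.0))))

# Second-moment spread of the reconstructed mass along each axis: the axis that
# lost Fourier information is smeared out, i.e. the object appears elongated.
w = np.clip(recon, 0, None)
w /= w.sum()
coords = np.arange(n) - n / 2
spread_x = (w.sum(axis=0) * coords**2).sum()
spread_y = (w.sum(axis=1) * coords**2).sum()
print(f"spread along x: {spread_x:.1f}, spread along y: {spread_y:.1f}")
```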
To address these challenges in cryo-ET, a variety of methods have been proposed to recover the information and produce high contrast tomograms5. During the data collection, dual-axis tomography, in which the tilt series are collected using two perpendicular axes, could be applied6. However, this method is limited by the use of a higher electron dose, which may damage the biological specimen7. In other studies that have focused on the data processing procedures, a series of algorithms, including the algebraic reconstruction technique (ART)8, simultaneous ART (SART)9 and simultaneous iterative reconstruction technique (SIRT)10, have been proposed to improve the quality of tomograms. These methods, which are mainly based on mathematic calculations, reduce the differences between the calculated projections of the reconstructed tomogram and the tilt series. By
Received: 4 August 2022
Accepted: 8 May 2023
1National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing
100101, China.2University of Chinese Academy of Sciences, Beijing 100049, China.3These authors contributed equally: Haonan Zhang, Yan Li.
e-mail: zhup@ibp.ac.cn
Nature Communications | (2023) 14:2937 |
1801.10198.pdf | Published as a conference paper at ICLR 2018
GENERATING WIKIPEDIA BY SUMMARIZING LONG
SEQUENCES
Peter J. Liu∗, Mohammad Saleh∗,
Etienne Pot†, Ben Goodrich, Ryan Sepassi, Łukasz Kaiser, Noam Shazeer
Google Brain
Mountain View, CA
{peterjliu,msaleh,epot,bgoodrich,rsepassi,lukaszkaiser,noam }@google.com
ABSTRACT
We show that generating English Wikipedia articles can be approached as a multi-
document summarization of source documents. We use extractive summarization
to coarsely identify salient information and a neural abstractive model to generate
the article. For the abstractive model, we introduce a decoder-only architecture
that can scalably attend to very long sequences, much longer than typical encoder-
decoder architectures used in sequence transduction. We show that this model can
generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia
articles. When given reference documents, we show it can extract relevant factual
information as reflected in perplexity, ROUGE scores and human evaluations.
1 I NTRODUCTION
The sequence-to-sequence framework has demonstrated success in natural-language sequence trans-
duction tasks such as machine translation. More recently, neural techniques have been applied to do
single-document, abstractive (paraphrasing) text summarization of news articles (Rush et al. (2015),
Nallapati et al. (2016)). In this prior work, the input to supervised models ranged from the first sen-
tence to the entire text of an article, and they are trained end-to-end to predict reference summaries.
Doing this end-to-end requires a significant number of parallel article-summary pairs since language
understanding is a pre-requisite to generate fluent summaries.
In contrast, we consider the task of multi-document summarization, where the input is a collection
of related documents from which a summary is distilled. Prior work has focused on extractive
summarization, which selects sentences or phrases from the input to form the summaries, rather
than generating new text. There has been limited application of abstractive neural methods and one
possible reason is the paucity of large, labeled datasets.
In this work, we consider English Wikipedia as a supervised machine learning task for multi-
document summarization where the input is comprised of a Wikipedia topic (title of article) and
a collection of non-Wikipedia reference documents, and the target is the Wikipedia article text. We
describe the first attempt to abstractively generate the first section, or lead, of Wikipedia articles con-
ditioned on reference text. In addition to running strong baseline models on the task, we modify the
Transformer architecture (Vaswani et al., 2017) to only consist of a decoder, which performs better
in the case of longer input sequences compared to recurrent neural network (RNN) and Transformer
encoder-decoder models. Finally we show our modeling improvements allow us to generate entire
Wikipedia articles.
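A minimal sketch of the two-stage idea, not the exact pipeline evaluated in the paper: rank reference-document paragraphs by tf-idf similarity to the article title, keep the highest-scoring ones under a token budget, and feed the result to whatever abstractive model is being trained. The helper name, budget, and toy inputs are assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extractive_stage(title, paragraphs, budget_tokens=500):
    """Coarse extractive step: keep the paragraphs most similar to the topic/title."""
    vec = TfidfVectorizer().fit([title] + paragraphs)
    scores = cosine_similarity(vec.transform([title]), vec.transform(paragraphs))[0]
    selected, used = [], 0
    for idx in scores.argsort()[::-1]:            # best-scoring paragraphs first
        n_tok = len(paragraphs[idx].split())
        if used + n_tok <= budget_tokens:
            selected.append(paragraphs[idx])
            used += n_tok
    return " ".join(selected)                     # input to the abstractive model

paras = ["The Eiffel Tower is a wrought-iron lattice tower in Paris.",
         "Unrelated text about cooking pasta at home.",
         "It was designed by Gustave Eiffel's company for the 1889 World's Fair."]
print(extractive_stage("Eiffel Tower", paras, budget_tokens=40))
```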
∗Joint first-authors. Ordered randomly.
†Work done as a member of the Google Brain Residency (g.co/brainresidency)
|
10.1126.science.abo7201.pdf | RESEARCH ARTICLE SUMMARY◥
CORONAVIRUS
Open science discovery of potent noncovalent
SARS-CoV-2 main protease inhibitors
Melissa L. Boby †, Daren Fearon †, Matteo Ferla †, Mihajlo Filep †, Lizbé Koekemoer †,
Matthew C. Robinson †, The COVID Moonshot Consortium, John D. Chodera *, Alpha A. Lee *,
Nir London *, Annette von Delft *, Frank von Delft *
INTRODUCTION: COVID-19 became a global pan-
demic partially as a result of the lack of easily
deployable, broad-spectrum oral antivirals,
which complicated its containment. Even endemically, and with effective vaccinations, it will
continue to cause acute disease, death, and long-
term sequelae globally unless there are acces-
sible treatments. COVID-19 is not an isolated
event but instead is the latest example of a viral
pandemic threat to human health. Therefore,
antiviral discovery and development should be
a key pillar of pandemic preparedness efforts.
RATIONALE: One route to accelerate antiviral
drug discovery is the establishment of open
knowledge bases, the development of effective
technology infrastructures, and the discovery
of multiple potent antivirals suitable as start-
ing points for the development of therapeu-
tics. In this work, we report the results of the
COVID Moonshot —a fully open science, crowd-
sourced, and structure-enabled drug discovery
campaign —against the severe acute respiratory
syndrome coronavirus 2 (SARS-CoV-2) main
protease (Mpro). This collaboration may serve
as a roadmap for the potential development of
future antivirals.
RESULTS: On the basis of the results of a crys-
tallographic fragment screen, we crowdsourced design ideas to progress from fragment to
lead compounds. The crowdsourcing strat-
egy yielded several key compounds along the
optimization trajectory, including the starting compound of what became the primary lead
series. Three additional chemically distinct
lead series were also explored, spanning a di-
versity of chemotypes.
The collaborative and highly automated nature
of the COVID Moonshot Consortium resulted in
>18,000 compound designs, >2400 synthesized
compounds, >490 ligand-bound x-ray structures,
>22,000 alchemical free-energy calculations,
and >10,000 biochemical measurements —all
of which were made publicly available in real
time. The recently approved antiviral ensitrelvir
was identified in part based on crystallographic
data from the COVID Moonshot Consortium.
This campaign led to the discovery of a potent [median inhibitory concentration (IC50) = 37 ± 2 nM] and differentiated (noncovalent and nonpeptidic) lead compound that also exhibited
potent cellular activity, with a median effective
concentration (EC 50) of 64 nM in A549-ACE2-
TMPRSS2 cells and 126 nM in HeLa-ACE2 cells
without measurable cytotoxicity. Although the
pharmacokinetics of the reported compound is
not yet optimal for therapeutic development, it
is a promising starting point for further antiviral
discovery and development. CONCLUSION: The success of the COVID Moon-
shot project in producing potent antivirals,
building open knowledge bases, accelerating ex-
ternal discovery efforts, and functioning as a
useful information-exchange hub is an example
of the potential effectiveness of open science
antiviral discovery programs. The open science,
patent-free nature of the project enabled a large
number of collaborators to provide in-kind sup-
port, including synthesis, assays, and in vitro and
in vivo experiments. By making all data imme-
diately available and ensuring that all compounds
are purchasable from Enamine without the need
for materials transfer agreements, we aim to accelerate research globally along parallel tracks.
In the process, we generated a detailed map
of the structural plasticity of Mpro, extensive
structure-activity relationships for multiple
chemotypes, and a wealth of biochemical activ-
ity data to spur further research into antivirals
and discovery methodologies. We hope that this
can serve as an alternative model for antiviral
discovery and future pandemic preparedness.
Further, the project also showcases the role of
machine learning, computational chemistry, and
high-throughput structural biology as force mul-
tipliers in drug design. Artificial intelligence and
machine learning algorithms help accelerate
chemical synthesis while balancing multiple com-
peting molecular properties. The design-make-test-
analyze cycle was accelerated by these algorithms
combined with planetary-scale biomolecular sim-
ulations of protein-ligand interactions and rapid
structure determination.▪
The list of author affiliations is available in the full article online.
*Corresponding author. Email: john.chodera@choderalab.org (J.D.C.); alpha.lee@postera.ai (A.A.L.); nir.london@weizmann.
ac.il (N.L.); annette.vondelft@cmd.ox.ac.uk (A.v.D.);
frank.von-delft@diamond.ac.uk (F.v.D.)
†These authors contributed equally to this work.
Cite this article as M. L. Boby et al .,Science 382, eabo7201
(2023). DOI: 10.1126/science.abo7201
READ THE FULL ARTICLE AT
https://doi.org/10.1126/science.abo7201
[Graphical abstract, COVID Moonshot: crowdsourcing, multi-institute collaboration, and accelerated design-make-test cycles (route prediction, alchemical free-energy calculations, high-throughput crystallography, high-throughput assays); >18,000 designs, >2400 synthesized, >490 structures, >10,000 measurements, shared as open data; lead compound MAT-POS-e194df51-1 with IC50 37 nM, EC50 64 nM, and oral half-life 1.4 h.]
The COVID Moonshot Consortium. An open science, crowdsourced drug discovery campaign against the SARS-CoV-2 Mpro led to a potent, noncovalent, and nonpeptidic inhibitor scaffold with lead-like properties. We generated copious structural, biochemical, and pharmacological data that were shared rapidly and openly, creating a rich, open, and intellectual property-free knowledge base for future anticoronavirus drug discovery.
Boby et al., Science 382, 663 (2023) 10 November 2023
|
2306.16410.pdf | Towards Language Models That Can See:
Computer Vision Through the LENS
of Natural Language
William Berrios†Gautam Mittal†§Tristan Thrush†§
Douwe Kiela†§Amanpreet Singh†
†Contextual AI;§Stanford University
Abstract
We propose LENS, a modular approach for tackling computer vision problems by leveraging
the power of large language models (LLMs). Our system uses a language model to reason over
outputs from a set of independent and highly descriptive vision modules that provide exhaustive
information about an image. We evaluate the approach on pure computer vision settings such
as zero- and few-shot object recognition, as well as on vision and language problems. LENS
can be applied to any off-the-shelf LLM and we find that the LLMs with LENS perform highly
competitively with much bigger and much more sophisticated systems, without any multimodal
training whatsoever. We open-source our code at https://github.com/ContextualAI/lens
and provide an interactive demo1.
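A schematic sketch of the flow described above: frozen vision modules emit text about the image, the text is assembled into a prompt, and an off-the-shelf LLM answers. The module outputs, prompt template, and stub LLM below are placeholders, not the released LENS implementation.

```python
def describe_image(image_path):
    """Stand-in for the frozen vision modules; a real system would run
    open-vocabulary taggers, attribute classifiers, and captioners here."""
    return {"objects": ["dog", "surfboard", "wave"],
            "attributes": ["wet fur", "blue board"],
            "captions": ["a dog riding a surfboard on a wave"]}

def build_prompt(visual, question):
    return ("Objects: " + ", ".join(visual["objects"]) + "\n"
            + "Attributes: " + ", ".join(visual["attributes"]) + "\n"
            + "Captions: " + "; ".join(visual["captions"]) + "\n"
            + f"Question: {question}\nShort answer:")

def answer(image_path, question, llm=lambda prompt: "surfing"):
    # `llm` is any off-the-shelf text-only model; no multimodal training is involved.
    return llm(build_prompt(describe_image(image_path), question))

print(answer("dog.jpg", "What is the dog doing?"))
```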
[Figure 1 diagram: (a) multimodal-pretraining systems such as Flamingo (frozen LLM with XATTN layers and a Perceiver, trained on M3W / 43M webpages and ~2B image-video text pairs) and BLIP-2 (Q-Former and FC layer over COCO, Visual Genome, CC12M, SBU, LAION-400M, 115M images plus synthetic captions), plus old-style pretraining with cross-modality encoders over millions of paired image/text samples; (b) LENS (ours): a frozen LLM reasoning over visual descriptors (objects, attributes, captions) from a frozen image encoder, with no additional pre-training data. Example query: "Q: What is the dog doing?" with output "Surfing".]
Figure 1: Comparison of approaches for aligning visual and language modalities: (a) Multimodal
pretraining using a paired or web dataset, and (b) LENS, a pretraining-free method that can
be applied to any off-the-shelf LLM without the need for additional multimodal datasets. Unlike
LENS, prior methods are computationally intensive and require joint alignment pretraining on large
multimodal datasets to perform visual tasks.
1https://lens.contextual.ai/
Correspondence to lens@contextual.ai.
2005.00341.pdf | Jukebox: A Generative Model for Music
Prafulla Dhariwal* 1Heewoo Jun* 1Christine Payne* 1Jong Wook Kim1Alec Radford1Ilya Sutskever1
Abstract
We introduce Jukebox, a model that generates
music with singing in the raw audio domain. We
tackle the long context of raw audio using a multi-
scale VQ-VAE to compress it to discrete codes,
and modeling those using autoregressive Trans-
formers. We show that the combined model at
scale can generate high-fidelity and diverse songs
with coherence up to multiple minutes. We can
condition on artist and genre to steer the musical
and vocal style, and on unaligned lyrics to make
the singing more controllable. We are releasing
thousands of non cherry-picked samples, along
with model weights and code.
1. Introduction
Music is an integral part of human culture, existing from the
earliest periods of human civilization and evolving into a
wide diversity of forms. It evokes a unique human spirit in
its creation, and the question of whether computers can ever
capture this creative process has fascinated computer scien-
tists for decades. We have had algorithms generating piano
sheet music (Hiller Jr & Isaacson, 1957; Moorer, 1972;
Hadjeres et al., 2017; Huang et al., 2017), digital vocoders
generating a singer’s voice (Bonada & Serra, 2007; Saino
et al., 2006; Blaauw & Bonada, 2017) and also synthesizers
producing timbres for various musical instruments (Engel
et al., 2017; 2019). Each captures a specific aspect of music
generation: melody, composition, timbre, and the human
voice singing. However, a single system to do it all remains
elusive.
The field of generative models has made tremendous
progress in the last few years. One of the aims of gen-
erative modeling is to capture the salient aspects of the data
and to generate new instances indistinguishable from the
true data The hypothesis is that by learning to produce the
data we can learn the best features of the data1. We are
surrounded by highly complex distributions in the visual,
audio, and text domain, and in recent years we have devel-
*Equal contribution. 1OpenAI, San Francisco. Correspondence to: <jukebox@openai.com>.
oped advances in text generation (Radford et al.), speech
generation (Xie et al., 2017) and image generation (Brock
et al., 2019; Razavi et al., 2019). The rate of progress in
this field has been rapid, where only a few years ago we
had algorithms producing blurry faces (Kingma & Welling,
2014; Goodfellow et al., 2014) but now we can gener-
ate high-resolution faces indistinguishable from real ones
(Zhang et al., 2019b).
Generative models have been applied to the music genera-
tion task too. Earlier models generated music symbolically
in the form of a pianoroll, which specifies the timing, pitch,
velocity, and instrument of each note to be played. (Yang
et al., 2017; Dong et al., 2018; Huang et al., 2019a; Payne,
2019; Roberts et al., 2018; Wu et al., 2019). The symbolic
approach makes the modeling problem easier by working
on the problem in the lower-dimensional space. However, it
constrains the music that can be generated to being a specific
sequence of notes and a fixed set of instruments to render
with. In parallel, researchers have been pursuing the non-
symbolic approach, where they try to produce music directly
as a piece of audio. This makes the problem more challeng-
ing, as the space of raw audio is extremely high dimensional
with a high amount of information content to model. There
has been some success, with models producing piano pieces
either in the raw audio domain (Oord et al., 2016; Mehri
et al., 2017; Yamamoto et al., 2020) or in the spectrogram
domain (Vasquez & Lewis, 2019). The key bottleneck is
that modeling the raw audio directly introduces extremely
long-range dependencies, making it computationally chal-
lenging to learn the high-level semantics of music. A way to
reduce the difficulty is to learn a lower-dimensional encod-
ing of the audio with the goal of losing the less important
information but retaining most of the musical information.
This approach has demonstrated some success in generat-
ing short instrumental pieces restricted to a set of a few
instruments (Oord et al., 2017; Dieleman et al., 2018).
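The compression step mentioned above, mapping continuous encoder features to discrete codes with a VQ-VAE codebook, reduces to a nearest-neighbour lookup; the shapes and the random codebook below are stand-ins, not Jukebox's multi-scale model.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))        # 512 learned code vectors of dim 64
features = rng.normal(size=(1000, 64))       # encoder output for 1000 audio frames

# Vector quantization: assign each frame to its nearest codebook entry.
dists = ((features**2).sum(1, keepdims=True)
         - 2.0 * features @ codebook.T
         + (codebook**2).sum(1))
codes = dists.argmin(axis=1)                 # discrete tokens, shape (1000,)
quantized = codebook[codes]                  # what the decoder reconstructs from

print("1000 x 64 float frames ->", codes.shape[0], "integer codes (vocab 512)")
# An autoregressive Transformer is then trained over these code sequences (at
# several temporal resolutions) to capture long-range musical structure.
```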
In this work, we show that we can use state-of-the-art deep
generative models to produce a single system capable of gen-
erating diverse high-fidelity music in the raw audio domain,
with long-range coherence spanning multiple minutes. Our
approach uses a hierarchical VQ-VAE architecture (Razavi
1Richard Feynman famously said, “What I cannot create, I
do not understand” |
1905.01969v4.pdf | Published as a conference paper at ICLR 2020
Poly-encoders: architectures and pre-training
strategies for fast and accurate multi-sentence scoring
Samuel Humeau∗, Kurt Shuster∗, Marie-Anne Lachaux, Jason Weston
Facebook AI Research
{samuelhumeau,kshuster,malachaux,jase }@fb.com
Abstract
The use of deep pre-trained transformers has led to remarkable progress in a num-
ber of applications (Devlin et al., 2019). For tasks that make pairwise compar-
isons between sequences, matching a given input with a corresponding label, two
approaches are common: Cross-encoders performing full self-attention over the
pair and Bi-encoders encoding the pair separately. The former often performs
better, but is too slow for practical use. In this work, we develop a new trans-
former architecture, the Poly-encoder , that learns global rather than token level
self-attention features. We perform a detailed comparison of all three approaches,
including what pre-training and fine-tuning strategies work best. We show our
models achieve state-of-the-art results on four tasks; that Poly-encoders are faster
than Cross-encoders and more accurate than Bi-encoders; and that the best results
are obtained by pre-training on large datasets similar to the downstream tasks.
1 I ntroduction
Recently, substantial improvements to state-of-the-art benchmarks on a variety of language under-
standing tasks have been achieved through the use of deep pre-trained language models followed by
fine-tuning (Devlin et al., 2019). In this work we explore improvements to this approach for the class
of tasks that require multi-sentence scoring: given an input context, score a set of candidate labels,
a setup common in retrieval and dialogue tasks, amongst others. Performance in such tasks has to
be measured via two axes: prediction quality and prediction speed, as scoring many candidates can
be prohibitively slow.
The current state-of-the-art focuses on using BERT models for pre-training (Devlin et al., 2019),
which employ large text corpora on general subjects: Wikipedia and the Toronto Books Corpus
(Zhu et al., 2015). Two classes of fine-tuned architecture are typically built on top: Bi-encoders and
Cross-encoders. Cross-encoders (Wolf et al., 2019; Vig & Ramea, 2019), which perform full (cross)
self-attention over a given input and label candidate, tend to attain much higher accuracies than their
counterparts, Bi-encoders (Mazaré et al., 2018; Dinan et al., 2019), which perform self-attention
over the input and candidate label separately and combine them at the end for a final representa-
tion. As the representations are separate, Bi-encoders are able to cache the encoded candidates, and
reuse these representations for each input resulting in fast prediction times. Cross-encoders must
recompute the encoding for each input and label; as a result, they are prohibitively slow at test time.
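A toy sketch of the speed argument above: with a Bi-encoder, candidate labels are encoded once and cached, so scoring a new input costs one encode plus a matrix product, whereas a Cross-encoder must re-run the full model on every (input, candidate) pair. The hashed bag-of-words encoder is a stand-in assumption for a real Transformer Bi-encoder.

```python
import numpy as np

def encode(texts):
    """Stand-in encoder (hashed bag-of-words). A real Bi-encoder would run a
    Transformer and take a pooled vector per text."""
    out = np.zeros((len(texts), 512))
    for i, t in enumerate(texts):
        for tok in t.lower().split():
            out[i, hash(tok) % 512] += 1.0
    return out / (np.linalg.norm(out, axis=1, keepdims=True) + 1e-9)

candidates = ["I love playing tennis on weekends.",
              "The weather is terrible today.",
              "Tennis rackets sometimes need restringing."]
cand_vecs = encode(candidates)          # Bi-encoder: encoded once, cached

def score(context):
    ctx_vec = encode([context])[0]      # only the new input is encoded at test time
    return cand_vecs @ ctx_vec          # one matrix product scores all candidates

print(score("do you like tennis").round(3))
# A Cross-encoder would concatenate the context with EACH candidate and re-run
# the full model once per pair: usually more accurate, but far slower to serve.
```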
In this work, we provide novel contributions that improve both the quality and speed axes over the
current state-of-the-art. We introduce the Poly-encoder, an architecture with an additional learnt at-
tention mechanism that represents more global features from which to perform self-attention, result-
ing in performance gains over Bi-encoders and large speed gains over Cross-Encoders. To pre-train
our architectures, we show that choosing abundant data more similar to our downstream task also
brings significant gains over BERT pre-training. This is true across all di fferent architecture choices
and downstream tasks we try.
We conduct experiments comparing the new approaches, in addition to analysis of what works best
for various setups of existing methods, on four existing datasets in the domains of dialogue and in-
formation retrieval (IR), with pre-training strategies based on Reddit (Mazaré et al., 2018) compared
∗Joint First Authors.
2401.18079.pdf | KVQuant: Towards 10 Million Context Length LLM Inference
with KV Cache Quantization
Coleman Hooper
chooper@berkeley.edu
UC BerkeleySehoon Kim
sehoonkim@berkeley.edu
UC BerkeleyHiva Mohammadzadeh
hiva@berkeley.edu
UC Berkeley
Michael W. Mahoney
mmahoney@stat.berkeley.edu
ICSI, LBNL, UC BerkeleyYakun Sophia Shao
ysshao@berkeley.edu
UC BerkeleyKurt Keutzer
keutzer@berkeley.edu
UC Berkeley
Amir Gholami
amirgh@berkeley.edu
ICSI, UC Berkeley
ABSTRACT
LLMs are seeing growing use for applications which require large
context windows, and with these large context windows KV cache
activations surface as the dominant contributor to memory con-
sumption during inference. Quantization is a promising approach
for compressing KV cache activations; however, existing solutions
fail to represent activations accurately in sub-4-bit precision. Our
work, KVQuant, facilitates low precision KV cache quantization
by incorporating several novel methods: (i) Per-Channel Key Quan-
tization , where we adjust the dimension along which we quan-
tize the Key activations to better match the distribution; (ii) Pre-
RoPE Key Quantization , where we quantize Key activations before
the rotary positional embedding to mitigate its impact on quan-
tization; (iii) Non-Uniform KV Cache Quantization , where we de-
rive per-layer sensitivity-weighted non-uniform datatypes that
better represent the distributions; (iv) Per-Vector Dense-and-Sparse
Quantization , where we isolate outliers separately for each vec-
tor to minimize skews in quantization ranges; and (v) Q-Norm ,
where we normalize quantization centroids in order to mitigate
distribution shift, providing additional benefits for 2-bit quantiza-
tion. By applying our method to the LLaMA, LLaMA-2, and Mis-
tral models, we achieve <0.1 perplexity degradation with 3-bit
quantization on both Wikitext-2 and C4, outperforming existing
approaches. Our method enables serving LLaMA-7B with a con-
text length of up to 1 million on a single A100-80GB GPU
and up to 10 million on an 8-GPU system. We develop cus-
tom CUDA kernels for KVQuant, showing that we can achieve
up to ∼1.4× speedups, compared to baseline fp16 matrix-vector
multiplications, for the LLaMA-7B model. The code is available at
https://github.com/SqueezeAILab/KVQuant/.
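As a toy illustration of point (i), per-channel rather than per-token grouping for Key quantization, the snippet below compares the two on a synthetic Key matrix with one structured outlier channel; the shapes, the 3-bit uniform quantizer, and the error metric are illustrative assumptions, not the paper's kernels.

```python
import numpy as np

def quantize(x, axis, bits=3):
    """Uniform quantization with one scale/zero-point per slice along `axis`:
    for a [tokens, channels] Key matrix, axis=0 groups per channel, axis=1 per token."""
    lo = x.min(axis=axis, keepdims=True)
    hi = x.max(axis=axis, keepdims=True)
    scale = (hi - lo) / (2**bits - 1) + 1e-12
    q = np.clip(np.round((x - lo) / scale), 0, 2**bits - 1)
    return q * scale + lo

rng = np.random.default_rng(0)
K = rng.normal(size=(1024, 128))      # [tokens, channels] Key activations
K[:, 7] *= 40.0                       # one structured outlier channel

for name, axis in (("per-token", 1), ("per-channel", 0)):
    mse = np.mean((K - quantize(K, axis)) ** 2)
    print(f"{name:12s} 3-bit reconstruction MSE: {mse:.3f}")
# Per-channel grouping gives the outlier channel its own scale, so it no longer
# inflates the quantization step for every other channel of each token.
```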
1 INTRODUCTION
Large language models (LLMs) have revolutionized many natural
language processing (NLP) tasks. In order to improve the capabili-
ties of LLMs, there is significant interest in increasing the context
lengths of LLMs. Longer context lengths enable new applications,
including long document summarization, retrieval for answering
questions about long documents, extended multi-turn applications
[4], and code analysis. To support this pull from applications, there have been significant recent advances in long-context length mod-
els in industry [1, 27], as well as in academia [4].
Given the importance of LLM workloads, there is strong motiva-
tion to improve their inference efficiency. LLM inference with large
context lengths can be incredibly resource-intensive; serving LLMs
requires high-end GPUs, and the largest LLMs require costly multi-
GPU inference setups. When analyzing the computational nature
of generative inference with LLMs, it becomes quickly apparent
that, for relatively small batch sizes, the computation is memory
bound [18]. With the growing divergence between computational
speeds and memory speeds, this problem is only going to get worse
over time [ 13]. This makes reducing the memory bottleneck preem-
inently important. Further analysis shows that the memory bottle-
neck is strongly related to context size. For short sequence lengths,
the dominant contributor to memory consumption is the weight ma-
trices, and therefore the optimal strategy is to minimize the model
size in order to reduce memory consumption as well as bandwidth
requirements [ 18,19]. However, for long sequence lengths, the
main bottleneck is the memory requirements for caching Key and
Value (KV) activations throughout inference. In particular, the size
of the KV cache can become the dominant contributor to memory
footprint, even for a 32K context limit (see Table 1), making it chal-
lenging to perform long context length inference. This challenge is
further exacerbated when one considers batched inference.
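To make this context-length argument concrete, the back-of-the-envelope sketch below computes KV cache size for approximate LLaMA-7B-style constants (32 layers, 32 heads, head dimension 128, fp16); the constants are rough assumptions and the formula ignores implementation overheads.

```python
def kv_cache_bytes(seq_len, batch=1, layers=32, heads=32, head_dim=128, bytes_per_elt=2):
    # Keys and Values: one (heads * head_dim) vector each, per token, per layer.
    return 2 * batch * layers * heads * head_dim * seq_len * bytes_per_elt

for ctx in (4_096, 32_768, 131_072, 1_000_000):
    fp16_gib = kv_cache_bytes(ctx) / 2**30
    print(f"context {ctx:>9,}: fp16 KV cache ~{fp16_gib:7.1f} GiB, "
          f"~{fp16_gib / 8:6.1f} GiB at ~2 bits per element")
# At long contexts the fp16 cache dwarfs the roughly 13 GiB of fp16 weights for a
# 7B model, which is why compressing it to a few bits per element matters.
```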
It is therefore crucial to develop methods for compressing the KV
cache to enable efficient long-sequence length inference. Existing
approaches lead to unacceptable accuracy degradation due to the
outlier structures in KV cache activations as well as suboptimal bit
allocation with existing uniform and non-uniform approaches. In
this work, we perform an extensive analysis of KV cache activa-
tions in recent LLMs, revealing patterns which can be exploited to
enable ultra-low precision quantization with minimal accuracy loss.
In particular, we make the following contributions (summarized
in Figure 1):
•We find that the Key matrices exhibit structured outliers in spe-
cific channels before applying RoPE. However, the outlier channel
magnitudes become less consistent after applying RoPE, posing
a distinct challenge for low precision quantization. Based on
these observations, we use per-channel quantization for Keys,
2305.15717.pdf | The False Promise of Imitating Proprietary LLMs
Arnav Gudibande∗
UC Berkeley
arnavg@berkeley.eduEric Wallace∗
UC Berkeley
ericwallace@berkeley.eduCharlie Snell∗
UC Berkeley
csnell22@berkeley.edu
Xinyang Geng
UC Berkeley
young.geng@berkeley.eduHao Liu
UC Berkeley
hao.liu@berkeley.eduPieter Abbeel
UC Berkeley
pabbeel@berkeley.edu
Sergey Levine
UC Berkeley
svlevine@berkeley.eduDawn Song
UC Berkeley
dawnsong@berkeley.edu
Abstract
An emerging method to cheaply improve a weaker language model is to finetune
it on outputs from a stronger model, such as a proprietary system like ChatGPT
(e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply imitate the
proprietary model’s capabilities using a weaker open-source model. In this work,
we critically analyze this approach. We first finetune a series of LMs that imitate
ChatGPT using varying base model sizes (1.5B–13B), data sources, and imitation
data amounts (0.3M–150M tokens). We then evaluate the models using crowd
raters and canonical NLP benchmarks. Initially, we were surprised by the output
quality of our imitation models—they appear far better at following instructions,
and crowd workers rate their outputs as competitive with ChatGPT. However, when
conducting more targeted automatic evaluations, we find that imitation models
close little to none of the gap from the base LM to ChatGPT on tasks that are
not heavily supported in the imitation data. We show that these performance
discrepancies may slip past human raters because imitation models are adept at
mimicking ChatGPT’s style but not its factuality . Overall, we conclude that model
imitation is a false promise: there exists a substantial capabilities gap between open
and closed LMs that, with current methods, can only be bridged using an unwieldy
amount of imitation data or by using more capable base LMs. In turn, we argue
that the highest leverage action for improving open-source models is to tackle the
difficult challenge of developing better base LMs, rather than taking the shortcut of
imitating proprietary systems.
1 Introduction
The recent release of powerful language models (LMs) such as ChatGPT (OpenAI, 2022),
Bard (Pichai, 2023), and Claude (AnthropicAI, 2023) might herald a future where the best AI
systems are provided primarily as a fee-based API by large companies. At the same time, open-source
LMs are becoming increasingly accurate, with models like LLaMA and FLAN-T5 providing many
of the same basic capabilities as their commercial counterparts, albeit at a lower level of perfor-
mance (Touvron et al., 2023; Chung et al., 2022). This presents an important question, whose answer
will have profound future implications: will the most powerful LMs be closed-source or will they be
freely distributed for anyone to use, modify, and extend? Both possibilities have important pros and
cons, and implications on policy, corporate strategy, and the future of scientific inquiry.
∗Equal Contribution.
Preprint. Under review.
2306.02707.pdf | Orca: Progressive Learning from Complex
Explanation Traces of GPT-4
Subhabrata Mukherjee∗†, Arindam Mitra∗
Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, Ahmed Awadallah
Microsoft Research
Abstract
Recent research has focused on enhancing the capability of smaller models
through imitation learning, drawing on the outputs generated by large
foundation models (LFMs). A number of issues impact the quality of these
models, ranging from limited imitation signals from shallow LFM outputs;
small scale homogeneous training data; and most notably a lack of rigorous
evaluation resulting in overestimating the small model’s capability as they
tend to learn to imitate the style, but not the reasoning process of LFMs . To
address these challenges, we develop Orca, a 13-billion parameter model
that learns to imitate the reasoning process of LFMs. Orca learns from
rich signals from GPT-4 including explanation traces; step-by-step thought
processes; and other complex instructions, guided by teacher assistance from
ChatGPT. To promote this progressive learning, we tap into large-scale and
diverse imitation data with judicious sampling and selection. Orca surpasses
conventional state-of-the-art instruction-tuned models such as Vicuna-13B
by more than 100% in complex zero-shot reasoning benchmarks like Big-
Bench Hard (BBH) and 42% on AGIEval. Moreover, Orca reaches parity
with ChatGPT on the BBH benchmark and shows competitive performance
(4pts gap with optimized system message) in professional and academic
examinations like the SAT, LSAT, GRE, and GMAT, both in zero-shot
settings without CoT; while trailing behind GPT-4. Our research indicates
that learning from step-by-step explanations, whether these are generated
by humans or more advanced AI models, is a promising direction to improve
model capabilities and skills.
∗Co-primary authors. Author contributions listed at the end of the paper.
†Correspondence to subhabrata.mukherjee@microsoft.com
We are working with our legal team to publicly release a diff of the model weights in accordance
with LLaMA’s release policy to be published at https://aka.ms/orca-lm .
Work in progress.
109_how_well_do_generative_protein.pdf | HOW WELL DO GENERATIVE PROTEIN MODELS GENERATE ?
Han Spinner
Department of Systems Biology
Harvard Medical SchoolAaron W. Kollasch
Department of Systems Biology
Harvard Medical SchoolDebora S. Marks
Department of Systems Biology
Harvard Medical School
ABSTRACT
Protein design relies critically on the generation of plausible sequences. Yet, the efficacy
of many common model architectures from simple interpretable models, like position-
specific scoring matrix (PSSM) and direct couplings analysis (DCA), to newer and less
interpretable models, like variational autoencoders (VAEs), autoregressive large language
models (AR-LLMs) and flow matching (FM), for sequence sampling remains uncertain.
While some models offer unique sequence generation methods, issues such as mode col-
lapse, generation of nonsensical repeats, and protein truncations persist. Trusted methods
like Gibbs sampling are often preferred for their reliability, but can be computationally
expensive. This paper addresses the need to evaluate the performance and limitations of
different generation methods from protein models, considering dependencies on multiple
sequence alignment (MSA) depth and available sequence diversity. We propose rigorous
evaluation methods and metrics to assess sequence generation, aiming to guide design de-
cisions and inform the development of future model and sampling techniques for protein
design applications.
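As a baseline for the simplest generator in the comparison above, the snippet builds a position-specific scoring matrix from a toy alignment and samples sequences column by column; the alignment is invented, and a real PSSM would use sequence weighting and better pseudocounts.

```python
import numpy as np

AA = list("ACDEFGHIKLMNPQRSTVWY")
msa = ["MKVLAA", "MKILSA", "MRVLAA", "MKVLTA"]   # toy alignment (invented)

# PSSM: per-column amino-acid frequencies with a Laplace pseudocount of 1.
counts = np.ones((len(msa[0]), len(AA)))
for seq in msa:
    for pos, aa in enumerate(seq):
        counts[pos, AA.index(aa)] += 1
pssm = counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
def sample_pssm(n):
    return ["".join(AA[rng.choice(len(AA), p=pssm[pos])]
                    for pos in range(pssm.shape[0]))
            for _ in range(n)]

print(sample_pssm(3))
# Because every column is sampled independently, a PSSM cannot capture couplings
# between positions, which is what DCA, VAEs, and AR-LLMs try to model.
```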
1 I NTRODUCTION
Using machine learning to design proteins is useless unless we can generate plausible sequences, regardless
of training data or model type. Many different approaches to protein design, aimed at different goals, have
been quite successful (Shin et al., 2021; Madani et al., 2023; Lian et al., 2022; Hawkins-Hooker et al.,
2021), and all of these projects have hinged on generating sequences that ‘make sense’. In almost all protein
engineering and protein design quests, we want to create proteins that fold and function. However, conditions
that encourage stability, dynamic movements, tolerance to stressors, proper expression levels, etc, are often
specific protein-to-protein and project-to-project. In order to increase efficacy of these studies we must ask
the simple question: How well do generative protein models generate?
Newer model architectures, such as variational autoencoders (VAE) and autoregressive large language mod-
els (AR-LLM), have shown some promise for function and structure prediction Frazer et al. (2021); Hsu
et al. (2022); Notin et al. (2022). And the importance of comparing to simpler, more interpretable models
has also been noted (Zhang et al., 2024). Often, however, there are no theoretical guarantees that models
successful for structure or fitness predictions are also guaranteed to be better for sampling new sequences.
These models’ architectures enable unique ways of generating: for instance, sampling sequences from a
learned latent space from a VAE and ancestral sampling for AR-LLMs. But often, there are malignancies
that come from these generation methods that go ignored.
When drawing sequences from the latent space of a VAE, it is common to observe issues with the diversity
of sequences that are generated: (a) mode collapse, (b) posterior collapse, (c) even distribution of sequence
diversity across the length of the protein, and/or (d) low quality sequences that have mutated active site or
other key residues (Figure 1a). Examples of malignancies from ancestral sampling from AR-LLMs include:
Pursuing-structural-biology-in-China-cell.pdf | Leading Edge
Conversations
Pursuing structural biology in China
In November 2023, structural biologists from different countries and different disciplines gathered at the Cell
Symposium: Structural biology from the nanoscale to cellular mesoscale to discuss recent breakthroughs, including structures of proteins and macromolecular complexes in a cellular context as well as virus structures obtained by using different techniques. At the symposium, Cell editor Jia Cheng and Karin Kühnel,
editor-in-chief of Structure, spoke with Drs. Beili Wu, Mingjie Zhang, and Zihe Rao about their experiences
doing structural biology research in China and about their perspectives for the future. An edited transcript of the conversation is presented below, and the full conversation is available with the article online.
Jia Cheng: It’s my great pleasure to have Dr. Mingjie Zhang,
Dr. Beili Wu [and later Dr. Zihe Rao] join this conversation. I
would like to start with the first question. Could you tell us about how each of you got interested in structural biology or structural
neuroscience? Beili Wu: I got interested in structural biology after I joined
Professor Rao’s lab in Tsinghua University as a PhD student. I
was impressed by the beauty of protein crystals. It's like the most beautiful jewelry that I can grow by myself. Later, I was
fascinated by the logic of structures because everything
Figure 1. (L to R) Jia Cheng, Karin Kühnel, Zihe Rao, Mingjie Zhang, Beili Wu
Cell 187, February 1, 2024 ©2024 Elsevier Inc. 513 |
HyvO00-icatut.pdf | Independent Component Analysis: A Tutorial
Aapo Hyvärinen and Erkki Oja
Helsinki University of Technology, Laboratory of Computer and Information Science
P.O. Box 5400, FIN-02015 Espoo, Finland
aapo.hyvarinen@hut.fi, erkki.oja@hut.fi
http://www.cis.hut.fi/projects/ica/
A version of this paper will appear in Neural Networks with the title "Independent Component Analysis: Algorithms and Applications", April 1999
1 Motivation
Imagine that you are in a room where two people are speaking simultaneously. You have two microphones, which you hold in different locations. The microphones give you two recorded time signals, which we could denote by x1(t) and x2(t), with x1 and x2 the amplitudes, and t the time index. Each of these recorded signals is a weighted sum of the speech signals emitted by the two speakers, which we denote by s1(t) and s2(t). We could express this as a linear equation:
x1(t) = a11 s1 + a12 s2    (1)
x2(t) = a21 s1 + a22 s2    (2)
where a11, a12, a21, and a22 are some parameters that depend on the distances of the microphones from the speakers. It would be very useful if you could now estimate the two original speech signals s1(t) and s2(t), using only the recorded signals x1(t) and x2(t). This is called the cocktail-party problem. For the time being, we omit any time delays or other extra factors from our simplified mixing model.
As an illustration, consider the waveforms in Fig. 1 and Fig. 2. These are, of course, not realistic speech signals, but suffice for this illustration. The original speech signals could look something like those in Fig. 1 and the mixed signals could look like those in Fig. 2. The problem is to recover the data in Fig. 1 using only the data in Fig. 2.
Actually, if we knew the parameters aij, we could solve the linear equation in (1) by classical methods. The point is, however, that if you don't know the aij, the problem is considerably more difficult.
One approach to solving this problem would be to use some information on the statistical properties of the signals si(t) to estimate the aij. Actually, and perhaps surprisingly, it turns out that it is enough to assume that s1(t) and s2(t), at each time instant t, are statistically independent. This is not an unrealistic assumption in many cases, and it need not be exactly true in practice. The recently developed technique of Independent Component Analysis, or ICA, can be used to estimate the aij based on the information of their independence, which allows us to separate the two original source signals s1(t) and s2(t) from their mixtures x1(t) and x2(t). Fig. 3 gives the two signals estimated by the ICA method. As can be seen, these are very close to the original source signals (their signs are reversed, but this has no significance.)
Independent component analysis was originally developed to deal with problems that are closely related to the cocktail-party problem. Since the recent increase of interest in ICA, it has become clear that this principle has a lot of other interesting applications as well.
2211.06738.pdf | Formalizing the presumption of independence
Paul Christiano, Eric Neyman, Mark Xu
Alignment Research Center
Abstract
Mathematical proof aims to deliver confident conclusions, but a very similar process of
deduction can be used to make uncertain estimates that are open to revision. A key ingredient
in such reasoning is the use of a “default” estimate of E[XY] = E[X]E[Y] in the absence of any
specific information about the correlation between X and Y, which we call the presumption of
independence. Reasoning based on this heuristic is commonplace, intuitively compelling, and
often quite successful—but completely informal.
In this paper we introduce the concept of a heuristic estimator as a potential formalization of
this type of defeasible reasoning. We introduce a set of intuitively desirable coherence properties
for heuristic estimators that are not satisfied by any existing candidates. Then we present our
main open problem: is there a heuristic estimator that formalizes intuitively valid applications
of the presumption of independence without also accepting spurious arguments?
Many formally-specified questions are very hard to settle with proofs. There are famous examples
like the twin prime conjecture, but also countless more mundane examples like how quickly the
temperature of a simulated room would change if the window were opened.
Even when we cannot prove a theorem, we can often deductively arrive at a reasonable best guess
about the truth of a claim or the behavior of a system. We can make probabilistic arguments about
the structure of the primes to estimate the density of twin primes, or about small molecules moving
randomly in order to estimate the rate of heat transfer.
This reasoning requires making best guesses about quantities that we can't calculate exactly. We
can often do this using the presumption of independence: when trying to estimate E[XY] without
any knowledge about the relationship between X and Y, we can use E[X]E[Y] as a default guess
rather than remaining completely agnostic. For example, we can provisionally treat “x is prime”
and “x + 2 is prime” as independent, or treat the velocities of different air molecules as uncorrelated.
This principle is sufficient to make plausible estimates about a very wide range of mathematical
quantities. But it is not clear how to formalize this kind of defeasible reasoning, nor is it clear how
to generalize our default guess to the situation where we have arbitrary partial information about
how X and Y are related.
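The twin-prime example can be made concrete: presuming the events "x is prime" and "x + 2 is prime" to be independent, each with probability roughly 1/ln x, gives a default estimate of the twin-prime count, which the snippet below compares against an exact count. The use of sympy, the cutoff N, and the exact form of the naive estimate are assumptions for illustration; the true asymptotics carry an extra constant of about 1.32 that the naive presumption misses.

```python
from math import log
from sympy import isprime

N = 100_000

# Exact count of twin-prime pairs (p, p + 2) with p < N.
actual = sum(1 for x in range(3, N) if isprime(x) and isprime(x + 2))

# Presumption of independence: Pr[x prime and x + 2 prime] ~ (1 / ln x)**2.
naive = sum(1 / log(x) ** 2 for x in range(3, N))

print(f"twin-prime pairs below {N}: actual = {actual}, naive estimate ~ {naive:.0f}")
# The default guess gets the order of magnitude right; the Hardy-Littlewood
# constant (~1.32) is exactly the correction for the two events not being
# independent, i.e. a revision of the E[XY] = E[X]E[Y] default.
```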
Heuristic reasoning using the presumption of independence is distinct from running experiments
or Monte Carlo simulations. We are not merely observing a lot of twin primes and inferring that
there are probably infinitely many of them, or running simulations of a room and observing how
quickly the temperature changes—we have found a good reason that our answer should be right
unless there is additional structure that we’ve overlooked which changes the answer.
1805.00899.pdf | AI safety via debate
Geoffrey Irving∗Paul Christiano
OpenAIDario Amodei
Abstract
To make AI systems broadly useful for challenging real-world tasks, we need them to learn
complex human goals and preferences. One approach to specifying complex goals asks humans to
judge during training which agent behaviors are safe and useful, but this approach can fail if the
task is too complicated for a human to directly judge. To help address this concern, we propose
training agents via self play on a zero sum debate game. Given a question or proposed action,
two agents take turns making short statements up to a limit, then a human judges which of the
agents gave the most true, useful information. In an analogy to complexity theory, debate with
optimal play can answer any question in PSPACE given polynomial time judges (direct judging
answers only NP questions). In practice, whether debate works involves empirical questions
about humans and the tasks we want AIs to perform, plus theoretical questions about the
meaning of AI alignment. We report results on an initial MNIST experiment where agents
compete to convince a sparse classifier, boosting the classifier’s accuracy from 59.4% to 88.9%
given 6 pixels and from 48.2% to 85.2% given 4 pixels. Finally, we discuss theoretical and
practical aspects of the debate model, focusing on potential weaknesses as the model scales up,
and we propose future human and computer experiments to test these properties.
1 Introduction
Learning to align an agent’s actions with the values and preferences of humans is a key challenge in
ensuring that advanced AI systems remain safe [Russell et al., 2016]. Subtle problems in alignment
can lead to unexpected and potentially unsafe behavior [Amodei et al., 2016], and we expect this
problem to get worse as systems become more capable. Alignment is a training-time problem: it
is difficult to retroactively fix the behavior and incentives of trained unaligned agents. Alignment
likely requires interaction with humans during training, but care is required in choosing the precise
form of the interaction as supervising the agent may itself be a challenging cognitive task.
For some tasks it is harder to bring behavior in line with human goals than for others. In simple
cases, humans can directly demonstrate the behavior—this is the case of supervised learning or
imitation learning, for example classifying an image or using a robotic gripper to pick up a block.
For these tasks alignment with human preferences can in principle be achieved by imitating the
human, and is implicit in existing ML approaches (although issues of bias in the training data still
arise, see e.g. Mitchell and Shadlen [2018]). Taking a step up in alignment difficulty, some tasks are
too difficult for a human to perform, but a human can still judge the quality of behavior or answers
once shown to them—for example a robot doing a backflip in an unnatural action space. This is
the case of human preference-based reinforcement learning [Christiano et al., 2017]. We can make
∗Corresponding author: irving@openai.com
2401.10020.pdf | Self-Rewarding Language Models
Weizhe Yuan1,2Richard Yuanzhe Pang1,2Kyunghyun Cho2
Xian Li1Sainbayar Sukhbaatar1Jing Xu1Jason Weston1,2
1Meta2NYU
Abstract
We posit that to achieve superhuman agents, future models require super-
human feedback in order to provide an adequate training signal. Current
approaches commonly train reward models from human preferences, which
may then be bottlenecked by human performance level, and secondly these
separate frozen reward models cannot then learn to improve during LLM
training. In this work, we study Self-Rewarding Language Models , where the
language model itself is used via LLM-as-a-Judge prompting to provide its
own rewards during training. We show that during Iterative DPO training
that not only does instruction following ability improve, but also the ability
to provide high-quality rewards to itself. Fine-tuning Llama 2 70B on three
iterations of our approach yields a model that outperforms many existing
systems on the AlpacaEval 2.0 leaderboard, including Claude 2, Gemini
Pro, and GPT-4 0613. While there is much left still to explore, this work
opens the door to the possibility of models that can continually improve in
both axes.
1 Introduction
Aligning Large Language Models (LLMs) using human preference data can vastly improve
the instruction following performance of pretrained models [Ouyang et al., 2022, Bai et al.,
2022a]. The standard approach of Reinforcement Learning from Human Feedback (RLHF)
learns a reward model from these human preferences. The reward model is then frozen and
used to train the LLM using RL, e.g., via PPO [Schulman et al., 2017]. A recent alternative
is to avoid training the reward model at all, and directly use human preferences to train the
LLM, as in Direct Preference Optimization [DPO; Rafailov et al., 2023]. In both cases, the
approach is bottlenecked by the size and quality of the human preference data, and in the
case of RLHF the quality of the frozen reward model trained from them as well.
In this work, we instead propose to train a self-improving reward model that, rather than
being frozen, is continually updating during LLM alignment, in order to avoid this bottleneck.
The key to such an approach is to develop an agent that possesses all the abilities desired
during training, rather than separating them out into distinct models such as a reward
model and a language model. In the same way that pretraining and multitasking training of
instruction following tasks allow task transfer by training on many tasks at once [Collobert
and Weston, 2008, Radford et al., 2019, Ouyang et al., 2022], incorporating the reward
model into that same system allows task transfer between the reward modeling task and the
instruction following tasks.
We thus introduce Self-Rewarding Language Models , that both (i) act as instruction following
models generating responses for given prompts; and (ii) can generate and evaluate new
instruction following examples to add to their own training set. We train these models
using an Iterative DPO framework similar to that recently introduced in Xu et al. [2023].
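A schematic sketch of one iteration of the loop described above: the same model generates candidate responses, scores them as an LLM-as-a-Judge, and the resulting (chosen, rejected) pairs feed a DPO-style update. The StubModel and its methods are placeholders standing in for a real LLM stack, not the paper's implementation.

```python
import random

def self_rewarding_iteration(model, prompts, n_candidates=4):
    """One schematic Self-Rewarding iteration: generate, self-judge, DPO-update."""
    pairs = []
    for prompt in prompts:
        candidates = [model.generate(prompt) for _ in range(n_candidates)]
        scores = [model.judge(prompt, r) for r in candidates]   # same model as judge
        best = max(range(n_candidates), key=scores.__getitem__)
        worst = min(range(n_candidates), key=scores.__getitem__)
        if scores[best] > scores[worst]:
            pairs.append({"prompt": prompt,
                          "chosen": candidates[best],
                          "rejected": candidates[worst]})
    return model.dpo_update(pairs)          # model used in the next iteration

class StubModel:
    """Toy stand-in so the loop runs end to end; a real setup wraps an LLM."""
    def generate(self, prompt):    return f"{prompt} -> draft {random.random():.3f}"
    def judge(self, prompt, resp): return random.uniform(0, 5)  # fake 0-5 judge score
    def dpo_update(self, pairs):   print(f"DPO update on {len(pairs)} pairs"); return self

self_rewarding_iteration(StubModel(), ["Write a haiku about rivers."])
```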
2401.12192.pdf | Text Embedding Inversion Attacks on Multilingual Language Models
Yiyi Chen Heather Lent Johannes Bjerva
Department of Computer Science, Aalborg University, Denmark
{yiyic, hcle, jbjerva}@cs.aau.dk
Abstract
Representing textual information as real-
numbered embeddings has become the norm in
NLP. Moreover, with the rise of public interest
in large language models (LLMs), Embeddings
as a Service (EaaS) has rapidly gained traction
as a business model. This is not without out-
standing security risks, as previous research
has demonstrated that sensitive data can be re-
constructed from embeddings, even without
knowledge of the underlying model that gen-
erated them. However, such work is limited
by its sole focus on English, leaving all other
languages vulnerable to attacks by malicious
actors. To this end, this work investigates LLM
security from the perspective of multilingual
embedding inversion. Concretely, we define the
problem of black-box multilingual and cross-
lingual inversion attacks, with special attention
to a cross-domain scenario. Our findings re-
veal that multilingual models are potentially
more vulnerable to inversion attacks than their
monolingual counterparts. This stems from
the reduced data requirements for achieving
comparable inversion performance in settings
where the underlying language is not known a-
priori. To our knowledge, this work is the first
to delve into multilinguality within the context
of inversion attacks, and our findings highlight
the need for further investigation and enhanced
defenses in the area of NLP Security.
1 Introduction
Industrial applications of Natural Language Pro-
cessing (NLP) typically utilize Large Language
Models (LLMs) and frequently rely on vector
databases via frameworks such as Embeddings as a
Service (EaaS). In this context, rather than storing
data as strings, high quality sentence embeddings
are stored in a remote database instead. This allows
end-users to efficiently search across these con-
densed representations, which are seemingly im-
pervious to privacy breaches. However, while such
EaaS workflows have previously been assumed tobe secure, recent work has demonstrated that ac-
cess to the embeddings is no more safe than raw
text, as models can learn to decode these embed-
dings (Song and Raghunathan, 2020; Morris et al.,
2023; Zhou et al., 2023). As such, there is a sub-
stantial threat to privacy if malicious actors are able
to eavesdrop on communication channels between
EaaS providors and customers, and access the em-
beddings in the process.
Decoding the content of these embeddings can
be done via inversion attacks . After gaining access
to embeddings and the black-box embedder via the
EaaS API, the malicious actor can train an external
model, which approximates the inversion function
that reconstructs the text from the embeddings. Pre-
vious work has demonstrated that an
exact match for data recreation can be obtained in
specific settings, albeit with the limitation of assum-
ing monolingual English models and embeddings
(Morris et al., 2023).
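A schematic sketch of that attack setup: the adversary queries the black-box embedder on attacker-chosen text to build (embedding, text) pairs, then trains an external inversion model on them. For brevity the model below only recovers a bag of tokens, in the spirit of the simplest reconstruction attacks discussed here; stronger attacks such as Morris et al. (2023) train a conditional sequence decoder instead. The dimensions and helper callables are assumptions.

```python
import torch
from torch import nn

class BagOfTokensInverter(nn.Module):
    """Toy inversion model: maps a sentence embedding to multi-hot token logits."""
    def __init__(self, emb_dim=768, vocab_size=30_000):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, 1024), nn.ReLU(),
                                 nn.Linear(1024, vocab_size))

    def forward(self, emb):
        return self.net(emb)

def train_inverter(victim_encode, batches, to_multihot, steps=1_000):
    """`victim_encode(texts) -> [B, emb_dim]` is the black-box EaaS API the attacker
    queries; `batches` is attacker-chosen text (ideally matching the target language
    and domain); `to_multihot(texts) -> [B, vocab]` builds the training targets."""
    model = BagOfTokensInverter()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()
    for step in range(steps):
        texts = batches[step % len(batches)]
        with torch.no_grad():
            embeddings = victim_encode(texts)        # the only access needed
        loss = loss_fn(model(embeddings), to_multihot(texts))
        opt.zero_grad(); loss.backward(); opt.step()
    return model                                     # reconstructs tokens from embeddings
```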
In a real-world scenario however, an eavesdrop-
per may not necessarily know the language of the
text encoded within the embedding. For instance,
a Spanish EaaS provider might host its data in Ger-
many, for a French-speaking company. Thus in this
work we investigate three research questions: (i)
To what extent are inversion attacks feasible in a
multilingual setting?; (ii) Are attacks feasible and
effective when the language is unknown a-priori?;
(iii) Does cross-lingual transfer allow information
to be leaked across the languages included in a
multilingual model?
Contributions In this work, we define the prob-
lem of black-box multilingual and cross-lingual
inversion attacks, with special attention to a cross-
domain scenario. While previous research has suc-
ceeded in reconstruction of tokens with bag-of-
words approach (Song and Raghunathan, 2020)
and sequences with informative words (Li et al.,
2023), Morris et al. (2023) has proven the potential
2211.07793.pdf | EXTREME GENERATIVE IMAGE COMPRESSION BY LEARNING
TEXT EMBEDDING FROM DIFFUSION MODELS
A P REPRINT
Zhihong Pan, Xin Zhou, Hao Tian
Baidu Research (USA)
ABSTRACT
Transferring large amount of high resolution images over limited bandwidth is an important but very
challenging task. Compressing images using extremely low bitrates (<0.1 bpp) has been studied but
it often results in low quality images of heavy artifacts due to the strong constraint in the number
of bits available for the compressed data. It is often said that a picture is worth a thousand words
but on the other hand, language is very powerful in capturing the essence of an image using short
descriptions. With the recent success of diffusion models for text-to-image generation, we propose a
generative image compression method that demonstrates the potential of saving an image as a short
text embedding which in turn can be used to generate high-fidelity images which is equivalent to
the original one perceptually. For a given image, its corresponding text embedding is learned using
the same optimization process as the text-to-image diffusion model itself, using a learnable text
embedding as input after bypassing the original transformer. The optimization is applied together with
a learning compression model to achieve extreme compression of low bitrates <0.1 bpp. Based on
our experiments measured by a comprehensive set of image quality metrics, our method outperforms
the other state-of-the-art deep learning methods in terms of both perceptual quality and diversity.
1 Introduction
With the increasing amount of image streams available for broad range of applications, lossy image compression
is a very useful technique for efficient image storage and transmission. Over the years, various engineered codes
such as JPEG [ 30], JPEG2000 [ 52], and the more recent BPG[ 4] have been proposed to compress single images but
their performance have saturated overall. More recently, deep learning based image compression methods have been
studied [ 3,36,7]. These models are generally trained in an end-to-end fashion to minimize a rate-distortion object
R + λD. Here R represents the entropy of latent representations, which is estimated by an entropy model, D is the
difference between the original image and the compressed one, and λ determines the desired trade-off between rate and
distortion. When λ is small, the optimization gives higher priority to compression rate, so the resulting bitrate (evaluated
as bits-per-pixel, bpp) is low. Consequently, the compressed image has lower quality due to higher Dloss term. With
accuracy metrics like mean squared error (MSE) and multi-scale structural similarity (MS-SSIM) are often used for D,
the low quality compressed images are usually blurry. For extremely low bitrates (<0.1 bpp), both engineered codecs
and deep learning compression models are subject to very poor perceptual qualities.
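As a concrete illustration of the rate-distortion objective described above, the sketch below computes R + λD for a batch of images. It assumes a hypothetical entropy model that outputs per-element likelihoods of the quantized latents; this is an illustrative loss in the spirit of [3, 36, 7], not the loss used in this paper.

import torch

def rate_distortion_loss(x, x_hat, likelihoods, lam=0.01):
    # x, x_hat:    original and reconstructed images, shape (B, C, H, W)
    # likelihoods: per-element probabilities of the quantized latents, as
    #              produced by a learned entropy model (hypothetical input)
    # lam:         trade-off between rate and distortion
    num_pixels = x.shape[0] * x.shape[2] * x.shape[3]
    # Rate term: estimated bits per pixel under the entropy model.
    rate_bpp = -torch.log2(likelihoods).sum() / num_pixels
    # Distortion term: mean squared error between original and reconstruction.
    distortion = torch.nn.functional.mse_loss(x_hat, x)
    return rate_bpp + lam * distortion

With a small lam the optimizer pushes the rate term down, reproducing the low-bitrate, blurry regime discussed above.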
To tackle this problem, some recent methods [61, 63, 29, 35] aim to restore less blurry images from highly compressed latent representations at the cost of accuracy. These models adopt generative adversarial networks (GANs) [19] to fully or partially replace the accuracy metrics in D with a discrimination loss, so they can generate sharp and realistic images even at very low bitrates. For the challenging task of extremely low bitrates, GANs are further exploited in more recent studies [2, 11, 25] to restore sharp images with minimized distortion and visual artifacts. However, they all inherit the drawback of unstable training from GANs, making it difficult to tune the training process for large datasets. In this paper, we propose the first generative image compression method with extremely low bitrates using denoising diffusion models. As it utilizes an existing text-to-image model that is already trained on a gigantic dataset, it is applicable to any type of image with no need for further tuning.
Similar to GANs a few years back, denoising diffusion models [53, 22, 54] are gaining popularity for their advantages in generating images of high quality in both fidelity and diversity, without the disadvantage of unstable training. In addition to unconditional image generation, diffusion models have also empowered the breakthrough developments in diffusion-based text-to-image generation models [47, 38, 43, 49], which are able to create |
gu-dissertation-augmented.pdf | MODELING SEQUENCES WITH STRUCTURED STATE SPACES
A DISSERTATION
SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE
AND THE COMMITTEE ON GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
Albert Gu
June 2023 |
2108.05540.pdf | Unsupervised Corpus Aware Language Model Pre-training
for Dense Passage Retrieval
Luyu Gao and Jamie Callan
Language Technologies Institute
Carnegie Mellon University
{luyug, callan}@cs.cmu.edu
Abstract
Recent research demonstrates the effective-
ness of using fine-tuned language mod-
els (LM) for dense retrieval. However, dense
retrievers are hard to train, typically requiring
heavily engineered fine-tuning pipelines to re-
alize their full potential. In this paper, we iden-
tify and address two underlying problems of
dense retrievers: i) fragility to training data
noise and ii) requiring large batches to robustly
learn the embedding space. We use the re-
cently proposed Condenser pre-training archi-
tecture, which learns to condense information
into the dense vector through LM pre-training.
On top of it, we propose coCondenser, which
adds an unsupervised corpus-level contrastive
loss to warm up the passage embedding space.
Retrieval experiments on the MS-MARCO, Natural Questions, and TriviaQA datasets show that
coCondenser removes the need for heavy data
engineering such as augmentation, synthesis,
or filtering, as well as the need for large batch
training. It shows comparable performance
to RocketQA, a state-of-the-art, heavily engi-
neered system, using simple small batch fine-
tuning.1
1 Introduction
Building upon the advancements of pre-trained lan-
guage models (LM; Devlin et al. (2019); Liu et al.
(2019)), dense retrieval has become an effective
paradigm for text retrieval (Lee et al., 2019; Chang
et al., 2020; Karpukhin et al., 2020; Qu et al., 2021).
Recent research has however found that fine-tuning
dense retrievers to realize their capacity requires
carefully designed fine-tuning techniques. Early
works include iterative negative mining (Xiong
et al., 2021) and multi-vector representations (Luan
et al., 2020). The recent RocketQA system (Qu
et al., 2021) significantly improves the performance of a dense retriever by designing an optimized fine-
tuning pipeline that includes i) denoising hard neg-
atives, which corrects mislabeling, and ii) large
batch training. While this is very effective, the en-
tire pipeline is very heavy in computation and not
feasible for people who do not have tremendous
hardware resources, especially those in academia.
In this paper, we ask, instead of directly using the
pipeline, can we take the insights of RocketQA to
perform language model pre-training such that the
pre-trained model can be easily fine-tuned on any
target query set.
1Our code is available at https://github.com/luyug/Condenser
Concretely, we ask what the optimized training
in RocketQA solves. We hypothesize that typi-
cal LMs are sensitive to mislabeling, which can
cause detrimental updates to the model weights.
Denoising can effectively remove the bad samples
and their updates. On the other hand, for most
LMs, the CLS vectors are either trained with a
simple task (Devlin et al., 2019) or not explicitly
trained at all (Liu et al., 2019). These vectors are
far from being able to form an embedding space
of passages (Lee et al., 2019). The large training
batches in RocketQA help the LM to stably learn
to form the full embedding space. To this end,
we want to pre-train an LM such that it is locally
noise-resistant and has a well-structured global em-
bedding space. For noise resistance, we borrow
the Condenser pre-training architecture (Gao and
Callan, 2021), which performs language model pre-
training actively conditioned on the CLS vector. It
produces an information-rich CLS representation
that can robustly condense an input sequence. We
then introduce a simple corpus level contrastive
learning objective: given a target corpus of docu-
ments to retrieve from, at each training step sample
text span pairs from a batch of documents and train
the model such that the CLS embeddings of two
spans from the same document are close and spans
from different documents are far apart. Combin-
ing the two, we propose coCondenser pre-training, |
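As a rough sketch of the corpus-level contrastive objective described above (not the released coCondenser code), the loss below treats the two spans sampled from the same document as a positive pair and all other documents in the batch as negatives.

import torch
import torch.nn.functional as F

def span_contrastive_loss(cls_a, cls_b, temperature=0.05):
    # cls_a, cls_b: CLS embeddings of two spans sampled from the same
    # document, shape (batch, dim); row i of each tensor comes from document i.
    a = F.normalize(cls_a, dim=-1)
    b = F.normalize(cls_b, dim=-1)
    logits = a @ b.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Spans from the same document (the diagonal) should score highest;
    # spans from other documents act as in-batch negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))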
1501.05014.pdf | Experimental Simulation of Closed Timelike Curves
Martin Ringbauer1,2∗, Matthew A. Broome1,2, Casey R. Myers1, Andrew G. White1,2 and Timothy C. Ralph2
1Centre for Engineered Quantum Systems, 2Centre for Quantum Computer and Communication Technology,
School of Mathematics and Physics, University of Queensland, Brisbane, QLD 4072, Australia
Closed timelike curves are among the most controversial features of modern physics. As legitimate
solutions to Einstein’s field equations, they allow for time travel, which instinctively seems para-
doxical. However, in the quantum regime these paradoxes can be resolved leaving closed timelike
curves consistent with relativity. The study of these systems therefore provides valuable insight into
non-linearities and the emergence of causal structures in quantum mechanics—essential for any for-
mulation of a quantum theory of gravity. Here we experimentally simulate the non-linear behaviour
of a qubit interacting unitarily with an older version of itself, addressing some of the fascinating
effects that arise in systems traversing a closed timelike curve. These include perfect discrimination
of non-orthogonal states and, most intriguingly, the ability to distinguish nominally equivalent ways
of preparing pure quantum states. Finally, we examine the dependence of these effects on the initial
qubit state, the form of the unitary interaction, and the influence of decoherence.
INTRODUCTION
One aspect of general relativity that has long intrigued
physicists is the relative ease with which one can find so-
lutions to Einstein’s field equations that contain closed
timelike curves (CTCs)—causal loops in space-time that
return to the same point in space and time [1–3].
Driven by apparent inconsistencies—like the grandfa-
ther paradox—there have been numerous efforts, such as
Novikov’s self-consistency principle [4] to reconcile them
or Hawking’s chronology protection conjecture [5], to dis-
prove the existence of CTCs. While none of these clas-
sical hypotheses could be verified so far, the situation
is particularly interesting in the quantum realm. In his
seminal 1991 paper Deutsch showed for quantum sys-
tems traversing CTCs there always exist unique solu-
tions, which do not allow superluminal signalling [6, 7].
Quantum mechanics therefore allows for causality viola-
tion without paradoxes whilst remaining consistent with
relativity.
Advances in the field of Deutsch CTCs have shown
some very surprising and counter-intuitive results, such
as the solution of NP-complete problems in polynomial
time [8], unambiguous discrimination of any set of non-
orthogonal states [9], perfect universal quantum state
cloning [10, 11] and the violation of Heisenberg’s uncer-
tainty principle [12]. The extraordinary claims of what
one could achieve given access to a quantum system
traversing a CTC have been disputed in the literature,
with critics pointing out apparent inconsistencies in the
theory such as the information paradox or the linearity
trap [13, 14]. However, it has been shown that the theory
can be formulated in such a way that these inconsisten-
cies are resolved [7, 15].
∗Electronic address: m.ringbauer@uq.edu.au
Modern experimental quantum simulation allows one
to ask meaningful questions that provide insights into the
behaviour of complex quantum systems. Initial results
have been obtained in various areas of quantum mechan-
ics [16–18] and in particular in the field of relativistic
quantum information [19–23]. This recent experimental
success, coupled with the growing interest for the study of
non-linear extensions to quantum mechanics, motivates
the question of whether the fundamentally non-linear dy-
namics and the unique behaviour arising from CTCs can
be simulated experimentally.
In this article we use photonic systems to simulate the
quantum evolution through a Deutsch CTC. We demon-
strate how the CTC-traversing qubit adapts to changes
in the input state |ψ⟩, and unitary interaction U to en-
sure physical consistency according to Deutsch’s consis-
tency relation [6]. We observe non-linear evolution in
the circuit suggested by Bacon [8] and enhanced distin-
guishability of two non-orthogonal states after the action
of an optimised version of a circuit proposed by Brun et
al. [9]. Using the self-consistent formulation of Ref. [7] we
then move beyond the simplest implementations and find
a striking difference in the behaviour of the system for
direct as opposed to entanglement-assisted state prepa-
ration. Finally, we explore the system’s sensitivity to
decoherence.
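For intuition, Deutsch's consistency relation requires the CTC state to be a fixed point of the map ρ_CTC → Tr_CR[U(|ψ⟩⟨ψ| ⊗ ρ_CTC)U†], where the trace is over the chronology-respecting qubit. The following two-qubit NumPy sketch (a numerical toy, not the optical implementation reported here) finds such a fixed point by simple iteration.

import numpy as np

def deutsch_fixed_point(U, rho_in, iters=200):
    # U:      any 4x4 unitary acting on (chronology-respecting qubit, CTC qubit)
    # rho_in: 2x2 density matrix of the chronology-respecting input qubit
    rho_ctc = np.eye(2) / 2                      # start from the maximally mixed state
    for _ in range(iters):
        joint = U @ np.kron(rho_in, rho_ctc) @ U.conj().T
        # Partial trace over the chronology-respecting (first) qubit.
        rho_ctc = joint.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
    return rho_ctc

For example, with U chosen as a CNOT or SWAP gate and rho_in a pure qubit state, the returned rho_ctc satisfies the consistency condition up to numerical precision.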
FIG. 1: Model of a quantum state |ψ⟩ interacting with an older version of itself. This situation can equivalently be interpreted as a chronology-respecting qubit interacting with a qubit trapped in a CTC. The CTC in general consists of a causal worldline with its past and future ends connected via a wormhole (indicated by black triangles). |
2310.18168.pdf | PERSONAS AS A WAY TO MODEL TRUTHFULNESS IN
LANGUAGE MODELS
Nitish Joshi1∗ Javier Rando2∗ Abulhair Saparov1 Najoung Kim3 He He1
1New York University2ETH Zurich3Boston University
{nitish}@nyu.edu {jrando}@ethz.ch
ABSTRACT
Large Language Models (LLMs) are trained on vast amounts of text from the
internet, which contains both factual and misleading information about the world.
Can language models discern truth from falsehood in this contradicting data?
Expanding on the view that LLMs can model different agents producing the corpora,
we hypothesize that they can cluster truthful text by modeling a truthful persona : a
group of agents that are likely to produce truthful text and share similar features.
For example, trustworthy sources like Wikipedia and Science usually use formal
writing styles and make consistent claims. By modeling this persona, LLMs can
generalize truthfulness beyond the specific contexts in which each agent generated
the training text. For example, the model can infer that the agent “Wikipedia”
will behave truthfully on topics that were only generated by “Science” because
they share a persona. We first show evidence for the persona hypothesis via two
observations: (1) we can probe whether a model’s answer will be truthful before it
is generated; (2) finetuning a model on a set of facts improves its truthfulness on
unseen topics. Next, using arithmetics as a synthetic environment, we show that
language models can separate true and false statements, and generalize truthfulness
across agents; but only if agents in the training data share a truthful generative
process that enables the creation of a truthful persona. Overall, our findings suggest
that models can exploit hierarchical structures in the data to learn abstract concepts
like truthfulness.
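Observation (1) above is typically tested with a linear probe: a classifier trained on the model's internal representation of the question to predict whether the forthcoming answer will be truthful. The snippet below is a generic sketch of that idea with placeholder random features and labels; the actual experiment would use LLM hidden states and truthfulness labels obtained by scoring the generated answers.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data standing in for question representations and labels.
X = np.random.randn(1000, 768)          # hidden states of the question
y = np.random.randint(0, 2, size=1000)  # 1 = truthful answer, 0 = not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))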
1 INTRODUCTION
Large Language Models (LLMs) are pretrained on increasing amounts of data from the internet
(Brown et al., 2020; Chowdhery et al., 2022)—a noisy, and mostly uncurated corpus—which contains
both truthful statements about the world and untruthful statements such as misconceptions and
conspiracy theories. The false claims in the data pose a risk of misinformation as they can be
propagated by the model (Lin et al., 2021). Intriguingly, recent work shows that the truth value of a
statement can be elicited from its embeddings (Burns et al., 2022; Li et al., 2023). This motivates
the main question of this work: what mechanism do LLMs use to distinguish truth from falsehood
despite noise in the data?
Consider two contradicting statements: "people with type A blood are ambitious" (false) and "blood
type does not imply any personality traits" (true). When asked about the relation between blood type
and personality, the classic view of language models suggests that it will generate the most frequent
statement, regardless of whether it is true. However, we observe that slight changes in the question
can steer the model to produce either of the two (Figure 1). This suggests that frequency alone is not
sufficient to explain model behavior. Andreas (2022) hypothesizes that LLMs can infer the agent who
produced the context and generate continuations according to the agent’s goals and beliefs. In this
example, given the question "What personality does someone with type A blood have?" with a false
presupposition (Kim et al., 2022), the model may infer that the agent who asks the question believes
that blood type influences personality, and thus generate an answer following this (false) belief. If the
∗Equal contribution |
1712.03346.pdf | Variational auto-encoding of protein sequences
Sam Sinai∗
Harvard University
samsinai@g.harvard.edu
Eric Kelsic†‡
Harvard Medical School
eric_kelsic@hms.harvard.edu
George M. Church§†‡
Harvard Medical School
church_labadmin@hms.harvard.edu
Martin A. Nowak∗‡¶
Harvard University
martin_nowak@harvard.edu
Abstract
Proteins are responsible for the most diverse set of functions in biology. The abil-
ity to extract information from protein sequences and to predict the effects of mu-
tations is extremely valuable in many domains of biology and medicine. However
the mapping between protein sequence and function is complex and poorly under-
stood. Here we present an embedding of natural protein sequences using a Vari-
ational Auto-Encoder and use it to predict how mutations affect protein function.
We use this unsupervised approach to cluster natural variants and learn interac-
tions between sets of positions within a protein. This approach generally performs
better than baseline methods that consider no interactions within sequences, and
in some cases better than the state-of-the-art approaches that use the inverse-Potts
model. This generative model can be used to computationally guide exploration of
protein sequence space and to better inform rational and automatic protein design.
1 Introduction
Protein engineering is of increasing importance in modern therapeutics. Designing novel proteins
that perform a particular function is challenging as the number of functional proteins compared to
all possible protein sequences is miniscule. This renders naive experimental search for desirable
variants intractable. Hence, a computational heuristic that can narrow the experimental search space
(virtual screening) is extremely valuable.
While a variety of energy-based models for protein folding have been used in the past decades, re-
cent advances in machine learning, particularly in the domain of generative models, have opened up
new avenues for computational protein design. Rich databases of protein sequences that document
functional proteins found in living organisms provide us with ample training data. The majority
of these datasets lack labels (indicators of their performance), however, which calls for an un-
supervised learning approach. As these sequences arise from closely related living organisms, it is
reasonable to assume that they are functional (and also similar in their functionality).
Given the sparse, unstructured, and discrete space that protein sequences exist in, it is prudent to
anchor the search for functional sequences on a known protein with the desired functionality. Start-
ing from that sequence of interest, we can search public databases of sequence variants from related
∗Program for Evolutionary Dynamics, Department of Organismic and Evolutionary Biology
†Wyss Institute
‡To whom correspondence should be directed
§Department of Genetics
¶Department of Mathematics
|
2309.16797.pdf | PROMPTBREEDER:
SELF-REFERENTIAL SELF-IMPROVEMENT
VIA PROMPT EVOLUTION
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
Google DeepMind
{chrisantha,dylski,henrykm,osindero,rocktaschel }@google.com
ABSTRACT
Popular prompt strategies like Chain-of-Thought Prompting can dramatically im-
prove the reasoning abilities of Large Language Models (LLMs) in various do-
mains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present PROMPTBREEDER, a general-purpose self-referential self-
improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, evalu-
ates them for fitness on a training set, and repeats this process over multiple gen-
erations to evolve task-prompts. Crucially, the mutation of these task-prompts is
governed by mutation-prompts that the LLM generates and improves throughout
evolution in a self-referential way. That is, Promptbreeder is not just improving
task-prompts, but it is also improving the mutation-prompts that improve these
task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such
as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arith-
metic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is
able to evolve intricate task-prompts for the challenging problem of hate speech
classification.
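A minimal sketch of the kind of loop this describes is given below; the llm and score callables, the binary-tournament scheme, and the prompt wording are assumptions for illustration rather than Promptbreeder's actual mutation operators.

import random

def evolve_prompts(llm, score, task_prompts, mutation_prompts, generations=20):
    # llm(text) returns a completion; score(p) returns the fitness of a
    # task-prompt on a training set. Both are assumed helpers.
    population = list(zip(task_prompts, mutation_prompts))
    for _ in range(generations):
        # Binary tournament: compare two random members, overwrite the loser.
        (p1, m1), (p2, m2) = random.sample(population, 2)
        winner, loser = ((p1, m1), (p2, m2)) if score(p1) >= score(p2) else ((p2, m2), (p1, m1))
        # Mutate the winner's task-prompt using its mutation-prompt ...
        new_task = llm(f"{winner[1]}\nINSTRUCTION: {winner[0]}\nNEW INSTRUCTION:")
        # ... and occasionally mutate the mutation-prompt itself (self-reference).
        new_mut = llm(f"Improve this mutation instruction: {winner[1]}") \
            if random.random() < 0.1 else winner[1]
        population[population.index(loser)] = (new_task, new_mut)
    return max(population, key=lambda pm: score(pm[0]))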
1 INTRODUCTION
Prompting is central to the downstream performance of foundation models. For example, different
prompt strategies1can have a significant impact on a model’s reasoning abilities (Wei et al., 2022;
Nye et al., 2021; Zhou et al., 2022; Wang et al., 2022; Zhou et al., 2023; Wang et al., 2023b), multi-
modal processing abilities (Yang et al., 2023b; Wang et al., 2023d), or tool use abilities (Yao et al.,
2022; Schick et al., 2023). Furthermore, prompting can improve model distillation (Wang et al.,
2023c; Hsieh et al., 2023) and it can be used to simulate agentic behavior (Wang et al., 2023a; Park
et al., 2023; Wu et al., 2023). However, these prompt strategies are manually engineered. Since the
specific way a prompt is phrased can have a dramatic effect on its utility (Madaan & Yazdanbakhsh,
2022), it raises the question of whether prompt engineering can be automated. Automatic Prompt
Engineer (APE, Zhou et al., 2023) attempts to address this by generating an initial distribution of
prompts using another prompt that infers the problem from a number of input-output examples from
the dataset. However, Zhou et al. found “diminishing returns to further selection rounds as the qual-
ity seems to stabilize after three rounds”, and consequently abandoned the use of an iterative APE.
We propose a solution to the problem of diminishing returns via a diversity maintaining evolutionary
algorithm for self-referential self-improvement of prompts for LLMs.
Schmidhuber (1990) notes that the “program of a neural network is its weight matrix”. Con-
sequently, this “program” can be changed in a self-referential way by the neural network it-
self (Schmidhuber, 1993; Irie et al., 2022). Such a neural network that improves itself, as well
as improving the way it improves itself, might be an important stepping stone towards open-ended
self-referential self-improvement of AIs (Schmidhuber, 2003). However, self-improvement via self-
referential weight matrices is costly as it requires additional parameters that modify all of the model’s
1See Appendix A for definitions of terminology.
|
2404.12253v1.pdf | Toward Self-Improvement of LLMs via Imagination,
Searching, and Criticizing
Ye Tian∗, Baolin Peng∗, Linfeng Song∗, Lifeng Jin, Dian Yu, Haitao Mi†, Dong Yu
Tencent AI Lab, Bellevue, WA
{yaptian,baolinpeng,lfsong,lifengjin,yudian,haitaomi}@global.tencent.com
Abstract
Despite the impressive capabilities of Large Language Models (LLMs) on various tasks, they still struggle with scenarios that involve complex reasoning and planning. Recent work has proposed advanced prompting techniques and argued for the necessity of fine-tuning with high-quality data to augment LLMs' reasoning abilities. However, these approaches are inherently constrained by data availability and quality. In light of this, self-correction and self-learning emerge as viable solutions, employing strategies that allow LLMs to refine their outputs and learn from self-assessed rewards. Yet, the efficacy of LLMs in self-refining their responses, particularly for complex reasoning and planning tasks, remains dubious. In this paper, we introduce ALPHALLM for the self-improvement of LLMs, which integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop, thereby enhancing the capabilities of LLMs without additional annotations. Drawing inspiration from the success of AlphaGo, ALPHALLM addresses the unique challenges of combining MCTS with LLMs for self-improvement, including data scarcity, the vast search spaces of language tasks, and the subjective nature of feedback in language tasks. ALPHALLM comprises a prompt synthesis component, an efficient MCTS approach tailored for language tasks, and a trio of critic models for precise feedback. Our experimental results on mathematical reasoning tasks demonstrate that ALPHALLM significantly enhances the performance of LLMs without additional annotations, showing the potential for self-improvement in LLMs.
1 Introduction
LLMs, trained on trillions of tokens with billions of parameters, have shown unparalleled capabilities
in a wide range of natural language processing tasks (Touvron et al., 2023b; Team et al., 2023;
OpenAI, 2023). Nevertheless, they continue to face challenges in scenarios requiring complex
reasoning and strategic planning (Valmeekam et al., 2022; Stechly et al., 2024). While advanced
prompting approaches such as Chain-, Tree-, and Graph-of-Thought (Wei et al., 2022; Yao et al., 2024; Besta et al., 2024; Ding et al., 2023), which generate intermediate steps in the reasoning process, demonstrate large improvements in the reasoning capability of LLMs, it remains essential to fine-tune
LLMs using a substantial volume of high-quality, supervised data to fundamentally improve the
model performance (Nye et al., 2021; Lewkowycz et al., 2022; Chung et al., 2022). This methodology
is inherently limited by the scope and quality of data that humans can provide.
Considering these existing challenges, the concepts of self-correction and self-learning have been proposed as promising solutions (Madaan et al., 2024; Saunders et al., 2022; Chen et al., 2024). Within these frameworks, LLMs typically operate by employing two main strategies: 1) they continuously
refine their responses based on the feedback of their past responses, and 2) they extensively sample
∗Equal Contribution; †Corresponding Author
Work in progress. |
2005.10242.pdf | Understanding Contrastive Representation Learning through
Alignment and Uniformity on the Hypersphere
Tongzhou Wang1 Phillip Isola1
Abstract
Contrastive representation learning has been out-
standingly successful in practice. In this work,
we identify two key properties related to the con-
trastive loss: (1) alignment (closeness) of features
from positive pairs, and (2) uniformity of the in-
duced distribution of the (normalized) features on
the hypersphere. We prove that, asymptotically,
the contrastive loss optimizes these properties,
and analyze their positive effects on downstream
tasks. Empirically, we introduce an optimizable
metric to quantify each property. Extensive exper-
iments on standard vision and language datasets
confirm the strong agreement between both met-
rics and downstream task performance. Directly
optimizing for these two metrics leads to repre-
sentations with comparable or better performance
at downstream tasks than contrastive learning.
Project Page: ssnl.github.io/hypersphere.
Code: github.com/SsnL/align_uniform,
github.com/SsnL/moco_align_uniform.
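A sketch of the two optimizable metrics, consistent with the definitions given in the paper (features are assumed to be L2-normalized; the project code linked above provides the reference implementation):

import torch

def align_loss(x, y, alpha=2):
    # Alignment: expected distance between features of positive pairs
    # (x[i], y[i]); lower is better.
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    # Uniformity: log of the average pairwise Gaussian potential over the
    # batch; more negative means features spread more evenly on the sphere.
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()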
1. Introduction
A vast number of recent empirical works learn representa-
tions with a unit ℓ2 norm constraint, effectively restricting
the output space to the unit hypersphere (Parkhi et al., 2015;
Schroff et al., 2015; Liu et al., 2017; Hasnat et al., 2017;
Wang et al., 2017; Bojanowski & Joulin, 2017; Mettes et al.,
2019; Hou et al., 2019; Davidson et al., 2018; Xu & Durrett,
2018), including many unsupervised contrastive represen-
tation learning methods (Wu et al., 2018; Bachman et al.,
2019; Tian et al., 2019; He et al., 2019; Chen et al., 2020a).
Intuitively, having the features live on the unit hypersphere
leads to several desirable traits. Fixed-norm vectors are
known to improve training stability in modern machine
learning where dot products are ubiquitous (Xu & Durrett,
1MIT Computer Science & Artificial Intelligence Lab (CSAIL).
Correspondence to: Tongzhou Wang <tongzhou@mit.edu >.
Proceedings of the 37thInternational Conference on Machine
Learning , Online, PMLR 119, 2020. Copyright 2020 by the au-
thor(s).
Figure 1: Illustration of alignment and uniformity of feature distributions on the output unit hypersphere. Alignment: similar samples have similar features (figure inspired by Tian et al. (2019)). Uniformity: preserve maximal information. STL-10 (Coates et al., 2011) images are used for demonstration.
2018; Wang et al., 2017). Moreover, if features of a class are
sufficiently well clustered, they are linearly separable from the rest of the feature space (see Figure 2), a common criterion
used to evaluate representation quality.
While the unit hypersphere is a popular choice of feature
space, not all encoders that map onto it are created equal.
Recent works argue that representations should addition-
ally be invariant to unnecessary details, and preserve as
much information as possible (Oord et al., 2018; Tian et al.,
2019; Hjelm et al., 2018; Bachman et al., 2019). Let us
call these two properties alignment anduniformity (see
Figure 1). Alignment favors encoders that assign similar |
Improving-Memory-Search-through-Model-Based-Cue-Selection.pdf | IMPROVING MEMORY SEARCH
Improving Memory Search
through Model-Based Cue Selection
Charlotte A. Cornell1, Kenneth A. Norman2, Thomas L. Griffiths2,3, and Qiong Zhang1,4
1Psychology Department, Rutgers University–New Brunswick
2Psychology Department, Princeton University
3Computer Science Department, Princeton University
4Computer Science Department, Rutgers University–New Brunswick
Author Note
This work was supported by a start-up fund awarded to Q.Z. by Rutgers
University–New Brunswick and the National Science Foundation (BCS-2316716) awarded
to Q.Z. Correspondence concerning this article should be addressed to Qiong Zhang
<qiong.z@rutgers.edu> |
tr00-004.pdf | Training Products of Experts by Minimizing Contrastive Divergence
GCNU TR 2000-004
Geoffrey E. Hinton
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square, London WC1N 3AR, U.K.
http://www.gatsby.ucl.ac.uk/
Abstract
It is possible to combine multiple probabilistic models of the same data by multiplying their probability distributions together and then renormalizing. This is a very efficient way to model high-dimensional data which simultaneously satisfies many different low-dimensional constraints because each individual expert model can focus on giving high probability to data vectors that satisfy just one of the constraints. Data vectors that satisfy this one constraint but violate other constraints will be ruled out by their low probability under the other experts. Training a product of experts appears difficult because, in addition to maximizing the probability that each individual expert assigns to the observed data, it is necessary to make the experts be as different as possible. This ensures that the product of their distributions is small which allows the renormalization to magnify the probability of the data under the product of experts model. Fortunately, if the individual experts are tractable there is an efficient way to train a product of experts.
1 Introduction
One way of modeling a complicated, high-dimensional data distribution is to use a large number of relatively simple probabilistic models and to somehow combine the distributions specified by each model. A well-known example of this approach is a mixture of Gaussians in which each simple model is a Gaussian and the combination rule consists of taking a weighted arithmetic mean of the individual distributions. This is equivalent to assuming an overall generative model in which each data vector is generated by first choosing one of the individual generative models and then allowing that individual model to generate the data vector. Combining models by forming a mixture is attractive for several reasons. It is easy to fit mixtures of tractable models to data using EM or gradient ascent and, if the individual models differ a lot, the mixture is likely to be a better fit to the true distribution of the data than a random choice among the individual models. Indeed, if sufficiently many models are included in the mixture, it is possible to approximate complicated smooth distributions arbitrarily accurately.
Unfortunately, mixture models are very inefficient in high-dimensional spaces. Consider, for example, the manifold of face images. It takes about 35 real numbers to specify the shape, pose, expression and illumination of a face and, under good viewing conditions, our perceptual systems produce a sharp posterior distribution on this 35-dimensional manifold. This cannot be done using a mixture of models each of which is tuned in the 35-dimensional space because the posterior distribution cannot be sharper than the individual models in the mixture and the individual models must be broadly tuned to allow them to cover the 35-dimensional space. |
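A toy numerical illustration of the product-of-experts idea described above, over a small discrete domain: multiply the experts' distributions elementwise and renormalize. This only illustrates the combination rule, not the training procedure.

import numpy as np

expert_a = np.array([0.1, 0.2, 0.3, 0.4])   # each distribution sums to 1
expert_b = np.array([0.4, 0.3, 0.2, 0.1])

unnormalized = expert_a * expert_b
poe = unnormalized / unnormalized.sum()     # renormalized product of experts
print(poe)   # sharper than either expert where both assign high probability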
2212.04356.pdf | Robust Speech Recognition via Large-Scale Weak Supervision
Alec Radford*1 Jong Wook Kim*1 Tao Xu1 Greg Brockman1 Christine McLeavey1 Ilya Sutskever1
Abstract
We study the capabilities of speech processing
systems trained simply to predict large amounts of
transcripts of audio on the internet. When scaled
to 680,000 hours of multilingual and multitask
supervision, the resulting models generalize well
to standard benchmarks and are often competitive
with prior fully supervised results but in a zero-
shot transfer setting without the need for any fine-
tuning. When compared to humans, the models
approach their accuracy and robustness. We are
releasing models and inference code to serve as
a foundation for further work on robust speech
processing.
1. Introduction
Progress in speech recognition has been energized by the
development of unsupervised pre-training techniques exem-
plified by Wav2Vec 2.0 (Baevski et al., 2020). Since these
methods learn directly from raw audio without the need for
human labels, they can productively use large datasets of un-
labeled speech and have been quickly scaled up to 1,000,000
hours of training data (Zhang et al., 2021), far more than the
1,000 or so hours typical of an academic supervised dataset.
When fine-tuned on standard benchmarks, this approach
has improved the state of the art, especially in a low-data
setting.
These pre-trained audio encoders learn high-quality repre-
sentations of speech, but because they are purely unsuper-
vised they lack an equivalently performant decoder mapping
those representations to usable outputs, necessitating a fine-
tuning stage in order to actually perform a task such as
speech recognition1. This unfortunately limits their use-
fulness and impact as fine-tuning can still be a complex
process requiring a skilled practitioner. There is an addi-
tional risk with requiring fine-tuning. Machine learning
*Equal contribution. 1OpenAI, San Francisco, CA 94110, USA.
Correspondence to: Alec Radford <alec@openai.com >, Jong
Wook Kim <jongwook@openai.com >.
1Baevski et al. (2021) is an exciting exception - having developed a fully unsupervised speech recognition system.
methods are exceedingly adept at finding patterns within a
training dataset which boost performance on held-out data
from the same dataset. However, some of these patterns are
brittle and spurious and don’t generalize to other datasets
and distributions. In a particularly disturbing example, Rad-
ford et al. (2021) documented a 9.2% increase in object
classification accuracy when fine-tuning a computer vision
model on the ImageNet dataset (Russakovsky et al., 2015)
without observing any improvement in average accuracy
when classifying the same objects on seven other natural
image datasets. A model that achieves “superhuman” per-
formance when trained on a dataset can still make many
basic errors when evaluated on another, possibly precisely
because it is exploiting those dataset-specific quirks that
humans are oblivious to (Geirhos et al., 2020).
This suggests that while unsupervised pre-training has im-
proved the quality of audio encoders dramatically, the lack
of an equivalently high-quality pre-trained decoder, com-
bined with a recommended protocol of dataset-specific fine-
tuning, is a crucial weakness which limits their usefulness
and robustness. The goal of a speech recognition system
should be to work reliably “out of the box” in a broad range
of environments without requiring supervised fine-tuning of
a decoder for every deployment distribution.
As demonstrated by Narayanan et al. (2018), Likhomanenko
et al. (2020), and Chan et al. (2021) speech recognition sys-
tems that are pre-trained in a supervised fashion across many
datasets/domains exhibit higher robustness and generalize
much more effectively to held-out datasets than models
trained on a single source. These works achieve this by
combining as many existing high-quality speech recogni-
tion datasets as possible. However, there is still only a
moderate amount of this data easily available. SpeechStew
(Chan et al., 2021) mixes together 7 pre-existing datasets
totalling 5,140 hours of supervision. While not insignifi-
cant, this is still tiny compared to the previously mentioned
1,000,000 hours of unlabeled speech data utilized in Zhang
et al. (2021).
Recognizing the limiting size of existing high-quality super-
vised datasets, recent efforts have created larger datasets for
speech recognition. By relaxing the requirement of gold-
standard human-validated transcripts, Chen et al. (2021) and
Galvez et al. (2021) make use of sophisticated automated |
Rombach-High-Resolution-Image-Synthesis-With-Latent-Diffusion-Models-CVPR-2022-paper.pdf | High-Resolution Image Synthesis with Latent Diffusion Models
Robin Rombach1∗ Andreas Blattmann1∗ Dominik Lorenz1 Patrick Esser
Björn Ommer1
1Ludwig Maximilian University of Munich & IWR, Heidelberg University, Germany
Runway ML
https://github.com/CompVis/latent-diffusion
Abstract
By decomposing the image formation process into a se-
quential application of denoising autoencoders, diffusion
models (DMs) achieve state-of-the-art synthesis results on
image data and beyond. Additionally, their formulation al-
lows for a guiding mechanism to control the image gen-
eration process without retraining. However, since these
models typically operate directly in pixel space, optimiza-
tion of powerful DMs often consumes hundreds of GPU
days and inference is expensive due to sequential evalu-
ations. To enable DM training on limited computational
resources while retaining their quality and flexibility, we
apply them in the latent space of powerful pretrained au-
toencoders. In contrast to previous work, training diffusion
models on such a representation allows for the first time
to reach a near-optimal point between complexity reduc-
tion and detail preservation, greatly boosting visual fidelity.
By introducing cross-attention layers into the model archi-
tecture, we turn diffusion models into powerful and flexi-
ble generators for general conditioning inputs such as text
or bounding boxes and high-resolution synthesis becomes
possible in a convolutional manner. Our latent diffusion
models (LDMs) achieve new state of the art scores for im-
age inpainting and class-conditional image synthesis and
highly competitive performance on various tasks, includ-
ing unconditional image generation, text-to-image synthe-
sis, and super-resolution, while significantly reducing com-
putational requirements compared to pixel-based DMs.
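A generic sketch of what training in latent space looks like (a standard DDPM-style noise-prediction objective; the encoder/unet interfaces and the frozen-encoder choice are assumptions for illustration, not the released LDM training code):

import torch

def latent_diffusion_loss(encoder, unet, x, alphas_cumprod):
    # encoder(x) is assumed to return latents of shape (B, C, H, W);
    # unet(zt, t) is assumed to predict the added noise.
    with torch.no_grad():
        z0 = encoder(x)                                   # compress images to latents
    t = torch.randint(0, len(alphas_cumprod), (z0.shape[0],), device=z0.device)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(z0)
    zt = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * eps     # forward noising step
    eps_pred = unet(zt, t)                                # predict the noise
    return torch.nn.functional.mse_loss(eps_pred, eps)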
1. Introduction
Image synthesis is one of the computer vision fields with
the most spectacular recent development, but also among
those with the greatest computational demands. Espe-
cially high-resolution synthesis of complex, natural scenes
is presently dominated by scaling up likelihood-based mod-
els, potentially containing billions of parameters in autore-
gressive (AR) transformers [ 64,65]. In contrast, the promis-
ing results of GANs [ 3,26,39] have been revealed to be
mostly confined to data with comparably limited variability
as their adversarial learning procedure does not easily scale
to modeling complex, multi-modal distributions. Recently,
diffusion models [ 79], which are built from a hierarchy of
denoising autoencoders, have been shown to achieve impressive
∗The first two authors contributed equally to this work.
[Figure 1 panels: Input; ours (f=4), PSNR 27.4, R-FID 0.58; DALL-E (f=8), PSNR 22.8, R-FID 32.01; VQGAN (f=16), PSNR 19.9, R-FID 4.98]
Figure 1. Boosting the upper bound on achievable quality with
less agressive downsampling. Since diffusion models offer excel-
lent inductive biases for spatial data, we do not need the heavy spa-
tial downsampling of related generative models in latent space, but
can still greatly reduce the dimensionality of the data via suitable
autoencoding models, see Sec. 3. Images are from the DIV2K [ 1]
validation set, evaluated at 5122px. We denote the spatial down-
sampling factor by f. Reconstruction FIDs [ 28] and PSNR are
calculated on ImageNet-val. [ 12]; see also Tab. 8.
results in image synthesis [ 29,82] and beyond [ 7,44,47,56],
and define the state-of-the-art in class-conditional image
synthesis [ 15,30] and super-resolution [ 70]. Moreover, even
unconditional DMs can readily be applied to tasks such
as inpainting and colorization [ 82] or stroke-based syn-
thesis [ 52], in contrast to other types of generative mod-
els [19,45,67]. Being likelihood-based models, they do not
exhibit mode-collapse and training instabilities as GANs
and, by heavily exploiting parameter sharing, they can
model highly complex distributions of natural images with-
out involving billions of parameters as in AR models [ 65].
Democratizing High-Resolution Image Synthesis DMs
belong to the class of likelihood-based models, whose
mode-covering behavior makes them prone to spend ex-
cessive amounts of capacity (and thus compute resources)
on modeling imperceptible details of the data [ 16,71]. Al-
though the reweighted variational objective [ 29] aims to ad-
dress this by undersampling the initial denoising steps, DMs
are still computationally demanding, since training and
evaluating such a model requires repeated function evalu-
ations (and gradient computations) in the high-dimensional
space of RGB images. As an example, training the most
powerful DMs often takes hundreds of GPU days ( e.g. 150 -
1000 V100 days in [ 15]) and repeated evaluations on a noisy
version of the input space also render inference expensive,
|
2402.09668.pdf | How to Train Data-Efficient LLMs
Noveen Sachdeva1 2 Benjamin Coleman1 Wang-Cheng Kang1 Jianmo Ni1 Lichan Hong1 Ed H. Chi1
James Caverlee1 3 Julian McAuley2 Derek Zhiyuan Cheng1
Abstract
The training of large language models (LLMs) is
expensive. In this paper, we study data-efficient
approaches for pre-training LLMs, i.e., techniques
that aim to optimize the Pareto frontier of model
quality and training resource/data consumption.
We seek to understand the tradeoffs associated
with data selection routines based on (i) expensive-
to-compute data-quality estimates, and (ii) max-
imization of coverage and diversity-based mea-
sures in the feature space. Our first technique,
ASK-LLM , leverages the zero-shot reasoning ca-
pabilities of instruction-tuned LLMs to directly
assess the quality of a training example. To tar-
get coverage, we propose DENSITY sampling,
which models the data distribution to select a
diverse sample. In our comparison of 19sam-
plers, involving hundreds of evaluation tasks and
pre-training runs, we find that ASK-LLM and
DENSITY are the best methods in their respec-
tive categories. Coverage sampling can recover
the performance of the full data, while models
trained on ASK-LLM data consistently outper-
form full-data training—even when we reject 90%
of the original dataset, while converging up to
70% faster.
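A rough sketch of the quality-scoring idea behind ASK-LLM (the helper llm_yes_prob and the exact prompt wording are assumptions for illustration, not the paper's prompt):

def ask_llm_scores(llm_yes_prob, examples):
    # llm_yes_prob(prompt) is a hypothetical helper returning P("yes" | prompt)
    # from an instruction-tuned model.
    template = ("###\n{example}\n###\n"
                "Does the previous text contain informative signal for "
                "pre-training a large language model? Answer yes or no.")
    return [llm_yes_prob(template.format(example=e)) for e in examples]

# Keep the highest-scoring fraction of the corpus, e.g. the top 10%:
# scores = ask_llm_scores(llm_yes_prob, corpus)
# keep = [e for e, s in sorted(zip(corpus, scores), key=lambda t: -t[1])][:len(corpus) // 10]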
1. Introduction
Large language model (LLM) pre-training is perhaps the
most data- and compute-intensive task attempted by the
machine learning community to date, with impressive capa-
bilities primarily being accomplished by training massive
transformer architectures on trillions of tokens of text (Ope-
nAI, 2023; Gemini et al., 2023; Touvron et al., 2023b).
But even these incredibly capable LLMs are subject to em-
pirical scaling laws, which predict sharply diminishing re-
turns from a linear increase in model- or data-size (Hoff-
mann et al., 2022; Kaplan et al., 2020). Power-law scaling
therefore acts as a soft limit on model quality, beyond which
1Google DeepMind 2University of California, San Diego
3Texas A&M University. Correspondence to: Noveen Sachdeva <noveen@google.com>.
it is prohibitively expensive to drive performance by scal-
ing up the data or model. At the same time, Sorscher et al.
(2022)—in the context of vision pre-training—show that
we can significantly improve the power law constants in
the aforementioned scaling laws if we prioritize important
training examples using some robust notion of data quality
or impact.
A similar call for data-curation is also apparent in the context
of training LLMs, where our largest models are quickly ap-
proaching their capacity and data thresholds. LIMA (Zhou
et al., 2023) showed that LLaMA-65B (Touvron et al.,
2023a) can be better aligned with human preferences when
trained on a set of 1,000 carefully selected fine-tuning
prompts, compared to training on as much as 52,000 unfil-
tered examples. Tirumala et al. (2023) recently conducted a
large-scale data-efficient pre-training evaluation, showing
that a 6.7B OPT model (Zhang et al., 2022) can converge up
to 20% faster on data curated by a technique based on strati-
fied cluster sampling. The Phi-2 experiments also suggest
that when data curation is performed at a human-expert level
(e.g., by textbook editors), models can outperform baselines
that are up to 25x larger (Javaheripi et al., 2023).
Data curation routines can be fundamentally characterized
as selecting training samples for quality, coverage, or some
mixture of both (Figure 2). In this work, we seek to under-
stand how quality and coverage affect the data efficiency of
LLM pre-training. Our core research question is:
“Are cheap-to-compute heuristics like maximum-
coverage enough to pre-train a SoTA LLM, or
are there real benefits from costly samplers that
carefully evaluate the quality of each example?”
This question is crucial to answer because data-curation
algorithms can improve the Pareto frontier of the data-
quantity ↔model-quality tradeoff, directly addressing the
bottleneck of power-law scaling by enabling higher-quality
models to be trained using less data. Data curation also
unlocks new tradeoffs between training time, inference cost,
data collection effort, and downstream performance. For
example, if we consider the compute-constrained (single-
epoch) regime, a data-efficient LLM training routine may
reach the desired performance using only X% of the data
|
mapreduce.pdf | MapReduce: Simplified Data Processing on Large Clusters
Jeffrey Dean and Sanjay Ghemawat
jeff@google.com, sanjay@google.com
Google, Inc.
Abstract
MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper.
Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.
Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google's clusters every day.
1 Introduction
Over the past five years, the authors and many others at Google have implemented hundreds of special-purpose computations that process large amounts of raw data, such as crawled documents, web request logs, etc., to compute various kinds of derived data, such as inverted indices, various representations of the graph structure of web documents, summaries of the number of pages crawled per host, the set of most frequent queries in a given day, etc. Most such computations are conceptually straightforward. However, the input data is usually large and the computations have to be distributed across hundreds or thousands of machines in order to finish in a reasonable amount of time. The issues of how to parallelize the computation, distribute the data, and handle failures conspire to obscure the original simple computation with large amounts of complex code to deal with these issues.
As a reaction to this complexity, we designed a new abstraction that allows us to express the simple computations we were trying to perform but hides the messy details of parallelization, fault-tolerance, data distribution and load balancing in a library. Our abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages. We realized that most of our computations involved applying a map operation to each logical record in our input in order to compute a set of intermediate key/value pairs, and then applying a reduce operation to all the values that shared the same key, in order to combine the derived data appropriately. Our use of a functional model with user-specified map and reduce operations allows us to parallelize large computations easily and to use re-execution as the primary mechanism for fault tolerance.
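The canonical example of this model is word counting; the single-process Python sketch below illustrates the user-visible map and reduce functions, not Google's distributed implementation.

from collections import defaultdict

def map_fn(doc_name, contents):
    # Emit an intermediate (word, 1) pair for each word; the document name
    # key is unused in this example, mirroring the word-count use case.
    for word in contents.split():
        yield (word, 1)

def reduce_fn(word, counts):
    yield (word, sum(counts))               # combine all values for one key

def run_mapreduce(inputs):
    intermediate = defaultdict(list)
    for key, value in inputs:
        for k, v in map_fn(key, value):
            intermediate[k].append(v)       # group values by intermediate key
    results = []
    for k, vs in intermediate.items():
        results.extend(reduce_fn(k, vs))
    return results

print(run_mapreduce([("doc1", "the quick brown fox"), ("doc2", "the lazy dog")]))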
The major contributions of this work are a simple and powerful interface that enables automatic parallelization and distribution of large-scale computations, combined with an implementation of this interface that achieves high performance on large clusters of commodity PCs.
Section 2 describes the basic programming model and gives several examples. Section 3 describes an implementation of the MapReduce interface tailored towards our cluster-based computing environment. Section 4 describes several refinements of the programming model that we have found useful. Section 5 has performance measurements of our implementation for a variety of tasks. Section 6 explores the use of MapReduce within Google including our experiences in using it as the basis
To appear in OSDI 2004. |
2311.00208.pdf | Transformers as Recognizers of Formal Languages:
A Survey on Expressivity
Lena Strobl
Umeå University
lena.strobl@umu.se
William Merrill
New York University
willm@nyu.edu
Gail Weiss
EPFL
gail.weiss@epfl.ch
David Chiang
University of Notre Dame
dchiang@nd.edu
Dana Angluin
Yale University
dana.angluin@yale.edu
Abstract
As transformers have gained prominence
in natural language processing, some re-
searchers have investigated theoretically
what problems they can and cannot solve,
by treating problems as formal languages .
Exploring questions such as this will help
to compare transformers with other models,
and transformer variants with one another,
for various tasks. Work in this subarea has
made considerable progress in recent years.
Here, we undertake a comprehensive survey
of this work, documenting the diverse as-
sumptions that underlie different results and
providing a unified framework for harmoniz-
ing seemingly contradictory findings.
1 Introduction
Transformers (Vaswani et al., 2017) have gained
prominence in natural language processing (NLP),
both in direct applications like machine transla-
tion and in pretrained models like BERT (Devlin
et al., 2019) and GPT (Radford et al., 2018; Brown
et al., 2020; OpenAI, 2023). Consequently, some
researchers have sought to investigate their theoreti-
cal properties. Such studies can broadly be divided
into studies of expressivity andtrainability . Studies
of expressivity could be further divided into those
from the perspectives of approximation theory and
of formal language theory. The former (e.g., Yun
et al., 2020) investigates transformers as approx-
imators of various classes of functions , along the
lines of the universal approximation theorem for
feedforward neural networks (Hornik et al., 1989;
Cybenko, 1989). The latter, which is the subject
of this survey, investigates transformers as recog-
nizers of formal languages – that is, the inputs
are treated as sequences of discrete symbols, and
crucially as sequences of unbounded length.
The core research question in this subarea is:
How can we characterize the expressivity of trans-
formers in relation to various formal models, such
as automata, boolean circuits or formal logic? Re-
lated questions include:
•How do transformers compare to other architec-
tures, like recurrent neural networks (RNNs), in
expressivity?
•How do transformer variants compare to one
another in expressivity?
Some further questions, which are not addressed
by the papers surveyed here but could be addressed
by future work in this subarea, include:
•What new transformer variants are suggested by
formal models?
•Do failure cases anticipated from formal models
occur in practice?
•What insights into the complexity of human lan-
guage are offered by a characterization of trans-
former expressivity?
Interpreting theoretical transformer results is
complex due to diverse assumptions. Many vari-
ants of transformers exist in practice, and even
more have been proposed in theory. Also, trans-
formers can recognize or generate languages in
various ways. These diverse assumptions lead to
varied, even seemingly contradictory, results.
This paper provides a comprehensive survey of
theoretical results on the expressive power of trans-
formers. Compared to the surveys of Ackerman
and Cybenko (2020) and Merrill (2021, 2023),
which cover convolutional neural nets (CNNs),
RNNs, and transformers, this is a narrower, but
deeper, survey on transformers only. It sets up a
unified framework for talking about transformer
variants (§4), reviews key topics related to formal
languages (§6), and systematically surveys results
in the literature, documenting their assumptions
and claims (§7) and harmonizing seemingly con-
tradictory findings. See Table 1 for a summary.
arXiv:2311.00208v1 [cs.LG] 1 Nov 2023 |
2402.04833.pdf | Long Is More for Alignment:
A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning
Hao Zhao1, Maksym Andriushchenko1, Francesco Croce1, Nicolas Flammarion1
Abstract
There is a consensus that instruction fine-tuning
of LLMs requires high-quality data, but what
are they? LIMA (NeurIPS 2023) and AlpaGa-
sus (ICLR 2024) are state-of-the-art methods for
selecting such high-quality examples, either via
manual curation or using GPT-3.5-Turbo as a
quality scorer. We show that the extremely sim-
ple baseline of selecting the 1,000 instructions
with longest responses from standard datasets can
consistently outperform these sophisticated meth-
ods according to GPT-4 and PaLM-2 as judges,
while remaining competitive on the Open LLM
benchmarks that test factual knowledge. We
demonstrate this for several state-of-the-art LLMs
(Llama-2-7B, Llama-2-13B, and Mistral-7B) and
datasets (Alpaca-52k and Evol-Instruct-70k). In
addition, a lightweight refinement of such long
instructions can further improve the abilities of
the fine-tuned LLMs, and allows us to obtain the
2nd highest-ranked Llama-2-7B-based model on
AlpacaEval 2.0 while training on only 1,000 ex-
amples and no extra preference data. We also con-
duct a thorough analysis of our models to ensure
that their enhanced performance is not simply due
to GPT-4’s preference for longer responses, thus
ruling out any artificial improvement. In conclu-
sion, our findings suggest that fine-tuning on the
longest instructions should be the default baseline
for any research on instruction fine-tuning.
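The selection heuristic itself is a one-liner; a minimal sketch, assuming Alpaca-style records with an "output" field and using whitespace tokens as a length proxy (the paper may measure length differently):

def select_longest(dataset, k=1000):
    """Keep the k examples with the longest responses.

    `dataset` is assumed to be a list of dicts with 'instruction' and
    'output' fields (as in Alpaca-style data); the length criterion here
    is a simple whitespace-token count, used only as a proxy.
    """
    return sorted(dataset,
                  key=lambda ex: len(ex["output"].split()),
                  reverse=True)[:k]

# alpaca_1k_longest = select_longest(alpaca_52k)   # 'alpaca_52k' is a placeholder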
1. Introduction
Pre-trained large language models (LLMs) need to undergo
an alignment phase (Askell et al., 2021; Bai et al., 2022a;
Ouyang et al., 2022; Wang et al., 2022; Taori et al., 2023)
to make them suitable for downstream tasks like user inter-
action or question answering. While the details may vary,
alignment often relies on supervised fine-tuning (SFT) on
1EPFL, Switzerland. Correspondence to: Hao Zhao
<hao.zhao@epfl.ch>.
a dataset of instruction-response pairs to improve conver-
sational ability, followed by reinforcement learning from
either human (RLHF) (Ouyang et al., 2022) or automated
(RLAIF) (Bai et al., 2022b; Lee et al., 2023) feedback to pro-
mote the preferred style and content of replies. It is an active
research direction to study whether it is possible to achieve
satisfactory results while relying only on SFT, which would
avoid the (potentially expensive) process of collecting pref-
erence data. Taori et al. (2023) created Alpaca, an open
source dataset of 52k instruction-response pairs, and fine-
tuned on it a Llama-2-7B model to match the performance of
the closed-source text-davinci-003 model. Then, Chen et al.
(2023) introduced AlpaGasus, consisting of the 9k examples
of Alpaca which are judged of highest quality by GPT-3.5-
Turbo, to further improve the instruction-following abilities
of the fine-tuned models. The intuition that instruction fine-
tuning (IFT) might benefit from fewer demonstrations but of
higher quality has been further pursued by Zhou et al. (2023)
which manually curated LIMA, a dataset of 1k examples,
which outperforms AlpaGasus. While the quality of the
instructions seems to play a major role for IFT, it remains
unclear which are the distinguishing features of high quality
demonstrations.
In this work, we revisit the significant efforts in constructing
instruction-tuning datasets from prior work. Inspired by the
fact LIMA contains much longer examples than Alpaca and
the observation of recent works (Singhal et al., 2023; Yuan
et al., 2024) that RLHF and direct preference optimization
(DPO) (Rafailov et al., 2023) seem to mostly make the out-
puts longer, we test selecting longest responses as a simple
and inexpensive heuristic to curate a small (only 1k exam-
ples) and high-quality IFT dataset from a larger one. Sur-
prisingly, fine-tuning a Llama-2-7B (Touvron et al., 2023)
base model on the 1k longest elements of Alpaca outper-
forms both AlpaGasus and LIMA in one-to-one comparison
with different LLMs as judges and on the AlpacaEval 2.0
benchmark (see Fig. 1). Moreover, simply improving the
quality and the style of the response in Alpaca-1k-longest
with GPT-3.5-Turbo, in combination with NEFTune noise
augmentation (Jain et al., 2023), allows us to obtain the
2nd highest-ranked Llama-2-7B-based model on AlpacaE-
val 2.0. In this case, our simple method yields models which
surpass LLMs with the same base model but fine-tuned with
arXiv:2402.04833v1 [cs.CL] 7 Feb 2024 |
1801.05134.pdf | Understanding the Disharmony between Dropout and Batch Normalization by
Variance Shift
Xiang Li1, Shuo Chen1, Xiaolin Hu2, Jian Yang1
Abstract
This paper first answers the question “why do
the two most powerful techniques Dropout and
Batch Normalization (BN) often lead to a worse
performance when they are combined together?”
in both theoretical and statistical aspects. The-
oretically, we find that Dropout would shift the
variance of a specific neural unit when we transfer
the state of that network from train to test. How-
ever, BN would maintain its statistical variance,
which is accumulated from the entire learning
procedure, in the test phase. The inconsistency
of that variance (we name this scheme as “vari-
ance shift”) causes the unstable numerical behav-
ior in inference that leads to more erroneous pre-
dictions finally, when applying Dropout before
BN. Thorough experiments on DenseNet, ResNet,
ResNeXt and Wide ResNet confirm our findings.
According to the uncovered mechanism, we next
explore several strategies that modifies Dropout
and try to overcome the limitations of their com-
bination by avoiding the variance shift risks.
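The variance-shift argument can be checked numerically in a few lines, assuming the common inverted-dropout formulation (scale kept units by 1/p at train time, identity at test time):

import numpy as np

rng = np.random.default_rng(0)
p = 0.5                                  # Dropout retain ratio
x = rng.standard_normal(1_000_000)       # pre-dropout activations, ~N(0, 1)

# Train mode: inverted dropout keeps each unit with probability p and scales by 1/p.
mask = rng.random(x.shape) < p
x_train = x * mask / p
# Test mode: dropout is a no-op, the unit is passed through unchanged.
x_test = x

print(x_train.var())  # ~ 1/p = 2.0  -> the variance BN accumulates during training
print(x_test.var())   # ~ 1.0        -> the variance BN actually sees at inference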
1. Introduction
(Srivastava et al., 2014) brought Dropout as a simple way to
prevent neural networks from overfitting. It has been proved
to be significantly effective over a large range of machine
learning areas, such as image classification (Szegedy et al.,
2015), speech recognition (Hannun et al., 2014) and even
natural language processing (Kim et al., 2016). Before
the birth of Batch Normalization, it became a necessity of
almost all the state-of-the-art networks and successfully
boosted their performances against overfitting risks, despite
its amazing simplicity.
(Ioffe & Szegedy, 2015) demonstrated Batch Normaliza-
1DeepInsight@PCALab, Nanjing University of Science and
Technology, China2Tsinghua National Laboratory for Informa-
tion Science and Technology (TNList) Department of Computer
Science and Technology, Tsinghua University, China. Correspon-
dence to: Xiang Li <xiang.li.implus@njust.edu.cn >.
[Figure 1. Up: a simplified mathematical illustration of "variance shift". In test mode, the neural variance of X is different from that in train mode caused by Dropout (Var_Train(X) = 1/p vs. Var_Test(X) = 1), yet BN attempts to regard that variance as the popular statistic accumulated from training (Var_Moving(X) = E(1/p)). Note that p denotes the Dropout retain ratio and a comes from a Bernoulli distribution which has probability p of being 1. Down: variance shift in experimental statistics on DenseNet trained on the CIFAR100 dataset (test acc. 77.42% with no Dropout vs. 68.55% with Dropout 0.5 in each bottleneck). The curves are both calculated from the same training data; the y-axis plots max(real_var_i/moving_var_i, moving_var_i/real_var_i). "moving_var_i" is the moving variance (take its mean value instead if it's a vector) that the i-th BN layer accumulates during the entire learning, and "real_var_i" stands for the real variance of the neural response before the i-th BN layer in inference.]
tion (BN), a powerful skill that not only speeded up all the
modern architectures but also improved upon their strong
baselines by acting as regularizers. Therefore, BN has been
implemented in nearly all the recent network structures
(Szegedy et al., 2016; 2017; Howard et al., 2017; Zhang
et al., 2017) and demonstrates its great practicability and
effectiveness.
However, the above two nuclear weapons always fail to
obtain an extra reward when combined together practically.
In fact, a network even performs worse and unsatisfactorily
when it is equipped with BN and Dropout simultaneously.
(Ioffe & Szegedy, 2015) have already realized that BN elim-
inates the need for Dropout in some cases – the authors
exposed the incompatibility between them, thus conjectured
arXiv:1801.05134v1 [cs.LG] 16 Jan 2018 |
2305.13301.pdf | TRAINING DIFFUSION MODELS
WITH REINFORCEMENT LEARNING
Kevin Black∗1, Michael Janner∗1, Yilun Du2, Ilya Kostrikov1, Sergey Levine1
1University of California, Berkeley, 2Massachusetts Institute of Technology
{kvablack, janner, kostrikov, sergey.levine}@berkeley.edu yilundu@mit.edu
ABSTRACT
Diffusion models are a class of flexible generative models trained with an
approximation to the log-likelihood objective. However, most use cases of diffusion
models are not concerned with likelihoods, but instead with downstream objectives
such as human-perceived image quality or drug effectiveness. In this paper, we
investigate reinforcement learning methods for directly optimizing diffusion models
for such objectives. We describe how posing denoising as a multi-step decision-
making problem enables a class of policy gradient algorithms, which we refer
to as denoising diffusion policy optimization ( DDPO ), that are more effective
than alternative reward-weighted likelihood approaches. Empirically, DDPO can
adapt text-to-image diffusion models to objectives that are difficult to express via
prompting, such as image compressibility, and those derived from human feedback,
such as aesthetic quality. Finally, we show that DDPO can improve prompt-image
alignment using feedback from a vision-language model without the need for
additional data collection or human annotation. The project’s website can be found
athttp://rl-diffusion.github.io .
1 I NTRODUCTION
Diffusion probabilistic models (Sohl-Dickstein et al., 2015) have recently emerged as the de facto
standard for generative modeling in continuous domains. Their flexibility in representing complex,
high-dimensional distributions has led to the adoption of diffusion models in applications including
image and video synthesis (Ramesh et al., 2021; Saharia et al., 2022; Ho et al., 2022), drug and
material design (Xu et al., 2021; Xie et al., 2021; Schneuing et al., 2022), and continuous control
(Janner et al., 2022; Wang et al., 2022; Hansen-Estruch et al., 2023). The key idea behind diffusion
models is to iteratively transform a simple prior distribution into a target distribution by applying a
sequential denoising process. This procedure is conventionally motivated as a maximum likelihood
estimation problem, with the objective derived as a variational lower bound on the log-likelihood of
the training data.
However, most use cases of diffusion models are not directly concerned with likelihoods, but instead
with downstream objectives such as human-perceived image quality or drug effectiveness. In this paper,
we consider the problem of training diffusion models to satisfy such objectives directly, as opposed to
matching a data distribution. This problem is challenging because exact likelihood computation with
diffusion models is intractable, making it difficult to apply many conventional reinforcement learning
(RL) algorithms. We instead propose to frame denoising as a multi-step decision-making task, using
the exact likelihoods at each denoising step in place of the approximate likelihoods induced by a full
denoising process. We present a policy gradient algorithm, which we refer to as denoising diffusion
policy optimization ( DDPO ), that can optimize a diffusion model for downstream tasks using only a
black-box reward function.
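As a rough sketch of the policy-gradient idea (not the paper's full DDPO algorithm, which adds machinery such as importance sampling and clipping), a REINFORCE-style surrogate over the per-step denoising log-likelihoods could look like the following; the tensors in the usage lines are placeholders standing in for a real diffusion model's outputs.

import torch

def ddpo_surrogate_loss(step_log_probs, rewards):
    """Score-function surrogate for one batch of denoising trajectories.

    step_log_probs: (batch, T) log p_theta(x_{t-1} | x_t, c) for every
        denoising step, kept attached to the computation graph.
    rewards: (batch,) scalar black-box reward for each final sample.
    The reward is broadcast to every step of its trajectory; minimizing
    this loss ascends the expected reward.
    """
    rewards = rewards.detach()
    return -(step_log_probs.sum(dim=1) * rewards).mean()

# Toy usage with fabricated tensors:
logp = torch.randn(4, 10, requires_grad=True)      # 4 trajectories, 10 denoising steps
r = torch.tensor([0.1, 0.7, -0.2, 1.0])            # placeholder rewards
loss = ddpo_surrogate_loss(logp, r)
loss.backward()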
We apply our algorithm to the finetuning of large text-to-image diffusion models. Our initial evaluation
focuses on tasks that are difficult to specify via prompting, such as image compressibility, and those
derived from human feedback, such as aesthetic quality. However, because many reward functions
of interest are difficult to specify programmatically, finetuning procedures often rely on large-scale
human labeling efforts to obtain a reward signal (Ouyang et al., 2022). In the case of text-to-image
diffusion, we propose a method for replacing such labeling with feedback from a vision-language
model (VLM). Similar to RLAIF finetuning for language models (Bai et al., 2022b), the resulting
procedure allows for diffusion models to be adapted to reward functions that would otherwise require
arXiv:2305.13301v3 [cs.LG] 1 Oct 2023 |
2306.04488.pdf | Rewarded soups: towards Pareto-optimal alignment
by interpolating weights fine-tuned on diverse rewards
Alexandre Rame1∗, Guillaume Couairon1,2†, Mustafa Shukor1†,
Corentin Dancette1†, Jean-Baptiste Gaya1,2†, Laure Soulier1, Matthieu Cord1,3
1Sorbonne Université, CNRS, ISIR, Paris, France, 2Meta AI, 3Valeo.ai
Abstract
Foundation models are first pre-trained on vast unsupervised datasets and then
fine-tuned on labeled data. Reinforcement learning, notably from human feedback
(RLHF), can further align the network with the intended usage. Yet the imperfec-
tions in the proxy reward may hinder the training and lead to suboptimal results ; the
diversity of objectives in real-world tasks and human opinions exacerbate the issue.
This paper proposes embracing the heterogeneity of diverse rewards by following a
multi-policy strategy. Rather than focusing on a single a priori reward, we aim for
Pareto-optimal generalization across the entire space of preferences. To this end,
we propose rewarded soup , first specializing multiple networks independently (one
for each proxy reward) and then interpolating their weights linearly. This succeeds
empirically because we show that the weights remain linearly connected when
fine-tuned on diverse rewards from a shared pre-trained initialization. We demon-
strate the effectiveness of our approach for text-to-text (summarization, Q&A,
helpful assistant, review), text-image (image captioning, text-to-image generation,
visual grounding, VQA), and control (locomotion) tasks. We hope to enhance the
alignment of deep models, and how they interact with the world in all its diversity.
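The interpolation step itself is simple; a minimal sketch, assuming all fine-tuned models share the same architecture and pre-trained initialization, and leaving the choice of interpolation coefficients to the user:

def rewarded_soup(state_dicts, weights):
    """Linearly interpolate the parameters of N fine-tuned models.

    state_dicts: list of parameter dicts (e.g. from model.state_dict()),
        each fine-tuned on a different proxy reward from a shared
        pre-trained initialization.
    weights: non-negative interpolation coefficients summing to 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-6
    keys = state_dicts[0].keys()
    return {k: sum(w * sd[k] for w, sd in zip(weights, state_dicts)) for k in keys}

# soup = rewarded_soup([sd_reward_a, sd_reward_b], [0.5, 0.5])   # placeholder names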
1 Introduction
Foundation models [ 1] have emerged as the standard paradigm to learn neural networks’ weights.
They are typically first pre-trained through self-supervision [ 2,3,4,5] and then fine-tuned [ 6,7] via
supervised learning [ 8]. Yet, collecting labels is expensive, and thus supervision may not cover all
possibilities and fail to perfectly align [ 9,10,11] the trained network with the intended applications.
Recent works [ 12,13,14] showed that deep reinforcement learning (DRL) helps by learning from
various types of rewards. A prominent example is reinforcement learning from human feedback
(RLHF) [ 12,15,16,17], which appears as the current go-to strategy to refine large language models
(LLMs) into powerful conversational agents such as ChatGPT [ 13,18]. After pre-training on next
token prediction [ 19] using Web data, the LLMs are fine-tuned to follow instructions [ 20,21,22]
before reward maximization. This RL strategy enhances alignment by evaluating the entire generated
sentence instead of each token independently, handling the diversity of correct answers and allowing
for negative feedback [ 23]. Similar strategies have been useful in computer vision (CV) [ 14,24], for
instance to integrate human aesthetics into image generation [25, 26, 27].
Diversity of proxy rewards. RL is usually seen as more challenging than supervised training [ 28],
notably because the real reward—ideally reflecting the users’ preferences—is often not specified at
training time. Proxy rewards are therefore developed to guide the learning, either as hand-engineered
metrics [ 29,30,31] or more recently in RLHF as models trained to reflect human preferences
∗Project lead, main contributor, correspondence to alexandre.rame@isir.upmc.fr.
†Equal experimental contribution, order determined at random.
Further information and resources related to this project can be found on this website.
37th Conference on Neural Information Processing Systems (NeurIPS 2023).
arXiv:2306.04488v2 [cs.LG] 16 Oct 2023 |
2210.03057.pdf | LANGUAGE MODELS ARE
MULTILINGUAL CHAIN -OF-THOUGHT REASONERS
Freda Shi1,2,∗, Mirac Suzgun1,3,∗, Markus Freitag1, Xuezhi Wang1
Suraj Srivats4, Soroush Vosoughi4, Hyung Won Chung1, Yi Tay1
Sebastian Ruder1, Denny Zhou1, Dipanjan Das1, Jason Wei1
1Google Research, 2Toyota Technological Institute at Chicago
3Stanford University, 4Dartmouth College
ABSTRACT
We evaluate the reasoning abilities of large language models in multilingual
settings. We introduce the Multilingual Grade School Math (MGSM) bench-
mark, by manually translating 250 grade-school math problems from the GSM8K
dataset (Cobbe et al., 2021) into ten typologically diverse languages. We find
that the ability to solve MGSM problems via chain-of-thought prompting emerges
with increasing model scale, and that models have strikingly strong multilin-
gual reasoning abilities, even in underrepresented languages such as Bengali
and Swahili. Finally, we show that the multilingual reasoning abilities of lan-
guage models extend to other tasks such as commonsense reasoning and word-
in-context semantic judgment. The MGSM benchmark is publicly available at
https://github.com/google-research/url-nlp .
[Figure 1: Correlation between language frequency and MGSM accuracy for PaLM-540B. The accuracy is surprisingly high, even for underrepresented languages like Swahili (SW) and Bengali (BN), which account for less than 0.01% of the pre-training dataset. The x-axis is the frequency of the language in the pre-training dataset (token percentage), grouped into underrepresented languages (SW, BN, TE, TH), high-resource languages (JA, ZH, RU, ES, FR, DE), and English (EN); the y-axis is MGSM accuracy (%). Curves compare three settings: translating to English with Google Translate and solving with English intermediate steps, intermediate reasoning steps in the language of the question, and intermediate reasoning steps in English.]
∗Equal contribution. Work done during internship at Google Research.
arXiv:2210.03057v1 [cs.CL] 6 Oct 2022 |
2306.17806.pdf | Stay on topic with Classifier-Free Guidance
Guillaume V. Sanchez*
Hexaglobe
EleutherAI
gsanchez@hexaglobe.com
Honglu Fan*
University of Geneva
EleutherAI
honglu.fan@unige.ch
Alexander Spangher*
Information Sciences Institute
University of Southern California
spangher@usc.edu
Elad Levi
Sightful
eladlevico@gmail.com
Pawan Sasanka Ammanamanchi
IIIT Hyderabad
Eleuther AI
pawansasanka@gmail.com
Stella Biderman
Booz Allen Hamilton
EleutherAI
stellabiderman@gmail.com
Abstract
Classifier-Free Guidance (CFG) [ 37] has recently emerged in text-to-image generation as a lightweight
technique to encourage prompt-adherence in generations. In this work, we demonstrate that CFG
can be used broadly as an inference-time technique in pure language modeling. We show that
CFG (1) improves the performance of Pythia, GPT-2 and LLaMA-family models across an array of
tasks: Q&A, reasoning, code generation, and machine translation, achieving SOTA on LAMBADA
with LLaMA-7B over PaLM-540B; (2) brings improvements equivalent to a model with twice the
parameter-count; (3) can stack alongside other inference-time methods like Chain-of-Thought and
Self-Consistency, yielding further improvements in difficult tasks; (4) can be used to increase the
faithfulness and coherence of assistants in challenging form-driven and content-driven prompts: in a
human evaluation we show a 75% preference for GPT4All using CFG over baseline.
1 Introduction
[Figure 1: A notional 2D projection of a textual latent space showing how increasing the guidance weight γ increases the importance of the prompt “Today in France,”. Example completions (“and chickens lay eggs”, “citizens were celebrating Christmas”, “citizens were celebrating Thanksgiving”, “citizens were celebrating Bastille Day”) are shown at guidance weights γ ∈ {0, 0.5, 1, 1.5}.]
In recent years large language models have exhibited
strong generative capabilities to solve a diverse range of
tasks [ 26,15,71]. “Prompting” is typically used to con-
dition generation, with task instructions and context [ 64],
or a small set of examples [ 15]. However, language gener-
ation, especially with smaller models, has been shown to
struggle with issues such as hallucination [ 49], degrada-
tion [ 38] and meandering [ 76]. Various approaches have
been proposed to address this, e.g.: instruction-finetuning
[81,70] and reinforcement learning [ 56,4,6]. These tech-
niques are expensive and their compute and data cost may
not be accessible to all users. In this paper we propose an
inference time methodology which, as shown in Figure
1, gives more importance to the user intent, expressed
through the prompt. Our hypothesis in this paper is: fo-
cusing more on the prompt at inference-time will result
in generations that better align with expected behavior.
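Concretely, classifier-free guidance reweights the model's next-token scores toward the prompt-conditioned prediction. A minimal sketch of the standard combination rule follows; the paper's exact formulation may differ (e.g. in whether it operates on logits or log-probabilities), and the usage line uses placeholder names.

import torch

def cfg_logits(logits_cond, logits_uncond, gamma):
    """Classifier-free-guided next-token scores.

    logits_cond: logits from the model conditioned on the full prompt.
    logits_uncond: logits from the same model with the prompt dropped
        (or replaced by a short/empty context).
    gamma: guidance weight; gamma = 1 recovers ordinary conditional sampling,
        larger values push generations toward the prompt.
    """
    return logits_uncond + gamma * (logits_cond - logits_uncond)

# guided = cfg_logits(model(prompt_ids), model(empty_ids), gamma=1.5)  # placeholder calls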
Text-to-image-generation, too, has been shown to suffer from similar problems [ 28]. Standard inference approaches can
ignore parts of the prompt-conditioning, especially with specific or uncommon prompts [ 53]. Classifier Guidance [ 28]
*These authors contributed equally to this work.
arXiv:2306.17806v1 [cs.CL] 30 Jun 2023 |
2310.10638v5.pdf | Published as a conference paper at ICLR 2024
IN-CONTEXT PRETRAINING : LANGUAGE MODELING
BEYOND DOCUMENT BOUNDARIES
Weijia Shi1,2, Sewon Min1,2, Maria Lomeli1, Chunting Zhou1
Margaret Li1,2, Gergely Szilvasy1, Rich James1, Xi Victoria Lin1
Noah A. Smith2,3, Luke Zettlemoyer1,2, Scott Yih1, Mike Lewis1
1Meta AI, 2University of Washington, 3Allen Institute for AI
swj0419@cs.washington.edu
ABSTRACT
Large language models (LMs) are currently trained to predict tokens given doc-
ument prefixes, enabling them to directly perform long-form generation and
prompting-style tasks which can be reduced to document completion. Existing
pretraining pipelines train LMs by concatenating random sets of short documents
to create input contexts but the prior documents provide no signal for predicting the
next document. We instead present IN-CONTEXT PRETRAINING , a new approach
where language models are pretrained on a sequence of related documents, thereby
explicitly encouraging them to read and reason across document boundaries. We
can do IN-CONTEXT PRETRAINING by simply changing the document ordering
so that each context contains related documents, and directly applying existing
pretraining pipelines. However, this document sorting problem is challenging.
There are billions of documents and we would like the sort to maximize contextual
similarity for every document without repeating any data. To do this, we intro-
duce approximate algorithms for finding related documents with efficient nearest
neighbor search and constructing coherent input contexts with a graph traversal
algorithm. Our experiments show IN-CONTEXT PRETRAINING offers a simple
and scalable approach to significantly enhance LMs’ performance: we see notable
improvements in tasks that require more complex contextual reasoning, including
in-context learning (+8%), reading comprehension (+15%), faithfulness to previous
contexts (+16%), long-context reasoning (+5%), and retrieval augmentation (+9%).
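As a toy illustration of the document-sorting idea, the following greedy nearest-neighbor traversal chains semantically similar documents; the paper uses approximate nearest-neighbor search and a graph traversal that also avoids repeating data, which this exhaustive sketch omits.

import numpy as np

def order_documents(embeddings):
    """Greedy nearest-neighbor traversal over document embeddings.

    embeddings: (num_docs, dim) array, assumed L2-normalized. Returns an
    ordering in which consecutive documents are semantically similar, so
    that concatenating them yields more coherent pretraining contexts.
    """
    n = embeddings.shape[0]
    visited = np.zeros(n, dtype=bool)
    order = [0]
    visited[0] = True
    for _ in range(n - 1):
        sims = embeddings @ embeddings[order[-1]]   # cosine similarity to last doc
        sims[visited] = -np.inf                     # never revisit a document
        nxt = int(np.argmax(sims))
        order.append(nxt)
        visited[nxt] = True
    return order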
1 I NTRODUCTION
Large language models (LMs) are trained to complete documents; each token is predicted given
the context provided by the prefix of the document it appears in. Such contexts can be widely
varied, especially at pretraining scale, allowing models to excel on diverse tasks such as instruction-
following (Ouyang et al., 2022), conversational interfaces (OpenAI, 2023), reading comprehen-
sion (Zhang et al., 2020), and in-context learning (Brown et al., 2020). However, recent studies
highlight that LMs sometimes struggle to understand more complex contexts: they can fail to follow
instructions accurately (McKenzie et al., 2023; Efrat & Levy, 2020; Liu & Liu, 2023), struggle with
reasoning over conditioned documents (Liu et al., 2023; Shi et al., 2023a), and exhibit high variance
in in-context learning (Zhao et al., 2021). In this paper, we present IN-CONTEXT PRETRAINING , a
new pretraining method that learns to predict tokens conditioned on a sequence of related documents,
explicitly enabling the model to read and reason about much more varied and longer contexts that go
beyond document boundaries.
Current LM training pipelines concatenate random sets of shorter documents to create longer con-
text windows. However, the prior documents provide no signal for predicting the next document,
incurring unnecessary computational overhead for tokens that do not require communication between
them (de Vries, 2023). IN-CONTEXT PRETRAINING instead reorders the pretraining data by combin-
ing several semantically related documents to create a coherent input context, thereby exposing LMs
to long relevant contexts and providing pretraining signals beyond document boundaries. We illustrate
this via an example in Figure 1: when predicting the following tokens for the phrase “ For 2022,
FIFA set the prize money at $42m, ” a previous document stating that the “ World Cup never awarded
arXiv:2310.10638v5 [cs.CL] 9 Mar 2024 |
2305.15348.pdf | READ: Recurrent Adaptation of Large Transformers
Sid Wang John Nguyen Ke Li Carole-Jean Wu
Meta AI
{yuwang2020,ngjhn,kli26,carolejeanwu}@meta.com
Abstract
Fine-tuning large-scale Transformers has led to the explosion of many AI applica-
tions across Natural Language Processing and Computer Vision tasks. However,
fine-tuning all pre-trained model parameters becomes impractical as the model
size and number of tasks increase. Parameter-efficient transfer learning (PETL)
methods aim to address these challenges. While effective in reducing the number
of trainable parameters, PETL methods still require significant energy and compu-
tational resources to fine-tune. In this paper, we introduce REcurrent ADaption
(READ) — a lightweight and memory-efficient fine-tuning method — to overcome
the limitations of the current PETL approaches. Specifically, READ inserts a small
RNN network alongside the backbone model so that the model does not have
to back-propagate through the large backbone network. Through comprehensive
empirical evaluation of the GLUE benchmark, we demonstrate READ can achieve
a56% reduction in the training memory consumption and an 84% reduction in the
GPU energy usage while retraining high model quality compared to full-tuning.
Additionally, the model size of READ does not grow with the backbone model
size, making it a highly scalable solution for fine-tuning large Transformers.
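A schematic of the idea (a small recurrent network riding alongside a frozen backbone, trained without back-propagating through it) might look like the following; the exact architecture and the way READ injects its correction are assumptions here for illustration, not the paper's specification, and per-layer states are simplified to single vectors.

import torch
import torch.nn as nn

class ReadSideNetwork(nn.Module):
    """Small recurrent side network over the hidden states of a frozen backbone."""
    def __init__(self, hidden_size, rnn_size=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, rnn_size)
        self.rnn = nn.GRUCell(rnn_size, rnn_size)
        self.up = nn.Linear(rnn_size, hidden_size)

    def forward(self, layer_states, backbone_output):
        # layer_states: list of (batch, hidden) tensors, one per backbone layer.
        h = torch.zeros(layer_states[0].size(0), self.rnn.hidden_size,
                        device=backbone_output.device)
        for state in layer_states:
            # detach(): gradients never flow back into the large backbone.
            h = self.rnn(self.down(state.detach()), h)
        return backbone_output + self.up(h)   # additive correction (an assumption)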
1 Introduction
[Figure 1: The normalized energy consumption relative to full-tuning on GLUE tasks (lower is better), comparing READ, Adapter, LoRA, BitFit, Prompt tuning, and Full-tuning.]
Large-scale transformer architectures have
achieved state-of-the-art results in several Nat-
ural Language Processing (NLP) tasks [ 2,5,22,
23,25,33]. Scaling up the size of these models
has been shown to confer various benefits, such
as improved model prediction performance and
sample efficiency [9, 14, 34]. The conventional
paradigm is to pre-train large-scale models on
generic web-scale data and fine-tune the models
to downstream tasks. However, fine-tuning these
models has become prohibitively expensive.
Since 2018, the model size has increased by
almost two orders of magnitude faster than GPU
memory [ 20], resulting in prohibitively high cost
to advance AI technologies [ 36]. Only a few
well-funded institutions have the resources to fine-tune these models. Parameter-efficient transfer
learning (PETL) [ 1,13,15,16,18,19,38] has emerged as a promising solution to overcome the
challenges of full fine-tuning. Parameter-efficient transfer learning techniques aim to address these
challenges by leveraging smaller and more task-specific models to efficiently adapt the pre-trained
model’s parameters to the target task. Additive (e.g., adapters): Inserting small modules into the
transformer blocks [ 13]. Soft Prompts (e.g., prefix-tuning) [ 18,19]: Small parameters concatenated
Preprint. Under review.
arXiv:2305.15348v1 [cs.LG] 24 May 2023 |
2309.10668.pdf | Language Modeling Is Compression
Grégoire Delétang*1, Anian Ruoss*1, Paul-Ambroise Duquenne2, Elliot Catt1, Tim Genewein1, Christopher
Mattern1, Jordi Grau-Moya1, Li Kevin Wenliang1, Matthew Aitchison1, Laurent Orseau1, Marcus Hutter1and
Joel Veness1
*Equal contributions,1Google DeepMind,2Meta AI & Inria
It has long been established that predictive models can be transformed into lossless compressors and
vice versa. Incidentally, in recent years, the machine learning community has focused on training
increasingly large and powerful self-supervised (language) models. Since these large language models
exhibit impressive predictive capabilities, they are well-positioned to be strong compressors. In this
work, we advocate for viewing the prediction problem through the lens of compression and evaluate
the compression capabilities of large (foundation) models. We show that large language models are
powerful general-purpose predictors and that the compression viewpoint provides novel insights into
scaling laws, tokenization, and in-context learning. For example, Chinchilla 70B, while trained primarily
on text, compresses ImageNet patches to 43.4% and LibriSpeech samples to 16.4% of their raw size,
beating domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively. Finally, we show
that the prediction-compression equivalence allows us to use any compressor (like gzip) to build a
conditional generative model.
1. Introduction
Information theory and machine learning are inextricably linked and have even been referred to as
“two sides of the same coin” (MacKay, 2003). One particularly elegant connection is the essential
equivalence between probabilistic models of data and lossless compression. The source coding
theorem (Shannon, 1948) is the fundamental theorem describing this idea, i.e., the expected message
length in bits of an optimal entropy encoder is equal to the negative log2-likelihood of the statistical
model. In other words, maximizing the log2-likelihood (of the data) is equivalent to minimizing the
number of bits required per message. Indeed, lossless compression with a probabilistic model can
be achieved in a variety of different ways, including Huffman coding (Huffman, 1952), arithmetic
coding (Pasco, 1977; Rissanen, 1976), and asymmetric numeral systems (Duda, 2009).
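The prediction-compression link is easy to state in code: the ideal code length of a sequence under a probabilistic model is its negative log2-likelihood, which a real arithmetic coder approaches up to a couple of bits of overhead.

import math

def compressed_size_bits(token_probs):
    """Ideal code length, in bits, for a sequence under a predictive model.

    token_probs: the model's predicted probability of each actual next token,
        in order. The result equals the negative log2-likelihood of the sequence.
    """
    return -sum(math.log2(p) for p in token_probs)

# A model that puts probability 0.5 on every correct token needs 1 bit per token:
print(compressed_size_bits([0.5] * 8))  # 8.0 bits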
Arithmetic coding, in particular, is known to be optimal in terms of coding length, meaning that
the overall compression performance depends on the capabilities of the probabilistic model (Fig. 1).
Incidentally, in recent years, large pre-trained Transformers (Vaswani et al., 2017), so-called foundation
models (Bommasani et al., 2021), have proven to be highly successful across a wide range of predictive
tasks (Bubeck et al., 2023; Rae et al., 2021) and are thus promising candidates for use with arithmetic
coding. Indeed, Transformer-based compression with arithmetic coding has produced state-of-the-
art results both in the online (Bellard, 2021; Mao et al., 2022) and offline settings (Valmeekam
et al., 2023). In the online setting, a pseudo-randomly initialized model is directly trained on the
stream of data that is to be compressed, while the offline setting, which we consider in our work,
trains the model on an external dataset before employing it to compress a (potentially different)
data stream. Consequently, offline compression is performed in-context , with a fixed set of model
parameters. Transformers have demonstrated impressive in-context learning abilities (Brown et al.,
2020; Genewein et al., 2023; Laskin et al., 2023; Wei et al., 2022), which renders them ideally suited
for offline compression. However, as we will discuss in this work, Transformers are actually trained to
compress well, and therefore musthave good in-context learning abilities.
Corresponding authors: {gdelt, anianr}@google.com
arXiv:2309.10668v1 [cs.LG] 19 Sep 2023 |
2404.16710v1.pdf | LayerSkip: Enabling Early Exit Inference and
Self-Speculative Decoding
Mostafa Elhoushi1,†,∗, Akshat Shrivastava1,†,∗, Diana Liskovich2,†, Bram Wasti2, Basil Hosmer1,
Liangzhen Lai3, Anas Mahmoud4, Bilge Acun1, Saurabh Agrawal6, Ahmed Roman7, Ahmed A Aly3, Beidi
Chen1,5, Carole Jean-Wu1
1FAIR at Meta, 2GenAI at Meta, 3Reality Labs at Meta, 4University of Toronto, 5Carnegie Mellon
University, 6University of Wisconsin-Madison, 7Dana-Farber Cancer Institute
∗Equal Contribution ,†Core Contributor
We present LayerSkip, an end-to-end solution to speed-up inference of large language models (LLMs).
First, during training we apply layer dropout, with low dropout rates for earlier layers and higher
dropout rates for later layers, and an early exit loss where all transformer layers share the same exit.
Second, during inference, we show that this training recipe increases the accuracy of early exit at
earlier layers, without adding any auxiliary layers or modules to the model. Third, we present a novel
self-speculative decoding solution where we exit at early layers and verify and correct with remaining
layers of the model. Our proposed self-speculative decoding approach has less memory footprint than
other speculative decoding approaches and benefits from shared compute and activations of the draft
and verification stages. We run experiments on different Llama model sizes on different types of
training: pretraining from scratch, continual pretraining, finetuning on specific data domain, and
finetuning on specific task. We implement our inference solution and show speedups of up to 2.16 ×
on summarization for CNN/DM documents, 1.82 ×on coding, and 2.0 ×on TOPv2 semantic parsing
task.
Date: April 26, 2024
Correspondence: Mostafa Elhoushi, Akshat Shrivastava
at melhoushi@meta.com, akshats@meta.com
Code: In progress
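As a rough illustration of the recipe above (depth-increasing layer dropout, a shared exit, and optional early exit at inference), here is a schematic sketch; the exact schedule, the early-exit loss weighting, and the self-speculative verification step from the paper are not reproduced, and `layers` / `lm_head` are assumed callables.

import random

def layer_dropout_rates(num_layers, p_max=0.2):
    """Skip probabilities that grow linearly with depth: early layers are almost
    always executed, later layers are dropped more often (p_max is illustrative)."""
    return [p_max * l / max(num_layers - 1, 1) for l in range(num_layers)]

def forward_with_early_exit(x, layers, lm_head, exit_layer=None,
                            training=False, rates=None):
    """Run a layer stack with optional layer dropout (training) and optional
    early exit at `exit_layer` (inference). All layers share the same lm_head exit."""
    for l, layer in enumerate(layers):
        if training and rates is not None and random.random() < rates[l]:
            continue                    # layer dropout: skip this layer
        x = layer(x)
        if exit_layer is not None and l == exit_layer:
            break                       # early exit: skip the remaining layers
    return lm_head(x)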
1 Introduction
Large Language Models (LLMs) have been deployed to many applications, yet their high compute and memory
requirements lead to high financial and energy costs when deployed to GPU servers Samsi et al. (2023).
Acceleration solutions do exist to deploy to commodity GPUs on laptops but they suffer from significant drop
in accuracy Zhu et al. (2023). Accelerating LLMs further to mobile or edge devices is still an active research
area Çöplü et al. (2023); Liu et al. (2024). While a large portion of LLM acceleration approaches reduce
number of non-zero weights Xia et al. (2023) (a.k.a. sparsity), number of bits per weight Xiao et al. (2023)
(a.k.a. quantization), number of heads per layer Shim et al. (2021) (a.k.a. head pruning), a smaller portion
of approaches focus on reducing number of layers Fan et al. (2020); Elbayad et al. (2020). In this paper,
we explore reducing the number of layers required for each token by exiting early during inference. Unlike
quantization or sparsity, acceleration by reducing number of layers does not require specialized hardware or
software kernels.
Moreover, a popular research trend in LLM acceleration is speculative decoding Leviathan et al. (2023); Chen
et al. (2023) that has no drop in accuracy, where a large model, referred to as the mainmodel, is accompanied
with a faster model, referred to as the draftmodel. The advantage of speculative decoding is that it leads
to faster inference compared to the main model, but requires a larger memory footprint and complexity in
implementation to maintain key-value (KV) cache in two different models. In addition to exiting early, this
paper also proposes combining exiting early with speculative decoding to propose a self-speculative decoding
approach that does not require an additional model or auxiliary layers.
arXiv:2404.16710v1 [cs.CL] 25 Apr 2024 |
2212.14024.pdf | DEMONSTRATE –SEARCH –PREDICT :
Composing retrieval and language models for knowledge-intensive NLP
Omar Khattab1, Keshav Santhanam1, Xiang Lisa Li1, David Hall1
Percy Liang1, Christopher Potts1, Matei Zaharia1
Abstract
Retrieval-augmented in-context learning has
emerged as a powerful approach for addressing
knowledge-intensive tasks using frozen language
models (LM) and retrieval models (RM). Exist-
ing work has combined these in simple “retrieve-
then-read” pipelines in which the RM retrieves
passages that are inserted into the LM prompt.
To begin to fully realize the potential of frozen
LMs and RMs, we propose DEMONSTRATE –
SEARCH –PREDICT (DSP ), a framework that re-
lies on passing natural language texts in sophisti-
cated pipelines between an LM and an RM. DSP
can express high-level programs that bootstrap
pipeline-aware demonstrations, search for rele-
vant passages, and generate grounded predictions,
systematically breaking down problems into small
transformations that the LM and RM can handle
more reliably. We have written novel DSP pro-
grams for answering questions in open-domain,
multi-hop, and conversational settings, establish-
ing in early evaluations new state-of-the-art in-
context learning results and delivering 37–120%,
8–39%, and 80–290% relative gains against the
vanilla LM (GPT-3.5), a standard retrieve-then-
read pipeline, and a contemporaneous self-ask
pipeline, respectively. We release DSP at
https://github.com/stanfordnlp/dsp.
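To convey the flavor of such a program, here is a bare-bones multi-hop retrieve-and-read loop; lm(prompt) and rm(query, k) are hypothetical callables standing in for a frozen language model and retrieval model, and this is not the API of the released dsp library.

def multihop_answer(question, lm, rm, max_hops=2):
    """Iteratively ask the LM for a missing-fact query, retrieve passages with
    the RM, and finally answer grounded in the accumulated context."""
    context = []
    for _ in range(max_hops):
        sub_q = lm(f"Context: {context}\nQuestion: {question}\n"
                   "Write a search query for a missing fact:")
        context.extend(rm(sub_q, k=1))
    return lm(f"Context: {context}\nQuestion: {question}\nAnswer:")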
1. Introduction
In-context learning adapts a frozen language model (LM) to
tasks by conditioning the LM on a textual prompt including
task instructions and a few demonstrating examples (Mc-
Cann et al., 2018; Radford et al., 2019; Brown et al., 2020).
For knowledge-intensive tasks such as question answering,
fact checking, and information-seeking dialogue, retrieval
models (RM) are increasingly used to augment prompts
1Stanford University . Correspondence to:
Omar Khattab <okhattab@cs.stanford.edu >.
Preprint .
[Figure 1. A comparison between three systems based on GPT-3.5 (text-davinci-002) on the query "How many storeys are in the castle David Gregory inherited?". On its own, the vanilla LM often makes false assertions (it hallucinates a fictitious "Castle Gregory" with three storeys). An increasingly popular retrieve-then-read pipeline fails when simple search can't find an answer (it retrieves the unrelated nine-floor St. Gregory Hotel). In contrast, a task-aware multi-hop DSP program successfully decomposes the problem ("Which castle did David Gregory inherit?", "How many storeys does Kinnairdy Castle have?") and produces a correct response (Kinnairdy Castle has five storeys). Texts edited for presentation.]
with relevant information from a large corpus (Lazaridou
et al., 2022; Press et al., 2022; Khot et al., 2022).
Recent work has shown such retrieval-augmented in-context
learning to be effective in simple “retrieve-then-read”
pipelines: a query is fed to the RM and the retrieved pas-
sages become part of a prompt that provides context for
the LM to use in its response. In this work, we argue that
the fact that both LMs and RMs consume (and generate or
retrieve) natural language texts creates an opportunity for
much more sophisticated interactions between them. Fully
realizing this would be transformative: frozen LMs and
RMs could serve as infrastructure across tasks, enabling
ML- and domain-experts alike to rapidly build grounded
AI systems at a high level of abstraction and with lower
deployment overheads and annotation costs.
Figure 1 begins to illustrate the power of retrieval-
augmented in-context learning, but also the limitations of
“retrieve-then-read” (Lazaridou et al., 2022; Izacard et al.,
2022). Our query is “How many storeys are in the castle
David Gregory inherited?” When prompted to answer this,
GPT-3.5 ( text-davinci-002 ; Ouyang et al. 2022) makes
up a fictitious castle with incorrect attributes, highlighting
the common observation that knowledge stored in LM pa-
rameters is often unreliable (Shuster et al., 2021; Ishii et al.,
2022). Introducing an RM component helps, as the LM
can ground its responses in retrieved passages, but a rigid
arXiv:2212.14024v2 [cs.CL] 23 Jan 2023 |
L08_expressivity.pdf | Expressive Variational Autoencoders
John Thickstun
The Gaussian VAE parameterizes the prior r(z), conditional likelihood p(x|z), and posterior
approximation q(z|x) with Gaussian distributions. The inexpressivity of these Gaussian
models can make it difficult to capture the distribution p(x); complaints about the “blurriness” of
the VAE may be attributable to these assumptions. Note that many papers visualize the mean
gθ(˜z) of the decoder network, rather than samples gθ(˜z) +η, which coupled with a Gaussian noise
model on X could exacerbate blurriness.
PixelCNN and PixelVAE
One way to increase the expressivity of the VAE is to remove the conditional-independence as-
sumption from the decoder distribution p(x|z). In the standard Gaussian VAE, the components xi
of x are conditionally independent given the latent code z:
\[ p(x \mid z) = \prod_{i=1}^{|X|} p(x_i \mid z) = \prod_{i=1}^{|X|} \mathcal{N}\left(x_i \mid \mu_i(z), \sigma^2\right). \tag{1} \]
We can remove this assumption by building a fully-autoregressive model of the decoder distribution
over observations x, i.e.
\[ p(x \mid z) = \prod_{i=1}^{|X|} p(x_i \mid x_{<i}, z). \tag{2} \]
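A sketch of how the factorization in Eq. (2) is evaluated in practice, assuming a decoder that returns a distribution over the next component given the prefix and the latent code; the decoder interface is hypothetical, not a specific PixelVAE implementation.

import torch

def autoregressive_log_likelihood(x, z, decoder):
    """log p(x | z) under the factorization in Eq. (2).

    x: (batch, N) observations, flattened in the chosen pixel ordering.
    z: (batch, latent_dim) latent codes.
    decoder(x_prefix, z) is assumed to return a torch.distributions object
    over the next component x_i given the prefix x_{<i} and z.
    """
    total = torch.zeros(x.size(0), device=x.device)
    for i in range(x.size(1)):
        dist_i = decoder(x[:, :i], z)          # p(x_i | x_<i, z)
        total = total + dist_i.log_prob(x[:, i])
    return total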
An auto-regressive parameterization of the conditional likelihood called PixelVAE is explored
by Gulrajani et al. [2017], based on a line of work building autoregressive models called PixelCNN
[van den Oord et al., 2016b,a, Salimans et al., 2017] that extends the NADE modeling perspective
to images. One oddity of these models is that, in order to construct an autoregressive factorization
of the likelihood distribution over images, we need to fix a (somewhat arbitrary) ordering over pixels; the
standard choice is to order the pixels from left to right, top-to-bottom, starting with the pixel in
the upper-left corner of the image.
One might question whether the order matters; while any order leads to a valid factorization
of the joint distribution, perhaps some factorizations would be easier to learn than others? This
question was asked in the original NADE work, and in practice the choice appears to matter little. There is followup work on orderless
NADE [Uria et al., 2014] that learns an ensemble of factored autoregressive models, one for each
possible ordering of pixels; by ensembling these models, it may be possible to construct a better
model than using any particular ordering. But in practice, just picking an arbitrary ordering doesn’t
seem to cause too much trouble.
Two serious problems with using autoregressive likelihoods p(x|z) are posterior collapse (dis-
cussed in the next section) and the computational expense of sampling from an autoregressive
|
2311.11944v1.pdf | FINANCE BENCH : A New Benchmark for Financial Question Answering
Pranab Islam1∗, Anand Kannappan1, Douwe Kiela2,3
Rebecca Qian1, Nino Scherrer1, Bertie Vidgen1
1Patronus AI, 2Contextual AI, 3Stanford University
Abstract
FINANCE BENCH is a first-of-its-kind test suite
for evaluating the performance of LLMs on
open book financial question answering (QA).
It comprises 10,231 questions about publicly
traded companies, with corresponding an-
swers and evidence strings. The questions
inFINANCE BENCH are ecologically valid and
cover a diverse set of scenarios. They are in-
tended to be clear-cut and straightforward to
answer to serve as a minimum performance
standard. We test 16 state of the art model con-
figurations (including GPT-4-Turbo, Llama2
and Claude2, with vector stores and long con-
text prompts) on a sample of 150 cases from
FINANCE BENCH , and manually review their
answers (n=2,400). The cases are available
open-source. We show that existing LLMs
have clear limitations for financial QA. Notably,
GPT-4-Turbo used with a retrieval system in-
correctly answered or refused to answer 81%
of questions. While augmentation techniques
such as using longer context window to feed in
relevant evidence improve performance, they
are unrealistic for enterprise settings due to in-
creased latency and cannot support larger fi-
nancial documents. We find that all models
examined exhibit weaknesses, such as halluci-
nations, that limit their suitability for use by
enterprises.
1 Introduction
Finance specialists routinely need to find informa-
tion about companies and industries, summarize
and analyze that information, and then reason about
it. This time-intensive and difficult work is cru-
cial for making investment decisions, developing
financial strategies, and conducting due diligence.
Large Language Models (LLMs) have the poten-
tial to augment and automate labor-intensive parts
of financial analysis because of their impressive
capabilities in natural language understanding, rea-
soning, and writing (Nori et al., 2023; Bubeck et al.,
∗Authors are ordered alphabetically
Figure 1: Incorrect model responses (using a shared
vector store) to a question in FINANCE BENCH . The
correct answer is given by the human expert.
2023). However, a key challenge blocking the fi-
nancial industry’s adoption of LLMs is that there
are few ways of evaluating models’ performance
on finance-specific tasks. And, without rigorous,
systematic, and measurable evaluation processes,
the industry cannot (1) understand the strengths
and weaknesses of models; (2) assess whether they
perform well enough to use in high-stakes live set-
tings; and (3) track how their capabilities change
over time.
The financial domain presents unique challenges
for LLMs. First, models need domain-specific
knowledge about financial topics and terminology,
as well as companies and industries. It is unclear
how much financial information and statistics ap-
pear in the pre-training data of models. In part to
address models’ lack of knowledge about finance,
BloombergGPT was released in March 2023 as
the first LLM specialised for the financial domain
(Wu et al., 2023). Second, models need up-to-date
financial information and to understand relevant
financial news. However, many models' data is
arXiv:2311.11944v1 [cs.CL] 20 Nov 2023 |
2403.09636.pdf | Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference
Piotr Nawrot*Q,V, Adrian Łańcucki*Q,K, Marcin ChochowskiQ, David TarjanQ, Edoardo M. PontiV
QNVIDIA, KUniversity of Wrocław, VUniversity of Edinburgh
Abstract
Transformers have emerged as the backbone of
large language models (LLMs). However, genera-
tion remains inefficient due to the need to store in
memory a cache of key–value representations for
past tokens, whose size scales linearly with the
input sequence length and batch size. As a solu-
tion, we propose Dynamic Memory Compression
(DMC), a method for on-line key–value cache
compression at inference time. Most importantly,
the model learns to apply different compression
rates in different heads and layers. We retrofit pre-
trained LLMs such as Llama 2 (7B, 13B and 70B)
into DMC Transformers, achieving up to ~3.7 ×
throughput increase during auto-regressive infer-
ence on an NVIDIA H100 GPU. DMC is applied
via continued pre-training on a negligible percent-
age of the original data without adding any extra
parameters. We find that DMC preserves the origi-
nal downstream performance with up to 4 ×cache
compression, outperforming up-trained grouped-
query attention (GQA). GQA and DMC can be
even combined to obtain compounded gains. As a
result DMC fits longer contexts and larger batches
within any given memory budget.
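To make the cache update concrete, here is a minimal per-head sketch of the append-vs-accumulate decision that DMC applies at each time step; the decision variable is learned in the paper, whereas here it is taken as given, and the running-average weighting is an assumption for illustration only.

import torch

def dmc_update(keys, values, counts, k_t, v_t, alpha_t):
    """One DMC-style cache update for a single head (batch dimension omitted).

    keys, values: (m, d) tensors holding the current compressed cache.
    counts: (m,) number of raw tokens merged into each cache slot.
    alpha_t: decision variable; 0 appends a new slot, 1 accumulates k_t/v_t
        into the last slot as a running weighted average (an assumed scheme).
    """
    if alpha_t == 0:                                   # append a new slot
        keys = torch.cat([keys, k_t[None]], dim=0)
        values = torch.cat([values, v_t[None]], dim=0)
        counts = torch.cat([counts, torch.ones(1)], dim=0)
    else:                                              # accumulate into the last slot
        n = counts[-1]
        keys[-1] = (n * keys[-1] + k_t) / (n + 1)
        values[-1] = (n * values[-1] + v_t) / (n + 1)
        counts[-1] = n + 1
    return keys, values, counts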
1. Introduction
Transformer Large Language Models (LLMs) are the state
of the art in generative and conversational AI (Touvron et al.,
2023; Jiang et al., 2023). Their deployment, however, is
curtailed in part by their inefficiency. This is not only due
to the quadratic complexity of attention layers (Bahdanau
et al., 2014; Vaswani et al., 2017): during generation, Trans-
formers store the keys and values of past tokens in memory
to avoid recomputing them multiple times. Since this key–
value (KV) cache grows linearly with the sequence length
and batch size, generation with Transformers quickly be-
*Equal contribution.
Correspondence to: Piotr Nawrot < piotr.nawrot@ed.ac.uk >.
[Figure 1: Key–value cache update mechanisms. (a) Regular key–value cache with items kv_i depicted as boxes; a new item kv_t is always appended. (b) Dynamic Memory Compression (DMC) chooses whether to accumulate the current item into the last slot as a weighted average (α_t = 1) or append it (α_t = 0), resulting in a smaller key–value cache.]
comes prohibitive due to the excessive memory load. This
issue emerges even more clearly with long-context genera-
tion (e.g., in dialogues and stories) or when serving large
numbers of user queries.
A widespread solution to increase the memory efficiency of
Transformers during inference is Grouped Query Attention
(GQA; Ainslie et al., 2023; Shazeer, 2019), which uses a
number of key and value heads inferior to the number of
query heads through parameter sharing. As an alternative,
the number of overall tokens in memory can be reduced
through token merging (Zhang et al., 2018; Liu et al., 2018;
Bolya et al., 2022) or token pruning (Anagnostidis et al.,
2023; Kim & Cho, 2020). Nevertheless, these methods
often pay the price of a severe degradation in downstream
performance. On the other hand, hardware/IO-aware (Dao
et al., 2022; Kwon et al., 2023) and sub-quadratic algorithms
for attention (Beltagy et al., 2020; Choromanski et al., 2020)
do not alleviate the memory load of the KV cache.
In our work, we aim to achieve a lossless compression of the
KV cache of LLMs, thus retaining their performance while
reducing their memory load. To this end, we propose Dy-
namic Memory Compression (DMC). As shown in Figure 1,
during every time step, DMC decides whether to append
arXiv:2403.09636v1 [cs.CL] 14 Mar 2024 |
1610.03518v1.pdf | Transfer from Simulation to Real World through
Learning Deep Inverse Dynamics Model
Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider,
Trevor Blackwell, Joshua Tobin, Pieter Abbeel, and Wojciech Zaremba
OpenAI, San Francisco, CA, USA
Abstract — Developing control policies in simulation is often
more practical and safer than directly running experiments in
the real world. This applies to policies obtained from planning
and optimization, and even more so to policies obtained from
reinforcement learning, which is often very data demanding.
However, a policy that succeeds in simulation often doesn't
work when deployed on a real robot. Nevertheless, often the
overall gist of what the policy does in simulation remains valid
in the real world. In this paper we investigate such settings,
where the sequence of states traversed in simulation remains
reasonable for the real world, even if the details of the controls
are not, as could be the case when the key differences lie in
detailed friction, contact, mass and geometry properties. During
execution, at each time step our approach computes what the
simulation-based control policy would do, but then, rather
than executing these controls on the real robot, our approach
computes what the simulation expects the resulting next state(s)
will be, and then relies on a learned deep inverse dynamics
model to decide which real-world action is most suitable to
achieve those next states. Deep models are only as good as their
training data, and we also propose an approach for data collec-
tion to (incrementally) learn the deep inverse dynamics model.
Our experiments shows our approach compares favorably with
various baselines that have been developed for dealing with
simulation to real world model discrepancy, including output
error control and Gaussian dynamics adaptation.
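The control loop sketched above can be written down compactly; sim_policy, sim_model, and inverse_dynamics below are hypothetical callables standing in for the simulation-trained policy, the simulator's forward model, and the learned deep inverse dynamics model, and a gym-style environment interface is assumed.

def run_episode(env, sim_policy, sim_model, inverse_dynamics, horizon=100):
    """Execute one real-world episode using the simulation policy's intent."""
    s = env.reset()
    for _ in range(horizon):
        a_sim = sim_policy(s)                    # what the policy would do in simulation
        s_target = sim_model(s, a_sim)           # next state the simulator expects
        a_real = inverse_dynamics(s, s_target)   # real-world action predicted to reach it
        s, _, done, _ = env.step(a_real)
        if done:
            break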
I. I NTRODUCTION
Many methods exist for generating control policies in
simulated environments, including methods based on motion
planning, optimization, control, and learning. However, an
important practical challenge is that often there are discrep-
ancies between simulation and the real world, which results
in policies that work well in simulation yet perform poorly
in the real world.
Significant bodies of work exist that strive to address
this challenge. One important line of work studies how to
improve simulators to better match reality, which involves
improving simulation of contact, non-rigidity, friction, as
well as improving identification of physical quantities needed
for accurate simulation such as mass, geometry, friction
coefficients, elasticity. However, despite significant progress,
discrepancies continue to exist, and more accurate simulation
can have the downside of being slower.
Another important line of work studies robustness of con-
trol policies, which could be measured through, for example,
gain and phase margins, and robust control methods exist
that can optimize for these. Optimizing for robustness means
finding control policies that apply across a wide range of possible real worlds, but unfortunately tends to come at the
expense of performance in the one specific real world the
system is faced with.
Adaptive methods, which is the topic of this paper, do
not use the same policy for the entire family of possible
environments, but rather try to learn about the specific real
world the system is faced with. In principle, such methods
can exploit the physics of the real world and behave in the
optimal way.
Concretely, our work considers the following problem
setting: We assume to be given a simulator and a method
for generating policies that perform well in simulation. The
goal is to leverage this to perform well in new real-world
situations. To achieve this, a training period exists during
which an adaptation mechanism can be trained to learn to
adapt from simulation to real world by collecting experience
on the real system, but without having access to the new
real-world situations that the system will be evaluated on
later.
We leverage the following intuition: Often policies found
from simulation capture the high-level gist well (e.g., overall
trajectory), but fail to accurately capture some of the lower-
level details, such as friction, stiction, backlash, hysteresis,
precise measurements, precise deformation, etc. Indeed, this
is the type of situation that motivates the work in this paper
and in which we will be evaluating our approach (as well as
baselines).
Note that while we assume that a method exists for
generating policies in simulation, our approach is agnostic
to the details of this method, which could be based on
any techniques from motion planning, optimization, control,
learning, and others, which return a policy, which could be a
model-predictive policy which uses the simulator in its inner
loop.
Our approach proceeds as follows: During execution on
a test trajectory, at each time step it computes what the
simulation-based control policy would do, but then, rather
than executing these controls on the real robot, our approach
computes what the simulation expects the next state(s) will
be, and then relies on a learned deep inverse dynamics
model to decide which real-world action is most suitable
to achieve those next states. As our experiments show, when
these inverse dynamics models are trained on sufficient data,
this results in compelling transfer from simulation to real
world, in particular with challenging dynamics involvingarXiv:1610.03518v1 [cs.RO] 11 Oct 2016 |
2302.03764.pdf | Sketchy: Memory-efficient Adaptive Regularization with Frequent Directions
Vladimir Feinberg1Xinyi Chen1 2Y. Jennifer Sun2Rohan Anil1Elad Hazan1 2
Abstract
Adaptive regularization methods that exploit more
than the diagonal entries exhibit state of the art
performance for many tasks, but can be pro-
hibitive in terms of memory and running time.
We find the spectra of the Kronecker-factored gra-
dient covariance matrix in deep learning (DL)
training tasks are concentrated on a small lead-
ing eigenspace that changes throughout training,
motivating a low-rank sketching approach. We
describe a generic method for reducing memory
and compute requirements of maintaining a ma-
trix preconditioner using the Frequent Directions
(FD) sketch. Our technique allows interpolation
between resource requirements and the degrada-
tion in regret guarantees with rank k: in the online convex optimization (OCO) setting over dimension d, we match full-matrix d² memory regret using only dk memory, up to additive error in the bottom d−k eigenvalues of the gradient covariance. Further, we show extensions of our work
to Shampoo, placing the method on the memory-
quality Pareto frontier of several large scale bench-
marks.
1. Introduction
DL optimization commonly relies on adaptive gradient
methods, namely the Adam optimizer (Kingma & Ba, 2015).
It differs from stochastic gradient descent in that the learn-
ing rate is a structured diagonal matrix built from previous
gradients rather than a scalar. In full matrix AdaGrad (Duchi
et al., 2011) the inverse matrix square root of the sum of
outer products of previous gradients is the learning rate.
Full matrix preconditioning is impractical for modern deep
learning architectures: for instance, the ResNet-50 archi-
tecture (He et al., 2016) has over 23 million parameters,
requiring more than 2 petabytes to represent its gradient
covariance. Thus, diagonal preconditioning methods remain
1Google Research, Brain Team2Princeton University. Corre-
spondence to: Vladimir Feinberg <vladf@google.com >.
Preliminary work.
popular. However, previous work has demonstrated state-
of-the-art results in some settings, such as large-batch data
parallel training, for nondiagonal forms of preconditioning
(Martens & Grosse, 2015; Gupta et al., 2018; Agarwal et al.,
2019; Chen et al., 2019; Anil et al., 2019; 2020). Further-
more, as hardware evolves, memory efficiency becomes an
increasing concern, as “logic improves much faster than
wires and SRAM, so logic is relatively free” (Jouppi et al.,
2021): from TPUv2 to TPUv3, per-chip bfloat16 oper-
ations per second improved 2.67×but memory bandwidth
only improved 1.29×. GPUs exhibit a similar pattern for
compute and memory increase, at 5× and 2.2×, for V100
to A100 (Dally et al., 2021).
[Figure 1 plot: eigenvalue mass (%) in the top 256 of 1024 eigenvalues versus training completion (%), shown for conformer, GNN, and ResNet architectures, with separate curves for the left and right Kronecker sides.]
Figure 1: Low-rank nuclear norm relative error. We tune
ResNet-50, a Conformer, and a Graph Neural Net (GNN),
with Shampoo across three different datasets (see Sec. 5.1).
For a 2D layer with gradients G, Shampoo tracks the exponential moving average of factors GG⊤ and G⊤G (left and right sides). We select a 1024×1024 covariance factor C across all these architectures for both sides and plot the proportion of spectral mass captured by the top 256 eigenvalues, i.e., ∑_{i=1}^{256} λ_i(C) / ∑_{i=1}^{1024} λ_i(C).
Spectral investigation into the Kronecker-factored gradi-
ent covariance matrix reveals a concentrated, but changing,
spectrum (Fig. 1), suggesting the majority of the spectral
mass can be represented by a low-rank matrix, albeit rotating
over time. The Frequent Directions (FD) sketch provides
a mechanism to track the top eigenvectors without materi-
alizing the full covariance matrix (Ghashami et al., 2016).
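As a concrete reference, the following is a minimal NumPy sketch of the Frequent Directions update in a standard formulation (the halving variant); the synthetic gradient stream at the end is purely illustrative and not taken from the paper:

import numpy as np

def frequent_directions(gradients, d, ell):
    # Frequent Directions sketch: maintain an ell x d matrix B such that
    # B.T @ B approximates the gradient covariance sum_t g_t g_t^T without
    # ever forming the d x d matrix, using O(ell * d) memory.
    B = np.zeros((ell, d))
    next_zero = 0
    for g in gradients:                          # each g is a length-d gradient
        B[next_zero] = g
        next_zero += 1
        if next_zero == ell:                     # sketch is full: shrink via SVD
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell // 2] ** 2             # squared (ell/2 + 1)-th singular value
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s[:, None] * Vt                  # bottom half of the rows become zero
            next_zero = ell // 2
    return B                                     # B.T @ B tracks the top eigenspace

# Example: sketch 10,000 synthetic gradients of dimension 1024 with a rank-256 sketch.
grads = np.random.randn(10000, 1024) * np.linspace(1.0, 0.01, 1024)
B = frequent_directions(grads, d=1024, ell=256)

The approximation error of B⊤B is controlled by the spectral mass outside the top directions, which matches the additive-error guarantee quoted in the abstract and is why the concentrated spectra in Figure 1 make the sketch effective.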
1608.04471.pdf | Stein Variational Gradient Descent: A General
Purpose Bayesian Inference Algorithm
Qiang Liu Dilin Wang
Department of Computer Science
Dartmouth College
Hanover, NH 03755
{qiang.liu, dilin.wang.gr}@dartmouth.edu
Abstract
We propose a general purpose variational inference algorithm that forms a natural
counterpart of gradient descent for optimization. Our method iteratively trans-
ports a set of particles to match the target distribution, by applying a form of
functional gradient descent that minimizes the KL divergence. Empirical studies
are performed on various real world models and datasets, on which our method is
competitive with existing state-of-the-art methods. The derivation of our method
is based on a new theoretical result that connects the derivative of KL divergence
under smooth transforms with Stein’s identity and a recently proposed kernelized
Stein discrepancy, which is of independent interest.
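For readers who want the particle update spelled out, here is a minimal NumPy sketch of one SVGD iteration with an RBF kernel; the median-heuristic bandwidth, step size, and the toy Gaussian target are illustrative choices rather than anything prescribed by this abstract:

import numpy as np

def median_bandwidth(X):
    # Median heuristic: h = median(squared pairwise distance) / log(n).
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.median(sq) / np.log(X.shape[0] + 1) + 1e-8

def svgd_step(X, grad_logp, stepsize=0.05):
    # One Stein variational gradient descent update:
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ].
    n = X.shape[0]
    h = median_bandwidth(X)
    diffs = X[:, None, :] - X[None, :, :]                # diffs[j, i] = x_j - x_i
    K = np.exp(-np.sum(diffs ** 2, axis=-1) / h)         # RBF kernel matrix
    gradK = -2.0 / h * diffs * K[:, :, None]             # grad_{x_j} k(x_j, x_i)
    phi = (K.T @ grad_logp(X) + gradK.sum(axis=0)) / n   # driving + repulsive terms
    return X + stepsize * phi

# Toy example: transport particles toward a standard 2-D Gaussian (grad log p(x) = -x).
X = np.random.randn(100, 2) * 2.0 + 5.0
for _ in range(500):
    X = svgd_step(X, grad_logp=lambda x: -x)

The first term moves particles toward high-density regions of the target, while the kernel-gradient term acts as a repulsive force that keeps the particle set spread out instead of collapsing onto the mode.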
1 Introduction
Bayesian inference provides a powerful tool for modeling complex data and reasoning under uncer-
tainty, but casts a long standing challenge on computing intractable posterior distributions. Markov
chain Monte Carlo (MCMC) has been widely used to draw approximate posterior samples, but is
often slow and has difficulty assessing convergence. Variational inference instead frames the
Bayesian inference problem into a deterministic optimization that approximates the target distribution
with a simpler distribution by minimizing their KL divergence. This makes variational methods
efficiently solvable by using off-the-shelf optimization techniques, and easily applicable to large
datasets (i.e., "big data") using the stochastic gradient descent trick [e.g., 1]. In contrast, it is much
more challenging to scale up MCMC to big data settings [see e.g., 2, 3].
Meanwhile, both the accuracy and computational cost of variational inference critically depend on
the set of distributions in which the approximation is defined. Simple approximation sets, such as
these used in the traditional mean field methods, are too restrictive to resemble the true posterior
distributions, while more advanced choices cast more difficulties on the subsequent optimization tasks.
For this reason, efficient variational methods often need to be derived on a model-by-model basis,
which is a major barrier to developing general-purpose, user-friendly variational tools that are applicable to different kinds of models and accessible to non-ML experts in application domains.
This case is in contrast with the maximum a posteriori (MAP) optimization tasks for finding the
posterior mode (sometimes known as the poor man’s Bayesian estimator , in contrast with the full
Bayesian inference for approximating the full posterior distribution), for which variants of (stochastic)
gradient descent serve as a simple, generic, yet extremely powerful toolbox. There has been a recent
growth of interest in creating user-friendly variational inference tools [e.g., 4–7], but more efforts are
still needed to develop more efficient general purpose algorithms.
In this work, we propose a new general purpose variational inference algorithm which can be treated
as a natural counterpart of gradient descent for full Bayesian inference (see Algorithm 1). Our
algorithm uses a set of particles for approximation, on which a form of (functional) gradient descent
1812.11118.pdf | Reconciling modern machine learning practice
and the bias-variance trade-off
Mikhail Belkina, Daniel Hsub, Siyuan Maa, and Soumik Mandala
aThe Ohio State University, Columbus, OH
bColumbia University, New York, NY
September 12, 2019
Abstract
Breakthroughs in machine learning are rapidly changing science and society, yet our fun-
damental understanding of this technology has lagged far behind. Indeed, one of the central
tenets of the field, the bias-variance trade-off, appears to be at odds with the observed behavior
of methods used in the modern machine learning practice. The bias-variance trade-off implies
that a model should balance under-fitting and over-fitting: rich enough to express underlying
structure in data, simple enough to avoid fitting spurious patterns. However, in the modern
practice, very rich models such as neural networks are trained to exactly fit (i.e., interpolate)
the data. Classically, such models would be considered over-fit, and yet they often obtain high
accuracy on test data. This apparent contradiction has raised questions about the mathematical
foundations of machine learning and their relevance to practitioners.
In this paper, we reconcile the classical understanding and the modern practice within a
unified performance curve. This “double descent” curve subsumes the textbook U-shaped bias-
variance trade-off curve by showing how increasing model capacity beyond the point of inter-
polation results in improved performance. We provide evidence for the existence and ubiquity
of double descent for a wide spectrum of models and datasets, and we posit a mechanism for
its emergence. This connection between the performance and the structure of machine learning
models delineates the limits of classical analyses, and has implications for both the theory and
practice of machine learning.
E-mail: mbelkin@cse.ohio-state.edu , djhsu@cs.columbia.edu , masi@cse.ohio-state.edu ,
mandal.32@osu.edu
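The double descent behaviour described above is easy to reproduce on a toy problem. The sketch below is an illustrative experiment in the spirit of the paper's random-feature setups, not a reproduction of its figures: the synthetic data, the random ReLU feature map, and the capacity sweep are assumptions, and with a minimum-norm least-squares fit the test error typically peaks near the interpolation threshold (p ≈ n) and then decreases again as p grows:

import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 40, 10, 0.2                               # training size, input dim, label noise
Xtr, Xte = rng.standard_normal((n, d)), rng.standard_normal((2000, d))
w_true = rng.standard_normal(d)
ytr = Xtr @ w_true + sigma * rng.standard_normal(n)
yte = Xte @ w_true

for p in [5, 10, 20, 35, 40, 45, 80, 200, 1000]:        # number of random ReLU features
    W = rng.standard_normal((d, p)) / np.sqrt(d)
    Ftr, Fte = np.maximum(Xtr @ W, 0), np.maximum(Xte @ W, 0)
    beta, *_ = np.linalg.lstsq(Ftr, ytr, rcond=None)    # minimum-norm least-squares fit
    print(f"p={p:5d}  train_mse={np.mean((Ftr @ beta - ytr) ** 2):.4f}  "
          f"test_mse={np.mean((Fte @ beta - yte) ** 2):.4f}")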
2002.05616.pdf | Learning the Stein Discrepancy
for Training and Evaluating Energy-Based Models without Sampling
Will Grathwohl1, Kuan-Chieh Wang1, Jörn-Henrik Jacobsen1, David Duvenaud1, Richard Zemel1
Abstract
We present a new method for evaluating and train-
ing unnormalized density models. Our approach
only requires access to the gradient of the unnor-
malized model’s log-density. We estimate the
Stein discrepancy between the data density p(x)
and the model density q(x)defined by a vector
function of the data. We parameterize this func-
tion with a neural network and fit its parameters
to maximize the discrepancy. This yields a novel
goodness-of-fit test which outperforms existing
methods on high dimensional data. Furthermore,
optimizingq(x)to minimize this discrepancy pro-
duces a novel method for training unnormalized
models which scales more gracefully than exist-
ing methods. The ability to both learn and com-
pare models is a unique feature of the proposed
method.
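As a concrete reference, here is a minimal PyTorch sketch of estimating the Stein discrepancy with a neural-network critic, in the spirit of the abstract. The Gaussian data and model score, the L2 critic penalty, the exact-divergence computation, and all hyperparameters are illustrative assumptions rather than the paper's actual setup:

import torch
import torch.nn as nn

D = 2
critic = nn.Sequential(nn.Linear(D, 64), nn.SiLU(), nn.Linear(64, 64), nn.SiLU(), nn.Linear(64, D))
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def score_q(x):
    # grad_x log q(x) of the unnormalized model; here q = N(0, I) purely for illustration.
    return -x

def stein_discrepancy(x, lam=0.5):
    # E_p[ f(x)^T grad_x log q(x) + div f(x) ] - lam * E_p[ ||f(x)||^2 ]
    x = x.requires_grad_(True)
    f = critic(x)
    drift = (f * score_q(x)).sum(dim=1)
    div = torch.zeros(x.shape[0])
    for i in range(D):      # exact divergence; an unbiased estimator would be used at scale
        div = div + torch.autograd.grad(f[:, i].sum(), x, create_graph=True)[0][:, i]
    return (drift + div - lam * (f ** 2).sum(dim=1)).mean()

# Fit the critic to maximize the discrepancy on samples from the data distribution p.
for step in range(1000):
    x = torch.randn(256, D) + 1.0        # "data" samples, deliberately shifted away from q
    loss = -stein_discrepancy(x)
    opt.zero_grad(); loss.backward(); opt.step()

In the paper the same quantity is also minimized with respect to the unnormalized model's parameters in order to train it; only the critic-maximization step is shown here.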
1. Introduction
Energy-Based Models (EBMs), also known as unnormal-
ized density models, are perhaps the most flexible way to
parameterize a density. They hinge on the observation that
any densityp(x)can be expressed as
p(x) =exp(−E(x))
Z, (1)
whereE:RD→R, known as the energy function , maps
each point to a scalar, and Z=∫
xexp(−E(x))is the
normalizing constant.
A major benefit of EBMs is that they allow maximal free-
dom in designing the energy function E. This makes
it straightforward to incorporate prior knowledge about
the problem, such as symmetries or domain-specific de-
sign choices, into the structure of the model. This has
1University of Toronto and Vector Institute, Toronto,
Canada. Correspondence to: Will Grathwohl <wgrath-
wohl@cs.toronto.edu >.
Proceedings of the 37thInternational Conference on Machine
Learning , Vienna, Austria, PMLR 119, 2020. Copyright 2020 by
the author(s).
Figure 1. Density models trained with approximate MCMC sam-
plers can fail to match the data density while still generating high-
quality samples. Samples from approximate MCMC samplers
follow a different distribution than the density they are applied to.
It is this induced distribution which is trained to match the data. In
contrast, our approach LSD directly matches the model density
to the data density without reliance on a sampler.
made EBMs an appealing candidate for applications in
physics (No ´e et al., 2019), biology (Ingraham et al., 2019),
neuroscience (Scellier & Bengio, 2017), and computer vi-
sion (LeCun et al., 2007; Osadchy et al., 2007; Xie et al.,
2016; 2019; 2018), to name a few.
Despite their many benefits, EBMs present a central chal-
lenge which complicates their use: because we cannot effi-
ciently compute the normalizing constant, we cannot com-
pute likelihoods under our model, making training and eval-
uation difficult. Much prior work on EBMs has relied on
MCMC sampling techniques to estimate the likelihood (for
evaluation) and its gradient (for training). Other approaches
train EBMs by finding easier-to-compute surrogate objec-
tives which have similar optima to the maximum likelihood
objective. These include Score Matching (Hyv ¨arinen, 2005)
and Noise-Contrastive Estimation (Gutmann & Hyv ¨arinen,
2010).
These original sampling- and score-based approaches were
not able to scale to large, high-dimensional datasets as well
as subsequently developed alternative models, such as Vari-
ational Autoencoders (V AEs) (Kingma & Welling, 2013)
and Normalizing Flows (NFs) (Rezende & Mohamed, 2015).
These approaches offer more easily scalable training, evalua-
tion, and sampling, but do so at the cost of a more restrictive
model parameterization which can lead to well-known prob-
2304.14802.pdf | ResiDual: Transformer with Dual Residual
Connections
Shufang Xie‡†, Huishuai Zhang†, Junliang Guo†, Xu Tan†∗, Jiang Bian†
Hany Hassan Awadalla†,Arul Menezes†,Tao Qin†,Rui Yan‡∗
†Microsoft Research†Microsoft Azure Translation
‡Gaoling School of Artificial Intelligence, Renmin University of China
{shufangxie,ruiyan}@ruc.edu.cn ,
{huzhang,junliangguo,xuta,jiabia,hanyh,arulm,taoqin}@microsoft.com
Abstract
Transformer networks have become the preferred architecture for many tasks due
to their state-of-the-art performance. However, the optimal way to implement
residual connections in Transformer, which are essential for effective training, is
still debated. Two widely used variants are the Post-Layer Normalization (Post-LN)
and Pre-Layer Normalization (Pre-LN) Transformers, which apply layer normal-
ization after each residual block’s output or before each residual block’s input,
respectively. While both variants enjoy their advantages, they also suffer from
severe limitations: Post-LN causes gradient vanishing issue that hinders training
deep Transformers, and Pre-LN causes representation collapse issue that limits
model capacity. In this paper, we propose ResiDual, a novel Transformer archi-
tecture with Pre-Post-LN (PPLN), which fuses the connections in Post-LN and
Pre-LN together, and inherits their advantages while avoids their limitations. We
conduct both theoretical analyses and empirical experiments to verify the effec-
tiveness of ResiDual. Theoretically, we prove that ResiDual has a lower bound
on the gradient to avoid the vanishing issue due to the residual connection from
Pre-LN. Moreover, ResiDual also has diverse model representations to avoid the
collapse issue due to the residual connection from Post-LN. Empirically, ResiDual
outperforms both Post-LN and Pre-LN on several machine translation benchmarks
across different network depths and data sizes. Thanks to the good theoretical and
empirical performance, ResiDual Transformer can serve as a foundation architec-
ture for different AI models (e.g., large language models). Our code is available at
https://github.com/microsoft/ResiDual .
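For reference, the two residual-connection variants contrasted above can be written in a few lines. This is a minimal PyTorch sketch of generic Post-LN and Pre-LN blocks only, not of the ResiDual (PPLN) architecture itself, which keeps both kinds of connections simultaneously; the sublayer and dimensions below are placeholders:

import torch.nn as nn

class PostLNBlock(nn.Module):
    # Post-LN: layer normalization is applied AFTER each residual block's output.
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer, self.norm = sublayer, nn.LayerNorm(d_model)
    def forward(self, x):
        return self.norm(x + self.sublayer(x))

class PreLNBlock(nn.Module):
    # Pre-LN: layer normalization is applied BEFORE each residual block's input.
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer, self.norm = sublayer, nn.LayerNorm(d_model)
    def forward(self, x):
        return x + self.sublayer(self.norm(x))

ffn = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
block = PreLNBlock(512, ffn)   # swap in PostLNBlock to compare gradient behaviour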
1 Introduction
Transformer (Vaswani et al., 2017) has emerged as a powerful neural network architecture that
has been successfully applied in various AI tasks, including machine translation (Vaswani et al.,
2017), language modeling and generation (Radford et al., 2018, 2019; Brown et al., 2020), image
recognition (Dosovitskiy et al., 2020), and speech synthesis (Ren et al., 2019). Despite its success,
researchers are still exploring ways to further enhance its performance and deepen the understanding
of its inner workings (Wang et al., 2019; Katharopoulos et al., 2020; Fedus et al., 2021). Among them,
one area of ongoing research is the study of residual connections in the Transformer architecture (Liu
et al., 2020; Xiong et al., 2020; Bachlechner et al., 2021). Two variants of residual connections
have been proposed since the introduction of the Transformer, known as Post-LN and Pre-LN. The
Post-LN variant applies layer normalization (LN) operations after the output of each residual block.
∗Corresponding Authors: Xu Tan, xuta@microsoft.com ; Rui Yan, ruiyan@ruc.edu.cn .
Preprint. Under review.
2403.07816.pdf | Branch-Train-MiX:
Mixing Expert LLMs into a Mixture-of-Experts LLM
Sainbayar Sukhbaatar ,Olga Golovneva ,Vasu Sharma ,Hu Xu,Xi Victoria Lin ,Baptiste Rozière ,Jacob
Kahn,Daniel Li,Wen-tau Yih ,Jason Weston ,Xian Li
FAIR at Meta
We investigate efficient methods for training Large Language Models (LLMs) to possess capabilities
in multiple specialized domains, such as coding, math reasoning and world knowledge. Our method,
named Branch-Train-MiX (BTX), starts from a seed model, which is branched to train experts
in embarrassingly parallel fashion with high throughput and reduced communication cost. After
individual experts are asynchronously trained, BTX brings together their feedforward parameters
as experts in Mixture-of-Expert (MoE) layers and averages the remaining parameters, followed
by an MoE-finetuning stage to learn token-level routing. BTX generalizes two special cases, the
Branch-Train-Merge method, which does not have the MoE finetuning stage to learn routing, and
sparse upcycling, which omits the stage of training experts asynchronously. Compared to alternative
approaches, BTX achieves the best accuracy-efficiency tradeoff.
Date:March 13, 2024
Correspondence: {sainbar,xianl}@meta.com
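A rough sketch of the merge step described above: the feed-forward weights of the separately trained experts become the experts of MoE layers, while all remaining parameters are element-wise averaged. The state-dict layout, the key-matching predicate, and the function name are assumptions for illustration, not the released implementation:

import torch

def btx_merge(expert_state_dicts, is_ffn_key=lambda k: ".feed_forward." in k):
    # Branch-Train-MiX style merge of N separately trained expert models.
    merged, moe_experts = {}, {}
    for k in expert_state_dicts[0]:
        tensors = [sd[k] for sd in expert_state_dicts]
        if is_ffn_key(k):
            moe_experts[k] = torch.stack(tensors)          # kept separate: one FFN per expert
        else:
            merged[k] = torch.stack(tensors).mean(dim=0)   # attention/embedding weights averaged
    return merged, moe_experts

A token-level router over the stacked feed-forward experts is then learned in the subsequent MoE-finetuning stage.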
1 Introduction
In recent years, Large Language Models (LLMs) have shown impressive performance in a wide-range of
tasks (Brown et al., 2020; Touvron et al., 2023; Achiam et al., 2023), including code generation (Li et al.,
2022b; Rozière et al., 2023), solving math problems (Azerbayev et al., 2023), multilinguality (Zhao et al.,
2024), etc. Training such LLMs requires a large amount of compute and data, exceeding thousands of GPUs
and trillions of tokens. The training parallelization is typically done by maintaining multiple copies of the
model on different GPUs and keeping them synchronized after each weight update. The cost of this frequent
communication is the main bottleneck in scaling the training to more GPUs. Besides this issue, synchronized
training is more vulnerable to hardware failures as a single failed GPU can cause the whole training to halt
(Zhang et al., 2022; Gemini Team, 2023).
Recent work by Li et al. (2022a) proposed the Branch-Train-Merge (BTM) method for embarrassingly parallel
training of LLMs without any synchronization for improving the throughput of pretraining. It starts by
creating multiple copies of a seed LLM, then separately training each copy on different subsets of data.
This results in multiple independent LLMs that do not share any parameters and each LLM is an expert
specializing in its own data distribution, such as knowledge domains, languages or even modalities. At test
time, an input prompt is classified into one or more of the domains, and then the final outputs are formed
from the corresponding expert models which are combined to predict the next token. While this approach
makes training more efficient, its main drawback is the lack of a unified single model making it impossible to
do further supervised finetuning (SFT) or reinforcement learning from human feedback (RLHF) finetuning
(Ouyang et al., 2022), both of which can boost performance further, and are crucial steps in building aligned
LLMs.
A separate line of work for reducing the computational footprint of LLMs is the Mixture-of-Experts (MoE)
approach (Jacobs et al., 1991; Shazeer et al., 2017), where only a subset of parameters are active at any
given time. In particular, MoE is applied to the feedforward sublayer of Transformers (Fedus et al., 2022;
Roller et al., 2021; Lewis et al., 2021), allowing the total number of parameters to grow without additional
computation. LLMs scaled in this way have shown impressive performance on downstream tasks (Jiang
et al., 2024; Xue et al., 2024). Unlike Branch-Train-Merge, Mixture-of-Experts are often trained in a fully
2209.15634.pdf | A General Framework for Sample-Efficient Function
Approximation in Reinforcement Learning
Zixiang Chen‡∗Chris Junchi Li⋄∗Angela Yuan‡∗Quanquan Gu‡Michael I. Jordan⋄,†
Department of Computer Sciences, University of California, Los Angeles‡
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley⋄
Department of Statistics, University of California, Berkeley†
October 3, 2022
Abstract
With the increasing need for handling large state and action spaces, general function approximation
has become a key technique in reinforcement learning (RL). In this paper, we propose a general
framework that unifies model-based and model-free RL, and an Admissible Bellman Characterization
(ABC) class that subsumes nearly all Markov Decision Process (MDP) models in the literature for
tractable RL. We propose a novel estimation function with decomposable structural properties for
optimization-based exploration and the functional eluder dimension as a complexity measure of the
ABC class. Under our framework, a new sample-efficient algorithm namely OPtimization-based
ExploRation with Approximation (OPERA) is proposed, achieving regret bounds that match or
improve over the best-known results for a variety of MDP models. In particular, for MDPs with low
Witness rank, under a slightly stronger assumption, OPERA improves the state-of-the-art sample
complexity results by a factor of dH. Our framework provides a generic interface to design and
analyze new RL models and algorithms.
1 Introduction
Reinforcement learning (RL) is a decision-making process that seeks to maximize the expected reward
when an agent interacts with the environment [Sutton and Barto, 2018]. Over the past decade, RL has
gained increasing attention due to its successes in a wide range of domains, including Atari games [Mnih
et al., 2013], Go game [Silver et al., 2016], autonomous driving [Yurtsever et al., 2020], Robotics [Kober
et al., 2013], etc. Existing RL algorithms can be categorized into value-based algorithms such as
Q-learning [Watkins, 1989] and policy-based algorithms such as policy gradient [Sutton et al., 1999].
They can also be categorized as a model-free approach where one directly models the value function
classes, or alternatively, a model-based approach where one needs to estimate the transition probability.
Due to the intractably large state and action spaces that are used to model the real-world complex
environment, function approximation in RL has become prominent in both algorithm design and
theoretical analysis. It is a pressing challenge to design sample-efficient RL algorithms with general
function approximations. In the special case where the underlying Markov Decision Processes (MDPs)
enjoy certain linear structures, several lines of work have achieved polynomial sample complexity and/or √T regret guarantees under either model-free or model-based RL settings. For linear MDPs where the transition probability and the reward function admit linear structure, Yang and Wang [2019] developed a variant of Q-learning when granted access to a generative model, Jin et al. [2020] proposed an LSVI-UCB algorithm with a Õ(√(d³H³T)) regret bound, and Zanette et al. [2020a] further extended the MDP model and improved the regret to Õ(dH√T). Another line of work considers linear mixture MDPs Yang and
2205.13147.pdf | Matryoshka Representation Learning
Aditya Kusupati∗†⋄, Gantavya Bhatt∗†, Aniket Rege∗†,
Matthew Wallingford†, Aditya Sinha⋄, Vivek Ramanujan†, William Howard-Snyder†,
Kaifeng Chen⋄, Sham Kakade‡, Prateek Jain⋄and Ali Farhadi†
†University of Washington,⋄Google Research,‡Harvard University
{kusupati,ali}@cs.washington.edu ,prajain@google.com
Abstract
Learned representations are a central component in modern ML systems, serv-
ing a multitude of downstream tasks. When training such representations, it
is often the case that computational and statistical constraints for each down-
stream task are unknown. In this context, rigid fixed-capacity representations
can be either over or under-accommodating to the task at hand. This leads us
to ask: can we design a flexible representation that can adapt to multiple down-
stream tasks with varying computational resources? Our main contribution is
Matryoshka Representation Learning (MRL ) which encodes information at
different granularities and allows a single embedding to adapt to the computational
constraints of downstream tasks. MRL minimally modifies existing representation
learning pipelines and imposes no additional cost during inference and deployment.
MRL learns coarse-to-fine representations that are at least as accurate and rich as
independently trained low-dimensional representations. The flexibility within the
learned Matryoshka Representations offer: (a) up to 14× smaller embedding size for ImageNet-1K classification at the same level of accuracy; (b) up to 14× real-world speed-ups for large-scale retrieval on ImageNet-1K and 4K; and (c) up to 2% accuracy improvements for long-tail few-shot classification, all while being
as robust as the original representations. Finally, we show that MRL extends seam-
lessly to web-scale datasets (ImageNet, JFT) across various modalities – vision
(ViT, ResNet), vision + language (ALIGN) and language (BERT). MRL code and
pretrained models are open-sourced at https://github.com/RAIVNLab/MRL .
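The coarse-to-fine training idea can be sketched compactly: losses are computed on nested prefixes of the same embedding so that every prefix remains a usable representation on its own. The nesting dimensions, the per-granularity linear heads, and the uniform loss weighting below are illustrative assumptions, not the exact released training code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MatryoshkaHeads(nn.Module):
    # One classifier per nested prefix length of the embedding.
    def __init__(self, num_classes, nesting=(8, 16, 32, 64, 128, 256, 512, 1024, 2048)):
        super().__init__()
        self.nesting = nesting
        self.heads = nn.ModuleList(nn.Linear(m, num_classes) for m in nesting)

    def loss(self, z, labels):
        # Sum of cross-entropy losses over the first m dimensions of z, for each granularity m.
        return sum(F.cross_entropy(head(z[:, :m]), labels)
                   for m, head in zip(self.nesting, self.heads))

# Usage with a backbone producing 2048-d embeddings:
# z = backbone(images); loss = mrl_heads.loss(z, labels); loss.backward()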
1 Introduction
Learned representations [ 57] are fundamental building blocks of real-world ML systems [ 66,91].
Trained once and frozen, d-dimensional representations encode rich information and can be used
to perform multiple downstream tasks [ 4]. The deployment of deep representations has two steps:
(1) an expensive yet constant-cost forward pass to compute the representation [ 29] and (2) utilization
of the representation for downstream applications [ 50,89]. Compute costs for the latter part of the
pipeline scale with the embedding dimensionality as well as the data size ( N) and label space ( L).
At web-scale [ 15,85] this utilization cost overshadows the feature computation cost. The rigidity in
these representations forces the use of high-dimensional embedding vectors across multiple tasks
despite the varying resource and accuracy constraints that require flexibility.
Human perception of the natural world has a naturally coarse-to-fine granularity [ 28,32]. However,
perhaps due to the inductive bias of gradient-based training [ 84], deep learning models tend to diffuse
“information” across the entire representation vector. The desired elasticity is usually enabled in the
existing flat and fixed representations either through training multiple low-dimensional models [ 29],
jointly optimizing sub-networks of varying capacity [ 9,100] or post-hoc compression [ 38,60]. Each
of these techniques struggle to meet the requirements for adaptive large-scale deployment either
∗Equal contribution – AK led the project with extensive support from GB and AR for experimentation.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
2307.15043.pdf | Universal and Transferable Adversarial Attacks
on Aligned Language Models
Andy Zou1,2, Zifan Wang2, Nicholas Carlini3, Milad Nasr3,
J. Zico Kolter1,4, Matt Fredrikson1
1Carnegie Mellon University,2Center for AI Safety,
3Google DeepMind,4Bosch Center for AI
Abstract
Because “out-of-the-box” large language models are capable of generating a great
deal of objectionable content, recent work has focused on aligning these models in an
attempt to prevent undesirable generation. While there has been some success at cir-
cumventing these measures—so-called “jailbreaks” against LLMs—these attacks have
required significant human ingenuity and are brittle in practice. Attempts at automatic
adversarial prompt generation have also achieved limited success. In this paper, we
propose a simple and effective attack method that causes aligned language models to
generate objectionable behaviors. Specifically, our approach finds a suffix that, when
attached to a wide range of queries for an LLM to produce objectionable content, aims
to maximize the probability that the model produces an affirmative response (rather
than refusing to answer). However, instead of relying on manual engineering, our ap-
proach automatically produces these adversarial suffixes by a combination of greedy
and gradient-based search techniques, and also improves over past automatic prompt
generation methods.
Surprisingly, we find that the adversarial prompts generated by our approach are
highly transferable , including to black-box, publicly released, production LLMs. Specif-
ically, we train an adversarial attack suffix on multiple prompts (i.e., queries asking for
many different types of objectionable content), as well as multiple models (in our case,
Vicuna-7B and 13B). When doing so, the resulting attack suffix induces objec-
tionable content in the public interfaces to ChatGPT, Bard, and Claude , as
well as open source LLMs such as LLaMA-2-Chat, Pythia, Falcon, and others. Inter-
estingly, the success rate of this attack transfer is much higher against the GPT-based
models, potentially owing to the fact that Vicuna itself is trained on outputs from
ChatGPT. In total, this work significantly advances the state-of-the-art in adversarial
attacks against aligned language models, raising important questions about how such
systems can be prevented from producing objectionable information. Code is available
atgithub.com/llm-attacks/llm-attacks .
2207.10551.pdf | Scaling Laws vs Model Architectures :
How does Inductive Bias Influence Scaling?
Yi Tay∗Mostafa Dehghani∗Samira Abnar Hyung Won Chung
William Fedus Jinfeng Rao Sharan Narang Vinh Q. Tran
Dani Yogatama†Donald Metzler
Google Research & DeepMind†
{yitay,dehghani}@google.com
Abstract
There has been a lot of interest in the scal-
ing properties of Transformer models (Kaplan
et al., 2020). However, not much has been
done on the front of investigating the effect
of scaling properties of different inductive bi-
ases and model architectures. Do model ar-
chitectures scale differently? If so, how does
inductive bias affect scaling behaviour? How
does this influence upstream (pretraining) and
downstream (transfer)? This paper conducts
a systematic study of scaling behaviour of ten
diverse model architectures such as Transform-
ers, Switch Transformers, Universal Trans-
formers, Dynamic convolutions, Performers,
and recently proposed MLP-Mixers. Via ex-
tensive experiments, we show that (1) archi-
tecture is an indeed an important considera-
tion when performing scaling and (2) the best
performing model can fluctuate at different
scales. We believe that the findings outlined in
this work has significant implications to how
model architectures are currently evaluated in
the community.
1 Introduction
There has been a lot of recent interest in the scaling
properties of Transformer models (Kaplan et al.,
2020; Hernandez et al., 2021; Bahri et al., 2021;
Henighan et al., 2020; Tay et al., 2021b; Abnar
et al., 2021). However, not much is understood
about the scaling properties of different inductive
biases imposed by model architectures. Improve-
ments at a specific scale (compute, size, etc.) are
often assumed to transfer to different scales and
compute regions (So et al., 2019; Choromanski
et al., 2020; Lan et al., 2019; Dehghani et al., 2018)
and new research is often presented in a point-wise
fashion with respect to scale. In short, it is not un-
common for new methods to be presented with data
points at very specific or limited compute regions
∗Yi and Mostafa contributed equally. Samira is now at Apple.
(e.g., base size). We believe that understanding the
interaction between architecture and scaling laws
is crucial as designing models that perform well at
diverse scales will likely have significant impact.
This paper is an attempt to understand the ef-
fect of inductive bias (architecture) on scaling laws
of language models. To this end, we pre-train and
finetune over ten diverse model architectures across
multiple compute region and scales (e.g., from 15M
to 40 Billion parameters). In total, we pre-train and
finetune over 100 different models of different ar-
chitectures and sizes and present insights and chal-
lenges at scaling these ten diverse architectures.
We consider a broad spectrum of models in
our extensive experiments. Concretely, we con-
sider several well-established Transformer vari-
ants (Vaswani et al., 2017) such as Evolved Trans-
former (So et al., 2019), Universal Transformers
(Dehghani et al., 2018) and Switch Transformers
(Fedus et al., 2021). We also consider lightweight
models such as ALBERT (Lan et al., 2019) and/or
efficient Transformers (Tay et al., 2020) such as
Performer (Choromanski et al., 2020) and Funnel
Transformers (Dai et al., 2020). In our comparison,
we are also interested in finding out if general im-
provements to the Transformer architectures such
as Mixture-of-Softmax (Yang et al., 2017) and/or
Gated Linear Units (Dauphin et al., 2017; Shazeer,
2020) influence the scaling behaviour of models.
Finally, we also evaluate models outside the fam-
ily of Transformers including Lightweight convo-
lutions (Wu et al., 2019), Dynamic convolutions
(Wu et al., 2019) and the recently proposed MLP-
Mixers (Tolstikhin et al., 2021). Figure 1 illustrates
an overview about the experiments we run.
We also note that scaling these models is not as
straightforward as it seems, i.e., there are intricate
details of scale that are intertwined with architec-
tural choices which we study in detail in this pa-
per. For example, a distinct feature of Universal
Transformers (and ALBERT) is parameter sharing.
2212.14024v2.pdf | DEMONSTRATE –SEARCH –PREDICT :
Composing retrieval and language models for knowledge-intensive NLP
Omar Khattab1Keshav Santhanam1Xiang Lisa Li1David Hall1
Percy Liang1Christopher Potts1Matei Zaharia1
Abstract
Retrieval-augmented in-context learning has
emerged as a powerful approach for addressing
knowledge-intensive tasks using frozen language
models (LM) and retrieval models (RM). Exist-
ing work has combined these in simple “retrieve-
then-read” pipelines in which the RM retrieves
passages that are inserted into the LM prompt.
To begin to fully realize the potential of frozen
LMs and RMs, we propose DEMONSTRATE –
SEARCH –PREDICT (DSP ), a framework that re-
lies on passing natural language texts in sophisti-
cated pipelines between an LM and an RM. DSP
can express high-level programs that bootstrap
pipeline-aware demonstrations, search for rele-
vant passages, and generate grounded predictions,
systematically breaking down problems into small
transformations that the LM and RM can handle
more reliably. We have written novel DSP pro-
grams for answering questions in open-domain,
multi-hop, and conversational settings, establish-
ing in early evaluations new state-of-the-art in-
context learning results and delivering 37–120%,
8–39%, and 80–290% relative gains against the
vanilla LM (GPT-3.5), a standard retrieve-then-
read pipeline, and a contemporaneous self-ask
pipeline, respectively. We release DSP at https:
//github.com/stanfordnlp/dsp .
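The kind of program the framework expresses can be sketched schematically. The following is not the released DSP API, just an illustration of the pattern in plain Python; `lm` and `rm` stand for a frozen language model and retrieval model exposed as callables, and the prompt strings and hop count are assumptions:

def multihop_program(question, lm, rm, max_hops=2, k=3):
    # Alternate between LM-written search queries and RM retrieval,
    # then make a grounded prediction from the accumulated passages.
    passages = []
    for hop in range(max_hops):
        query = lm(f"Question: {question}\nPassages so far: {passages}\n"
                   f"Write a search query for a fact still needed to answer the question:")
        passages += rm(query, k=k)          # retrieve top-k passages for the sub-query
    return lm(f"Passages: {passages}\nQuestion: {question}\nAnswer:")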
1. Introduction
In-context learning adapts a frozen language model (LM) to
tasks by conditioning the LM on a textual prompt including
task instructions and a few demonstrating examples (Mc-
Cann et al., 2018; Radford et al., 2019; Brown et al., 2020).
For knowledge-intensive tasks such as question answering,
fact checking, and information-seeking dialogue, retrieval
models (RM) are increasingly used to augment prompts
1Stanford University . Correspondence to:
Omar Khattab <okhattab@cs.stanford.edu >.
Preprint .
[Figure 1 panels, for the query “How many storeys are in the castle David Gregory inherited?”: the vanilla LM hallucinates a fictitious castle (“Castle Gregory has three storeys”); the retrieve-then-read pipeline retrieves a different building (“St. Gregory Hotel is a nine-floor boutique hotel in D.C...”) and answers nine storeys; the multi-hop DSP program first asks which castle David Gregory inherited, retrieves “David Gregory inherited Kinnairdy Castle in 1664...”, then asks how many storeys Kinnairdy Castle has, retrieves “Kinnairdy Castle is a tower house, having five storeys…”, and answers five storeys.]
Figure 1. A comparison between three systems based on GPT-
3.5 (text-davinci-002 ). On its own, the LM often makes false
assertions. An increasingly popular retrieve-then-read pipeline
fails when simple search can’t find an answer. In contrast, a task-
aware DSP program successfully decomposes the problem and
produces a correct response. Texts edited for presentation.
with relevant information from a large corpus (Lazaridou
et al., 2022; Press et al., 2022; Khot et al., 2022).
Recent work has shown such retrieval-augmented in-context
learning to be effective in simple “retrieve-then-read”
pipelines: a query is fed to the RM and the retrieved pas-
sages become part of a prompt that provides context for
the LM to use in its response. In this work, we argue that
the fact that both LMs and RMs consume (and generate or
retrieve) natural language texts creates an opportunity for
much more sophisticated interactions between them. Fully
realizing this would be transformative: frozen LMs and
RMs could serve as infrastructure across tasks, enabling
ML- and domain-experts alike to rapidly build grounded
AI systems at a high level of abstraction and with lower
deployment overheads and annotation costs.
Figure 1 begins to illustrate the power of retrieval-
augmented in-context learning, but also the limitations of
“retrieve-then-read” (Lazaridou et al., 2022; Izacard et al.,
2022). Our query is “How many storeys are in the castle
David Gregory inherited?” When prompted to answer this,
GPT-3.5 ( text-davinci-002 ; Ouyang et al. 2022) makes
up a fictitious castle with incorrect attributes, highlighting
the common observation that knowledge stored in LM pa-
rameters is often unreliable (Shuster et al., 2021; Ishii et al.,
2022). Introducing an RM component helps, as the LM
can ground its responses in retrieved passages, but a rigid
2302.12441.pdf | MUX-PLMs: Data Multiplexing for High-throughput Language Models
Vishvak Murahari1Ameet Deshpande1Carlos E. Jimenez1
Izhak Shafran2Mingqiu Wang2Yuan Cao2Karthik Narasimhan1
1Princeton University2Google Brain
murahari@cs.princeton.edu
Abstract
The widespread adoption of large language
models such as ChatGPT and Bard has led
to unprecedented demand for these technolo-
gies. The burgeoning cost of inference for ever-
increasing model sizes coupled with hardware
shortages has limited affordable access and
poses a pressing need for efficiency approaches
geared towards high throughput and perfor-
mance. Multi-input multi-output (MIMO) al-
gorithms such as data multiplexing, offer a
promising solution with a many-fold increase
in throughput by performing inference for mul-
tiple inputs at the cost of a single input. Yet
these approaches are not currently performant
enough to be deployed in modern systems. We
change that by developing MUX-PLMs, a class
of high throughput pre-trained language models
(PLMs) trained with data multiplexing, that can
be fine-tuned for any downstream task to yield
high-throughput high-performance. Our novel
multiplexing and demultiplexing modules profi-
ciently entangle and disentangle inputs, and en-
able high-performance high throughput MUX-
PLMs that are competitive with vanilla PLMs
while achieving 2x/5x inference speedup with
only a 1−4% drop on a broad suite of tasks.1
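The MIMO idea behind data multiplexing can be sketched in a few lines: a multiplexer combines N token-embedding sequences into one shared sequence, and a demultiplexer recovers N per-instance representations from the shared hidden states. The fixed random transforms, the index-conditioned MLP, and all shapes below are schematic assumptions; the actual MUX-PLM modules are more elaborate:

import torch
import torch.nn as nn

class DataMultiplexer(nn.Module):
    # Combine N token-embedding sequences into one "multiplexed" sequence by
    # applying a fixed instance-specific transform and averaging.
    def __init__(self, n_inputs, d_model):
        super().__init__()
        self.register_buffer("transforms",
                             torch.randn(n_inputs, d_model, d_model) / d_model ** 0.5)
    def forward(self, x):                        # x: (N, batch, seq, d_model)
        mixed = torch.einsum('nij,nbsj->nbsi', self.transforms, x)
        return mixed.mean(dim=0)                 # (batch, seq, d_model)

class DataDemultiplexer(nn.Module):
    # Recover N per-instance representations from the shared hidden states,
    # conditioning a small MLP on a learned index embedding.
    def __init__(self, n_inputs, d_model):
        super().__init__()
        self.index_emb = nn.Embedding(n_inputs, d_model)
        self.mlp = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.GELU(),
                                 nn.Linear(d_model, d_model))
    def forward(self, h):                        # h: (batch, seq, d_model)
        outs = []
        for i in range(self.index_emb.num_embeddings):
            idx = self.index_emb.weight[i].expand_as(h)
            outs.append(self.mlp(torch.cat([h, idx], dim=-1)))
        return torch.stack(outs)                 # (N, batch, seq, d_model)

# Usage pattern: h = transformer(mux(x)); per-instance logits = head(demux(h))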
1 Introduction
Language models like ChatGPT (OpenAI, 2023),
PaLM (Chowdhery et al., 2022), T5 (Raffel et al.,
2020), and CM3 (Aghajanyan et al., 2022), have
seen unprecedented adoption in diverse sectors
ranging from education and healthcare to manu-
facturing and marketing. The proficiency of these
tools has led to unprecedented demand for these
models, with users facing frequent outages and ca-
pacity limits. Additionally, ever-increasing model
sizes and hardware shortages have constrained
models’ ability to handle a very high load of re-
quests, thus limiting large-scale affordable access
1Code + Models: https://github.com/princeton-nlp/datamux-pretraining/
to these models. These trends bring into focus the
need for high-throughput, high-performance, ef-
ficient, and environmentally responsible models
that can be deployed at scale to meet the quickly
growing demand.
Multi-input Multi-output architectures (MIMO)
(Havasi et al., 2021; Ramé et al., 2021; Murahari
et al., 2022) are a promising hardware-agnostic
and architecture-agnostic paradigm that perform
inference for multiple inputs simultaneously at the
cost of a single input. This efficiency paradigm is
natively geared towards yielding high-throughput
models, in addition to being complementary in ap-
proach and motivation to current efficiency meth-
ods such as pruning, quantization, and distilla-
tion. Interestingly, MIMO approaches are partly
inspired by the human brain’s extraordinary abil-
ity to process multiple inputs and propagate in-
formation at a high bandwidth with a few neural
codes (Blumhagen et al., 2011; Akam and Kull-
mann, 2014; Pirschel and Kretzberg, 2016; Hong
et al., 2016; Friedrich et al., 2004).
Murahari et al. (2022) introduced data multiplex-
ing, a MIMO technique that can enable a many-fold
increase in throughput. The method compresses
Ndifferent instances into a single “multiplexed”
hidden representation before decompressing it into
Nindependent predictions. While they show the
plausibility of MIMO training, their method leads
to a significant drop in performance ( 20−30%
points) compared to state-of-the-art models.
In this work, we introduce MUX-PLMs, a class
of high-throughput pre-trained language models
trained in a MIMO fashion with data multiplex-
ing to process multiple inputs (2-10) simultane-
ously with a forward pass over a single instance.
MUX-PLMs offer up to 400% improvement in
throughput over baseline pre-trained models while
only being ∼4points and ∼2points worse than
baseline pre-trained language models for text clas-
sification and token classification tasks, respec-
10.1038.s41467-021-25756-4.pdf | ARTICLE
Efficient generative modeling of protein sequences
using simple autoregressive models
Jeanne Trinquier1,2, Guido Uguzzoni3,4, Andrea Pagnani3,4,5, Francesco Zamponi2& Martin Weigt1✉
Generative models emerge as promising candidates for novel sequence-data driven
approaches to protein design, and for the extraction of structural and functional information about proteins deeply hidden in rapidly growing sequence databases. Here we propose simple autoregressive models as highly accurate but computationally efficient generative sequence models. We show that they perform similarly to existing approaches based on Boltzmann machines or deep generative models, but at a substantially lower computational cost (by a factor between 10^2 and 10^3). Furthermore, the simple structure of our models has distinctive mathematical advantages, which translate into an improved applicability in sequence generation and evaluation. Within these models, we can easily estimate both the probability of a given sequence, and, using the model's entropy, the size of the functional sequence space related to a specific protein family. In the example of response regulators, we find a huge number of ca. 10^68 possible sequences, which nevertheless constitute only the astronomically small fraction 10^−80 of all amino-acid sequences of the same length. These findings illustrate the potential and the difficulty in exploring sequence space via generative sequence models.
https://doi.org/10.1038/s41467-021-25756-4 OPEN
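To illustrate what a simple autoregressive sequence model computes, here is a minimal Python sketch of scoring a sequence under an autoregressive factorization. The alphabet, the hypothetical family length, and the site-independent stand-in for the conditional distributions are assumptions introduced for illustration; in the paper the conditionals depend on the full prefix:

import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY-"            # 20 amino acids plus an alignment gap
q = len(AMINO_ACIDS)

def sequence_log_prob(seq, conditionals):
    # Autoregressive factorization: log P(a_1..a_L) = sum_i log P(a_i | a_1..a_{i-1}).
    # `conditionals(i, prefix)` returns a length-q probability vector for position i.
    logp = 0.0
    for i, a in enumerate(seq):
        p = conditionals(i, seq[:i])
        logp += np.log(p[AMINO_ACIDS.index(a)])
    return logp

# Illustrative stand-in: a profile model that ignores the prefix (site-independent).
L = 56                                            # hypothetical family length
profile = np.random.dirichlet(np.ones(q), size=L)
print(sequence_log_prob("MKKVLA" + "-" * (L - 6), lambda i, prefix: profile[i]))

Because the factorization is exact, the same model gives both cheap sequence sampling and a direct estimate of sequence probabilities and entropies, which is the property highlighted in the abstract.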
1Sorbonne Université, CNRS, Institut de Biologie Paris Seine, Biologie Computationnelle et Quantitative LCQB, F-75005 Paris, France.2Laboratoire de
Physique de l ’Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, F-75005 Paris, France.3Department of Applied
Science and Technology (DISAT), Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Torino, Italy.4Italian Institute for Genomic Medicine, IRCCS
Candiolo, SP-142, I-10060 Candiolo (TO), Italy.5INFN Sezione di Torino, Via P. Giuria 1, I-10125 Torino, Italy.✉email: martin.weigt@sorbonne-universite.fr
NATURE COMMUNICATIONS | (2021) 12:5800 | https://doi.org/10.1038/s41467-021-25756-4 | www.nature.com/naturecommunications
2306.03078.pdf | SpQR: A Sparse-Quantized Representation for
Near-Lossless LLM Weight Compression
Tim Dettmers∗ †
University of WashingtonRuslan Svirschevski∗
HSE University & YandexVage Egiazarian∗
HSE University & Yandex
Denis Kuznedelev∗
Yandex & SkoltechElias Frantar
IST AustriaSaleh Ashkboos
ETH ZurichAlexander Borzunov
HSE University & Yandex
Torsten Hoefler
ETH ZurichDan Alistarh
IST Austria & NeuralMagic
Abstract
Recent advances in large language model (LLM) pretraining have led to high-
quality LLMs with impressive abilities. By compressing such LLMs via quanti-
zation to 3-4 bits per parameter, they can fit into memory-limited devices such
as laptops and mobile phones, enabling personalized use. However, quantiza-
tion down to 3-4 bits per parameter usually leads to moderate-to-high accuracy
losses, especially for smaller models in the 1-10B parameter range, which are
well-suited for edge deployments. To address this accuracy issue, we introduce the
Sparse-Quantized Representation (SpQR), a new compressed format and quantiza-
tion technique which enables for the first time near-lossless compression of LLMs
across model scales, while reaching similar compression levels to previous methods.
SpQR works by identifying and isolating outlier weights , which cause particularly-
large quantization errors, and storing them in higher precision, while compressing
all other weights to 3-4 bits, and achieves relative accuracy losses of less than
1% in perplexity for highly-accurate LLaMA and Falcon LLMs. This makes it possible to run a 33B parameter LLM on a single 24 GB consumer GPU without any performance degradation at a 15% speedup, thus making powerful LLMs available to consumers without any downsides. SpQR comes with efficient algorithms for both
encoding weights into its format, as well as decoding them efficiently at runtime3.
Specifically, we provide an efficient GPU inference algorithm for SpQR which
yields faster inference than 16-bit baselines at similar accuracy, while enabling
memory compression gains of more than 4x.
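The core idea of isolating outliers can be sketched in a few lines. This is only a schematic of outlier-aware quantization, not the SpQR format itself (which additionally uses small groups with quantized per-group statistics and an optimized GPU decoder); the bit-width, outlier fraction, and min-max scaling below are illustrative assumptions:

import torch

def quantize_with_outliers(W, bits=3, outlier_frac=0.01):
    # Round-trip quantize W to `bits` bits, then keep the weights with the
    # largest quantization error in a sparse higher-precision matrix.
    levels = 2 ** bits - 1
    zero, scale = W.min(), (W.max() - W.min()) / levels
    q = torch.clamp(torch.round((W - zero) / scale), 0, levels)
    err = (W - (q * scale + zero)).abs()
    k = max(1, int(outlier_frac * W.numel()))
    thresh = err.flatten().topk(k).values.min()              # error threshold for outliers
    outliers = torch.where(err >= thresh, W, torch.zeros_like(W)).to_sparse()
    return q.to(torch.uint8), scale, zero, outliers

def dequantize(q, scale, zero, outliers):
    deq = q.to(torch.float32) * scale + zero
    dense = outliers.to_dense()
    return torch.where(dense != 0, dense, deq)               # outliers restored at full precision

# Example on a random weight matrix:
W = torch.randn(512, 512)
print((dequantize(*quantize_with_outliers(W)) - W).abs().mean())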
1 Introduction
Pretrained large language models (LLMs) improved rapidly from task-specific performance
[WSM+18,DCLT19 ,RWC+19], to performing well on general tasks if prompted with instruc-
tions [ BMR+20,WBZ+21,Ope23 ]. While the improved performance can be attributed to scaling in
training data and parameters [ KMH+20,CND+22] recent trends focused on smaller models trained
on more data, that are easier to use at inference time [ HBM+22,BSA+23,TLI+23]. For example,
the 7B parameter LLaMA model trained on 1T tokens achieved an average performance only slightly
lower than GPT-3 [ BMR+20] despite being 25x smaller. Current techniques for LLM compres-
sion can shrink these models further by a factor of about 4x, while preserving their performance
∗Equal contribution
†Corresponding author: dettmers@cs.washington.edu
3github.com/Vahe1994/SpQR; to be integrated into github.com/TimDettmers/bitsandbytes
1706.03741.pdf | Deep Reinforcement Learning
from Human Preferences
Paul F Christiano
OpenAI
paul@openai.comJan Leike
DeepMind
leike@google.comTom B Brown
nottombrown@gmail.com
Miljan Martic
DeepMind
miljanm@google.comShane Legg
DeepMind
legg@google.comDario Amodei
OpenAI
damodei@openai.com
Abstract
For sophisticated reinforcement learning (RL) systems to interact usefully with
real-world environments, we need to communicate complex goals to these systems.
In this work, we explore goals defined in terms of (non-expert) human preferences
between pairs of trajectory segments. We show that this approach can effectively
solve complex RL tasks without access to the reward function, including Atari
games and simulated robot locomotion, while providing feedback on less than
1% of our agent’s interactions with the environment. This reduces the cost of
human oversight far enough that it can be practically applied to state-of-the-art
RL systems. To demonstrate the flexibility of our approach, we show that we can
successfully train complex novel behaviors with about an hour of human time.
These behaviors and environments are considerably more complex than any which
have been previously learned from human feedback.
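One standard way to turn such pairwise preferences into a learned reward, in the spirit of this paper, is a Bradley-Terry-style cross-entropy fit of a reward network on trajectory segments. The sketch below is illustrative: the dimensions, the architecture, and the synthetic comparison batch are assumptions, and details of the paper's full procedure (such as how segments are selected for labelling) are omitted:

import torch
import torch.nn as nn

obs_act_dim = 16                                    # hypothetical observation+action size
reward_net = nn.Sequential(nn.Linear(obs_act_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

def preference_loss(seg_a, seg_b, pref):
    # seg_a, seg_b: (batch, T, obs_act_dim) trajectory segments;
    # pref: (batch,) with 1.0 if the human preferred segment A, else 0.0.
    # Bradley-Terry model: P(A > B) = exp(sum r(A)) / (exp(sum r(A)) + exp(sum r(B))).
    ra = reward_net(seg_a).sum(dim=(1, 2))          # total predicted reward of each segment
    rb = reward_net(seg_b).sum(dim=(1, 2))
    return nn.functional.binary_cross_entropy_with_logits(ra - rb, pref)

# One training step on a batch of labelled comparisons (random data for illustration):
seg_a, seg_b = torch.randn(32, 25, obs_act_dim), torch.randn(32, 25, obs_act_dim)
pref = torch.randint(0, 2, (32,)).float()
loss = preference_loss(seg_a, seg_b, pref)
opt.zero_grad(); loss.backward(); opt.step()

The fitted reward can then be handed to an ordinary RL algorithm, which is what keeps the human feedback requirement down to a small fraction of the agent's interactions.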
1 Introduction
Recent success in scaling reinforcement learning (RL) to large problems has been driven in domains
that have a well-specified reward function (Mnih et al., 2015, 2016; Silver et al., 2016). Unfortunately,
many tasks involve goals that are complex, poorly-defined, or hard to specify. Overcoming this
limitation would greatly expand the possible impact of deep RL and could increase the reach of
machine learning more broadly.
For example, suppose that we wanted to use reinforcement learning to train a robot to clean a table or
scramble an egg. It’s not clear how to construct a suitable reward function, which will need to be a
function of the robot’s sensors. We could try to design a simple reward function that approximately
captures the intended behavior, but this will often result in behavior that optimizes our reward
function without actually satisfying our preferences. This difficulty underlies recent concerns about
misalignment between our values and the objectives of our RL systems (Bostrom, 2014; Russell,
2016; Amodei et al., 2016). If we could successfully communicate our actual objectives to our agents,
it would be a significant step towards addressing these concerns.
If we have demonstrations of the desired task, we can extract a reward function using inverse
reinforcement learning (Ng and Russell, 2000). This reward function can then be used to train
an agent with reinforcement learning. More directly, we can use imitation learning to clone the
demonstrated behavior. However, these approaches are not directly applicable to behaviors that are
difficult for humans to demonstrate (such as controlling a robot with many degrees of freedom but
very non-human morphology).
karakida19a.pdf | Universal Statistics of Fisher Information in Deep Neural Networks:
Mean Field Approach
Ryo Karakida Shotaro Akaho Shun-ichi Amari
AIST, Japan AIST, Japan RIKEN CBS, Japan
Abstract
The Fisher information matrix (FIM) is a
fundamental quantity to represent the char-
acteristics of a stochastic model, including
deep neural networks (DNNs). The present
study reveals novel statistics of FIM that are
universal among a wide class of DNNs. To
this end, we use random weights and large
width limits, which enables us to utilize mean
field theories. We investigate the asymptotic
statistics of the FIM’s eigenvalues and reveal
that most of them are close to zero while the
maximum eigenvalue takes a huge value. Be-
cause the landscape of the parameter space is
defined by the FIM, it is locally flat in most
dimensions, but strongly distorted in others.
Moreover, we demonstrate the potential usage
of the derived statistics in learning strategies.
First, small eigenvalues that induce flatness
can be connected to a norm-based capacity
measure of generalization ability. Second, the
maximum eigenvalue that induces the distor-
tion enables us to quantitatively estimate an
appropriately sized learning rate for gradient
methods to converge.
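To make the object of study concrete, the following is a small numerical sketch that builds the empirical Fisher information matrix of a randomly initialized network under a Gaussian output model and inspects its eigenvalues; the architecture, input distribution, and output model are illustrative assumptions, not the paper's analytical setting:

import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 30), nn.Tanh(), nn.Linear(30, 1))   # random weights and biases
params = list(net.parameters())
n_params = sum(p.numel() for p in params)

def empirical_fim(inputs, noise_std=1.0):
    # FIM of the Gaussian model p(y|x) = N(f(x), noise_std^2):
    # F = E_x[ grad_theta f(x) grad_theta f(x)^T ] / noise_std^2.
    F = torch.zeros(n_params, n_params)
    for x in inputs:
        net.zero_grad()
        net(x.unsqueeze(0)).backward()
        g = torch.cat([p.grad.flatten() for p in params])
        F += torch.outer(g, g)
    return F / (len(inputs) * noise_std ** 2)

eigvals = torch.linalg.eigvalsh(empirical_fim(torch.randn(2000, 10)))
print("largest eigenvalue:", eigvals[-1].item())
print("median eigenvalue: ", eigvals[eigvals.numel() // 2].item())

On toy models like this one, the spectrum typically comes out strongly skewed, with the largest eigenvalue orders of magnitude above the median, which is the locally flat yet strongly distorted landscape the abstract describes.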
1 Introduction
Deep learning has succeeded in making hierarchical
neural networks perform excellently in various practi-
cal applications [ 1]. To proceed further, it would be
beneficial to give more theoretical elucidation as to
why and how deep neural networks (DNNs) work well
in practice. In particular, it would be useful to not
only clarify the individual models and phenomena but
also explore various unified theoretical frameworks that
Proceedings of the 22nd International Conference on Ar-
tificial Intelligence and Statistics (AISTATS) 2019, Naha,
Okinawa, Japan. PMLR: Volume 89. Copyright 2019 by
the author(s).could be applied to a wide class of deep networks. One
widely used approach for this purpose is to consider
deep networks with random connectivity and a large
width limit [ 2–14]. For instance, Poole et al. [3]pro-
posed a useful indicator to explain the expressivity of
DNNs. Regarding the trainability of DNNs, Schoen-
holz et al. [4]extended this theory to backpropagation
and found that the vanishing and explosive gradients
obey a universal law. These studies are powerful in
the sense that they do not depend on particular model
architectures, such as the number of layers or activation
functions.
Unfortunately, such universal frameworks have not yet
been established in many other topics. One is the geo-
metric structure of the parameter space. For instance,
the loss landscape without spurious local minima is im-
portant for easier optimization and theoretically guar-
anteed in single-layer models [ 15], shallow piecewise
linear ones [ 16], and extremely wide deep networks
with the number of training samples smaller than the
width [17]. Flat global minima have been reported to
be related to generalization ability through empirical
experiments showing that networks with such minima
give better generalization performance [ 18,19]. How-
ever, theoretical analysis of the flat landscape has been
limited in shallow rectified linear unit (ReLU) networks
[20,21]. Thus, a residual subject of interest is to theo-
retically reveal the geometric structure of the parameter
space truly common among various deep networks.
To establish the foundation of the universal perspec-
tive of the parameter space, this study analytically
investigates the Fisher information matrix (FIM). As
is overviewed in Section 2.1, the FIM plays an essential
role in the geometry of the parameter space and is a
fundamental quantity in both statistics and machine
learning.
1.1 Main results
This study analyzes the FIM of deep networks with ran-
dom weights and biases, which are widely used settings
to analyze the phenomena of DNNs [ 2–14]. First, we |
2310.06816.pdf | Text Embeddings Reveal (Almost) As Much As Text
John X. Morris, Volodymyr Kuleshov, Vitaly Shmatikov, Alexander M. Rush
Department of Computer Science
Cornell University
Abstract
How much private information do text em-
beddings reveal about the original text? We
investigate the problem of embedding inver-
sion, reconstructing the full text represented
in dense text embeddings. We frame the prob-
lem as controlled generation: generating text
that, when reembedded, is close to a fixed point
in latent space. We find that although a naïve
model conditioned on the embedding performs
poorly, a multi-step method that iteratively cor-
rects and re-embeds text is able to recover 92%
of32-token text inputs exactly. We train our
model to decode text embeddings from two
state-of-the-art embedding models, and also
show that our model can recover important per-
sonal information (full names) from a dataset
of clinical notes.1
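The multi-step inversion procedure can be summarized as a simple control loop. The sketch below shows only that loop; `embed` stands for the black-box encoder and `corrector` for a trained conditional generation model that proposes a revised hypothesis given the target embedding, the current hypothesis, and its embedding. The cosine threshold, step count, and the assumption of unit-normalized embeddings are illustrative:

import numpy as np

def invert_embedding(target_emb, embed, corrector, steps=20, tol=0.999):
    # Iteratively correct and re-embed a text hypothesis until it lands
    # close to the target point in embedding space.
    hyp = corrector(target_emb, hypothesis="", hypothesis_emb=embed(""))
    for _ in range(steps):
        hyp_emb = embed(hyp)
        if float(np.dot(hyp_emb, target_emb)) >= tol:    # embeddings assumed unit-normalized
            break
        hyp = corrector(target_emb, hypothesis=hyp, hypothesis_emb=hyp_emb)
    return hyp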
1 Introduction
Systems that utilize large language models (LLMs)
often store auxiliary data in a vector database of
dense embeddings (Borgeaud et al., 2022; Yao
et al., 2023). Users of these systems infuse knowl-
edge into LLMs by inserting retrieved documents
into the language model’s prompt. Practition-
ers are turning to hosted vector database services
to execute embedding search efficiently at scale
(Pinecone; Qdrant; Vdaas; Weaviate; LangChain).
In these databases, the data owner only sends em-
beddings of text data (Le and Mikolov, 2014; Kiros
et al., 2015) to the third party service, and never
the text itself. The database server returns a search
result as the index of the matching document on
the client side.
Vector databases are increasingly popular, but
privacy threats within them have not been compre-
hensively explored. Can the third party service
reproduce the initial text, given its embedding?
Neural networks are in general non-trivial or even
1Our code is available on Github:
github.com/jxmorris12/vec2text.
impossible to invert exactly. Furthermore, when
querying a neural network through the internet, we
may not have access to the model weights or gradi-
ents at all.
Still, given input-output pairs from a network,
it is often possible to approximate the network’s
inverse. Work on inversion in computer vision
(Mahendran and Vedaldi, 2014; Dosovitskiy and
Brox, 2016) has shown that it is possible to learn
to recover the input image (with some loss) given
the logits of the final layer. Preliminary work has
explored this question for text (Song and Raghu-
nathan, 2020), but only been able to recover an
approximate bag of words given embeddings from
shallow networks.
In this work, we target full reconstruction of in-
put text from its embedding. If text is recoverable,
there is a threat to privacy: a malicious user with ac-
cess to a vector database, and text-embedding pairs
from the model used to produce the data, could
learn a function that reproduces text from embed-
dings.
We frame this problem of recovering textual em-
beddings as a controlled generation problem, where
we seek to generate text such that the text is as close
as possible to a given embedding. Our method,
Vec2Text , uses the difference between a hypothesis
embedding and a ground-truth embedding to make
discrete updates to the text hypothesis.
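A schematic of this iterative correction loop is sketched below; `embed` stands for the black-box encoder being inverted and `corrector` for a trained model that proposes a revised hypothesis, both assumed to exist elsewhere. This is an illustration of the idea, not the released Vec2Text code.

```python
# Iterative embedding inversion: start from a naive guess, then repeatedly re-embed the
# hypothesis and let the corrector condition on the target embedding, the current text,
# and its embedding, keeping the hypothesis whose embedding is closest to the target.
import numpy as np

def invert_embedding(target_emb, embed, corrector, n_steps=20):
    text = corrector(target_emb, hypothesis=None, hypothesis_emb=None)   # step 0: naive guess
    best_text, best_sim = text, -1.0
    for _ in range(n_steps):
        hyp_emb = embed(text)                      # re-embed the current hypothesis
        sim = float(np.dot(target_emb, hyp_emb) /
                    (np.linalg.norm(target_emb) * np.linalg.norm(hyp_emb)))
        if sim > best_sim:
            best_text, best_sim = text, sim
        # the corrector sees both embeddings, so it can exploit their difference
        text = corrector(target_emb, hypothesis=text, hypothesis_emb=hyp_emb)
    return best_text, best_sim
```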
When we embed web documents using a state-of-
the-art black-box encoder, our method can recover
32-token inputs with a near-perfect BLEU score of
97.3, and can recover 92% of the examples exactly.
We then evaluate on embeddings generated from
a variety of common retrieval corpuses from the
BEIR benchmark. Even though these texts were
not seen during training, our method is able to per-
fectly recover the inputs for a number of datapoints
across a variety of domains. We evaluate on em-
beddings of clinical notes from MIMIC and are
able to recover 89% of full names from embedded |
1908.10084v1.pdf | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Nils Reimers and Iryna Gurevych
Ubiquitous Knowledge Processing Lab (UKP-TUDA)
Department of Computer Science, Technische Universität Darmstadt
www.ukp.tu-darmstadt.de
Abstract
BERT (Devlin et al., 2018) and RoBERTa (Liu
et al., 2019) have set a new state-of-the-art
performance on sentence-pair regression tasks
like semantic textual similarity (STS). How-
ever, it requires that both sentences are fed
into the network, which causes a massive com-
putational overhead: Finding the most sim-
ilar pair in a collection of 10,000 sentences
requires about 50 million inference computa-
tions (~65 hours) with BERT. The construction
of BERT makes it unsuitable for semantic sim-
ilarity search as well as for unsupervised tasks
like clustering.
In this publication, we present Sentence-BERT
(SBERT), a modification of the pretrained
BERT network that uses siamese and triplet net-
work structures to derive semantically mean-
ingful sentence embeddings that can be com-
pared using cosine-similarity. This reduces the
effort for finding the most similar pair from 65
hours with BERT / RoBERTa to about 5 sec-
onds with SBERT, while maintaining the ac-
curacy from BERT.
We evaluate SBERT and SRoBERTa on com-
mon STS tasks and transfer learning tasks,
where it outperforms other state-of-the-art
sentence embeddings methods.1
1 Introduction
In this publication, we present Sentence-BERT
(SBERT), a modification of the BERT network us-
ing siamese and triplet networks that is able to
derive semantically meaningful sentence embed-
dings2. This enables BERT to be used for certain
new tasks, which up-to-now were not applicable
for BERT. These tasks include large-scale seman-
1Code available: https://github.com/UKPLab/
sentence-transformers
2With semantically meaningful we mean that semantically
similar sentences are close in vector space.tic similarity comparison, clustering, and informa-
tion retrieval via semantic search.
BERT set new state-of-the-art performance on
various sentence classification and sentence-pair
regression tasks. BERT uses a cross-encoder: Two
sentences are passed to the transformer network
and the target value is predicted. However, this
setup is unsuitable for various pair regression tasks
due to too many possible combinations. Finding
in a collection of n= 10 000 sentences the pair
with the highest similarity requires with BERT
n·(n−1)/2 = 49 995 000 inference computations.
On a modern V100 GPU, this requires about 65
hours. Similarly, finding which of the over 40 mil-
lion existent questions of Quora is the most similar
for a new question could be modeled as a pair-wise
comparison with BERT, however, answering a sin-
gle query would require over 50 hours.
A common method to address clustering and se-
mantic search is to map each sentence to a vec-
tor space such that semantically similar sentences
are close. Researchers have started to input indi-
vidual sentences into BERT and to derive fixed-
size sentence embeddings. The most commonly
used approach is to average the BERT output layer
(known as BERT embeddings) or by using the out-
put of the first token (the [CLS] token). As we
will show, this common practice yields rather bad
sentence embeddings, often worse than averaging
GloVe embeddings (Pennington et al., 2014).
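For concreteness, the mean-pooling baseline described above can be sketched as follows (the model name is just an example checkpoint): average BERT's token outputs while masking padding, then compare sentences with cosine similarity.

```python
# Fixed-size sentence embeddings from averaged BERT token outputs, compared with cosine.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def mean_pool_embed(sentences):
    batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state             # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()      # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)               # (B, H)

emb = mean_pool_embed(["A man is playing a guitar.", "Someone plays an instrument."])
cos = torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)
print(f"cosine similarity: {cos.item():.3f}")
```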
To alleviate this issue, we developed SBERT.
The siamese network architecture enables that
fixed-sized vectors for input sentences can be de-
rived. Using a similarity measure like cosine-
similarity or Manhatten / Euclidean distance, se-
mantically similar sentences can be found. These
similarity measures can be performed extremely
efficient on modern hardware, allowing SBERT
to be used for semantic similarity search as well
as for clustering. The complexity for finding the |
2402.00854.pdf | SymbolicAI: A framework for logic-based approaches
combining generative models and solvers
Marius–Constantin Dinu∗ †Claudiu Leoveanu–Condrei‡Markus Holzleitner†
Werner Zellinger§Sepp Hochreiter†
Abstract
We introduce SymbolicAI , a versatile and modular framework employing a logic-based approach to
concept learning and flow management in generative processes. SymbolicAI enables the seamless
integration of generative models with a diverse range of solvers by treating large language models
(LLMs) as semantic parsers that execute tasks based on both natural and formal language instruc-
tions, thus bridging the gap between symbolic reasoning and generative AI. We leverage probabilistic
programming principles to tackle complex tasks, and utilize differentiable and classical program-
ming paradigms with their respective strengths. The framework introduces a set of polymorphic,
compositional, and self-referential operations for data stream manipulation, aligning LLM outputs
with user objectives. As a result, we can transition between the capabilities of various foundation
models endowed with zero- and few-shot learning capabilities and specialized, fine-tuned models
or solvers proficient in addressing specific problems. In turn, the framework facilitates the creation
and evaluation of explainable computational graphs. We conclude by introducing a quality measure
and its empirical score for evaluating these computational graphs, and propose a benchmark that
compares various state-of-the-art LLMs across a set of complex workflows. We refer to the empirical
score as the ”Vector Embedding for Relational Trajectory Evaluation through Cross-similarity”, or
VERTEX score for short. The framework codebase 1and benchmark 2are linked below.
[Figure 1 labels, Neuro-Symbolic AI Spectrum: Prompting / Fine-Tuning, Software-Engineering, Machine Learning, Foundation Models, Specialist Models, Programming / Learning, Modeling / Coding, Abstraction, Implementation]
Figure 1: Our neuro-symbolic framework enables a seamless transition between classical and differentiable program-
ming, each with distinct dynamics and strengths. Differentiable programming provides access to foundational and
specialist models. Classical programming, on the other hand, shifts between abstraction and implementation, focusing
on high-level concepts before delving into the details of implementation.
∗ExtensityAI, Vienna and AI Austria, Vienna — Corresponding author emails: dinu@ml.jku.at, office@extensity.ai
†ELLIS Unit Linz and LIT AI Lab, Institute for Machine Learning, Johannes Kepler University, Linz
‡Amazon Devices, Timișoara – work done outside of Amazon
§Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Vienna
1SymbolicAI framework released on January 20th, 2023, on GitHub: https://github.com/ExtensityAI/symbolicai .
2Evaluation benchmark released on February 1st, 2024, on GitHub: https://github.com/ExtensityAI/benchmark . |
1907.10786.pdf | Interpreting the Latent Space of GANs for Semantic Face Editing
Yujun Shen1, Jinjin Gu2, Xiaoou Tang1, Bolei Zhou1
1The Chinese University of Hong Kong2The Chinese University of Hong Kong, Shenzhen
{sy116, xtang, bzhou }@ie.cuhk.edu.hk, jinjingu@link.cuhk.edu.cn
[Figure 1 column labels: Original, Pose, Age, Gender, Eyeglasses]
Figure 1: Manipulating various facial attributes through varying the latent codes of a well-trained GAN model. The first column shows the
original synthesis from PGGAN [21], while each of the other columns shows the results of manipulating a specific attribute.
Abstract
Despite the recent advance of Generative Adversarial
Networks (GANs) in high-fidelity image synthesis, there
lacks enough understanding of how GANs are able to map a
latent code sampled from a random distribution to a photo-
realistic image. Previous work assumes the latent space
learned by GANs follows a distributed representation but
observes the vector arithmetic phenomenon. In this work,
we propose a novel framework, called InterFaceGAN, for
semantic face editing by interpreting the latent semantics
learned by GANs. In this framework, we conduct a detailed
study on how different semantics are encoded in the latent
space of GANs for face synthesis. We find that the latent
code of well-trained generative models actually learns a
disentangled representation after linear transformations.
We explore the disentanglement between various semantics
and manage to decouple some entangled semantics with
subspace projection, leading to more precise control of
facial attributes. Besides manipulating gender, age, expres-
sion, and the presence of eyeglasses, we can even vary the
face pose as well as fix the artifacts accidentally generated by GAN models. The proposed method is further applied
to achieve real image manipulation when combined with
GAN inversion methods or some encoder-involved models.
Extensive results suggest that learning to synthesize faces
spontaneously brings a disentangled and controllable facial
attribute representation.1
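The kind of latent-code editing described in the abstract can be sketched as follows (a hedged illustration, not the authors' code): move a latent code along the unit normal of an attribute hyperplane, and project away a second attribute's normal to reduce entanglement.

```python
# Linear latent-space editing: z' = z + alpha * n, with an optional subspace projection
# that removes the component of the primary direction lying along a conditioned attribute.
import numpy as np

def edit_latent(z, n, alpha):
    """Move latent code z by alpha along attribute direction n."""
    return z + alpha * n / np.linalg.norm(n)

def conditional_direction(n_primary, n_condition):
    """Remove the component of n_primary that also changes the conditioned attribute."""
    n2 = n_condition / np.linalg.norm(n_condition)
    n_proj = n_primary - np.dot(n_primary, n2) * n2
    return n_proj / np.linalg.norm(n_proj)

rng = np.random.default_rng(0)
z = rng.normal(size=512)                                    # latent code, e.g. sampled for PGGAN
n_age, n_glasses = rng.normal(size=512), rng.normal(size=512)   # stand-in boundary normals
z_older = edit_latent(z, conditional_direction(n_age, n_glasses), alpha=3.0)
```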
1. Introduction
Generative Adversarial Networks (GANs) [15] have
significantly advanced image synthesis in recent years. The
rationale behind GANs is to learn the mapping from a latent
distribution to the real data through adversarial training.
After learning such a non-linear mapping, GAN is capable
of producing photo-realistic images from randomly sam-
pled latent codes. However, it is uncertain how semantics
originate and are organized in the latent space. Taking face
synthesis as an example, when sampling a latent code to
produce an image, how the code is able to determine various
semantic attributes ( e.g., gender and age) of the output face,
and how these attributes are entangled with each other?
1Code and models are available at this link. |
2107.13163.pdf | Statistically Meaningful Approximation: a
Case Study on Approximating Turing Machines with Transformers
Colin Wei Yining Chen Tengyu Ma
Department of Computer Science
Stanford University
{colinwei,cynnjjs,tengyuma}@cs.stanford.edu
March 31, 2023
Abstract
A common lens to theoretically study neural net architectures is to analyze the functions they can
approximate. However, the constructions from approximation theory often have unrealistic aspects, for
example, reliance on infinite precision to memorize target function values. To address this issue, we propose
a formal definition of statistically meaningful approximation which requires the approximating network to
exhibit good statistical learnability. We present case studies on statistically meaningful approximation
for two classes of functions: boolean circuits and Turing machines. We show that overparameterized feed-
forward neural nets can statistically meaningfully approximate boolean circuits with sample complexity
depending only polynomially on the circuit size, not the size of the approximating network. In addition, we
show that transformers can statistically meaningfully approximate Turing machines with computation time
bounded by T, requiring sample complexity polynomial in the alphabet size, state space size, and log(T).
Our analysis introduces new tools for generalization bounds that provide much tighter sample complexity
guarantees than the typical VC-dimension or norm-based bounds, which may be of independent interest.
1 Introduction
Dating back to the seminal works on universal approximation [16, 25, 40, 31], a common way to theoretically
study neural nets has been through their expressivity, which measures the ability of neural nets to approxi-
mate well-behaved functions. This perspective has shaped how researchers perceive different types of deep
learning architectures: a basic way to theoretically justify new architectures is to study their approximation
capabilities. This has led to a number of analyses studying universal approximation capabilities for various
widely-used architectures, such as recurrent neural nets (RNNs) [47], graph neural nets [46], convolutional
networks [3, 64, 59], residual networks [32], transformers [61], and neural ODEs [51, 63].
However, approximation theoretic results often misalign with more meaningful end-to-end guarantees,
because models constructed in the literature often exhibit unrealistic properties. For example, a common tech-
nique in the universal approximation literature is to rely strongly on infinite-precision weights and activations,
or exponentially many parameters to encode the desired function values [25, 16, 31, 32, 61, 44]. This issue
even arises outside of universal approximation, e.g., various papers demonstrate the ability of RNNs and trans-
formers to simulate various computational models such as Turing machines and automata, but require strong
reliance on arbitrary precision [48, 42, 29, 9]. Infinite precision can inflate the expressivity of an architecture
(function class) in an unrealistic and misleading way: for example, finite width RNNs with infinite precision can
simulate Turing machines, but finite-precision, finite-width RNNs cannot. This is implied by streaming lower
bounds [1] – any finite-precision, finite-width RNN induces a finite-space streaming algorithm corresponding
to running the RNN on the inputs. However, streaming lower bounds tell us that finite-space streaming al-
gorithms are not powerful enough to simulate Turing machines, and hence finite-precision, finite-width RNNs
|
1906.08237.pdf | XLNet: Generalized Autoregressive Pretraining
for Language Understanding
Zhilin Yang∗1, Zihang Dai∗12, Yiming Yang1, Jaime Carbonell1,
Ruslan Salakhutdinov1, Quoc V . Le2
1Carnegie Mellon University,2Google AI Brain Team
{zhiliny,dzihang,yiming,jgc,rsalakhu}@cs.cmu.edu, qvl@google.com
Abstract
With the capability of modeling bidirectional contexts, denoising autoencoding
based pretraining like BERT achieves better performance than pretraining ap-
proaches based on autoregressive language modeling. However, relying on corrupt-
ing the input with masks, BERT neglects dependency between the masked positions
and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we
propose XLNet, a generalized autoregressive pretraining method that (1) enables
learning bidirectional contexts by maximizing the expected likelihood over all
permutations of the factorization order and (2) overcomes the limitations of BERT
thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas
from Transformer-XL, the state-of-the-art autoregressive model, into pretraining.
Empirically, under comparable experiment settings, XLNet outperforms BERT on
20 tasks, often by a large margin, including question answering, natural language
inference, sentiment analysis, and document ranking.1.
1 Introduction
Unsupervised representation learning has been highly successful in the domain of natural language
processing [ 7,22,27,28,10]. Typically, these methods first pretrain neural networks on large-scale
unlabeled text corpora, and then finetune the models or representations on downstream tasks. Under
this shared high-level idea, different unsupervised pretraining objectives have been explored in
literature. Among them, autoregressive (AR) language modeling and autoencoding (AE) have been
the two most successful pretraining objectives.
AR language modeling seeks to estimate the probability distribution of a text corpus with an au-
toregressive model [ 7,27,28]. Specifically, given a text sequence x= (x1,···,xT), AR language
modeling factorizes the likelihood into a forward product p(x) = ∏_{t=1}^{T} p(x_t | x_{<t}) or a backward
one p(x) = ∏_{t=T}^{1} p(x_t | x_{>t}). A parametric model (e.g. a neural network) is trained to model each
conditional distribution. Since an AR language model is only trained to encode a uni-directional con-
text (either forward or backward), it is not effective at modeling deep bidirectional contexts. On the
contrary, downstream language understanding tasks often require bidirectional context information.
This results in a gap between AR language modeling and effective pretraining.
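As a toy illustration of the forward factorization above, the sequence log-likelihood is simply the sum of per-token conditional log-probabilities; the sketch below uses a bigram model, i.e. it conditions on only the previous token rather than the full history.

```python
# log p(x) = sum_t log p(x_t | x_{<t}); here each conditional is a maximum-likelihood
# bigram estimate from a three-sentence toy corpus.
import math
from collections import Counter

corpus = ["the cat sat", "the cat ran", "the dog sat"]
bigrams, contexts = Counter(), Counter()
for line in corpus:
    toks = ["<s>"] + line.split()
    contexts.update(toks[:-1])
    bigrams.update(zip(toks[:-1], toks[1:]))

def log_p(seq):
    toks = ["<s>"] + seq.split()
    return sum(math.log(bigrams[(a, b)] / contexts[a]) for a, b in zip(toks[:-1], toks[1:]))

print(log_p("the cat sat"))   # sequence log-likelihood under the toy AR model
```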
In comparison, AE based pretraining does not perform explicit density estimation but instead aims to
reconstruct the original data from corrupted input. A notable example is BERT [ 10], which has been
the state-of-the-art pretraining approach. Given the input token sequence, a certain portion of tokens
are replaced by a special symbol [MASK] , and the model is trained to recover the original tokens from
the corrupted version. Since density estimation is not part of the objective, BERT is allowed to utilize
∗Equal contribution. Order determined by swapping the one in [9].
1Pretrained models and code are available at https://github.com/zihangdai/xlnet
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. |
2206.05895.pdf | Latent Diffusion Energy-Based Model for Interpretable Text Modeling
Peiyu Yu1,2, Sirui Xie1, Xiaojian Ma1,2, Baoxiong Jia1,2, Bo Pang3,
Ruiqi Gao4, Yixin Zhu5,6, Song-Chun Zhu1,2,5,6,7,8, Ying Nian Wu7
Abstract
Latent space Energy-Based Models ( EBM s), also
known as energy-based priors, have drawn grow-
ing interests in generative modeling. Fueled by its
flexibility in the formulation and strong modeling
power of the latent space, recent works built upon
it have made interesting attempts aiming at the
interpretability of text modeling. However, latent
space EBM s also inherit some flaws from EBM s
in data space; the degenerate MCMC sampling
quality in practice can lead to poor generation
quality and instability in training, especially on
data with complex latent structures. Inspired by
the recent efforts that leverage diffusion recovery
likelihood learning as a cure for the sampling is-
sue, we introduce a novel symbiosis between the
diffusion models and latent space EBM s in a vari-
ational learning framework, coined as the latent
diffusion energy-based model . We develop a geo-
metric clustering-based regularization jointly with
the information bottleneck to further improve the
quality of the learned latent space. Experiments
on several challenging tasks demonstrate the su-
perior performance of our model on interpretable
text modeling over strong counterparts.
1. Introduction
Text modeling has achieved impressive progress with the
fast development of neural generative models (Serban et al.,
2016; Li et al., 2017a; Zhao et al., 2017; Gupta et al., 2018;
Code repo and data: https://github.com/yuPeiyu98/Latent-
Diffusion-EBM.1Department of Computer Science, UCLA,
USA2Beijing Institute for General Artificial Intelligence, China
3Salesforce Research, USA4Google Brain, USA5Institute for Ar-
tificial Intelligence, Peking University, China6School of Artificial
Intelligence, Peking University, China7Department of Statistics,
UCLA, USA8Department of Automation, Tsinghua University,
China.
Correspondence to: Peiyu Yu <yupeiyu98@g.ucla.edu>.
Proceedings of the 39thInternational Conference on Machine
Learning , Baltimore, Maryland, USA, PMLR 162, 2022. Copy-
right 2022 by the author(s).
[Figure 1 diagram: variables x, y, z0, ..., zT connected by q(z_{t+1}|z_t), p_α(z_t|z_{t+1}), p_α(y, z0|z1), q_φ(z0|x), and p_β(x|z0), for t = 1, ..., T−1]
Figure 1. Graphical illustration of the latent diffusion process.
We construct the forward and reverse diffusion processes in the la-
tent space. The symbolic one-hot vector is coupled with the initial
latent vector z0. The latent and diffused latent variables are high-
lighted by the red and blue plates, respectively. The cyan arrows
indicate that z0is connected with only z1. We learn a sequence of
EBMs to model the reverse diffusion process pα(zt|zt+1).
Zhao et al., 2018a). It allows near human-level text gener-
ation quality and also leads to a wide range of real-world
applications such as dialog system (Young et al., 2013) and
machine translation (Brown et al., 1993). Although the qual-
ity of generation ( e.g., fluency and diversity) is the primary
concern of most work, interpretability of the generation pro-
cess has drawn much attention recently. Among the existing
frameworks, the Deep Latent Variable Model ( DLVM ) is
especially suitable for the task, as the learned latent space
could capture high-level structures with semantic meanings
like topics (Wang et al., 2019) and dialog actions (Zhao
et al., 2018b); such latent space could further enable more
interpretable text modeling, featuring unsupervised text at-
tributes discovery (Wen et al., 2017), conditional and con-
trollable text generation (Fang et al., 2019; Shi et al., 2020),
and semi-supervised text classification (Pang & Wu, 2021).
In essence, DLVM summarizes the observed sample ( e.g.,
a piece of text) into inferred latent variables. Earlier
text-modeling methods with DLVM mostly follow the for-
mulation of Variational Auto-Encoder ( V AE ) (Kingma &
Welling, 2013; Rezende et al., 2014; Bowman et al., 2016),
which assumes a continuous latent space. More recently,
Zhao et al. (2018b) explore the possibility of using a discrete
latent space to capture dialog actions; Shi et al. (2020) pro-
pose to use V AE with the mixture of Gaussians as the prior,
demonstrating promising interpretability of dialog utterance |
2209.13325.pdf | Outlier Suppression: Pushing the Limit of Low-bit
Transformer Language Models
Xiuying Wei1, 2, Yunchen Zhang2, 4, Xiangguo Zhang2, Ruihao Gong1, 2,
Shanghang Zhang3, Qi Zhang2, Fengwei Yu2, Xianglong Liu1∗
1State Key Lab of Software Development Environment, Beihang University
2SenseTime Research,3Peking University
4University of Electronic Science and Technology of China
{weixiuying, zhangyunchen, zhangxiangguo, gongruihao}@sensetime.com
shanghang@pku.edu.cn, xlliu@buaa.edu.cn
Abstract
Transformer architecture has become the fundamental element of the widespread
natural language processing (NLP) models. With the trends of large NLP models,
the increasing memory and computation costs hinder their efficient deployment
on resource-limited devices. Therefore, transformer quantization attracts wide
research interest. Recent work recognizes that structured outliers are the criti-
cal bottleneck for quantization performance. However, their proposed methods
increase the computation overhead and still leave the outliers there. To funda-
mentally address this problem, this paper delves into the inherent inducement
and importance of the outliers. We discover that γin LayerNorm (LN) acts as
a sinful amplifier for the outliers, and the importance of outliers varies greatly
where some outliers provided by a few tokens cover a large area but can be clipped
sharply without negative impacts. Motivated by these findings, we propose an
outlier suppression framework including two components: Gamma Migration
and Token-Wise Clipping. The Gamma Migration migrates the outlier amplifier
to subsequent modules in an equivalent transformation, contributing to a more
quantization-friendly model without any extra burden. The Token-Wise Clipping
takes advantage of the large variance of token range and designs a token-wise
coarse-to-fine pipeline, obtaining a clipping range with minimal final quantiza-
tion loss in an efficient way. This framework effectively suppresses the outliers
and can be used in a plug-and-play mode. Extensive experiments prove that our
framework surpasses the existing works and, for the first time, pushes the 6-bit post-
training BERT quantization to the full-precision (FP) level. Our code is available
at https://github.com/wimh966/outlier_suppression.
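A hedged sketch of the Gamma Migration idea (not the authors' code): since LayerNorm(x) = normalize(x)·γ + β, the per-channel scale γ can be folded into the following linear layer, leaving a scale-free LayerNorm that is friendlier to activation quantization; the transformation below is exactly equivalent.

```python
# Fold LayerNorm's gamma (and beta) into the next linear layer: W' = W·diag(gamma),
# b' = b + W·beta, so fc(ln(x)) == fc_mig(ln_mig(x)) with a unit-scale LayerNorm.
import torch
import torch.nn as nn

d = 16
ln, fc = nn.LayerNorm(d), nn.Linear(d, d)
with torch.no_grad():
    ln.weight.copy_(torch.rand(d) * 5 + 0.1)      # a "spiky" gamma acting as outlier amplifier
    ln.bias.copy_(torch.randn(d))

ln_mig, fc_mig = nn.LayerNorm(d), nn.Linear(d, d)
with torch.no_grad():
    ln_mig.weight.fill_(1.0)
    ln_mig.bias.zero_()                             # beta is folded into the linear bias below
    fc_mig.weight.copy_(fc.weight * ln.weight)      # scale each input column by gamma
    fc_mig.bias.copy_(fc.bias + fc.weight @ ln.bias)

x = torch.randn(4, d)
print(torch.allclose(fc(ln(x)), fc_mig(ln_mig(x)), atol=1e-5))   # True: same function
```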
1 Introduction
Transformer [ 1] has been one of the most common architectures in natural language processing along
with lots of popular self-supervised models, such as BERT [ 2], RoBERTa [ 3], XLNet [ 4] and BART
[5]. While these pre-trained models have demonstrated a significant superiority in performance, the
memory and computation overheads have been a popular concern, particularly in the real development.
Therefore, model compression [ 6,7,8,9] has attracted much attention from both academia and
industry. Among them, quantization [ 10,11,12,13,14,15,16,17,18,19,20], working in the
low-precision arithmetic fashion, is one of the key approaches to compress large models and fit them
into the lightweight devices.
∗Corresponding author.
36th Conference on Neural Information Processing Systems (NeurIPS 2022). |
2308.05660v1.pdf | Thermodynamic Linear Algebra
Maxwell Aifer, Kaelan Donatella, Max Hunter Gordon,
Thomas Ahle, Daniel Simpson, Gavin Crooks, Patrick J. Coles
Normal Computing Corporation, New York, New York, USA
Linear algebraic primitives are at the core of many modern algorithms in engineering, science, and
machine learning. Hence, accelerating these primitives with novel computing hardware would have
tremendous economic impact. Quantum computing has been proposed for this purpose, although
the resource requirements are far beyond current technological capabilities, so this approach remains
long-term in timescale. Here we consider an alternative physics-based computing paradigm based
on classical thermodynamics, to provide a near-term approach to accelerating linear algebra.
At first sight, thermodynamics and linear algebra seem to be unrelated fields. In this work, we
connect solving linear algebra problems to sampling from the thermodynamic equilibrium distri-
bution of a system of coupled harmonic oscillators. We present simple thermodynamic algorithms
for (1) solving linear systems of equations, (2) computing matrix inverses, (3) computing matrix
determinants, and (4) solving Lyapunov equations. Under reasonable assumptions, we rigorously
establish asymptotic speedups for our algorithms, relative to digital methods, that scale linearly
in matrix dimension. Our algorithms exploit thermodynamic principles like ergodicity, entropy,
and equilibration, highlighting the deep connection between these two seemingly distinct fields, and
opening up algebraic applications for thermodynamic computing hardware.
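The connection can be illustrated in simulation (a software stand-in, not the proposed hardware): for the potential U(x) = ½xᵀAx − bᵀx, the equilibrium distribution is Gaussian with mean A⁻¹b, so time-averaging an overdamped Langevin trajectory estimates the solution of Ax = b.

```python
# Estimate A^{-1} b by sampling the equilibrium of overdamped Langevin dynamics
# dx = -(A x - b) dt + sqrt(2) dW (unit temperature), simulated with Euler–Maruyama.
import numpy as np

rng = np.random.default_rng(0)
d = 4
M = rng.normal(size=(d, d))
A = M @ M.T + d * np.eye(d)                  # symmetric positive definite
b = rng.normal(size=d)

dt, n_steps, burn_in = 1e-3, 200_000, 20_000
x, samples = np.zeros(d), []
for step in range(n_steps):
    x = x - (A @ x - b) * dt + np.sqrt(2 * dt) * rng.normal(size=d)
    if step >= burn_in:
        samples.append(x.copy())

x_est = np.mean(samples, axis=0)             # ≈ A^{-1} b (sample covariance would ≈ A^{-1})
print(np.linalg.norm(x_est - np.linalg.solve(A, b)))
```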
I. Introduction
Basic linear algebra primitives such as solving a linear system of the form Ax=band obtaining the
inverse of a matrix are present in many modern algorithms. Such primitives are relevant to a multitude
of applications, including for example optimal control of dynamic systems and resource allocation. They
are also a common subroutine of many artificial intelligence (AI) algorithms, and account for a substantial
portion of the time and energy costs in some cases.
The most common method to perform these primitives is LU decomposition, whose time-complexity
scales as O(d3). Many proposals have been made to accelerate such primitives, for example using iterative
methods such as the conjugate gradient method. In the last decade, these primitives have been accelerated
by hardware improvements, notably by their implementation on graphical processing units (GPUs), fueling
massive parallelization. However, the scaling of these methods is still a prohibitive factor, and obtaining
a good approximate solution to a dense matrix of more than a few tens of thousand dimensions remains
challenging.
Exploiting physics to solve mathematical problems is a deep idea, with much focus on solving optimization
problems [1–3]. In the context of linear algebra, much attention has been paid to quantum computers [4],
since the mathematics of discrete-variable quantum mechanics matches that of linear algebra. A quantum
algorithm [5] to solve linear systems has been proposed, which for sparse and well-conditioned matrices
scales as logd. However, the resource requirements [6] for this algorithm are far beyond current hardware
capabilities. More generally building large-scale quantum hardware has remained difficult [7], and variational
quantum algorithms for linear algebra [8–10] have battled with vanishing gradient issues [11–13].
Therefore, the search for alternative hardware proposals that can exploit physical dynamics to accelerate
linear algebra primitives has been ongoing. Notably, memristor crossbar arrays have been of interest for
accelerating matrix-vector multiplications [14, 15]. Solving linear systems has also been the subject of
analog computing approaches [16].
Recently, we defined a new class of hardware, built from stochastic, analog building blocks, which is
ultimately thermodynamic in nature [17]. (See also probabilistic-bit computers [18–20] and thermodynamic
neural networks [21–24] for alternative approaches to thermodynamic computing [25]). AI applications like
generative modeling are a natural fit for this thermodynamic hardware, where stochastic fluctuations are
exploited to generate novel samples.
In this work, we surprisingly show that the same thermodynamic hardware from Ref. [17] can also be used
to accelerate key primitives in linear algebra. Thermodynamics is not typically associated with linear algebra,
and connecting these two fields is therefore non-trivial. Here, we exploit the fact that the mathematics of
harmonic oscillator systems is inherently affine (i.e., linear), and hence we can map linear algebraic primitives
onto such systems. (See also Ref. [26] for a discussion of harmonic oscillators in the context of quantum
computing speedups.) We show that simply by sampling from the thermal equilibrium distribution of coupled
harmonic oscillators, one can solve a variety of linear algebra problems. |
2312.17227.pdf | Gradient-based Planning with World Models
Jyothir S V1∗Siddhartha Jalagam1∗Yann LeCun1, 2Vlad Sobal1, 2
1New York University2Meta AI
{jyothir, scj9994, us441}@nyu.edu
yann@cs.nyu.edu
Abstract
The enduring challenge in the field of artificial intelligence has been the control of
systems to achieve desired behaviours. While for systems governed by straightfor-
ward dynamics equations, methods like Linear Quadratic Regulation (LQR) have
historically proven highly effective, most real-world tasks, which require a general
problem-solver, demand world models with dynamics that cannot be easily de-
scribed by simple equations. Consequently, these models must be learned from data
using neural networks. Most model predictive control (MPC) algorithms designed
for visual world models have traditionally explored gradient-free population-based
optimization methods, such as Cross Entropy and Model Predictive Path Integral
(MPPI) for planning. However, we present an exploration of a gradient-based
alternative that fully leverages the differentiability of the world model. In our study,
we conduct a comparative analysis between our method and other MPC-based
alternatives, as well as policy-based algorithms. In a sample-efficient setting, our
method achieves on par or superior performance compared to the alternative ap-
proaches in most tasks. Additionally, we introduce a hybrid model that combines
policy networks and gradient-based MPC, which outperforms pure policy based
methods thereby holding promise for Gradient-based planning with world models
in complex real-world tasks.
1 Introduction
Until recently, model-free reinforcement learning (RL) algorithms [ 24][28] have been the predominant
choice for visual control tasks, particularly in simple environments like Atari games. However, these
model-free algorithms are notorious for their sample inefficiency and lack of generality. If the tasks
change, the policy needs to be trained again. They are constrained by their inability to transfer
knowledge gained from training in one environment to another. Consequently, they must undergo
retraining for even minor deviations from the original task. Real-world applications where the agent
needs to solve a multitude of different tasks in the environment, such as robotics, demand a more
general approach.
To address this limitation, multiple types of methods have been proposed. In this work, we focus on
model-based planning methods. These model-based approaches encompass three key components: a
learned dynamics model that predicts state transitions, a learned reward or value model analogous to
the cost function in Linear Quadratic Regulation (LQR) [ 6], which encapsulates state desirability
information, and a planner that harnesses the world model and reward model to achieve desired states.
While previous research in planning using Model Predictive Control (MPC) [ 25] has primarily focused
on gradient-free methods like cross-entropy[ 27,9], these methods are computationally expensive and
do not utilize the differentiability of the learned world model.
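A hedged sketch of the gradient-based alternative: unroll a differentiable learned dynamics model over a candidate action sequence, sum predicted rewards, and update the actions by gradient ascent; the two small MLPs below stand in for learned world and reward models.

```python
# Gradient-based MPC: optimize an action sequence by backpropagating through the
# (differentiable) dynamics and reward models; execute the first action, then replan.
import torch
import torch.nn as nn

state_dim, action_dim, horizon = 8, 2, 10
dynamics = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, state_dim))
reward = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, 1))

def plan(s0, n_iters=100, lr=0.1):
    actions = torch.zeros(horizon, action_dim, requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(n_iters):
        s, ret = s0, torch.zeros(())
        for t in range(horizon):
            sa = torch.cat([s, actions[t]])
            ret = ret + reward(sa).squeeze()
            s = dynamics(sa)                 # gradients flow through the world model
        loss = -ret                          # maximize predicted return
        opt.zero_grad()
        loss.backward()
        opt.step()
    return actions.detach()

first_action = plan(torch.randn(state_dim))[0]
```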
∗Equal Contribution.
Preprint. Under review. |
10.1038.s41564-023-01584-8.pdf | Nature Microbiology
nature microbiology https://doi.org/10.1038/s41564-023-01584-8
Analysis
Large language models improve annotation
of prokaryotic viral proteins
Zachary N. Flamholz 1, Steven J. Biller 2 & Libusha Kelly 1,3
Viral genomes are poorly annotated in metagenomic samples, representing
an obstacle to understanding viral diversity and function. Current annotation approaches rely on alignment-based sequence homology methods, which are limited by the paucity of characterized viral proteins and divergence among viral sequences. Here we show that protein language models can capture prokaryotic viral protein function, enabling new portions of viral sequence space to be assigned biologically meaningful labels. When applied to global ocean virome data, our classifier expanded the annotated fraction of viral protein families by 29%. Among previously unannotated sequences, we highlight the identification of an integrase defining a mobile element in marine picocyanobacteria and a capsid protein that anchors globally widespread viral elements. Furthermore, improved high-level functional annotation provides a means to characterize similarities in genomic organization among diverse viral sequences. Protein language models thus enhance remote homology detection of viral proteins, serving as a useful complement to existing approaches.
Viruses of microorganisms, hereafter ‘viruses’ , are abundant in the
environment and have wide-ranging impacts on microbial communi-
ties. Much of what we know about viral diversity, ecology and function
comes from the analysis of sequences obtained from environmental
samples, yet viruses are difficult to identify, classify and annotate.
Thus, we make statements about viral biology and viral impacts on
microbial community structure and function based on a tiny fraction
of viral sequences with sufficient similarity to existing references. In
recent years, next-generation sequencing and increasing computa -
tional resources have been applied to catalogue the world’s virome1–7.
While there has been substantial methodological progress in identify -
ing viral DNA in whole-community metagenomic sequence data8–16,
sequence feature annotation and overall taxonomic assignment of
identified uncultivated virus genomes (UViGs) have lagged consid -
erably. Viruses have no universal conserved marker genes to enable
broad, unified, taxonomic analysis, and thus, most of the hundreds of
thousands of new viruses uncovered in viral catalogue studies remain
unclassified1–7. Viral taxonomic classification is generally based on
using predicted UViG proteins as features for clustering-based17–19 or machine-learning-based20 taxonomic classification. Yet, as many as
86% of environmental viral protein clusters match uncharacterized
protein families or have no hits at all6,7,16,21,22. Although detailed manual
investigation of these sequence clusters may be able to yield hints of
potential functions in some cases, such labour-intensive efforts do not
readily scale to the amount of data being generated. Improved anno -
tation of viral protein families (VPFs) is thus a necessary, unrealized
step towards understanding the roles of viruses in microbial ecology.
Viral protein annotation currently relies on sequence homology
using state-of-the-art approaches based on profile hidden Markov
models (pHMMs). For viral metagenomics, sequence homology meth -
ods suffer from two fundamental limitations: (1) the limited library of
annotated viral protein sequences from which to construct probabilistic sequence models and (2) the rate at which viral proteins change,
quickly diverging beyond recognition by traditional sequence homol -
ogy metrics. An alignment-free method that does not depend on con-
structing sequence profiles for statistical sequence homology and that
can leverage functional homology between proteins could overcome both challenges.
Received: 23 April 2023
Accepted: 8 December 2023
Published online: xx xx xxxx
1Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA. 2Department of Biological Sciences,
Wellesley College, Wellesley, MA, USA. 3Department of Microbiology and Immunology, Albert Einstein College of Medicine, Bronx, NY, USA.
e-mail: libusha.kelly@einsteinmed.edu |
2202.03286.pdf | Red Teaming Language Models with Language Models
WARNING: This paper contains model outputs which are offensive in nature.
Ethan Perez1 2Saffron Huang1Francis Song1Trevor Cai1Roman Ring1
John Aslanides1Amelia Glaese1Nat McAleese1Geoffrey Irving1
1DeepMind,2New York University
perez@nyu.edu
Abstract
Language Models (LMs) often cannot
be deployed because of their potential to
harm users in hard-to-predict ways. Prior
work identifies harmful behaviors before
deployment by using human annotators to
hand-write test cases. However, human
annotation is expensive, limiting the number
and diversity of test cases. In this work, we
automatically find cases where a target LM
behaves in a harmful way, by generating
test cases ( “red teaming” ) using another
LM. We evaluate the target LM’s replies to
generated test questions using a classifier
trained to detect offensive content, uncovering
tens of thousands of offensive replies in a
280B parameter LM chatbot. We explore
several methods, from zero-shot generation
to reinforcement learning, for generating
test cases with varying levels of diversity
and difficulty. Furthermore, we use prompt
engineering to control LM-generated test
cases to uncover a variety of other harms,
automatically finding groups of people that the
chatbot discusses in offensive ways, personal
and hospital phone numbers generated as
the chatbot’s own contact info, leakage
of private training data in generated text,
and harms that occur over the course of a
conversation. Overall, LM-based red teaming
is one promising tool (among many needed)
for finding and fixing diverse, undesirable LM
behaviors before impacting users.
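The pipeline in the abstract reduces to a simple loop, sketched below with placeholder callables (`red_lm` generates test questions, `target_lm` answers them, `offensiveness` is a classifier returning a score in [0, 1]); none of these are the paper's actual models or prompts.

```python
# Red-teaming loop: generate test cases with one LM, answer them with the target LM,
# and keep the (question, reply) pairs the classifier flags as harmful.
def red_team(red_lm, target_lm, offensiveness, n_cases=10_000, threshold=0.5):
    failures = []
    for _ in range(n_cases):
        question = red_lm("List of questions to ask someone:\n1.")   # illustrative zero-shot prompt
        reply = target_lm(question)
        score = offensiveness(question, reply)
        if score > threshold:                     # failing test case found
            failures.append((question, reply, score))
    return sorted(failures, key=lambda f: -f[2])
```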
1 Introduction
Although we had prepared for many types of
abuses of the system, we had made a critical
oversight for this specific attack.
Lee (2016)
Language Models (LMs) are promising tools
for a variety of applications, ranging from
conversational assistants to question-answering
systems. However, deploying LMs in production
threatens to harm users in hard-to-predict ways.
[Figure 1 panels (Red LM, Target LM, Red Clf) show example generated test questions, target LM replies, and classifier verdicts for failure categories including Offensive, Data Leakage, User Info, Distributional Bias, and Offensive Dialog.]
Figure 1: Overview: We automatically generate test cases with a language model (LM), reply with the target LM, and find failing test cases using a classifier.
For example, Microsoft took down its chatbot
Tay after adversarial users evoked it into sending
racist and sexually-charged tweets to over 50,000
followers (Lee, 2016). Other work has found
that LMs generate misinformation (Lin et al.,
2021) and confidential, personal information (e.g.,
social security numbers) from the LM training
corpus (Carlini et al., 2019, 2021). Such failures
have serious consequences, so it is crucial to
discover and fix these failures before deployment.
Prior work requires human annotators to
manually discover failures, limiting the number
and diversity of failures found. For example, some
efforts find failures by using many hand-written test
cases either directly (Ribeiro et al., 2020; Röttger
et al., 2021; Xu et al., 2021b) or for supervised
test case generation (Bartolo et al., 2021a). Other
efforts manually compose templates and code to |
2401.14196.pdf | DeepSeek-Coder: When the Large Language Model Meets
Programming - The Rise of Code Intelligence
Daya Guo*1, Qihao Zhu∗1,2, Dejian Yang1, Zhenda Xie1, Kai Dong1, Wentao Zhang1
Guanting Chen1, Xiao Bi1, Y. Wu1, Y.K. Li1, Fuli Luo1, Yingfei Xiong2, Wenfeng Liang1
1DeepSeek-AI
2Key Lab of HCST (PKU), MOE; SCS, Peking University
{zhuqh, guodaya}@deepseek.com
https://github.com/deepseek-ai/DeepSeek-Coder
Abstract
The rapid development of large language models has revolutionized code intelligence in
software development. However, the predominance of closed-source models has restricted
extensive research and development. To address this, we introduce the DeepSeek-Coder series,
a range of open-source code models with sizes from 1.3B to 33B, trained from scratch on 2
trillion tokens. These models are pre-trained on a high-quality project-level code corpus and
employ a fill-in-the-blank task with a 16K window to enhance code generation and infilling.
Our extensive evaluations demonstrate that DeepSeek-Coder not only achieves state-of-the-art
performance among open-source code models across multiple benchmarks but also surpasses
existing closed-source models like Codex and GPT-3.5. Furthermore, DeepSeek-Coder models
are under a permissive license that allows for both research and unrestricted commercial use.
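As an illustration of the fill-in-the-blank (fill-in-the-middle) objective mentioned above, a training example can be built by splitting a file into prefix, middle, and suffix and rearranging them with sentinel markers; the sentinel strings below are placeholders, not DeepSeek-Coder's actual special tokens.

```python
# Build one fill-in-the-middle example in prefix-suffix-middle (PSM) order, so the model
# learns to generate the missing middle span conditioned on both sides.
import random

def make_fim_example(code: str, rng: random.Random) -> str:
    i, j = sorted(rng.sample(range(len(code)), 2))
    prefix, middle, suffix = code[:i], code[i:j], code[j:]
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>{middle}"

rng = random.Random(0)
print(make_fim_example("def add(a, b):\n    return a + b\n", rng))
```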
Figure 1|The Performance of DeepSeek-Coder
*Core contributors, ordered alphabetically by the name. |
dubey2022pursuit.pdf | RESEARCH ARTICLE
The pursuit of happiness: A reinforcement learning perspective on habituation and comparisons
Rachit Dubey1*, Thomas L. Griffiths2, Peter Dayan3,4
1Department of Computer Science, Princeton University, Princeton, New Jersey, United States of America, 2Department of Psychology, Princeton University, Princeton, New Jersey, United States of America, 3Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 4University of Tübingen, Tübingen, Germany
*rdubey@princeton.edu
Abstract
In evaluating our choices, we often suffer from two tragic relativities. First, when our lives
change for the better, we rapidly habituate to the higher standard of living. Second, we cannot
escape comparing ourselves to various relative standards. Habituation and comparisons
can be very disruptive to decision-making and happiness, and till date, it remains a puzzle
why they have come to be a part of cognition in the first place. Here, we present computational
evidence that suggests that these features might play an important role in promoting
adaptive behavior. Using the framework of reinforcement learning, we explore the benefit of
employing a reward function that, in addition to the reward provided by the underlying task,
also depends on prior expectations and relative comparisons. We find that while agents
equipped with this reward function are less happy, they learn faster and significantly outperform
standard reward-based agents in a wide range of environments. Specifically, we find
that relative comparisons speed up learning by providing an exploration incentive to the
agents, and prior expectations serve as a useful aid to comparisons, especially in sparsely-
rewarded and non-stationary environments. Our simulations also reveal potential drawbacks
of this reward function and show that agents perform sub-optimally when comparisons
are left unchecked and when there are too many similar options. Together, our results
help explain why we are prone to becoming trapped in a cycle of never-ending wants and
desires, and may shed light on psychopathologies such as depression, materialism, and
overconsumption.
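A toy sketch of the kind of reward shaping studied here (not the paper's exact equations): the subjective reward adds a habituation term, reward relative to a running expectation, and a comparison term, reward relative to an alternative outcome.

```python
# Subjective reward = task reward + habituation term + comparison term; the running
# expectation adapts, so a constant payoff "feels" smaller over time. Weights are illustrative.
class SubjectiveReward:
    def __init__(self, w_expectation=0.5, w_comparison=0.5, lr=0.1):
        self.w_e, self.w_c, self.lr = w_expectation, w_comparison, lr
        self.expectation = 0.0                       # running estimate of "what I usually get"

    def __call__(self, task_reward, comparison_reward):
        subjective = (task_reward
                      + self.w_e * (task_reward - self.expectation)     # habituation
                      + self.w_c * (task_reward - comparison_reward))   # relative comparison
        self.expectation += self.lr * (task_reward - self.expectation)  # expectations adapt
        return subjective

r = SubjectiveReward()
print([round(r(1.0, comparison_reward=0.8), 3) for _ in range(5)])  # same payoff, shrinking feeling
```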
Author summary
Even in favorable circumstances, we often find it hard to remain happy with what we
have. One might enjoy a newly bought car for a season, but over time it brings fewer positive
feelings and one eventually begins dreaming of the next rewarding thing to pursue.
Here, we present a series of computational simulations that suggest these presumable
"flaws" might play an important role in promoting adaptive behavior. We explore the
Citation: Dubey R, Griffiths TL, Dayan P (2022) The pursuit of happiness: A reinforcement learning perspective on habituation and comparisons. PLoS Comput Biol 18(8): e1010316. https://doi.org/10.1371/journal.pcbi.1010316
Editor: Lusha Zhu, Peking University, CHINA
Received: January 22, 2022
Accepted: June 18, 2022
Published: August 4, 2022
Copyright: © 2022 Dubey et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability Statement: The source code to produce the results presented in this manuscript is available on a GitHub repository at https://github.com/rach0012/happiness_RL.
Funding: The authors received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist. |
2312.11671v2.pdf | Evaluating Language-Model Agents on Realistic
Autonomous Tasks
Megan Kinniment Lucas Jun Koba Sato Haoxing Du Brian Goodrich Max Hasin
Lawrence Chan Luke Harold Miles Tao R. Lin Hjalmar Wijk Joel Burget
Aaron Ho Elizabeth Barnes∗Paul Christiano†
METR (Formerly ARC Evals)
Abstract
In this report, we explore the ability of language model agents to acquire resources,
create copies of themselves, and adapt to novel challenges they encounter in
the wild. We refer to this cluster of capabilities as “autonomous replication and
adaptation” or ARA. We believe that systems capable of ARA could have wide-
reaching and hard-to-anticipate consequences, and that measuring and forecasting
ARA may be useful for informing measures around security, monitoring, and
alignment. Additionally, once a system is capable of ARA, placing bounds on a
system’s capabilities may become significantly more difficult.
We construct four simple example agents that combine language models with tools
that allow them to take actions in the world. We then evaluate these agents on 12
tasks relevant to ARA. We find that these language model agents can only complete
the easiest tasks from this list, although they make some progress on the more
challenging tasks. Unfortunately, these evaluations are not adequate to rule out the
possibility that near-future agents will be capable of ARA. In particular, we do not
think that these evaluations provide good assurance that the “next generation” of
language models (e.g. 100x effective compute scaleup on existing models) will
not yield agents capable of ARA, unless intermediate evaluations are performed
during pretraining. Relatedly, we expect that fine-tuning of the existing models
could produce substantially more competent agents, even if the fine-tuning is not
directly targeted at ARA.
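The "language model plus tools" agents referred to above follow a simple control loop, sketched here with placeholder functions (illustrative only, not the authors' scaffolding).

```python
# Agent loop: prompt the LM with the task and transcript, parse either a tool call or a
# final answer, run the tool, and feed its output back into the transcript.
def run_agent(lm, tools, task, max_steps=25):
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        action = lm("\n".join(transcript) +
                    f"\nAvailable tools: {list(tools)}\nNext action (tool:args or FINISH:answer):")
        name, _, arg = action.partition(":")
        if name.strip() == "FINISH":
            return arg.strip(), transcript
        result = tools.get(name.strip(), lambda a: f"unknown tool: {name}")(arg)
        transcript.append(f"> {action}\n{result}")
    return None, transcript                      # step budget exhausted

# Example (hypothetical) tool set: tools = {"bash": run_bash, "fetch": fetch_url}
```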
1 Introduction and motivation
Large language models (LLMs) may cause significant real-world harm if they are used maliciously or
pursue unintended goals. The extent of potential harms, and the necessary levels of caution, depend
on models’ capabilities.
Unfortunately, existing benchmarks often provide limited information about dangerous capabilities:
risk depends on the behavior of AI systems in real-world environments, while benchmarks typically
measure the performance of language models in short self-contained tasks like multiple choice tests
or programming contests.
∗Corresponding author. Please direct correspondence to beth@evals.alignment.org.
†Alignment Research Center. |
2310.11589.pdf | ELICITING HUMAN PREFERENCES WITH
LANGUAGE MODELS
Belinda Z. Li∗
MIT CSAIL
bzl@mit.eduAlex Tamkin∗
Anthropic†
atamkin@cs.stanford.eduNoah Goodman
Stanford
ndg@stanford.eduJacob Andreas
MIT CSAIL
jda@mit.edu
ABSTRACT
Language models (LMs) can be directed to perform target tasks by using labeled
examples or natural language prompts. But selecting examples or writing prompts
can be challenging—especially in tasks that involve unusual edge cases, de-
mand precise articulation of nebulous preferences, or require an accurate mental
model of LM behavior. We propose to use LMs themselves to guide the task spec-
ification process. In this paper, we introduce generative active task elicitation
(GATE ): a learning framework in which models elicit and infer intended behavior
through free-form, language-based interaction with users. We study GATE in three
domains: email validation, content recommendation, and moral reasoning. In pre-
registered experiments, we show that LMs prompted to perform GATE (e.g., by
generating open-ended questions or synthesizing informative edge cases) elicit re-
sponses that are often more informative than user-written prompts or labels. Users
report that interactive task elicitation requires less effort than prompting or exam-
ple labeling and surfaces novel considerations not initially anticipated by users.
Our findings suggest that LM-driven elicitation can be a powerful tool for align-
ing models to complex human preferences and values.1
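Schematically, generative active task elicitation can be sketched as the loop below, with `lm` a placeholder chat-completion function and `ask_user` a callback returning the user's free-form answer; this is an illustration of the framework, not the paper's implementation.

```python
# GATE-style elicitation: the LM generates informative queries (open-ended questions or
# synthesized edge cases), records the answers, then predicts labels from the transcript.
def gate_elicit(lm, ask_user, task_description, n_turns=5):
    transcript = []
    for _ in range(n_turns):
        query = lm(f"Task: {task_description}\n"
                   f"Interaction so far: {transcript}\n"
                   "Ask the single most informative question or edge case next:")
        transcript.append({"query": query, "answer": ask_user(query)})
    return transcript

def gate_predict(lm, task_description, transcript, test_input):
    return lm(f"Task: {task_description}\nInteraction: {transcript}\n"
              f"Decision for: {test_input}\nAnswer yes or no:")
```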
1 I NTRODUCTION
The complexity of human preferences makes them challenging to encode in machine learning sys-
tems. Consider the problem of designing a recommendation system for songs or websites: first,
system builders must develop a formal model of the potential factors influencing user preferences;
second, users must describe their preferences in a format that a learning algorithm can use to make
future recommendations. Each of these steps requires mental effort and continual refinement by
users and system builders. Until recently, the dominant approach in machine learning has specified
preferences using examples : users first label a dataset with examples of the desired model behavior,
then train a machine learning model on this dataset. This strategy has seen widespread use across
diverse tasks, including image classification and question answering (Krizhevsky et al., 2012; De-
vlin et al., 2019). In more recent years, this paradigm has changed with the advent of instruction
following methods (Brown et al., 2020a): by pre-training language models (LMs) on large-scale text
corpora, it is possible to induce desired behaviors by conditioning only on natural language task
specifications, in tasks as diverse as code generation and text summarization.
However, this progress has also accentuated the challenges described above: complex behaviors
require an increasing amount of prompt engineering ordataset design to overcome the imprecision
of natural language and prevent models from misunderstanding or misgeneralizing from spurious
features of prompts or examples. For example, a user who says they enjoy reading tennis articles
could either be interested in the competitive tennis circuit or in improving their own serve. A few
user-provided examples of tennis-related articles might fail to specify whether the user is interested
in broader tennis content, such as tennis-themed satire. These challenges of task ambiguity (Finn
et al., 2018; Tamkin et al., 2022a) loom large as models continue to be applied to more open-ended
tasks and higher-stakes domains.
∗Equal contribution. Author order decided via coin flip.
†Work performed while at Stanford University.
1Code is available at https://github.com/alextamkin/generative-elicitation
|
2403.20222v1.pdf | Shallow Cross-Encoders
for Low-Latency Retrieval
Aleksandr V. Petrov, Sean MacAvaney, and Craig Macdonald
University of Glasgow, Glasgow, UK
a.petrov.1@research.gla.ac.uk
{sean.macavaney;craig.macdonald }@glasgow.ac.uk
Abstract. Transformer-based Cross-Encoders achieve state-of-the-art effectiveness in text retrieval.
However, Cross-Encoders based on large transformer models (such as BERT or T5) are computa-
tionally expensive and allow for scoring only a small number of documents within a reasonably
small latency window. However, keeping search latencies low is important for user satisfaction and
energy usage. In this paper, we show that weaker shallow transformer models (i.e. transformers
with a limited number of layers) actually perform better than full-scale models when constrained
to these practical low-latency settings, since they can estimate the relevance of more documents
in the same time budget. We further show that shallow transformers may benefit from the gen-
eralised Binary Cross-Entropy (gBCE) training scheme, which has recently demonstrated success
for recommendation tasks. Our experiments with TREC Deep Learning passage ranking querysets
demonstrate significant improvements in shallow and full-scale models in low-latency scenarios. For
example, when the latency limit is 25ms per query, MonoBERT-Large (a cross-encoder based on
a full-scale BERT model) is only able to achieve NDCG@10 of 0.431 on TREC DL 2019, while
TinyBERT-gBCE (a cross-encoder based on TinyBERT trained with gBCE) reaches NDCG@10 of
0.652, a +51% gain over MonoBERT-Large. We also show that shallow Cross-Encoders are effec-
tive even when used without a GPU (e.g., with CPU inference, NDCG@10 decreases only by 3%
compared to GPU inference with 50ms latency), which makes Cross-Encoders practical to run even
without specialised hardware acceleration.
1 Introduction
The introduction of the Transformer [35] neural network architecture, and especially pre-trained lan-
guage models that use Transformers (such as BERT [7]), has been transformative for the IR field; for
example, Nogueira et al. [27] improved MRR@10 on the MS-MARCO dev set by 31% with the help of
a BERT-based model. Although there are a variety of ranking architectures used within IR (e.g., dense
Bi-Encoders [14,18,40], sparse Bi-Encoders [9,22], and late interaction models [13,15]), the best results
for document re-ranking are typically achieved with the help of Cross-Encoders [14] – a family of mod-
els which encode both the query and the document simultaneously as a single textual input [41]. Aside
from their high in-domain precision, Cross-Encoders tend to be more robust when generalising across
retrieval tasks/domains [33]. Although Cross-Encoders can only practically be used as re-ranking models,
limitations in their first-stage recall can be efficiently mitigated using pseudo-relevance feedback [23].
Further, Cross-Encoders can typically be fine-tuned from scratch (i.e., starting from the checkpoint of a
foundational model, such as BERT).
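To make the per-pair inference cost concrete, here is a minimal sketch of monoBERT-style Cross-Encoder scoring with the Hugging Face transformers API; the checkpoint name is a placeholder for illustration, not the model used in this paper. Each query-document pair is concatenated into a single input and requires its own forward pass, which is exactly what drives re-ranking latency.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

def score(query: str, docs: list[str]) -> list[float]:
    # Query and document are encoded together as a single textual input per pair.
    batch = tokenizer([query] * len(docs), docs, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return logits.squeeze(-1).tolist()  # one relevance score per candidate document
```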
Despite these benefits, the application of Cross-Encoders in production retrieval systems is still limited.
Cross-Encoders require a model inference for each query-document pair and, therefore, struggle with high
computational complexity and high latency [24]. In real-world search systems, high latency negatively
affects key performance metrics, such as the number of clicks, revenue, and user satisfaction [16, Ch. 5].
Further, high latencies tend to be correlated with higher energy usage, resulting in negative impacts on
the climate [32].
The high computational complexity and resulting latency of Cross-Encoder models motivated re-
searchers to investigate Bi-Encoder [14] models. These models separately encode the query and the docu-
ment, and then estimate the relevance score using an inexpensive operation over the encoded representations
(e.g. cosine similarity [30] or the MaxSim operation [15]). By pre-computing the document representations
offline and using a variety of approaches to accelerate retrieval [17], Bi-Encoders can achieve low retrieval
latency. However, this comes at other costs. For instance, Bi-Encoders are markedly more complicated
to train than Cross-Encoders, typically relying on knowledge distillation from other models (e.g., [19]),
training data balancing (e.g., [12]), and/or hard negative mining (e.g., [40]). Further, Bi-Encoders must
pre-encode all documents in the collection and keep the encoded versions of all documents in memory.
arXiv:2403.20222v1 [cs.IR] 29 Mar 2024 |
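For contrast, a rough sketch of the Bi-Encoder scheme described in the last paragraph above, assuming a sentence-transformers encoder purely for illustration: documents are encoded once offline, and query-time scoring reduces to a cheap dot product (cosine similarity over normalised embeddings).
```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint
corpus_texts = ["first passage ...", "second passage ..."]  # stand-in collection

# Offline: pre-encode and cache the whole collection (unit-normalised embeddings).
doc_embs = encoder.encode(corpus_texts, normalize_embeddings=True)

def retrieve(query: str, top_k: int = 100) -> np.ndarray:
    # Online: one cheap encode per query, then a matrix-vector product over the cache.
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_embs @ q
    return np.argsort(-scores)[:top_k]  # indices of the highest-scoring documents
```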
2309.10400v3.pdf | Published as a conference paper at ICLR 2024
POSE: EFFICIENT CONTEXT WINDOW EXTENSION OF
LLMS VIA POSITIONAL SKIP-WISE TRAINING
Dawei Zhu∗♡♠ Nan Yang♢ Liang Wang♢ Yifan Song♡♠ Wenhao Wu♡♠
Furu Wei♢ Sujian Li♡♠
♡School of Computer Science, Peking University
♠National Key Laboratory for Multimedia Information Processing, Peking University
♢Microsoft Corporation
https://github.com/dwzhu-pku/PoSE
ABSTRACT
Large Language Models (LLMs) are trained with a pre-defined context length,
restricting their use in scenarios requiring long inputs. Previous efforts to adapt
LLMs to a longer length usually require fine-tuning at this target length (Full-
length fine-tuning), which incurs an intensive training cost. To decouple training length
from target length for efficient context window extension, we propose Positional
Skip-wisE (PoSE) training, which smartly simulates long inputs using a fixed context
window. This is achieved by first dividing the original context window into several
chunks, then designing distinct skipping bias terms to manipulate the position
indices of each chunk. These bias terms and the lengths of each chunk are altered
for every training example, allowing the model to adapt to all positions within
target length. Experimental results show that PoSE greatly reduces memory and
time overhead compared with Full-length fine-tuning, with minimal impact on per-
formance. Leveraging this advantage, we have successfully extended the LLaMA
model to 128k tokens using a 2k training context window. Furthermore, we empir-
ically confirm that PoSE is compatible with all RoPE-based LLMs and position
interpolation strategies. Notably, our method can potentially support infinite length,
limited only by memory usage in inference. With ongoing progress for efficient
inference, we believe PoSE can further scale the context window beyond 128k.
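Below is a hedged sketch of the position-index manipulation outlined in the abstract: the fixed training window is split into chunks, and each chunk's position indices are shifted by a randomly sampled skipping bias, so that across training examples the model encounters positions spanning the whole target window. The chunking and sampling details here are illustrative assumptions rather than the paper's exact recipe.
```python
import random

def pose_position_ids(train_len: int = 2048, target_len: int = 8192, num_chunks: int = 2) -> list[int]:
    # Randomly split the training window into `num_chunks` contiguous chunks.
    cuts = sorted(random.sample(range(1, train_len), num_chunks - 1))
    bounds = [0] + cuts + [train_len]
    chunk_lens = [bounds[i + 1] - bounds[i] for i in range(num_chunks)]

    # Sample non-decreasing skipping biases so that shifted indices stay within target_len.
    skip_budget = target_len - train_len
    skips = sorted(random.randint(0, skip_budget) for _ in range(num_chunks))

    position_ids = []
    for start, length, skip in zip(bounds[:-1], chunk_lens, skips):
        position_ids.extend(range(start + skip, start + skip + length))
    return position_ids  # length == train_len, strictly increasing, values in [0, target_len)
```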
1 INTRODUCTION
Large Language Models (LLMs) have revolutionized language modeling and demonstrated impres-
sive abilities to perform various tasks (Brown et al., 2020). However, even with their remarkable
capacity, these LLMs remain restricted by pre-defined context window sizes, suffering from notable
performance decline when the number of input tokens exceeds these limits. Nevertheless, numerous application
scenarios demand extremely long input sequences, including long document summarization (Huang
et al., 2021), in-context learning with numerous examples (Li et al., 2023), and long document
retrieval (Zhou et al., 2022), etc. This naturally poses a significant challenge of context window
extension : Extending the context window of a pre-trained LLM to accommodate longer sequences.
Naively fine-tuning LLMs on inputs of the target length for window extension has seen limited
success due to the large disruption introduced by new position indices (Chen et al., 2023a; Han et al.,
2023). Addressing this, Position Interpolation approaches (Chen et al., 2023a; kaiokendev, 2023; Peng et al.,
2023) propose to down-scale the position indices to match the original window size, yielding improved
results for context extension. However, these methods still rely on Full-length fine-tuning, i.e., fine-
tuning with context of target length, which is memory and time-intensive due to the computational
complexity that increases quadratically with input length. For example, Chen et al. (2023a) use 32
A100 GPUs to extend LLaMA models from 2k to 8k context, and 128 A100 GPUs for even larger
context. This overhead has made it impossible to extend the context window to extreme lengths.
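For reference, the linear down-scaling behind Position Interpolation can be written in a few lines; this is the generic form implied by the description above, with the window sizes chosen only as an example (specific variants rescale differently).
```python
def interpolate_positions(seq_len: int, pretrained_window: int = 2048) -> list[float]:
    # Linearly compress position indices so they never exceed the pre-training window.
    scale = min(1.0, pretrained_window / seq_len)
    return [i * scale for i in range(seq_len)]

# e.g. with seq_len=8192 and a 2048-token window, position 8191 maps to ~2047.75,
# so the rotary embedding never sees an index beyond its pre-training range.
```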
∗Work done during Dawei’s internship at MSRA. Sujian Li is the corresponding author.
arXiv:2309.10400v3 [cs.CL] 21 Feb 2024 |
2202.04728.pdf | Predicting Human Similarity Judgments Using Large Language Models
Raja Marjieh1,*, Ilia Sucholutsky2,*, Theodore R. Sumers2,
Nori Jacoby3, Thomas L. Griffiths1,2
1Department of Psychology, Princeton University
2Department of Computer Science, Princeton University
3Computational Auditory Perception Group, Max Planck Institute for Empirical Aesthetics
{raja.marjieh, is2961, sumers, tomg}@princeton.edu; nori.jacoby@ae.mpg.de
*equal contribution.
Abstract
Similarity judgments provide a well-established method for ac-
cessing mental representations, with applications in psychol-
ogy, neuroscience and machine learning. However, collecting
similarity judgments can be prohibitively expensive for natu-
ralistic datasets as the number of comparisons grows quadrati-
cally in the number of stimuli. One way to tackle this problem
is to construct approximation procedures that rely on more ac-
cessible proxies for predicting similarity. Here we leverage
recent advances in language models and online recruitment,
proposing an efficient domain-general procedure for predicting
human similarity judgments based on text descriptions. Intu-
itively, similar stimuli are likely to evoke similar descriptions,
allowing us to use description similarity to predict pairwise
similarity judgments. Crucially, the number of descriptions
required grows only linearly with the number of stimuli, dras-
tically reducing the amount of data required. We test this pro-
cedure on six datasets of naturalistic images and show that our
models outperform previous approaches based on visual infor-
mation.
Keywords: similarity, perception, language models, represen-
tations
Introduction
Mental representations serve as a substrate for a variety of
cognitive tasks such as decision-making, communication and
memory (Anderson, 1990). Understanding the structure of
those representations is a core problem in cognitive science
and is the subject of a large corpus of work in the psycho-
logical literature (Shepard, 1980, 1987; Ghirlanda & Enquist,
2003; Battleday, Peterson, & Griffiths, 2020; Peterson, Ab-
bott, & Griffiths, 2018; Jha, Peterson, & Griffiths, 2020;
Caplette & Turk-Browne, 2022; Hebart, Zheng, Pereira, &
Baker, 2020).
One important example of this research is the development
of the multi-dimensional scaling method (MDS) for uncover-
ing the structure of mental representations based on similarity
judgments (Shepard, 1980). Given a set of N stimuli, MDS
begins by collecting pairwise similarity judgments and aggre-
gating them into an N×N matrix. Then, an iterative procedure
finds an embedding that maps the stimuli into points in a psy-
chological space such that their distance mirrors their simi-
larity. Applying MDS to different datasets revealed highly in-
terpretable organization of the stimuli (Shepard, 1980, 1987).
Aside from psychology, similarity judgments play an impor-
tant role in other disciplines such as neuroscience, e.g., in the
method of representational similarity analysis (Kriegeskorte,
Mur, & Bandettini, 2008), as well as in machine learning, e.g., as a way to regularize latent spaces so that they align
with human representations and perception (Esling, Bitton, et
al., 2018).
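The MDS pipeline described above can be illustrated with scikit-learn's metric MDS on a precomputed dissimilarity matrix; converting similarities to dissimilarities by subtracting them from the maximum is one common convention and is an assumption here, not necessarily the transformation used in the cited studies.
```python
import numpy as np
from sklearn.manifold import MDS

def embed_from_similarity(sim: np.ndarray, n_components: int = 2) -> np.ndarray:
    """sim: (N, N) symmetric matrix of aggregated pairwise similarity judgments."""
    dissim = sim.max() - sim           # larger similarity -> smaller distance (assumed convention)
    np.fill_diagonal(dissim, 0.0)      # each item is at distance 0 from itself
    mds = MDS(n_components=n_components, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dissim)   # (N, n_components) points in a psychological space
```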
Despite the success of these approaches, the quadratic in-
crease of the number of pairwise comparisons as a function
of the number of stimuli poses a serious limitation on their
scalability. Indeed, even a relatively small dataset that con-
tains ∼10² stimuli would require ∼10⁴ judgments for con-
structing the full similarity matrix. This limitation calls for
alternative procedures that allow for efficient approximation
of human similarity judgments. Previous studies have pro-
posed such a method in the visual modality by harnessing
the latent representations from convolutional neural networks
(CNNs) (Peterson et al., 2018; Jha et al., 2020). Such an
approach, however, is domain-specific and could potentially
miss important semantic dimensions that weigh on people’s
judgments.
To reduce this burden, we leverage the deep relationship
between conceptual structure and language (Murphy, 2002)
to use linguistic descriptions as a proxy for human seman-
tic representations. Intuitively, stimuli that are judged to be
highly similar are likely to evoke similar descriptions, allow-
ing us to use description similarity to predict pairwise sim-
ilarity judgments. This approach offers two key advantages
over prior work: first, it is scalable . While pairwise similar-
ity comparisons scale quadratically with the number of stim-
uli (Shepard, 1980), text descriptions scale linearly. Second,
it is domain-general : unlike CNN representations (Peterson
et al., 2018), which are limited to visual stimuli, our proce-
dure could be applied to any domain.
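A hedged sketch of the proposed proxy: collect one free-text description per stimulus, embed the descriptions with a language model, and take cosine similarities between description embeddings as predictions of pairwise human similarity. The sentence-transformers encoder below is an illustrative stand-in, not one of the specific language models evaluated in the paper.
```python
import numpy as np
from sentence_transformers import SentenceTransformer

def predicted_similarity_matrix(descriptions: list[str]) -> np.ndarray:
    """descriptions: one crowd-sourced text description per stimulus (N items)."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder language model
    embs = encoder.encode(descriptions, normalize_embeddings=True)  # (N, d), unit norm
    # N descriptions yield an (N, N) predicted similarity matrix -- no N^2 human judgments needed.
    return embs @ embs.T
```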
Finally, we note that our approach leverages two distinct
and important advances. First, text descriptions can be easily
crowd-sourced via online recruitment platforms such as Ama-
zon Mechanical Turk (AMT; https://www.mturk.com/ )
and are part of the common practice in modern machine
learning pipelines (Parekh, Baldridge, Cer, Waters, & Yang,
2020). Second, modern language models (Speer, Chin, &
Havasi, 2017; Devlin, Chang, Lee, & Toutanova, 2018) pro-
vide rich latent representations of text. It is therefore natu-
ral to ask: how far can we go in predicting human similarity
judgments based on language alone?
We explore this question on a collection of six datasets
of naturalistic images for which the ground-truth similarity
matrices are known (Peterson et al., 2018). Our exploration
arXiv:2202.04728v1 [cs.LG] 9 Feb 2022 |