MatCha
Overview
MatCha was proposed in the paper MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering by Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, and Julian Martin Eisenschlos.
The abstract of the paper states the following:
Visual language data such as plots, charts, and infographics are ubiquitous in the human world. However, state-of-the-art vision-language models do not perform well on these data. We propose MatCha (Math reasoning and Chart derendering pretraining) to enhance visual language models' capabilities in jointly modeling charts/plots and language data. Specifically, we propose several pretraining tasks that cover plot deconstruction and numerical reasoning which are the key capabilities in visual language modeling. We perform the MatCha pretraining starting from Pix2Struct, a recently proposed image-to-text visual language model. On standard benchmarks such as PlotQA and ChartQA, the MatCha model outperforms state-of-the-art methods by as much as nearly 20%. We also examine how well MatCha pretraining transfers to domains such as screenshots, textbook diagrams, and document figures and observe overall improvement, verifying the usefulness of MatCha pretraining on broader visual language tasks.
Model description
MatCha is a model that is trained using Pix2Struct architecture. You can find more information about Pix2Struct in the Pix2Struct documentation.
MatCha is a Visual Question Answering variant of the Pix2Struct architecture: it renders the input question on the image and predicts the answer.
Usage
Currently 6 checkpoints are available for MatCha:
google/matcha: the base MatCha model, used to fine-tune MatCha on downstream tasks
google/matcha-chartqa: MatCha model fine-tuned on ChartQA dataset. It can be used to answer questions about charts.
google/matcha-plotqa-v1: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots.
google/matcha-plotqa-v2: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots.
google/matcha-chart2text-statista: MatCha model fine-tuned on Statista dataset.
google/matcha-chart2text-pew: MatCha model fine-tuned on Pew dataset.
The models finetuned on chart2text-pew and chart2text-statista are more suited for summarization, whereas the models finetuned on plotqa and chartqa are more suited for question answering.
You can use these models as follows (example on the ChartQA dataset):
```python
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image

model = Pix2StructForConditionalGeneration.from_pretrained("google/matcha-chartqa").to(0)
processor = AutoProcessor.from_pretrained("google/matcha-chartqa")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="Is the sum of all 4 places greater than Laos?", return_tensors="pt").to(0)
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
```
Fine-tuning
To fine-tune MatCha, refer to the pix2struct fine-tuning notebook. For Pix2Struct models, we have found that fine-tuning the model with Adafactor and a cosine learning rate scheduler leads to faster convergence:
```python
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup

# replace `model.parameters()` with `self.parameters()` if this lives inside a training module
optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
```
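To see how these pieces fit together, here is a minimal training-loop sketch. It assumes a train_dataloader that yields batches already processed by the MatCha processor into flattened_patches, attention_mask and labels; adapt the data handling to your own dataset.

```python
# Minimal sketch: `model`, `optimizer` and `scheduler` come from the snippets above,
# and `train_dataloader` is assumed to yield pre-processed batches.
model.train()
for batch in train_dataloader:
    outputs = model(
        flattened_patches=batch["flattened_patches"],
        attention_mask=batch["attention_mask"],
        labels=batch["labels"],
    )
    loss = outputs.loss  # cross-entropy loss is returned when labels are provided
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```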
SwitchTransformers
Overview
The SwitchTransformers model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by William Fedus, Barret Zoph, Noam Shazeer.
The Switch Transformer model uses a sparse T5 encoder-decoder architecture, where the MLP layers are replaced by a Mixture of Experts (MoE). A routing mechanism (top-1 in this case) assigns each token to one of the experts, where each expert is a dense MLP. While Switch Transformers have many more weights than their equivalent dense models, the sparsity allows better scaling and better fine-tuning performance at scale.
During a forward pass, only a fraction of the weights are used. The routing mechanism allows the model to select relevant weights on the fly, which increases the model capacity without increasing the number of operations.
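To make the routing idea concrete, here is a small, self-contained sketch of top-1 routing over a set of dense expert MLPs. It illustrates the mechanism only and is not the library's [SwitchTransformersTop1Router] implementation, which adds capacity limits, load-balancing losses and other details.

```python
import torch
import torch.nn as nn

class ToyTop1MoE(nn.Module):
    """Illustrative top-1 Mixture-of-Experts layer (not the library implementation)."""

    def __init__(self, hidden_size=8, ffn_size=16, num_experts=4):
        super().__init__()
        self.router = nn.Linear(hidden_size, num_experts)
        self.experts = nn.ModuleList(
            [
                nn.Sequential(nn.Linear(hidden_size, ffn_size), nn.ReLU(), nn.Linear(ffn_size, hidden_size))
                for _ in range(num_experts)
            ]
        )

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_size)
        router_probs = torch.softmax(self.router(hidden_states), dim=-1)
        expert_index = router_probs.argmax(dim=-1)  # top-1: one expert per token
        output = torch.zeros_like(hidden_states)
        for i, expert in enumerate(self.experts):
            mask = expert_index == i  # tokens routed to expert i
            if mask.any():
                # scale by the router probability so the routing decision stays differentiable
                output[mask] = expert(hidden_states[mask]) * router_probs[mask][:, i : i + 1]
        return output

moe = ToyTop1MoE()
print(moe(torch.randn(2, 5, 8)).shape)  # torch.Size([2, 5, 8])
```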
The abstract from the paper is the following:
In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers of parameters -- but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability -- we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model.
This model was contributed by Younes Belkada and Arthur Zucker.
The original code can be found here.
Usage tips
SwitchTransformers uses the [T5Tokenizer], which can be loaded directly from each model's repository.
The released weights are pretrained on an English masked language modeling task and should be fine-tuned.
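For example, a quick sanity check of the pretrained weights with the T5-style span-corruption objective could look like the following (google/switch-base-8 is one of the released sizes; swap in the checkpoint you want):

```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8")

# The released weights were pretrained with the T5-style masked span objective,
# so sentinel tokens such as <extra_id_0> mark the spans the model should predict.
input_ids = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```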
Resources
Translation task guide
Summarization task guide
SwitchTransformersConfig
[[autodoc]] SwitchTransformersConfig
SwitchTransformersTop1Router
[[autodoc]] SwitchTransformersTop1Router
- _compute_router_probabilities
- forward
SwitchTransformersSparseMLP
[[autodoc]] SwitchTransformersSparseMLP
- forward
SwitchTransformersModel
[[autodoc]] SwitchTransformersModel
- forward
SwitchTransformersForConditionalGeneration
[[autodoc]] SwitchTransformersForConditionalGeneration
- forward
SwitchTransformersEncoderModel
[[autodoc]] SwitchTransformersEncoderModel
- forward
VITS
Overview
The VITS model was proposed in Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech by Jaehyeon Kim, Jungil Kong, Juhee Son.
VITS (Variational Inference with adversarial learning for end-to-end Text-to-Speech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
The abstract from the paper is the following:
Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on the LJ Speech, a single speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.
This model can also be used with TTS checkpoints from Massively Multilingual Speech (MMS)
as these checkpoints use the same architecture and a slightly modified tokenizer.
This model was contributed by Matthijs and sanchit-gandhi. The original code can be found here.
Usage examples
Both the VITS and MMS-TTS checkpoints can be used with the same API. Since the flow-based model is non-deterministic, it
is good practice to set a seed to ensure reproducibility of the outputs. For languages with a Roman alphabet,
such as English or French, the tokenizer can be used directly to pre-process the text inputs. The following code example
runs a forward pass using the MMS-TTS English checkpoint:
```python
import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")

set_seed(555)  # make deterministic

with torch.no_grad():
    outputs = model(**inputs)

waveform = outputs.waveform[0]
```
The resulting waveform can be saved as a .wav file:
```python
import scipy.io.wavfile

scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=waveform.numpy())
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio

Audio(waveform, rate=model.config.sampling_rate)
```
For certain languages with a non-Roman alphabet, such as Arabic, Mandarin or Hindi, the uroman
perl package is required to pre-process the text inputs to the Roman alphabet.
You can check whether you require the uroman package for your language by inspecting the is_uroman attribute of
the pre-trained tokenizer:
```python
from transformers import VitsTokenizer

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
print(tokenizer.is_uroman)
```
If required, you should apply the uroman package to your text inputs prior to passing them to the VitsTokenizer,
since currently the tokenizer does not support performing the pre-processing itself.
To do this, first clone the uroman repository to your local machine and set the bash variable UROMAN to the local path:
```bash
git clone https://github.com/isi-nlp/uroman.git
cd uroman
export UROMAN=$(pwd)
```
You can then pre-process the text input using the following code snippet. You can either rely on the bash variable
UROMAN to point to the uroman repository, or you can pass the uroman directory as an argument to the uromanize function:
```python
import os
import subprocess

import torch
from transformers import VitsTokenizer, VitsModel, set_seed

tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor")
model = VitsModel.from_pretrained("facebook/mms-tts-kor")

def uromanize(input_string, uroman_path):
    """Convert non-Roman strings to Roman using the uroman perl package."""
    script_path = os.path.join(uroman_path, "bin", "uroman.pl")

    command = ["perl", script_path]

    process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # Execute the perl command
    stdout, stderr = process.communicate(input=input_string.encode())

    if process.returncode != 0:
        raise ValueError(f"Error {process.returncode}: {stderr.decode()}")

    # Return the output as a string and skip the new-line character at the end
    return stdout.decode()[:-1]

text = "이봐 무슨 일이야"
uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"])

inputs = tokenizer(text=uromanized_text, return_tensors="pt")

set_seed(555)  # make deterministic
with torch.no_grad():
    outputs = model(inputs["input_ids"])

waveform = outputs.waveform[0]
```
VitsConfig
[[autodoc]] VitsConfig
VitsTokenizer
[[autodoc]] VitsTokenizer
- call
- save_vocabulary
VitsModel
[[autodoc]] VitsModel
- forward
RAG
Overview
Retrieval-augmented generation ("RAG") models combine the powers of pretrained dense retrieval (DPR) and
sequence-to-sequence models. RAG models retrieve documents, pass them to a seq2seq model, then marginalize to generate
outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing
both retrieval and generation to adapt to downstream tasks.
It is based on the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir
Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
The abstract from the paper is the following:
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve
state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely
manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind
task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge
remain open research problems. Pre-trained models with a differentiable access mechanism to explicit nonparametric
memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a
general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained
parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a
pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a
pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages
across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our
models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks,
outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation
tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art
parametric-only seq2seq baseline.
This model was contributed by ola13.
Usage tips
As summarized above, RAG combines a pretrained dense retriever (DPR) with a pretrained seq2seq generator: documents are retrieved, passed to the seq2seq model, and the outputs are marginalized over the retrieved documents. Because the retriever and generator are fine-tuned jointly, both retrieval and generation adapt to the downstream task.
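A minimal end-to-end sketch is shown below. It uses the dummy retrieval index shipped with the checkpoint so that it runs without downloading the full Wikipedia index; the datasets and faiss packages are required for retrieval.

```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
# use_dummy_dataset=True loads a tiny toy index instead of the full Wikipedia index
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```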
RagConfig
[[autodoc]] RagConfig
RagTokenizer
[[autodoc]] RagTokenizer
Rag specific outputs
[[autodoc]] models.rag.modeling_rag.RetrievAugLMMarginOutput
[[autodoc]] models.rag.modeling_rag.RetrievAugLMOutput
RagRetriever
[[autodoc]] RagRetriever
RagModel
[[autodoc]] RagModel
- forward
RagSequenceForGeneration
[[autodoc]] RagSequenceForGeneration
- forward
- generate
RagTokenForGeneration
[[autodoc]] RagTokenForGeneration
- forward
- generate
TFRagModel
[[autodoc]] TFRagModel
- call
TFRagSequenceForGeneration
[[autodoc]] TFRagSequenceForGeneration
- call
- generate
TFRagTokenForGeneration
[[autodoc]] TFRagTokenForGeneration
- call
- generate
MobileBERT
Overview
The MobileBERT model was proposed in MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny
Zhou. It's a bidirectional transformer based on the BERT model, which is compressed and accelerated using several
approaches.
The abstract from the paper is the following:
Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds
of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot
be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating
the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to
various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of BERT_LARGE, while
equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.
To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE
model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is
4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the
natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77.7 (0.6 lower than BERT_BASE), and 62 ms
latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of
90.0/79.2 (1.5/2.1 higher than BERT_BASE).
This model was contributed by vshampor. The original code can be found here.
Usage tips
MobileBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather
than the left.
MobileBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore
efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained
with a causal language modeling (CLM) objective are better in that regard.
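As an illustration of the MLM objective, here is a sketch of masked-token prediction with the pretrained google/mobilebert-uncased checkpoint:

```python
import torch
from transformers import AutoTokenizer, MobileBertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForMaskedLM.from_pretrained("google/mobilebert-uncased")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# locate the [MASK] position and take its most likely replacement
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```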
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
MobileBertConfig
[[autodoc]] MobileBertConfig
MobileBertTokenizer
[[autodoc]] MobileBertTokenizer
MobileBertTokenizerFast
[[autodoc]] MobileBertTokenizerFast
MobileBert specific outputs
[[autodoc]] models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput
[[autodoc]] models.mobilebert.modeling_tf_mobilebert.TFMobileBertForPreTrainingOutput
MobileBertModel
[[autodoc]] MobileBertModel
- forward
MobileBertForPreTraining
[[autodoc]] MobileBertForPreTraining
- forward
MobileBertForMaskedLM
[[autodoc]] MobileBertForMaskedLM
- forward
MobileBertForNextSentencePrediction
[[autodoc]] MobileBertForNextSentencePrediction
- forward
MobileBertForSequenceClassification
[[autodoc]] MobileBertForSequenceClassification
- forward
MobileBertForMultipleChoice
[[autodoc]] MobileBertForMultipleChoice
- forward
MobileBertForTokenClassification
[[autodoc]] MobileBertForTokenClassification
- forward
MobileBertForQuestionAnswering
[[autodoc]] MobileBertForQuestionAnswering
- forward
TFMobileBertModel
[[autodoc]] TFMobileBertModel
- call
TFMobileBertForPreTraining
[[autodoc]] TFMobileBertForPreTraining
- call
TFMobileBertForMaskedLM
[[autodoc]] TFMobileBertForMaskedLM
- call
TFMobileBertForNextSentencePrediction
[[autodoc]] TFMobileBertForNextSentencePrediction
- call
TFMobileBertForSequenceClassification
[[autodoc]] TFMobileBertForSequenceClassification
- call
TFMobileBertForMultipleChoice
[[autodoc]] TFMobileBertForMultipleChoice
- call
TFMobileBertForTokenClassification
[[autodoc]] TFMobileBertForTokenClassification
- call
TFMobileBertForQuestionAnswering
[[autodoc]] TFMobileBertForQuestionAnswering
- call
TrOCR
Overview
The TrOCR model was proposed in TrOCR: Transformer-based Optical Character Recognition with Pre-trained
Models by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang,
Zhoujun Li, Furu Wei. TrOCR consists of an image Transformer encoder and an autoregressive text Transformer decoder to
perform optical character recognition (OCR).
The abstract from the paper is the following:
Text recognition is a long-standing research problem for document digitalization. Existing approaches for text recognition
are usually built based on CNN for image understanding and RNN for char-level text generation. In addition, another language
model is usually needed to improve the overall accuracy as a post-processing step. In this paper, we propose an end-to-end
text recognition approach with pre-trained image Transformer and text Transformer models, namely TrOCR, which leverages the
Transformer architecture for both image understanding and wordpiece-level text generation. The TrOCR model is simple but
effective, and can be pre-trained with large-scale synthetic data and fine-tuned with human-labeled datasets. Experiments
show that the TrOCR model outperforms the current state-of-the-art models on both printed and handwritten text recognition
tasks.
TrOCR architecture. Taken from the original paper.
Please refer to the [VisionEncoderDecoder] class on how to use this model.
This model was contributed by nielsr. The original code can be found
here.
Usage tips
The quickest way to get started with TrOCR is by checking the tutorial
notebooks, which show how to use the model
at inference time as well as fine-tuning on custom data.
TrOCR is pre-trained in 2 stages before being fine-tuned on downstream datasets. It achieves state-of-the-art results
on both printed (e.g. the SROIE dataset) and handwritten (e.g. the IAM Handwriting Database) text recognition tasks. For more
information, see the official models.
TrOCR is always used within the VisionEncoderDecoder framework.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with TrOCR. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on Accelerating Document AI with TrOCR.
A blog post on how to use TrOCR for Document AI.
A notebook on how to fine-tune TrOCR on the IAM Handwriting Database using Seq2SeqTrainer.
A notebook on inference with TrOCR and a Gradio demo.
A notebook on fine-tuning TrOCR on the IAM Handwriting Database using native PyTorch.
A notebook on evaluating TrOCR on the IAM test set.
Causal language modeling task guide.
⚡️ Inference
An interactive demo of TrOCR handwritten character recognition.
Inference
TrOCR's [VisionEncoderDecoder] model accepts images as input and makes use of
[~generation.GenerationMixin.generate] to autoregressively generate text given the input image.
The [ViTImageProcessor/DeiTImageProcessor] class is responsible for preprocessing the input image and
[RobertaTokenizer/XLMRobertaTokenizer] decodes the generated target tokens to the target string. The
[TrOCRProcessor] wraps [ViTImageProcessor/DeiTImageProcessor] and [RobertaTokenizer/XLMRobertaTokenizer]
into a single instance to both extract the input features and decode the predicted token ids.
Step-by-step Optical Character Recognition (OCR)
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
import requests
from PIL import Image

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# load image from the IAM dataset
url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = processor(image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)

generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
See the model hub to look for TrOCR checkpoints.
TrOCRConfig
[[autodoc]] TrOCRConfig
TrOCRProcessor
[[autodoc]] TrOCRProcessor
- call
- from_pretrained
- save_pretrained
- batch_decode
- decode
TrOCRForCausalLM
[[autodoc]] TrOCRForCausalLM
- forward
BARTpho
Overview
The BARTpho model was proposed in BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
The abstract from the paper is the following:
We present BARTpho with two versions -- BARTpho_word and BARTpho_syllable -- the first public large-scale monolingual
sequence-to-sequence models pre-trained for Vietnamese. Our BARTpho uses the "large" architecture and pre-training
scheme of the sequence-to-sequence denoising model BART, thus especially suitable for generative NLP tasks. Experiments
on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, our BARTpho
outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future
research and applications of generative Vietnamese NLP tasks.
This model was contributed by dqnguyen. The original code can be found here.
Usage example
```python
import torch
from transformers import AutoModel, AutoTokenizer

bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable")
tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable")

line = "Chúng tôi là những nghiên cứu viên."
input_ids = tokenizer(line, return_tensors="pt")

with torch.no_grad():
    features = bartpho(**input_ids)  # Model outputs are now tuples
```
With TensorFlow 2.0+:

```python
from transformers import TFAutoModel

bartpho = TFAutoModel.from_pretrained("vinai/bartpho-syllable")
input_ids = tokenizer(line, return_tensors="tf")
features = bartpho(**input_ids)
```
Usage tips
Following mBART, BARTpho uses the "large" architecture of BART with an additional layer-normalization layer on top of
both the encoder and decoder. Thus, usage examples in the documentation of BART, when adapting to use
with BARTpho, should be adjusted by replacing the BART-specialized classes with the mBART-specialized counterparts.
For example:
```python
from transformers import MBartForConditionalGeneration

# reuses the tokenizer loaded in the example above
bartpho = MBartForConditionalGeneration.from_pretrained("vinai/bartpho-syllable")
TXT = "Chúng tôi là <mask> nghiên cứu viên."
input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"]
logits = bartpho(input_ids).logits
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
print(tokenizer.decode(predictions).split())
```
This implementation is only for tokenization: "monolingual_vocab_file" consists of Vietnamese-specialized types
extracted from the pre-trained SentencePiece model "vocab_file" that is available from the multilingual XLM-RoBERTa.
Other languages, if employing this pre-trained multilingual SentencePiece model "vocab_file" for subword
segmentation, can reuse BartphoTokenizer with their own language-specialized "monolingual_vocab_file".
BartphoTokenizer
[[autodoc]] BartphoTokenizer
BioGPT
Overview
The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.
The abstract from the paper is the following:
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
This model was contributed by kamalkraj. The original code can be found here.
Usage tips
BioGPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left.
BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script.
The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.
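For example, generation with the pretrained microsoft/biogpt checkpoint can be sketched as follows (the prompt and sampling settings are arbitrary):

```python
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM, set_seed

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

inputs = tokenizer("COVID-19 is", return_tensors="pt")

set_seed(42)  # sampling is stochastic, so fix a seed for reproducibility
with torch.no_grad():
    # generate() reuses past key/values internally, so the context is not re-encoded at every step
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```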
Resources
Causal language modeling task guide
BioGptConfig
[[autodoc]] BioGptConfig
BioGptTokenizer
[[autodoc]] BioGptTokenizer
- save_vocabulary
BioGptModel
[[autodoc]] BioGptModel
- forward
BioGptForCausalLM
[[autodoc]] BioGptForCausalLM
- forward
BioGptForTokenClassification
[[autodoc]] BioGptForTokenClassification
- forward
BioGptForSequenceClassification
[[autodoc]] BioGptForSequenceClassification
- forward
SpeechT5
Overview
The SpeechT5 model was proposed in SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
The abstract from the paper is the following:
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
This model was contributed by Matthijs. The original code can be found here.
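Usage example
A sketch of the text-to-speech path is shown below. It assumes the publicly released TTS checkpoint and HiFi-GAN vocoder, and pulls a speaker x-vector from a public dataset of CMU ARCTIC embeddings; the datasets and soundfile packages are extra dependencies here.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, my dog is cute.", return_tensors="pt")

# speaker embedding (x-vector) taken from a publicly available set of CMU ARCTIC x-vectors
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```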
SpeechT5Config
[[autodoc]] SpeechT5Config
SpeechT5HifiGanConfig
[[autodoc]] SpeechT5HifiGanConfig
SpeechT5Tokenizer
[[autodoc]] SpeechT5Tokenizer
- call
- save_vocabulary
- decode
- batch_decode
SpeechT5FeatureExtractor
[[autodoc]] SpeechT5FeatureExtractor
- call
SpeechT5Processor
[[autodoc]] SpeechT5Processor
- call
- pad
- from_pretrained
- save_pretrained
- batch_decode
- decode
SpeechT5Model
[[autodoc]] SpeechT5Model
- forward
SpeechT5ForSpeechToText
[[autodoc]] SpeechT5ForSpeechToText
- forward
SpeechT5ForTextToSpeech
[[autodoc]] SpeechT5ForTextToSpeech
- forward
- generate
SpeechT5ForSpeechToSpeech
[[autodoc]] SpeechT5ForSpeechToSpeech
- forward
- generate_speech
SpeechT5HifiGan
[[autodoc]] SpeechT5HifiGan
- forward
MobileNet V1
Overview
The MobileNet model was proposed in MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
The abstract from the paper is the following:
We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.
This model was contributed by matthijs. The original code and weights can be found here.
Usage tips
The checkpoints are named mobilenet_v1_depth_size, for example mobilenet_v1_1.0_224, where 1.0 is the depth multiplier (sometimes also referred to as "alpha" or the width multiplier) and 224 is the resolution of the input images the model was trained on.
Even though the checkpoint is trained on images of specific size, the model will work on images of any size. The smallest supported image size is 32x32.
One can use [MobileNetV1ImageProcessor] to prepare images for the model.
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra “background” class (index 0).
The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a [MobileNetV1Config] with tf_padding = False.
Unsupported features:
The [MobileNetV1Model] outputs a globally pooled version of the last hidden state. In the original model it is possible to use a 7x7 average pooling layer with stride 2 instead of global pooling. For larger inputs, this gives a pooled output that is larger than 1x1 pixel. The HuggingFace implementation does not support this.
It is currently not possible to specify an output_stride. For smaller output strides, the original model invokes dilated convolution to prevent the spatial resolution from being reduced further. The output stride of the HuggingFace model is always 32.
The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional "FakeQuantization" operations to unquantize the weights.
It's common to extract the output from the pointwise layers at indices 5, 11, 12, 13 for downstream purposes. Using output_hidden_states=True returns the output from all intermediate layers. There is currently no way to limit this to specific layers.
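Putting the points above together, a minimal classification sketch (using one of the published checkpoints) looks like this:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV1ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = MobileNetV1ForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# remember that index 0 is the extra "background" class
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```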
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV1.
[MobileNetV1ForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
MobileNetV1Config
[[autodoc]] MobileNetV1Config
MobileNetV1FeatureExtractor
[[autodoc]] MobileNetV1FeatureExtractor
- preprocess
MobileNetV1ImageProcessor
[[autodoc]] MobileNetV1ImageProcessor
- preprocess
MobileNetV1Model
[[autodoc]] MobileNetV1Model
- forward
MobileNetV1ForImageClassification
[[autodoc]] MobileNetV1ForImageClassification
- forward
XLM-RoBERTa-XL
Overview
The XLM-RoBERTa-XL model was proposed in Larger-Scale Transformers for Multilingual Masked Language Modeling by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
The abstract from the paper is the following:
Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests pretrained models with larger capacity may obtain both strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.
This model was contributed by Soonhwan-Kwon and stefan-it. The original code can be found here.
Usage tips
XLM-RoBERTa-XL is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does
not require lang tensors to understand which language is used, and should be able to determine the correct
language from the input ids.
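A quick masked-language-modeling sketch with the 3.5B-parameter XL checkpoint (note that the model is large and needs correspondingly large amounts of memory):

```python
import torch
from transformers import AutoTokenizer, XLMRobertaXLForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = XLMRobertaXLForMaskedLM.from_pretrained("facebook/xlm-roberta-xl")

# no language tensors are needed; the language is inferred from the input ids
inputs = tokenizer("Paris est la <mask> de la France.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_index].argmax(dim=-1)))
```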
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
XLMRobertaXLConfig
[[autodoc]] XLMRobertaXLConfig
XLMRobertaXLModel
[[autodoc]] XLMRobertaXLModel
- forward
XLMRobertaXLForCausalLM
[[autodoc]] XLMRobertaXLForCausalLM
- forward
XLMRobertaXLForMaskedLM
[[autodoc]] XLMRobertaXLForMaskedLM
- forward
XLMRobertaXLForSequenceClassification
[[autodoc]] XLMRobertaXLForSequenceClassification
- forward
XLMRobertaXLForMultipleChoice
[[autodoc]] XLMRobertaXLForMultipleChoice
- forward
XLMRobertaXLForTokenClassification
[[autodoc]] XLMRobertaXLForTokenClassification
- forward
XLMRobertaXLForQuestionAnswering
[[autodoc]] XLMRobertaXLForQuestionAnswering
- forward
Big Transfer (BiT)
Overview
The BiT model was proposed in Big Transfer (BiT): General Visual Representation Learning by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
BiT is a simple recipe for scaling up pre-training of ResNet-like architectures (specifically, ResNetv2). The method results in significant improvements for transfer learning.
The abstract from the paper is the following:
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.
This model was contributed by nielsr.
The original code can be found here.
Usage tips
BiT models are equivalent to ResNetv2 in terms of architecture, except that: 1) all batch normalization layers are replaced by group normalization,
2) weight standardization is used for convolutional layers. The authors show that the combination of both is useful for training with large batch sizes, and has a significant
impact on transfer learning.
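For instance, image classification with a released BiT checkpoint (google/bit-50 below; pick the size you need from the Hub) can be sketched as:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, BitForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("google/bit-50")
model = BitForImageClassification.from_pretrained("google/bit-50")

inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```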
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BiT.
[BitForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
BitConfig
[[autodoc]] BitConfig
BitImageProcessor
[[autodoc]] BitImageProcessor
- preprocess
BitModel
[[autodoc]] BitModel
- forward
BitForImageClassification
[[autodoc]] BitForImageClassification
- forward
IDEFICS
Overview
The IDEFICS model was proposed in OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
by Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
The abstract from the paper is the following:
Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks that require reasoning over one or multiple images to generate a text. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELISC, we train an 80 billion parameters vision and language model on the dataset and obtain competitive performance on various multimodal benchmarks. We release the code to reproduce the dataset along with the dataset itself.
This model was contributed by HuggingFaceM4. The original code can be found here. (TODO: don't have a public link yet).
The IDEFICS modeling code in Transformers is for fine-tuning and running inference with the pre-trained IDEFICS models.
To train a new IDEFICS model from scratch, use the m4 codebase (a link will be provided once it's made public).
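A rough inference sketch with a released checkpoint is shown below. The checkpoint name, interleaved prompt format and generation settings are assumptions; check the [IdeficsProcessor] documentation for the exact prompt conventions in your version.

```python
import torch
from transformers import IdeficsForVisionText2Text, AutoProcessor

checkpoint = "HuggingFaceM4/idefics-9b"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto")

# prompts interleave images (here given as a URL) and text
prompts = [
    [
        "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
        "Question: What is in this image? Answer:",
    ]
]
inputs = processor(prompts, return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```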
IdeficsConfig
[[autodoc]] IdeficsConfig
IdeficsModel
[[autodoc]] IdeficsModel
- forward
IdeficsForVisionText2Text
[[autodoc]] IdeficsForVisionText2Text
- forward
IdeficsImageProcessor
[[autodoc]] IdeficsImageProcessor
- preprocess
IdeficsProcessor
[[autodoc]] IdeficsProcessor
- call
ViLT
Overview
The ViLT model was proposed in ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision
by Wonjae Kim, Bokyung Son, Ildoo Kim. ViLT incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design
for Vision-and-Language Pre-training (VLP).
The abstract from the paper is the following:
Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks.
Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision
(e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we
find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more
computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive
power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model,
Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically
simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of
times faster than previous VLP models, yet with competitive or better downstream task performance.
ViLT architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Usage tips
The quickest way to get started with ViLT is by checking the example notebooks
(which showcase both inference and fine-tuning on custom data).
ViLT is a model that takes both pixel_values and input_ids as input. One can use [ViltProcessor] to prepare data for the model (see the example after these tips).
This processor wraps an image processor (for the image modality) and a tokenizer (for the language modality) into one.
ViLT is trained with images of various sizes: the authors resize the shorter edge of input images to 384 and limit the longer edge to
under 640 while preserving the aspect ratio. To make batching of images possible, the authors use a pixel_mask that indicates
which pixel values are real and which are padding. [ViltProcessor] automatically creates this for you.
The design of ViLT is very similar to that of a standard Vision Transformer (ViT). The only difference is that the model includes
additional embedding layers for the language modality.
The PyTorch version of this model is only available in torch 1.10 and higher.
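For example, visual question answering with the VQA-finetuned checkpoint can be sketched as follows:

```python
import torch
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# the processor returns input_ids, pixel_values and the pixel_mask used for batching
encoding = processor(image, question, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits

print(model.config.id2label[logits.argmax(-1).item()])
```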
ViltConfig
[[autodoc]] ViltConfig
ViltFeatureExtractor
[[autodoc]] ViltFeatureExtractor
- call
ViltImageProcessor
[[autodoc]] ViltImageProcessor
- preprocess
ViltProcessor
[[autodoc]] ViltProcessor
- call
ViltModel
[[autodoc]] ViltModel
- forward
ViltForMaskedLM
[[autodoc]] ViltForMaskedLM
- forward
ViltForQuestionAnswering
[[autodoc]] ViltForQuestionAnswering
- forward
ViltForImagesAndTextClassification
[[autodoc]] ViltForImagesAndTextClassification
- forward
ViltForImageAndTextRetrieval
[[autodoc]] ViltForImageAndTextRetrieval
- forward
ViltForTokenClassification
[[autodoc]] ViltForTokenClassification
- forward
MPT
Overview
The MPT model was proposed by the MosaicML team and released with multiple sizes and finetuned variants. The MPT models are a series of open source and commercially usable LLMs pre-trained on 1T tokens.
MPT models are GPT-style decoder-only transformers with several improvements: performance-optimized layer implementations, architecture changes that provide greater training stability, and the elimination of context length limits by replacing positional embeddings with ALiBi.
MPT base: MPT base pre-trained models on next token prediction
MPT instruct: MPT base models fine-tuned on instruction based tasks
MPT storywriter: MPT base models fine-tuned for 2500 steps on 65k-token excerpts of fiction books contained in the books3 corpus; this enables the model to handle very long sequences.
The original code is available at the llm-foundry repository.
Read more about it in the release blogpost
Usage tips
Learn more about some of the techniques behind the training of the model in this section of the llm-foundry repository.
If you want to use the advanced version of the model (triton kernels, direct flash attention integration), you can still use the original model implementation by adding trust_remote_code=True when calling from_pretrained.
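A basic generation sketch with the native Transformers implementation (no trust_remote_code needed for this path; mosaicml/mpt-7b is one of the released base checkpoints):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b")
model = AutoModelForCausalLM.from_pretrained("mosaicml/mpt-7b", torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Large language models are", return_tensors="pt").to(model.device)
# ALiBi positional biases mean the model is not hard-limited to its training context length
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```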
Resources
Fine-tuning Notebook on how to fine-tune MPT-7B on a free Google Colab instance to turn the model into a Chatbot.
MptConfig
[[autodoc]] MptConfig
- all
MptModel
[[autodoc]] MptModel
- forward
MptForCausalLM
[[autodoc]] MptForCausalLM
- forward
MptForSequenceClassification
[[autodoc]] MptForSequenceClassification
- forward
MptForTokenClassification
[[autodoc]] MptForTokenClassification
- forward
MptForQuestionAnswering
[[autodoc]] MptForQuestionAnswering
- forward
CodeLlama
Overview
The Code Llama model was proposed in Code Llama: Open Foundation Models for Code by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.
The abstract from the paper is the following:
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
Check out all Code Llama model checkpoints here and the officially released ones in the codellama org.
This model was contributed by ArthurZucker. The original code of the authors can be found here.
Usage tips and examples
The Llama2 family models, on which Code Llama is based, were trained using bfloat16, but the original inference uses float16. Let's look at the different precisions:
float32: PyTorch convention on model initialization is to load models in float32, no matter with which dtype the model weights were stored. transformers also follows this convention for consistency with PyTorch. This will be picked by default. If you want the AutoModel API to load the checkpoints with the storage weights' dtype, you must specify torch_dtype="auto", e.g. model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto").
bfloat16: Code Llama was trained with this precision, so we recommend using it for further training or fine-tuning.
float16: We recommend running inference using this precision, as it's usually faster than bfloat16, and evaluation metrics show no discernible degradation with respect to bfloat16. You can also run inference using bfloat16, and we recommend you check inference results with both float16 and bfloat16 after fine-tuning.
As mentioned above, the dtype of the storage weights is mostly irrelevant unless you use torch_dtype="auto" when initializing the model. The reason is that the model will first be downloaded (using the dtype of the checkpoints online) and then cast to the default dtype of torch (torch.float32). If a torch_dtype is specified, it will be used instead.
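Concretely, the three options translate to something like the following sketch (in practice you would load only one of them; adjust the checkpoint name to the variant you use):

```python
import torch
from transformers import AutoModelForCausalLM

checkpoint = "codellama/CodeLlama-7b-hf"

# default: weights are loaded and cast to torch.float32, regardless of how they were stored
model_fp32 = AutoModelForCausalLM.from_pretrained(checkpoint)

# keep the dtype the weights were stored in (bfloat16 for Code Llama)
model_bf16 = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto")

# explicit half precision, the usual choice for inference
model_fp16 = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.float16)
```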
Tips:
- The infilling task is supported out of the box. You should be using the tokenizer.fill_token where you want your input to be filled.
- The model conversion script is the same as for the Llama2 family:
Here is a sample usage:
```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even if the biggest versions
come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM).
After conversion, the model and tokenizer can be loaded via:
```python
from transformers import LlamaForCausalLM, CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
PROMPT = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''
input_ids = tokenizer(PROMPT, return_tensors="pt")["input_ids"]
generated_ids = model.generate(input_ids, max_new_tokens=128)

filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
print(PROMPT.replace("<FILL_ME>", filling))
```
```python
def remove_non_ascii(s: str) -> str:
    """ Remove non-ASCII characters from a string.

    Args:
        s: The string to remove non-ASCII characters from.

    Returns:
        The string with non-ASCII characters removed.
    """
    result = ""
    for c in s:
        if ord(c) < 128:
            result += c
    return result
```
If you only want the infilled part:
```python
from transformers import pipeline
import torch

generator = pipeline("text-generation", model="codellama/CodeLlama-7b-hf", torch_dtype=torch.float16, device_map="auto")
generator('def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result', max_new_tokens=128, return_type=1)
```
Under the hood, the tokenizer automatically splits by <FILL_ME> to create a formatted input string that follows the original training pattern. This is more robust than preparing the pattern yourself: it avoids pitfalls, such as token glueing, that are very hard to debug. To see how much CPU and GPU memory you need for this model or others, try this calculator which can help determine that value.
The LLaMA tokenizer is a BPE model based on sentencepiece. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
Code Llama has the same architecture as the Llama2 models, refer to Llama2's documentation page for the API reference.
Find Code Llama tokenizer reference below.
CodeLlamaTokenizer
[[autodoc]] CodeLlamaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
CodeLlamaTokenizerFast
[[autodoc]] CodeLlamaTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary
XGLM
Overview
The XGLM model was proposed in Few-shot Learning with Multilingual Language Models
by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal,
Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo,
Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
The abstract from the paper is the following:
Large-scale autoregressive language models such as GPT-3 are few-shot learners that can perform a wide range of language
tasks without fine-tuning. While these models are known to be able to jointly represent many different languages,
their training data is dominated by English, potentially limiting their cross-lingual generalization.
In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages,
and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters
sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size
in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings)
and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark,
our model outperforms GPT-3 on 171 out of 182 translation directions with 32 training examples, while surpassing the
official supervised baseline in 45 directions. We present a detailed analysis of where the model succeeds and fails,
showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement
on surface form robustness and adaptation to tasks that do not have a natural cloze form. Finally, we evaluate our models
in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models.
This model was contributed by Suraj. The original code can be found here.
Resources
Causal language modeling task guide
XGLMConfig
[[autodoc]] XGLMConfig
XGLMTokenizer
[[autodoc]] XGLMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
XGLMTokenizerFast
[[autodoc]] XGLMTokenizerFast
XGLMModel
[[autodoc]] XGLMModel
- forward
XGLMForCausalLM
[[autodoc]] XGLMForCausalLM
- forward
TFXGLMModel
[[autodoc]] TFXGLMModel
- call
TFXGLMForCausalLM
[[autodoc]] TFXGLMForCausalLM
- call
FlaxXGLMModel
[[autodoc]] FlaxXGLMModel
- call
FlaxXGLMForCausalLM
[[autodoc]] FlaxXGLMForCausalLM
- call
LayoutLMV2
Overview
The LayoutLMV2 model was proposed in LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu,
Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. LayoutLMV2 improves LayoutLM to obtain
state-of-the-art results across several document image understanding benchmarks:
information extraction from scanned documents: the FUNSD dataset (a
collection of 199 annotated forms comprising more than 30,000 words), the CORD
dataset (a collection of 800 receipts for training, 100 for validation and 100 for testing), the SROIE dataset (a collection of 626 receipts for training and 347 receipts for testing)
and the Kleister-NDA dataset (a collection of non-disclosure
agreements from the EDGAR database, including 254 documents for training, 83 documents for validation, and 203
documents for testing).
document image classification: the RVL-CDIP dataset (a collection of
400,000 images belonging to one of 16 classes).
document visual question answering: the DocVQA dataset (a collection of 50,000
questions defined on 12,000+ document images).
The abstract from the paper is the following:
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to
its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. In this
paper, we present LayoutLMv2 by pre-training text, layout and image in a multi-modal framework, where new model
architectures and pre-training tasks are leveraged. Specifically, LayoutLMv2 not only uses the existing masked
visual-language modeling task but also the new text-image alignment and text-image matching tasks in the pre-training
stage, where cross-modality interaction is better learned. Meanwhile, it also integrates a spatial-aware self-attention
mechanism into the Transformer architecture, so that the model can fully understand the relative positional
relationship among different text blocks. Experiment results show that LayoutLMv2 outperforms strong baselines and
achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks,
including FUNSD (0.7895 -> 0.8420), CORD (0.9493 -> 0.9601), SROIE (0.9524 -> 0.9781), Kleister-NDA (0.834 -> 0.852),
RVL-CDIP (0.9443 -> 0.9564), and DocVQA (0.7295 -> 0.8672). The pre-trained LayoutLMv2 model is publicly available at
this https URL.
LayoutLMv2 depends on detectron2, torchvision and tesseract. Run the
following to install them:
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
python -m pip install torchvision tesseract
(If you are developing for LayoutLMv2, note that passing the doctests also requires the installation of these packages.)
Usage tips
The main difference between LayoutLMv1 and LayoutLMv2 is that the latter incorporates visual embeddings during
pre-training (while LayoutLMv1 only adds visual embeddings during fine-tuning).
LayoutLMv2 adds both a relative 1D attention bias as well as a spatial 2D attention bias to the attention scores in
the self-attention layers. Details can be found on page 5 of the paper.
Demo notebooks on how to use the LayoutLMv2 model on RVL-CDIP, FUNSD, DocVQA, CORD can be found here.
LayoutLMv2 uses Facebook AI's Detectron2 package for its visual
backbone. See this link for installation
instructions.
In addition to input_ids, [~LayoutLMv2Model.forward] expects 2 additional inputs, namely
image and bbox. The image input corresponds to the original document image in which the text
tokens occur. The model expects each document image to be of size 224x224. This means that if you have a batch of
document images, image should be a tensor of shape (batch_size, 3, 224, 224). This can be either a
torch.Tensor or a Detectron2.structures.ImageList. You don't need to normalize the channels, as this is
done by the model. Important to note is that the visual backbone expects BGR channels instead of RGB, as all models
in Detectron2 are pre-trained using the BGR format. The bbox input are the bounding boxes (i.e. 2D-positions)
of the input text tokens. This is identical to [LayoutLMModel]. These can be obtained using an
external OCR engine such as Google's Tesseract (there's a Python
wrapper available). Each bounding box should be in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1)
represents the position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on
a 0-1000 scale. To normalize, you can use the following function:
python
def normalize_bbox(bbox, width, height):
    return [
        int(1000 * (bbox[0] / width)),
        int(1000 * (bbox[1] / height)),
        int(1000 * (bbox[2] / width)),
        int(1000 * (bbox[3] / height)),
    ]
Here, width and height correspond to the width and height of the original document in which the token
occurs (before resizing the image). Those can be obtained using the Python Image Library (PIL) library for example, as
follows:
thon
from PIL import Image
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
)
width, height = image.size
However, this model includes a brand new [~transformers.LayoutLMv2Processor] which can be used to directly
prepare data for the model (including applying OCR under the hood). More information can be found in the "Usage"
section below.
Internally, [~transformers.LayoutLMv2Model] will send the image input through its visual backbone to
obtain a lower-resolution feature map, whose shape is equal to the image_feature_pool_shape attribute of
[~transformers.LayoutLMv2Config]. This feature map is then flattened to obtain a sequence of image tokens. As
the size of the feature map is 7x7 by default, one obtains 49 image tokens. These are then concatenated with the text
tokens, and sent through the Transformer encoder. This means that the last hidden states of the model will have a
length of 512 + 49 = 561, if you pad the text tokens up to the max length. More generally, the last hidden states
will have a sequence length of seq_length + config.image_feature_pool_shape[0] * config.image_feature_pool_shape[1].
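As a quick check of this shape arithmetic, here is a minimal sketch; it assumes detectron2 is installed, uses a blank placeholder image with apply_ocr=False, and pads the text tokens to 512:
thon
import torch
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2Model

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
model = LayoutLMv2Model.from_pretrained("microsoft/layoutlmv2-base-uncased")

# blank placeholder image standing in for a real scanned document
image = Image.new("RGB", (1000, 1000), color="white")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]]  # already normalized to the 0-1000 scale

encoding = processor(image, words, boxes=boxes, padding="max_length", max_length=512, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

print(outputs.last_hidden_state.shape)  # torch.Size([1, 561, 768]): 512 text tokens + 7 * 7 image tokens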
When calling [~transformers.LayoutLMv2Model.from_pretrained], a warning will be printed with a long list of
parameter names that are not initialized. This is not a problem, as these parameters are batch normalization
statistics, which are going to have values when fine-tuning on a custom dataset.
If you want to train the model in a distributed environment, make sure to call [synchronize_batch_norm] on the
model in order to properly synchronize the batch normalization layers of the visual backbone.
In addition, there's LayoutXLM, which is a multilingual version of LayoutLMv2. More information can be found on
LayoutXLM's documentation page.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A notebook on how to finetune LayoutLMv2 for text-classification on RVL-CDIP dataset.
See also: Text classification task guide
A notebook on how to finetune LayoutLMv2 for question-answering on DocVQA dataset.
See also: Question answering task guide
See also: Document question answering task guide
A notebook on how to finetune LayoutLMv2 for token-classification on CORD dataset.
A notebook on how to finetune LayoutLMv2 for token-classification on FUNSD dataset.
See also: Token classification task guide
Usage: LayoutLMv2Processor
The easiest way to prepare data for the model is to use [LayoutLMv2Processor], which internally
combines an image processor ([LayoutLMv2ImageProcessor]) and a tokenizer
([LayoutLMv2Tokenizer] or [LayoutLMv2TokenizerFast]). The image processor
handles the image modality, while the tokenizer handles the text modality. A processor combines both, which is ideal
for a multi-modal model like LayoutLMv2. Note that you can still use both separately, if you only want to handle one
modality.
thon
from transformers import LayoutLMv2ImageProcessor, LayoutLMv2TokenizerFast, LayoutLMv2Processor
image_processor = LayoutLMv2ImageProcessor() # apply_ocr is set to True by default
tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(image_processor, tokenizer)
In short, one can provide a document image (and possibly additional data) to [LayoutLMv2Processor],
and it will create the inputs expected by the model. Internally, the processor first uses
[LayoutLMv2ImageProcessor] to apply OCR on the image to get a list of words and normalized
bounding boxes, as well as to resize the image to a given size in order to get the image input. The words and
normalized bounding boxes are then provided to [LayoutLMv2Tokenizer] or
[LayoutLMv2TokenizerFast], which converts them to token-level input_ids,
attention_mask, token_type_ids, bbox. Optionally, one can provide word labels to the processor,
which are turned into token-level labels.
[LayoutLMv2Processor] uses PyTesseract, a Python
wrapper around Google's Tesseract OCR engine, under the hood. Note that you can still use your own OCR engine of
choice, and provide the words and normalized boxes yourself. This requires initializing
[LayoutLMv2ImageProcessor] with apply_ocr set to False.
In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these
use cases works for both batched and non-batched inputs (we illustrate them for non-batched inputs).
Use case 1: document image classification (training, inference) + token classification (inference), apply_ocr =
True
This is the simplest case, in which the processor (actually the image processor) will perform OCR on the image to get
the words and normalized bounding boxes.
thon
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
encoding = processor(
image, return_tensors="pt"
) # you can also add all tokenizer parameters here such as padding, truncation
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
Use case 2: document image classification (training, inference) + token classification (inference), apply_ocr=False
In case one wants to do OCR themselves, one can initialize the image processor with apply_ocr set to
False. In that case, one should provide the words and corresponding (normalized) bounding boxes themselves to
the processor.
thon
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
Use case 3: token classification (training), apply_ocr=False
For token classification tasks (such as FUNSD, CORD, SROIE, Kleister-NDA), one can also provide the corresponding word
labels in order to train a model. The processor will then convert these into token-level labels. By default, it
will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the
ignore_index of PyTorch's CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can
initialize the tokenizer with only_label_first_subword set to False.
thon
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes
word_labels = [1, 2]
encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'labels', 'image'])
Use case 4: visual question answering (inference), apply_ocr=True
For visual question answering tasks (such as DocVQA), you can provide a question to the processor. By default, the
processor will apply OCR on the image, and create [CLS] question tokens [SEP] word tokens [SEP].
thon
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
question = "What's his name?"
encoding = processor(image, question, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
Use case 5: visual question answering (inference), apply_ocr=False
For visual question answering tasks (such as DocVQA), you can provide a question to the processor. If you want to
perform OCR yourself, you can provide your own words and (normalized) bounding boxes to the processor.
thon
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
question = "What's his name?"
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes
encoding = processor(image, question, words, boxes=boxes, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image'])
LayoutLMv2Config
[[autodoc]] LayoutLMv2Config
LayoutLMv2FeatureExtractor
[[autodoc]] LayoutLMv2FeatureExtractor
- call
LayoutLMv2ImageProcessor
[[autodoc]] LayoutLMv2ImageProcessor
- preprocess
LayoutLMv2Tokenizer
[[autodoc]] LayoutLMv2Tokenizer
- call
- save_vocabulary
LayoutLMv2TokenizerFast
[[autodoc]] LayoutLMv2TokenizerFast
- call
LayoutLMv2Processor
[[autodoc]] LayoutLMv2Processor
- call
LayoutLMv2Model
[[autodoc]] LayoutLMv2Model
- forward
LayoutLMv2ForSequenceClassification
[[autodoc]] LayoutLMv2ForSequenceClassification
LayoutLMv2ForTokenClassification
[[autodoc]] LayoutLMv2ForTokenClassification
LayoutLMv2ForQuestionAnswering
[[autodoc]] LayoutLMv2ForQuestionAnswering
RetriBERT
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The RetriBERT model was proposed in the blog post Explain Anything Like I'm Five: A Model for Open Domain Long Form
Question Answering. RetriBERT is a small model that uses either a single or
pair of BERT encoders with lower-dimension projection for dense semantic indexing of text.
This model was contributed by yjernite. Code to train and use the model can be
found here.
RetriBertConfig
[[autodoc]] RetriBertConfig
RetriBertTokenizer
[[autodoc]] RetriBertTokenizer
RetriBertTokenizerFast
[[autodoc]] RetriBertTokenizerFast
RetriBertModel
[[autodoc]] RetriBertModel
- forward
Bark
Overview
Bark is a transformer-based text-to-speech model proposed by Suno AI in suno-ai/bark.
Bark is made of 4 main models:
[BarkSemanticModel] (also referred to as the 'text' model): a causal auto-regressive transformer model that takes as input tokenized text, and predicts semantic text tokens that capture the meaning of the text.
[BarkCoarseModel] (also referred to as the 'coarse acoustics' model): a causal autoregressive transformer, that takes as input the results of the [BarkSemanticModel] model. It aims at predicting the first two audio codebooks necessary for EnCodec.
[BarkFineModel] (the 'fine acoustics' model), this time a non-causal autoencoder transformer, which iteratively predicts the last codebooks based on the sum of the previous codebooks embeddings.
Having predicted all the codebook channels, Bark uses the [EncodecModel] to decode the output audio array.
It should be noted that each of the first three modules can support conditional speaker embeddings to condition the output sound according to a specific predefined voice.
This model was contributed by Yoach Lacombe (ylacombe) and Sanchit Gandhi (sanchit-gandhi).
The original code can be found here.
Optimizing Bark
Bark can be optimized with just a few extra lines of code, which significantly reduces its memory footprint and accelerates inference.
Using half-precision
You can speed up inference and reduce memory footprint by 50% simply by loading the model in half-precision.
thon
from transformers import BarkModel
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to(device)
Using CPU offload
As mentioned above, Bark is made up of 4 sub-models, which are called up sequentially during audio generation. In other words, while one sub-model is in use, the other sub-models are idle.
If you're using a CUDA device, a simple solution to benefit from an 80% reduction in memory footprint is to offload the submodels from GPU to CPU when they're idle. This operation is called CPU offloading. You can use it with one line of code as follows:
python
model.enable_cpu_offload()
Note that 🤗 Accelerate must be installed before using this feature. Here's how to install it.
Using Better Transformer
Better Transformer is an 🤗 Optimum feature that performs kernel fusion under the hood. You can gain 20% to 30% in speed with zero performance degradation. It only requires one line of code to export the model to 🤗 Better Transformer:
python
model = model.to_bettertransformer()
Note that 🤗 Optimum must be installed before using this feature. Here's how to install it.
Using Flash Attention 2
Flash Attention 2 is an even faster, optimized version of the previous optimization.
Installation
First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the official documentation. If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered above.
Next, install the latest version of Flash Attention 2:
pip install -U flash-attn --no-build-isolation
Usage
To load a model using Flash Attention 2, we can pass the attn_implementation="flash_attention_2" flag to .from_pretrained. We'll also load the model in half-precision (e.g. torch.float16), since it results in almost no degradation to audio quality but significantly lower memory usage and faster inference:
python
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to(device)
Performance comparison
The following diagram shows the latency for the native attention implementation (no optimisation) against Better Transformer and Flash Attention 2. In all cases, we generate 400 semantic tokens on a 40GB A100 GPU with PyTorch 2.1. Flash Attention 2 is also consistently faster than Better Transformer, and its performance improves even more as batch sizes increase:
To put this into perspective, on an NVIDIA A100 and when generating 400 semantic tokens with a batch size of 16, you can get 17 times the throughput and still be 2 seconds faster than generating sentences one by one with the native model implementation. In other words, all the samples will be generated 17 times faster.
At batch size 8, on an NVIDIA A100, Flash Attention 2 is also 10% faster than Better Transformer, and at batch size 16, 25%.
Combining optimization techniques
You can combine optimization techniques, and use CPU offload, half-precision and Flash Attention 2 (or 🤗 Better Transformer) all at once.
thon
from transformers import BarkModel
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
# load in fp16 and use Flash Attention 2
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to(device)
# enable CPU offload
model.enable_cpu_offload()
Find out more on inference optimization techniques here.
Usage tips
Suno offers a library of voice presets in a number of languages here.
These presets are also uploaded in the hub here or here.
thon
from transformers import AutoProcessor, BarkModel
processor = AutoProcessor.from_pretrained("suno/bark")
model = BarkModel.from_pretrained("suno/bark")
voice_preset = "v2/en_speaker_6"
inputs = processor("Hello, my dog is cute", voice_preset=voice_preset)
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects.
thon
# Multilingual speech - simplified Chinese
inputs = processor("惊人的!我会说中文")
# Multilingual speech - French - let's use a voice_preset as well
inputs = processor("Incroyable! Je peux générer du son.", voice_preset="fr_speaker_5")
# Bark can also generate music. You can help it out by adding music notes around your lyrics.
inputs = processor("♪ Hello, my dog is cute ♪")
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
The model can also produce nonverbal communications like laughing, sighing and crying.
thon
# Adding non-speech cues to the input text
inputs = processor("Hello uh [clears throat], my dog is cute [laughter]")
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
To save the audio, simply take the sample rate from the model config and some scipy utility:
thon
from scipy.io.wavfile import write as write_wav
# save audio to disk, but first take the sample rate from the model config
sample_rate = model.generation_config.sample_rate
write_wav("bark_generation.wav", sample_rate, audio_array)
BarkConfig
[[autodoc]] BarkConfig
- all
BarkProcessor
[[autodoc]] BarkProcessor
- all
- call
BarkModel
[[autodoc]] BarkModel
- generate
- enable_cpu_offload
BarkSemanticModel
[[autodoc]] BarkSemanticModel
- forward
BarkCoarseModel
[[autodoc]] BarkCoarseModel
- forward
BarkFineModel
[[autodoc]] BarkFineModel
- forward
BarkCausalModel
[[autodoc]] BarkCausalModel
- forward
BarkCoarseConfig
[[autodoc]] BarkCoarseConfig
- all
BarkFineConfig
[[autodoc]] BarkFineConfig
- all
BarkSemanticConfig
[[autodoc]] BarkSemanticConfig
- all
ErnieM
Overview
The ErnieM model was proposed in ERNIE-M: Enhanced Multilingual Representation by Aligning
Cross-lingual Semantics with Monolingual Corpora by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun,
Hao Tian, Hua Wu, Haifeng Wang.
The abstract from the paper is the following:
Recent studies have demonstrated that pre-trained cross-lingual models achieve impressive performance in downstream cross-lingual tasks. This improvement benefits from learning a large amount of monolingual and parallel corpora. Although it is generally acknowledged that parallel corpora are critical for improving the model performance, existing methods are often constrained by the size of parallel corpora, especially for lowresource languages. In this paper, we propose ERNIE-M, a new training method that encourages the model to align the representation of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on the model performance. Our key insight is to integrate back-translation into the pre-training process. We generate pseudo-parallel sentence pairs on a monolingual corpus to enable the learning of semantic alignments between different languages, thereby enhancing the semantic modeling of cross-lingual models. Experimental results show that ERNIE-M outperforms existing cross-lingual models and delivers new state-of-the-art results in various cross-lingual downstream tasks.
This model was contributed by Susnato Dhar. The original code can be found here.
Usage tips
Ernie-M is a BERT-like model so it is a stacked Transformer Encoder.
Instead of using MaskedLM for pretraining (like BERT) the authors used two novel techniques: Cross-attention Masked Language Modeling and Back-translation Masked Language Modeling. For now these two LMHead objectives are not implemented here.
It is a multilingual language model.
Next Sentence Prediction was not used in pretraining process.
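Below is a minimal usage sketch. The checkpoint name is an assumption (taken from the contributor's Hub namespace); substitute whichever ERNIE-M checkpoint you actually use:
thon
import torch
from transformers import ErnieMTokenizer, ErnieMModel

# assumed checkpoint name; replace with the ERNIE-M checkpoint you want to use
checkpoint = "susnato/ernie-m-base_pytorch"
tokenizer = ErnieMTokenizer.from_pretrained(checkpoint)
model = ErnieMModel.from_pretrained(checkpoint)

inputs = tokenizer("ERNIE-M aligns representations across languages.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)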
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Multiple choice task guide
ErnieMConfig
[[autodoc]] ErnieMConfig
ErnieMTokenizer
[[autodoc]] ErnieMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
ErnieMModel
[[autodoc]] ErnieMModel
- forward
ErnieMForSequenceClassification
[[autodoc]] ErnieMForSequenceClassification
- forward
ErnieMForMultipleChoice
[[autodoc]] ErnieMForMultipleChoice
- forward
ErnieMForTokenClassification
[[autodoc]] ErnieMForTokenClassification
- forward
ErnieMForQuestionAnswering
[[autodoc]] ErnieMForQuestionAnswering
- forward
ErnieMForInformationExtraction
[[autodoc]] ErnieMForInformationExtraction
- forward
SegGPT
Overview
The SegGPT model was proposed in SegGPT: Segmenting Everything In Context by Xinlong Wang, Xiaosong Zhang, Yue Cao, Wen Wang, Chunhua Shen, Tiejun Huang. SegGPT employs a decoder-only Transformer that can generate a segmentation mask given an input image, a prompt image and its corresponding prompt mask. The model achieves remarkable one-shot results with 56.1 mIoU on COCO-20 and 85.6 mIoU on FSS-1000.
The abstract from the paper is the following:
We present SegGPT, a generalist model for segmenting everything in context. We unify various segmentation tasks into a generalist in-context learning framework that accommodates different kinds of segmentation data by transforming them into the same format of images. The training of SegGPT is formulated as an in-context coloring problem with random color mapping for each data sample. The objective is to accomplish diverse tasks according to the context, rather than relying on specific colors. After training, SegGPT can perform arbitrary segmentation tasks in images or videos via in-context inference, such as object instance, stuff, part, contour, and text. SegGPT is evaluated on a broad range of tasks, including few-shot semantic segmentation, video object segmentation, semantic segmentation, and panoptic segmentation. Our results show strong capabilities in segmenting in-domain and out-of-domain targets, either qualitatively or quantitatively.
Tips:
- One can use [SegGptImageProcessor] to prepare image input, prompt and mask to the model.
- It's highly advisable to pass num_labels (not considering background) during preprocessing and postprocessing with [SegGptImageProcessor] for your use case.
- When doing inference with [SegGptForImageSegmentation], if your batch_size is greater than 1, you can use feature ensemble across your images by passing feature_ensemble=True in the forward method.
Here's how to use the model for one-shot semantic segmentation:
thon
import torch
from datasets import load_dataset
from transformers import SegGptImageProcessor, SegGptForImageSegmentation
model_id = "BAAI/seggpt-vit-large"
image_processor = SegGptImageProcessor.from_pretrained(model_id)
model = SegGptForImageSegmentation.from_pretrained(model_id)
dataset_id = "EduardoPacheco/FoodSeg103"
ds = load_dataset(dataset_id, split="train")
# Number of labels in FoodSeg103 (not including background)
num_labels = 103
image_input = ds[4]["image"]
ground_truth = ds[4]["label"]
image_prompt = ds[29]["image"]
mask_prompt = ds[29]["label"]
inputs = image_processor(
images=image_input,
prompt_images=image_prompt,
prompt_masks=mask_prompt,
num_labels=num_labels,
return_tensors="pt"
)
with torch.no_grad():
    outputs = model(**inputs)
target_sizes = [image_input.size[::-1]]
mask = image_processor.post_process_semantic_segmentation(outputs, target_sizes, num_labels=num_labels)[0]
This model was contributed by EduardoPacheco.
The original code can be found here.
SegGptConfig
[[autodoc]] SegGptConfig
SegGptImageProcessor
[[autodoc]] SegGptImageProcessor
- preprocess
- post_process_semantic_segmentation
SegGptModel
[[autodoc]] SegGptModel
- forward
SegGptForImageSegmentation
[[autodoc]] SegGptForImageSegmentation
- forward
FastSpeech2Conformer
Overview
The FastSpeech2Conformer model was proposed in the paper Recent Developments On Espnet Toolkit Boosted By Conformer by Pengcheng Guo, Florian Boyer, Xuankai Chang, Tomoki Hayashi, Yosuke Higuchi, Hirofumi Inaguma, Naoyuki Kamo, Chenda Li, Daniel Garcia-Romero, Jiatong Shi, Jing Shi, Shinji Watanabe, Kun Wei, Wangyou Zhang, and Yuekai Zhang.
The abstract from the original FastSpeech2 paper is the following:
Non-autoregressive text to speech (TTS) models such as FastSpeech (Ren et al., 2019) can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of FastSpeech model relies on an autoregressive teacher model for duration prediction (to provide more information as input) and knowledge distillation (to simplify the data distribution in output), which can ease the one-to-many mapping problem (i.e., multiple speech variations correspond to the same text) in TTS. However, FastSpeech has several disadvantages: 1) the teacher-student distillation pipeline is complicated and time-consuming, 2) the duration extracted from the teacher model is not accurate enough, and the target mel-spectrograms distilled from teacher model suffer from information loss due to data simplification, both of which limit the voice quality. In this paper, we propose FastSpeech 2, which addresses the issues in FastSpeech and better solves the one-to-many mapping problem in TTS by 1) directly training the model with ground-truth target instead of the simplified output from teacher, and 2) introducing more variation information of speech (e.g., pitch, energy and more accurate duration) as conditional inputs. Specifically, we extract duration, pitch and energy from speech waveform and directly take them as conditional inputs in training and use predicted values in inference. We further design FastSpeech 2s, which is the first attempt to directly generate speech waveform from text in parallel, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inference speed; 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. Audio samples are available at https://speechresearch.github.io/fastspeech2/.
This model was contributed by Connor Henderson. The original code can be found here.
🤗 Model Architecture
FastSpeech2's general structure with a Mel-spectrogram decoder was implemented, and the traditional transformer blocks were replaced with conformer blocks as done in the ESPnet library.
FastSpeech2 Model Architecture
Conformer Blocks
Convolution Module
🤗 Transformers Usage
You can run FastSpeech2Conformer locally with the 🤗 Transformers library.
First install the 🤗 Transformers library and g2p-en:
pip install --upgrade pip
pip install --upgrade transformers g2p-en
Run inference via the Transformers modelling code with the model and hifigan separately
thon
from transformers import FastSpeech2ConformerTokenizer, FastSpeech2ConformerModel, FastSpeech2ConformerHifiGan
import soundfile as sf
tokenizer = FastSpeech2ConformerTokenizer.from_pretrained("espnet/fastspeech2_conformer")
inputs = tokenizer("Hello, my dog is cute.", return_tensors="pt")
input_ids = inputs["input_ids"]
model = FastSpeech2ConformerModel.from_pretrained("espnet/fastspeech2_conformer")
output_dict = model(input_ids, return_dict=True)
spectrogram = output_dict["spectrogram"]
hifigan = FastSpeech2ConformerHifiGan.from_pretrained("espnet/fastspeech2_conformer_hifigan")
waveform = hifigan(spectrogram)
sf.write("speech.wav", waveform.squeeze().detach().numpy(), samplerate=22050)
Run inference via the Transformers modelling code with the model and hifigan combined
thon
from transformers import FastSpeech2ConformerTokenizer, FastSpeech2ConformerWithHifiGan
import soundfile as sf
tokenizer = FastSpeech2ConformerTokenizer.from_pretrained("espnet/fastspeech2_conformer")
inputs = tokenizer("Hello, my dog is cute.", return_tensors="pt")
input_ids = inputs["input_ids"]
model = FastSpeech2ConformerWithHifiGan.from_pretrained("espnet/fastspeech2_conformer_with_hifigan")
output_dict = model(input_ids, return_dict=True)
waveform = output_dict["waveform"]
sf.write("speech.wav", waveform.squeeze().detach().numpy(), samplerate=22050)
Run inference with a pipeline and specify which vocoder to use
thon
from transformers import pipeline, FastSpeech2ConformerHifiGan
import soundfile as sf
vocoder = FastSpeech2ConformerHifiGan.from_pretrained("espnet/fastspeech2_conformer_hifigan")
synthesiser = pipeline(model="espnet/fastspeech2_conformer", vocoder=vocoder)
speech = synthesiser("Hello, my dog is cooler than you!")
sf.write("speech.wav", speech["audio"].squeeze(), samplerate=speech["sampling_rate"])
FastSpeech2ConformerConfig
[[autodoc]] FastSpeech2ConformerConfig
FastSpeech2ConformerHifiGanConfig
[[autodoc]] FastSpeech2ConformerHifiGanConfig
FastSpeech2ConformerWithHifiGanConfig
[[autodoc]] FastSpeech2ConformerWithHifiGanConfig
FastSpeech2ConformerTokenizer
[[autodoc]] FastSpeech2ConformerTokenizer
- call
- save_vocabulary
- decode
- batch_decode
FastSpeech2ConformerModel
[[autodoc]] FastSpeech2ConformerModel
- forward
FastSpeech2ConformerHifiGan
[[autodoc]] FastSpeech2ConformerHifiGan
- forward
FastSpeech2ConformerWithHifiGan
[[autodoc]] FastSpeech2ConformerWithHifiGan
- forward
X-CLIP
Overview
The X-CLIP model was proposed in Expanding Language-Image Pretrained Models for General Video Recognition by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
X-CLIP is a minimal extension of CLIP for video. The model consists of a text encoder, a cross-frame vision encoder, a multi-frame integration Transformer, and a video-specific prompt generator.
The abstract from the paper is the following:
Contrastive language-image pretraining has shown great success in learning visual-textual joint representation from web-scale data, demonstrating remarkable "zero-shot" generalization ability for various image tasks. However, how to effectively expand such new language-image pretraining methods to video domains is still an open problem. In this work, we present a simple yet effective approach that adapts the pretrained language-image models to video recognition directly, instead of pretraining a new model from scratch. More concretely, to capture the long-range dependencies of frames along the temporal dimension, we propose a cross-frame attention mechanism that explicitly exchanges information across frames. Such module is lightweight and can be plugged into pretrained language-image models seamlessly. Moreover, we propose a video-specific prompting scheme, which leverages video content information for generating discriminative textual prompts. Extensive experiments demonstrate that our approach is effective and can be generalized to different video recognition scenarios. In particular, under fully-supervised settings, our approach achieves a top-1 accuracy of 87.1% on Kinectics-400, while using 12 times fewer FLOPs compared with Swin-L and ViViT-H. In zero-shot experiments, our approach surpasses the current state-of-the-art methods by +7.6% and +14.9% in terms of top-1 accuracy under two popular protocols. In few-shot scenarios, our approach outperforms previous best methods by +32.1% and +23.1% when the labeled data is extremely limited.
Tips:
Usage of X-CLIP is identical to CLIP (see the sketch below).
X-CLIP architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
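Since usage mirrors CLIP, zero-shot video classification can be sketched as follows; the dummy frames below stand in for a real 8-frame clip, and the sketch assumes the microsoft/xclip-base-patch32 checkpoint:
thon
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

checkpoint = "microsoft/xclip-base-patch32"
processor = XCLIPProcessor.from_pretrained(checkpoint)
model = XCLIPModel.from_pretrained(checkpoint)

# 8 dummy RGB frames of shape (height, width, channels) standing in for a real video clip
video = list(np.random.randint(0, 256, (8, 224, 224, 3), dtype=np.uint8))

inputs = processor(text=["playing sports", "cooking"], videos=video, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_video.softmax(dim=1)  # video-text similarity as probabilities
print(probs)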
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with X-CLIP.
Demo notebooks for X-CLIP can be found here.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
XCLIPProcessor
[[autodoc]] XCLIPProcessor
XCLIPConfig
[[autodoc]] XCLIPConfig
- from_text_vision_configs
XCLIPTextConfig
[[autodoc]] XCLIPTextConfig
XCLIPVisionConfig
[[autodoc]] XCLIPVisionConfig
XCLIPModel
[[autodoc]] XCLIPModel
- forward
- get_text_features
- get_video_features
XCLIPTextModel
[[autodoc]] XCLIPTextModel
- forward
XCLIPVisionModel
[[autodoc]] XCLIPVisionModel
- forward
VideoMAE
Overview
The VideoMAE model was proposed in VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
VideoMAE extends masked auto encoders (MAE) to video, claiming state-of-the-art performance on several video classification benchmarks.
The abstract from the paper is the following:
Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking and reconstruction. These simple designs turn out to be effective for overcoming information leakage caused by the temporal correlation during video reconstruction. We obtain three important findings on SSVP: (1) An extremely high proportion of masking ratio (i.e., 90% to 95%) still yields favorable performance of VideoMAE. The temporally redundant video content enables higher masking ratio than that of images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. This is partially ascribed to the challenging task of video reconstruction to enforce high-level structure learning. (3) VideoMAE shows that data quality is more important than data quantity for SSVP. Domain shift between pre-training and target datasets are important issues in SSVP. Notably, our VideoMAE with the vanilla ViT backbone can achieve 83.9% on Kinects-400, 75.3% on Something-Something V2, 90.8% on UCF101, and 61.1% on HMDB51 without using any extra data.
VideoMAE pre-training. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
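A minimal inference sketch for video classification is shown below; the dummy frames stand in for a real 16-frame clip, and the sketch assumes the MCG-NJU/videomae-base-finetuned-kinetics checkpoint:
thon
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

checkpoint = "MCG-NJU/videomae-base-finetuned-kinetics"
processor = VideoMAEImageProcessor.from_pretrained(checkpoint)
model = VideoMAEForVideoClassification.from_pretrained(checkpoint)

# 16 dummy RGB frames of shape (height, width, channels) standing in for a real video clip
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])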
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VideoMAE. If
you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll
review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Video classification
- A notebook that shows how
to fine-tune a VideoMAE model on a custom dataset.
- Video classification task guide
- A 🤗 Space showing how to perform inference with a video classification model.
VideoMAEConfig
[[autodoc]] VideoMAEConfig
VideoMAEFeatureExtractor
[[autodoc]] VideoMAEFeatureExtractor
- call
VideoMAEImageProcessor
[[autodoc]] VideoMAEImageProcessor
- preprocess
VideoMAEModel
[[autodoc]] VideoMAEModel
- forward
VideoMAEForPreTraining
VideoMAEForPreTraining includes the decoder on top for self-supervised pre-training.
[[autodoc]] transformers.VideoMAEForPreTraining
- forward
VideoMAEForVideoClassification
[[autodoc]] transformers.VideoMAEForVideoClassification
- forward
Vision Transformer (ViT)
Overview
The Vision Transformer (ViT) model was proposed in An Image is Worth 16x16 Words: Transformers for Image Recognition
at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk
Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob
Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining
very good results compared to familiar convolutional architectures.
The abstract from the paper is the following:
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its
applications to computer vision remain limited. In vision, attention is either applied in conjunction with
convolutional networks, or used to replace certain components of convolutional networks while keeping their overall
structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to
sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of
data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.),
Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring
substantially fewer computational resources to train.
ViT architecture. Taken from the original paper.
Following the original Vision Transformer, some follow-up works have been made:
DeiT (Data-efficient Image Transformers) by Facebook AI. DeiT models are distilled vision transformers.
The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [ViTModel] or
[ViTForImageClassification]. There are 4 variants available (in 3 different sizes): facebook/deit-tiny-patch16-224,
facebook/deit-small-patch16-224, facebook/deit-base-patch16-224 and facebook/deit-base-patch16-384. Note that one should
use [DeiTImageProcessor] in order to prepare images for the model.
BEiT (BERT pre-training of Image Transformers) by Microsoft Research. BEiT models outperform supervised pre-trained
vision transformers using a self-supervised method inspired by BERT (masked image modeling) and based on a VQ-VAE.
DINO (a method for self-supervised training of Vision Transformers) by Facebook AI. Vision Transformers trained using
the DINO method show very interesting properties not seen with convolutional models. They are capable of segmenting
objects, without having ever been trained to do so. DINO checkpoints can be found on the hub.
MAE (Masked Autoencoders) by Facebook AI. By pre-training Vision Transformers to reconstruct pixel values for a high portion
(75%) of masked patches (using an asymmetric encoder-decoder architecture), the authors show that this simple method outperforms
supervised pre-training after fine-tuning.
This model was contributed by nielsr. The original code (written in JAX) can be
found here.
Note that we converted the weights from Ross Wightman's timm library,
who already converted the weights from JAX to PyTorch. Credits go to him!
Usage tips
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image, which can be
used for classification. The authors also add absolute position embeddings, and feed the resulting sequence of
vectors to a standard Transformer encoder.
As the Vision Transformer expects each image to be of the same size (resolution), one can use
[ViTImageProcessor] to resize (or rescale) and normalize images for the model (see the sketch after these tips).
Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of
each checkpoint. For example, google/vit-base-patch16-224 refers to a base-sized architecture with patch
resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the hub.
The available checkpoints are either (1) pre-trained on ImageNet-21k (a collection of
14 million images and 21k classes) only, or (2) also fine-tuned on ImageNet (also referred to as ILSVRC 2012, a collection of 1.3 million
images and 1,000 classes).
The Vision Transformer was pre-trained using a resolution of 224x224. During fine-tuning, it is often beneficial to
use a higher resolution than pre-training (Touvron et al., 2019), (Kolesnikov
et al., 2020). In order to fine-tune at higher resolution, the authors perform
2D interpolation of the pre-trained position embeddings, according to their location in the original image.
The best results are obtained with supervised pre-training, which is not the case in NLP. The authors also performed
an experiment with a self-supervised pre-training objective, namely masked patched prediction (inspired by masked
language modeling). With this approach, the smaller ViT-B/16 model achieves 79.9% accuracy on ImageNet, a significant
improvement of 2% over training from scratch, but still 4% behind supervised pre-training.
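Putting these tips together, a minimal classification sketch looks as follows (replace the image path with one of your own images):
thon
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

image = Image.open("path_to_your_image.jpg").convert("RGB")  # replace with your own image
inputs = processor(images=image, return_tensors="pt")  # resizes to 224x224 and normalizes

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])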
Resources
Demo notebooks regarding inference as well as fine-tuning ViT on custom data can be found here.
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTForImageClassification is supported by:
A blog post on how to Fine-Tune ViT for Image Classification with Hugging Face Transformers
A blog post on Image Classification with Hugging Face Transformers and Keras
A notebook on Fine-tuning for Image Classification with Hugging Face Transformers
A notebook on how to Fine-tune the Vision Transformer on CIFAR-10 with the Hugging Face Trainer
A notebook on how to Fine-tune the Vision Transformer on CIFAR-10 with PyTorch Lightning
⚗️ Optimization
A blog post on how to Accelerate Vision Transformer (ViT) with Quantization using Optimum
⚡️ Inference
A notebook on Quick demo: Vision Transformer (ViT) by Google Brain
🚀 Deploy
A blog post on Deploying Tensorflow Vision Models in Hugging Face with TF Serving
A blog post on Deploying Hugging Face ViT on Vertex AI
A blog post on Deploying Hugging Face ViT on Kubernetes with TF Serving
ViTConfig
[[autodoc]] ViTConfig
ViTFeatureExtractor
[[autodoc]] ViTFeatureExtractor
- call
ViTImageProcessor
[[autodoc]] ViTImageProcessor
- preprocess
ViTModel
[[autodoc]] ViTModel
- forward
ViTForMaskedImageModeling
[[autodoc]] ViTForMaskedImageModeling
- forward
ViTForImageClassification
[[autodoc]] ViTForImageClassification
- forward
TFViTModel
[[autodoc]] TFViTModel
- call
TFViTForImageClassification
[[autodoc]] TFViTForImageClassification
- call
FlaxVitModel
[[autodoc]] FlaxViTModel
- call
FlaxViTForImageClassification
[[autodoc]] FlaxViTForImageClassification
- call
LLaMA
Overview
The LLaMA model was proposed in LLaMA: Open and Efficient Foundation Language Models by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. It is a collection of foundation language models ranging from 7B to 65B parameters.
The abstract from the paper is the following:
We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.
This model was contributed by zphang with contributions from BlackSamorez. The code of the implementation in Hugging Face is based on GPT-NeoX here. The original code of the authors can be found here.
Usage tips
Weights for the LLaMA models can be obtained by filling out this form
After downloading the weights, they will need to be converted to the Hugging Face Transformers format using the conversion script. The script can be called with the following (example) command:
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
After conversion, the model and tokenizer can be loaded via:
thon
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even if the biggest versions
come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM). For the 65B model, it's thus 130GB of RAM needed.
The LLaMA tokenizer is a BPE model based on sentencepiece. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
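A small sketch of that quirk, reusing the converted checkpoint path from above:
thon
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/output/path")  # path produced by the conversion script above

ids = tokenizer("Banana", add_special_tokens=False)["input_ids"]
# the first token marks a word start, but decoding does not prepend the prefix space
print(tokenizer.decode(ids))  # "Banana"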
The Flax version of the implementation was contributed by afmck with the code in the implementation based on Hugging Face's Flax GPT-Neo.
Based on the original LLaMA model, Meta AI has released some follow-up works:
Llama2: Llama2 is an improved version of Llama with some architectural tweaks (Grouped Query Attention), and is pre-trained on 2 trillion tokens. Refer to the documentation of Llama2 which can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaMA. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A notebook on how to use prompt tuning to adapt the LLaMA model for text classification task. 🌎
StackLLaMA: A hands-on guide to train LLaMA with RLHF, a blog post about how to train LLaMA to answer questions on Stack Exchange with RLHF.
⚗️ Optimization
- A notebook on how to fine-tune LLaMA model using xturing library on GPU which has limited memory. 🌎
⚡️ Inference
- A notebook on how to run the LLaMA Model using PeftModel from the 🤗 PEFT library. 🌎
- A notebook on how to load a PEFT adapter LLaMA model with LangChain. 🌎
🚀 Deploy
- A notebook on how to fine-tune LLaMA model using LoRA method via the 🤗 PEFT library with intuitive UI. 🌎
- A notebook on how to deploy Open-LLaMA model for text generation on Amazon SageMaker. 🌎
LlamaConfig
[[autodoc]] LlamaConfig
LlamaTokenizer
[[autodoc]] LlamaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
LlamaTokenizerFast
[[autodoc]] LlamaTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary
LlamaModel
[[autodoc]] LlamaModel
- forward
LlamaForCausalLM
[[autodoc]] LlamaForCausalLM
- forward
LlamaForSequenceClassification
[[autodoc]] LlamaForSequenceClassification
- forward
LlamaForQuestionAnswering
[[autodoc]] LlamaForQuestionAnswering
- forward
FlaxLlamaModel
[[autodoc]] FlaxLlamaModel
- call
FlaxLlamaForCausalLM
[[autodoc]] FlaxLlamaForCausalLM
- call
XLSR-Wav2Vec2
Overview
The XLSR-Wav2Vec2 model was proposed in Unsupervised Cross-Lingual Representation Learning For Speech Recognition by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael
Auli.
The abstract from the paper is the following:
This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw
waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over
masked latent speech representations and jointly learns a quantization of the latents shared across languages. The
resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly
outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction
of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to
a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong
individual models. Analysis shows that the latent discrete speech representations are shared across languages with
increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing
XLSR-53, a large model pretrained in 53 languages.
The original code can be found here.
Usage tips
XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
XLSR-Wav2Vec2 was trained using connectionist temporal classification (CTC), so the model output has to be
decoded using [Wav2Vec2CTCTokenizer], as shown in the sketch below.
XLSR-Wav2Vec2's architecture is based on the Wav2Vec2 model, so one can refer to Wav2Vec2's documentation page.
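A minimal CTC decoding sketch is shown below; the fine-tuned checkpoint name is an assumption, and any XLSR-Wav2Vec2 model with a CTC head plus its matching processor works the same way:
thon
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# assumed fine-tuned checkpoint; substitute one for your target language
checkpoint = "facebook/wav2vec2-large-xlsr-53-german"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# one second of silence at 16 kHz standing in for a real waveform
waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))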
MVP
Overview
The MVP model was proposed in MVP: Multi-task Supervised Pre-training for Natural Language Generation by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
According to the abstract,
MVP follows a standard Transformer encoder-decoder architecture.
MVP is supervised pre-trained using labeled datasets.
MVP also has task-specific soft prompts to stimulate the model's capacity in performing a certain task.
MVP is specially designed for natural language generation and can be adapted to a wide range of generation tasks, including but not limited to summarization, data-to-text generation, open-ended dialogue system, story generation, question answering, question generation, task-oriented dialogue system, commonsense generation, paraphrase generation, text style transfer, and text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and (extractive) question answering.
This model was contributed by Tianyi Tang. The detailed information and instructions can be found here.
Usage tips
We have released a series of models here, including MVP, MVP with task-specific prompts, and multi-task pre-trained variants.
If you want to use a model without prompts (standard Transformer), you can load it through MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp').
If you want to use a model with task-specific prompts, such as summarization, you can load it through MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp-summarization').
Our model supports lightweight prompt tuning following Prefix-tuning with method set_lightweight_tuning().
Usage examples
For summarization, here is an example of using MVP and MVP with summarization-specific prompts.
thon
from transformers import MvpTokenizer, MvpForConditionalGeneration
tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
model_with_prompt = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-summarization")
inputs = tokenizer(
"Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
return_tensors="pt",
)
generated_ids = model.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Why You Shouldn't Quit Your Job"]
generated_ids = model_with_prompt.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Don't do it if these are your reasons"]
For data-to-text generation, here is an example of using MVP and the multi-task pre-trained variants.
thon
from transformers import MvpTokenizerFast, MvpForConditionalGeneration
tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
model_with_mtl = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
inputs = tokenizer(
"Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
return_tensors="pt",
)
generated_ids = model.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Stan Lee created the character of Iron Man, a fictional superhero appearing in American comic']
generated_ids = model_with_mtl.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.']
For lightweight tuning, i.e., fixing the model and only tuning prompts, you can load MVP with randomly initialized prompts or with task-specific prompts. Our code also supports Prefix-tuning with BART following the original paper.
thon
from transformers import MvpForConditionalGeneration
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp", use_prompt=True)
the number of trainable parameters (full tuning)
sum(p.numel() for p in model.parameters() if p.requires_grad)
468116832
lightweight tuning with randomly initialized prompts
model.set_lightweight_tuning()
the number of trainable parameters (lightweight tuning)
sum(p.numel() for p in model.parameters() if p.requires_grad)
61823328
lightweight tuning with task-specific prompts
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
model.set_lightweight_tuning()
original lightweight Prefix-tuning
model = MvpForConditionalGeneration.from_pretrained("facebook/bart-large", use_prompt=True)
model.set_lightweight_tuning()
Resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Translation task guide
Summarization task guide
MvpConfig
[[autodoc]] MvpConfig
MvpTokenizer
[[autodoc]] MvpTokenizer
MvpTokenizerFast
[[autodoc]] MvpTokenizerFast
MvpModel
[[autodoc]] MvpModel
- forward
MvpForConditionalGeneration
[[autodoc]] MvpForConditionalGeneration
- forward
MvpForSequenceClassification
[[autodoc]] MvpForSequenceClassification
- forward
MvpForQuestionAnswering
[[autodoc]] MvpForQuestionAnswering
- forward
MvpForCausalLM
[[autodoc]] MvpForCausalLM
- forward
MBart and MBart-50
Overview of MBart
The MBart model was presented in Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov Marjan
Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual
corpora in many languages using the BART objective. mBART is one of the first methods for pretraining a complete
sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only
on the encoder, decoder, or reconstructing parts of the text.
This model was contributed by valhalla. The Authors' code can be found here
Training of MBart
MBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for translation tasks. As the
model is multilingual, it expects the sequences in a different format: a special language id token is added in both the
source and target text. The source text format is X [eos, src_lang_code] where X is the source text. The
target text format is [tgt_lang_code] X [eos]. bos is never used.
The regular [~MBartTokenizer.__call__] will encode the source text format passed as the first argument or with the text
keyword, and the target text format passed with the text_target keyword argument.
Supervised training
thon
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
forward pass
model(**inputs)
Generation
While generating the target text set the decoder_start_token_id to the target language id. The following
example shows how to translate English to Romanian using the facebook/mbart-large-en-ro model.
thon
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
article = "UN Chief Says There Is No Military Solution in Syria"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"])
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"Şeful ONU declară că nu există o soluţie militară în Siria"
Overview of MBart-50
MBart-50 was introduced in the Multilingual Translation with Extensible Multilingual Pretraining and Finetuning paper by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav
Chaudhary, Jiatao Gu, Angela Fan. MBart-50 is created using the original mbart-large-cc25 checkpoint by extending
its embedding layers with randomly initialized vectors for an extra set of 25 language tokens and then pretrained on 50
languages.
According to the abstract
Multilingual translation models can be created through multilingual finetuning. Instead of finetuning on one
direction, a pretrained model is finetuned on many directions at the same time. It demonstrates that pretrained models
can be extended to incorporate additional languages without loss of performance. Multilingual finetuning improves on
average 1 BLEU over the strongest baselines (being either multilingual from scratch or bilingual finetuning) while
improving 9.3 BLEU on average over bilingual baselines from scratch.
Training of MBart-50
The text format for MBart-50 is slightly different from mBART. For MBart-50 the language id token is used as a prefix
for both source and target text, i.e. the text format is [lang_code] X [eos], where lang_code is the source
language id for source text and the target language id for target text, with X being the source or target text
respectively.
MBart-50 has its own tokenizer [MBart50Tokenizer].
Supervised training
thon
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
src_text = " UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
model(**model_inputs) # forward pass
Generation
To generate using the mBART-50 multilingual translation models, eos_token_id is used as the
decoder_start_token_id and the target language id is forced as the first generated token. To force the
target language id as the first generated token, pass the forced_bos_token_id parameter to the generate method.
The following example shows how to translate between Hindi to French and Arabic to English using the
facebook/mbart-large-50-many-to-many-mmt checkpoint.
thon
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
translate Hindi to French
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
=> "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria."
translate Arabic to English
tokenizer.src_lang = "ar_AR"
encoded_ar = tokenizer(article_ar, return_tensors="pt")
generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
=> "The Secretary-General of the United Nations says there is no military solution in Syria."
Documentation resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Translation task guide
Summarization task guide
MBartConfig
[[autodoc]] MBartConfig
MBartTokenizer
[[autodoc]] MBartTokenizer
- build_inputs_with_special_tokens
MBartTokenizerFast
[[autodoc]] MBartTokenizerFast
MBart50Tokenizer
[[autodoc]] MBart50Tokenizer
MBart50TokenizerFast
[[autodoc]] MBart50TokenizerFast
MBartModel
[[autodoc]] MBartModel
MBartForConditionalGeneration
[[autodoc]] MBartForConditionalGeneration
MBartForQuestionAnswering
[[autodoc]] MBartForQuestionAnswering
MBartForSequenceClassification
[[autodoc]] MBartForSequenceClassification
MBartForCausalLM
[[autodoc]] MBartForCausalLM
- forward
TFMBartModel
[[autodoc]] TFMBartModel
- call
TFMBartForConditionalGeneration
[[autodoc]] TFMBartForConditionalGeneration
- call
FlaxMBartModel
[[autodoc]] FlaxMBartModel
- call
- encode
- decode
FlaxMBartForConditionalGeneration
[[autodoc]] FlaxMBartForConditionalGeneration
- call
- encode
- decode
FlaxMBartForSequenceClassification
[[autodoc]] FlaxMBartForSequenceClassification
- call
- encode
- decode
FlaxMBartForQuestionAnswering
[[autodoc]] FlaxMBartForQuestionAnswering
- call
- encode
- decode
Whisper
Overview
The Whisper model was proposed in Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
The abstract from the paper is the following:
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zeroshot transfer setting without the need for any finetuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
This model was contributed by Arthur Zucker. The Tensorflow version of this model was contributed by amyeroberts.
The original code can be found here.
Usage tips
The model usually performs well without requiring any finetuning.
The model follows a classic encoder-decoder architecture, which means that it relies on the [~generation.GenerationMixin.generate] function for inference.
One can use [WhisperProcessor] to prepare audio for the model, and to decode the predicted IDs back into text.
To convert the model and the processor, we recommend using the following:
python src/transformers/models/whisper/convert_openai_to_hf.py --checkpoint_path "" --pytorch_dump_folder_path "Arthur/whisper-3" --convert_preprocessor True
The script will automatically determine all necessary parameters from the OpenAI checkpoint. The tiktoken library needs to be installed
to convert the OpenAI tokenizer to the tokenizers version.
Inference
Here is a step-by-step guide to transcribing an audio sample using a pre-trained Whisper model:
thon
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration
Select an audio file and read it:
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[0]["audio"]
waveform = audio_sample["array"]
sampling_rate = audio_sample["sampling_rate"]
Load the Whisper model in Hugging Face format:
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
Use the model and processor to transcribe the audio:
input_features = processor(
waveform, sampling_rate=sampling_rate, return_tensors="pt"
).input_features
Generate token ids
predicted_ids = model.generate(input_features)
Decode token ids to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
transcription[0]
' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.'
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Whisper. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A fork with a script to convert a Whisper model in Hugging Face format to OpenAI format. 🌎
Usage example:
pip install -U openai-whisper
python convert_hf_to_openai.py \
--checkpoint openai/whisper-tiny \
--whisper_dump_path whisper-tiny-openai.pt
WhisperConfig
[[autodoc]] WhisperConfig
WhisperTokenizer
[[autodoc]] WhisperTokenizer
- set_prefix_tokens
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
- batch_decode
- decode
- basic_normalize
- normalize
WhisperTokenizerFast
[[autodoc]] WhisperTokenizerFast
- set_prefix_tokens
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
- batch_decode
- decode
- basic_normalize
- normalize
WhisperFeatureExtractor
[[autodoc]] WhisperFeatureExtractor
- call
WhisperProcessor
[[autodoc]] WhisperProcessor
- call
- from_pretrained
- save_pretrained
- batch_decode
- decode
WhisperModel
[[autodoc]] WhisperModel
- forward
- _mask_input_features
WhisperForConditionalGeneration
[[autodoc]] WhisperForConditionalGeneration
- forward
- generate
WhisperForCausalLM
[[autodoc]] WhisperForCausalLM
- forward
WhisperForAudioClassification
[[autodoc]] WhisperForAudioClassification
- forward
TFWhisperModel
[[autodoc]] TFWhisperModel
- call
TFWhisperForConditionalGeneration
[[autodoc]] TFWhisperForConditionalGeneration
- call
FlaxWhisperModel
[[autodoc]] FlaxWhisperModel
- call
FlaxWhisperForConditionalGeneration
[[autodoc]] FlaxWhisperForConditionalGeneration
- call
FlaxWhisperForAudioClassification
[[autodoc]] FlaxWhisperForAudioClassification
- call
LayoutLMv3
Overview
The LayoutLMv3 model was proposed in LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
LayoutLMv3 simplifies LayoutLMv2 by using patch embeddings (as in ViT) instead of leveraging a CNN backbone, and pre-trains the model on 3 objectives: masked language modeling (MLM), masked image modeling (MIM)
and word-patch alignment (WPA).
The abstract from the paper is the following:
Self-supervised pre-training techniques have achieved remarkable progress in Document AI. Most multimodal pre-trained models use a masked language modeling objective to learn bidirectional representations on the text modality, but they differ in pre-training objectives for the image modality. This discrepancy adds difficulty to multimodal representation learning. In this paper, we propose LayoutLMv3 to pre-train multimodal Transformers for Document AI with unified text and image masking. Additionally, LayoutLMv3 is pre-trained with a word-patch alignment objective to learn cross-modal alignment by predicting whether the corresponding image patch of a text word is masked. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model for both text-centric and image-centric Document AI tasks. Experimental results show that LayoutLMv3 achieves state-of-the-art performance not only in text-centric tasks, including form understanding, receipt understanding, and document visual question answering, but also in image-centric tasks such as document image classification and document layout analysis.
LayoutLMv3 architecture. Taken from the original paper.
This model was contributed by nielsr. The TensorFlow version of this model was added by chriskoo, tokec, and lre. The original code can be found here.
Usage tips
In terms of data processing, LayoutLMv3 is identical to its predecessor LayoutLMv2, except that:
images need to be resized and normalized with channels in regular RGB format. LayoutLMv2 on the other hand normalizes the images internally and expects the channels in BGR format.
text is tokenized using byte-pair encoding (BPE), as opposed to WordPiece.
Due to these differences in data preprocessing, one can use [LayoutLMv3Processor] which internally combines a [LayoutLMv3ImageProcessor] (for the image modality) and a [LayoutLMv3Tokenizer]/[LayoutLMv3TokenizerFast] (for the text modality) to prepare all data for the model.
Regarding usage of [LayoutLMv3Processor], we refer to the usage guide of its predecessor.
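As a rough sketch of how the processor ties the two modalities together, the snippet below feeds a blank placeholder page together with hand-written words and normalized bounding boxes into [LayoutLMv3ForTokenClassification]; the apply_ocr=False setting, the label count and the box values are illustrative assumptions.
thon
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# apply_ocr=False because we supply the words and boxes ourselves
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
# num_labels is an arbitrary illustrative choice; the classification head is randomly initialized
model = LayoutLMv3ForTokenClassification.from_pretrained("microsoft/layoutlmv3-base", num_labels=7)

# placeholder white page standing in for a scanned document
image = Image.new("RGB", (224, 224), color="white")
# words with bounding boxes normalized to a 0-1000 scale, as an OCR engine would provide
words = ["Invoice", "Total:", "42.00"]
boxes = [[10, 10, 120, 40], [10, 60, 90, 90], [100, 60, 180, 90]]

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
print(outputs.logits.shape)  # (batch_size, sequence_length, num_labels)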
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv3. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
LayoutLMv3 is nearly identical to LayoutLMv2, so we've also included LayoutLMv2 resources you can adapt for LayoutLMv3 tasks. For these notebooks, take care to use [LayoutLMv2Processor] instead when preparing data for the model!
Demo notebooks for LayoutLMv3 can be found here.
Demo scripts can be found here.
[LayoutLMv2ForSequenceClassification] is supported by this notebook.
Text classification task guide
[LayoutLMv3ForTokenClassification] is supported by this example script and notebook.
A notebook for how to perform inference with [LayoutLMv2ForTokenClassification] and a notebook for how to perform inference when no labels are available with [LayoutLMv2ForTokenClassification].
A notebook for how to finetune [LayoutLMv2ForTokenClassification] with the 🤗 Trainer.
Token classification task guide
[LayoutLMv2ForQuestionAnswering] is supported by this notebook.
Question answering task guide
Document question answering
- Document question answering task guide
LayoutLMv3Config
[[autodoc]] LayoutLMv3Config
LayoutLMv3FeatureExtractor
[[autodoc]] LayoutLMv3FeatureExtractor
- call
LayoutLMv3ImageProcessor
[[autodoc]] LayoutLMv3ImageProcessor
- preprocess
LayoutLMv3Tokenizer
[[autodoc]] LayoutLMv3Tokenizer
- call
- save_vocabulary
LayoutLMv3TokenizerFast
[[autodoc]] LayoutLMv3TokenizerFast
- call
LayoutLMv3Processor
[[autodoc]] LayoutLMv3Processor
- call
LayoutLMv3Model
[[autodoc]] LayoutLMv3Model
- forward
LayoutLMv3ForSequenceClassification
[[autodoc]] LayoutLMv3ForSequenceClassification
- forward
LayoutLMv3ForTokenClassification
[[autodoc]] LayoutLMv3ForTokenClassification
- forward
LayoutLMv3ForQuestionAnswering
[[autodoc]] LayoutLMv3ForQuestionAnswering
- forward
TFLayoutLMv3Model
[[autodoc]] TFLayoutLMv3Model
- call
TFLayoutLMv3ForSequenceClassification
[[autodoc]] TFLayoutLMv3ForSequenceClassification
- call
TFLayoutLMv3ForTokenClassification
[[autodoc]] TFLayoutLMv3ForTokenClassification
- call
TFLayoutLMv3ForQuestionAnswering
[[autodoc]] TFLayoutLMv3ForQuestionAnswering
- call
Deformable DETR
Overview
The Deformable DETR model was proposed in Deformable DETR: Deformable Transformers for End-to-End Object Detection by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
Deformable DETR mitigates the slow convergence issues and limited feature spatial resolution of the original DETR by leveraging a new deformable attention module which only attends to a small set of key sampling points around a reference.
The abstract from the paper is the following:
DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we proposed Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10 times less training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach.
Deformable DETR architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Usage tips
Training Deformable DETR is equivalent to training the original DETR model. See the resources section below for demo notebooks.
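For inference, the flow also mirrors DETR: preprocess the image, run [DeformableDetrForObjectDetection], and convert the raw outputs with post_process_object_detection. A minimal sketch (the threshold and checkpoint choice are illustrative):
thon
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection

image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# keep detections above the confidence threshold and rescale boxes to the original image size
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), [round(c, 1) for c in box.tolist()])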
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Deformable DETR.
Demo notebooks regarding inference + fine-tuning on a custom dataset for [DeformableDetrForObjectDetection] can be found here.
See also: Object detection task guide.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DeformableDetrImageProcessor
[[autodoc]] DeformableDetrImageProcessor
- preprocess
- post_process_object_detection
DeformableDetrFeatureExtractor
[[autodoc]] DeformableDetrFeatureExtractor
- call
- post_process_object_detection
DeformableDetrConfig
[[autodoc]] DeformableDetrConfig
DeformableDetrModel
[[autodoc]] DeformableDetrModel
- forward
DeformableDetrForObjectDetection
[[autodoc]] DeformableDetrForObjectDetection
- forward
BLIP
Overview
The BLIP model was proposed in BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
BLIP is a model that is able to perform various multi-modal tasks including:
- Visual Question Answering
- Image-Text retrieval (Image-text matching)
- Image Captioning
The abstract from the paper is the following:
Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks.
However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.
This model was contributed by ybelkada.
The original code can be found here.
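Below is a minimal image-captioning sketch; the captioning checkpoint is one reasonable choice among the released BLIP variants, and visual question answering or conditional captioning follow the same pattern with a text prompt passed to the processor.
thon
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# unconditional captioning: only the image is passed to the processor
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(generated_ids[0], skip_special_tokens=True))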
Resources
Jupyter notebook on how to fine-tune BLIP for image captioning on a custom dataset
BlipConfig
[[autodoc]] BlipConfig
- from_text_vision_configs
BlipTextConfig
[[autodoc]] BlipTextConfig
BlipVisionConfig
[[autodoc]] BlipVisionConfig
BlipProcessor
[[autodoc]] BlipProcessor
BlipImageProcessor
[[autodoc]] BlipImageProcessor
- preprocess
BlipModel
[[autodoc]] BlipModel
- forward
- get_text_features
- get_image_features
BlipTextModel
[[autodoc]] BlipTextModel
- forward
BlipVisionModel
[[autodoc]] BlipVisionModel
- forward
BlipForConditionalGeneration
[[autodoc]] BlipForConditionalGeneration
- forward
BlipForImageTextRetrieval
[[autodoc]] BlipForImageTextRetrieval
- forward
BlipForQuestionAnswering
[[autodoc]] BlipForQuestionAnswering
- forward
TFBlipModel
[[autodoc]] TFBlipModel
- call
- get_text_features
- get_image_features
TFBlipTextModel
[[autodoc]] TFBlipTextModel
- call
TFBlipVisionModel
[[autodoc]] TFBlipVisionModel
- call
TFBlipForConditionalGeneration
[[autodoc]] TFBlipForConditionalGeneration
- call
TFBlipForImageTextRetrieval
[[autodoc]] TFBlipForImageTextRetrieval
- call
TFBlipForQuestionAnswering
[[autodoc]] TFBlipForQuestionAnswering
- call
Persimmon
Overview
The Persimmon model was created by ADEPT, and authored by Erich Elsen, Augustus Odena, Maxwell Nye, Sağnak Taşırlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, Arushi Somani.
The authors introduced Persimmon-8B, a decoder model based on the classic transformers architecture, with query and key normalization. Persimmon-8B is a fully permissively-licensed model with approximately 8 billion parameters, released under the Apache license. Some of the key attributes of Persimmon-8B are long context size (16K), performance, and capabilities for multimodal extensions.
The authors showcase their approach to model evaluation, focusing on practical text generation, mirroring how users interact with language models. The work also includes a comparative analysis, pitting Persimmon-8B against other prominent models (MPT 7B Instruct and Llama 2 Base 7B 1-Shot), across various evaluation tasks. The results demonstrate Persimmon-8B's competitive performance, even with limited training data.
In terms of model details, the work outlines the architecture and training methodology of Persimmon-8B, providing insights into its design choices, sequence length, and dataset composition. The authors present a fast inference code that outperforms traditional implementations through operator fusion and CUDA graph utilization while maintaining code coherence. They express their anticipation of how the community will leverage this contribution to drive innovation, hinting at further upcoming releases as part of an ongoing series of developments.
This model was contributed by ArthurZ.
The original code can be found here.
Usage tips
The Persimmon models were trained using bfloat16, but the original inference uses float16. The checkpoints uploaded on the hub use torch_dtype = 'float16', which will be
used by the AutoModel API to cast the checkpoints from torch.float32 to torch.float16.
The dtype of the online weights is mostly irrelevant unless you are using torch_dtype="auto" when initializing a model with model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto"). The reason is that the model will first be downloaded (using the dtype of the checkpoints online) and then cast to the default dtype of torch (torch.float32). Users should specify the torch_dtype they want; if they don't, it will be torch.float32.
Finetuning the model in float16 is not recommended and known to produce nan, as such the model should be fine-tuned in bfloat16.
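As a small sketch of the recommendation above, the weights can be loaded directly in bfloat16 by passing an explicit torch_dtype; the checkpoint name is an assumption, so substitute whichever converted Persimmon checkpoint you use.
thon
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# checkpoint name is an assumption; point this at your converted or hub-hosted Persimmon weights
checkpoint = "adept/persimmon-8b-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# load directly in bfloat16 (the training dtype) instead of the torch.float32 default
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)
print(model.dtype)  # torch.bfloat16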
Tips:
To convert the model, you need to clone the original repository using git clone https://github.com/persimmon-ai-labs/adept-inference, then get the checkpoints:
git clone https://github.com/persimmon-ai-labs/adept-inference
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_base_model_release.tar
tar -xvf 8b_base_model_release.tar
python src/transformers/models/persimmon/convert_persimmon_weights_to_hf.py --input_dir /path/to/downloaded/persimmon/weights/ --output_dir /output/path \
--pt_model_path /path/to/8b_chat_model_release/iter_0001251/mp_rank_00/model_optim_rng.pt \
--ada_lib_path /path/to/adept-inference
For the chat model:
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_chat_model_release.tar
tar -xvf 8b_chat_model_release.tar
Thereafter, models can be loaded via:
from transformers import PersimmonForCausalLM, PersimmonTokenizer
model = PersimmonForCausalLM.from_pretrained("/output/path")
tokenizer = PersimmonTokenizer.from_pretrained("/output/path")
Persimmon uses a sentencepiece-based tokenizer with a Unigram model. It supports byte fallback, which is only available in tokenizers==0.14.0 for the fast tokenizer.
The LlamaTokenizer is used as it is a standard wrapper around sentencepiece. The chat template will be updated with the templating functions in a follow-up PR!
The authors suggest using the following prompt format for the chat mode: f"human: {prompt}\n\nadept:"
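A minimal sketch of that chat format in use (the chat checkpoint name and generation settings are assumptions):
thon
from transformers import AutoModelForCausalLM, AutoTokenizer

# checkpoint name is an assumption; use the chat weights you converted with the script above
checkpoint = "adept/persimmon-8b-chat"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "Why is the sky blue?"
# wrap the user message in the suggested chat format
text = f"human: {prompt}\n\nadept:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))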
PersimmonConfig
[[autodoc]] PersimmonConfig
PersimmonModel
[[autodoc]] PersimmonModel
- forward
PersimmonForCausalLM
[[autodoc]] PersimmonForCausalLM
- forward
PersimmonForSequenceClassification
[[autodoc]] PersimmonForSequenceClassification
- forward
DePlot
Overview
DePlot was proposed in the paper DePlot: One-shot visual language reasoning by plot-to-table translation from Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.
The abstract of the paper states the following:
Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.
DePlot is a model that is trained using Pix2Struct architecture. You can find more information about Pix2Struct in the Pix2Struct documentation.
DePlot is a Visual Question Answering subset of Pix2Struct architecture. It renders the input question on the image and predicts the answer.
Usage example
Currently one checkpoint is available for DePlot:
google/deplot: DePlot fine-tuned on ChartQA dataset
thon
from transformers import AutoProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")
processor = AutoProcessor.from_pretrained("google/deplot")
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt")
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
Fine-tuning
To fine-tune DePlot, refer to the pix2struct fine-tuning notebook. For Pix2Struct models, we have found that fine-tuning the model with Adafactor and a cosine learning rate scheduler leads to faster convergence:
thon
from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup
optimizer = Adafactor(self.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
DePlot is a model trained using Pix2Struct architecture. For API reference, see Pix2Struct documentation.
MaskFormer
This is a recently introduced model, so the API hasn't been tested extensively. There may be some bugs or slight
breaking changes in the future. If you see something strange, file a GitHub Issue.
Overview
The MaskFormer model was proposed in Per-Pixel Classification is Not All You Need for Semantic Segmentation by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. MaskFormer addresses semantic segmentation with a mask classification paradigm instead of performing classic pixel-level classification.
The abstract from the paper is the following:
Modern approaches typically formulate semantic segmentation as a per-pixel classification task, while instance-level segmentation is handled with an alternative mask classification. Our key insight: mask classification is sufficiently general to solve both semantic- and instance-level segmentation tasks in a unified manner using the exact same model, loss, and training procedure. Following this observation, we propose MaskFormer, a simple mask classification model which predicts a set of binary masks, each associated with a single global class label prediction. Overall, the proposed mask classification-based method simplifies the landscape of effective approaches to semantic and panoptic segmentation tasks and shows excellent empirical results. In particular, we observe that MaskFormer outperforms per-pixel classification baselines when the number of classes is large. Our mask classification-based method outperforms both current state-of-the-art semantic (55.6 mIoU on ADE20K) and panoptic segmentation (52.7 PQ on COCO) models.
The figure below illustrates the architecture of MaskFormer. Taken from the original paper.
This model was contributed by francesco. The original code can be found here.
Usage tips
MaskFormer's Transformer decoder is identical to the decoder of DETR. During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help the model output the correct number of objects of each class. If you set the parameter use_auxiliary_loss of [MaskFormerConfig] to True, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters).
If you want to train the model in a distributed environment across multiple nodes, you should update the
get_num_masks function inside the MaskFormerLoss class of modeling_maskformer.py. When training on multiple nodes, it should be
set to the average number of target masks across all nodes, as can be seen in the original implementation here.
One can use [MaskFormerImageProcessor] to prepare images for the model and optional targets for the model.
To get the final segmentation, depending on the task, you can call [~MaskFormerImageProcessor.post_process_semantic_segmentation] or [~MaskFormerImageProcessor.post_process_panoptic_segmentation]. Both tasks can be solved using the [MaskFormerForInstanceSegmentation] output; panoptic segmentation accepts an optional label_ids_to_fuse argument to fuse instances of the target object(s) (e.g. sky) together.
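Below is a minimal semantic segmentation sketch; the ADE20k checkpoint is one reasonable choice, and panoptic post-processing works the same way with [~MaskFormerImageProcessor.post_process_panoptic_segmentation].
thon
import torch
import requests
from PIL import Image
from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation

image_processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# rescale and merge the predicted binary masks into a per-pixel semantic map
semantic_map = image_processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(semantic_map.shape)  # (height, width), each entry is a semantic class id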
Resources
All notebooks that illustrate inference as well as fine-tuning on custom data with MaskFormer can be found here.
MaskFormer specific outputs
[[autodoc]] models.maskformer.modeling_maskformer.MaskFormerModelOutput
[[autodoc]] models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput
MaskFormerConfig
[[autodoc]] MaskFormerConfig
MaskFormerImageProcessor
[[autodoc]] MaskFormerImageProcessor
- preprocess
- encode_inputs
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation
MaskFormerFeatureExtractor
[[autodoc]] MaskFormerFeatureExtractor
- call
- encode_inputs
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation
MaskFormerModel
[[autodoc]] MaskFormerModel
- forward
MaskFormerForInstanceSegmentation
[[autodoc]] MaskFormerForInstanceSegmentation
- forward
KOSMOS-2
Overview
The KOSMOS-2 model was proposed in Kosmos-2: Grounding Multimodal Large Language Models to the World by Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei.
KOSMOS-2 is a Transformer-based causal language model and is trained using the next-word prediction task on a web-scale
dataset of grounded image-text pairs GRIT. The spatial coordinates of
the bounding boxes in the dataset are converted to a sequence of location tokens, which are appended to their respective
entity text spans (for example, a snowman followed by <patch_index_0044><patch_index_0863>). The data format is
similar to “hyperlinks” that connect the object regions in an image to their text span in the corresponding caption.
The abstract from the paper is the following:
We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``text span'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.
Overview of tasks that KOSMOS-2 can handle. Taken from the original paper.
Example
thon
from PIL import Image
import requests
from transformers import AutoProcessor, Kosmos2ForConditionalGeneration
model = Kosmos2ForConditionalGeneration.from_pretrained("microsoft/kosmos-2-patch14-224")
processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = " An image of"
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
pixel_values=inputs["pixel_values"],
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
image_embeds=None,
image_embeds_position_mask=inputs["image_embeds_position_mask"],
use_cache=True,
max_new_tokens=64,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
processed_text = processor.post_process_generation(generated_text, cleanup_and_extract=False)
processed_text
' An image of a snowman warming himself by a fire.'
caption, entities = processor.post_process_generation(generated_text)
caption
'An image of a snowman warming himself by a fire.'
entities
[('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a fire', (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)])]
This model was contributed by Yih-Dar SHIEH. The original code can be found here.
Kosmos2Config
[[autodoc]] Kosmos2Config
Kosmos2ImageProcessor
Kosmos2Processor
[[autodoc]] Kosmos2Processor
- call
Kosmos2Model
[[autodoc]] Kosmos2Model
- forward
Kosmos2ForConditionalGeneration
[[autodoc]] Kosmos2ForConditionalGeneration
- forward
Wav2Vec2
Overview
The Wav2Vec2 model was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
The abstract from the paper is the following:
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on
transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks
the speech input in the latent space and solves a contrastive task defined over a quantization of the latent
representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the
clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state
of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and
pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech
recognition with limited amounts of labeled data.
This model was contributed by patrickvonplaten.
Usage tips
Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The Wav2Vec2 model was trained using connectionist temporal classification (CTC), so the model output has to be decoded
using [Wav2Vec2CTCTokenizer].
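The two tips above translate into the following minimal transcription sketch, using greedy CTC decoding and the base English checkpoint as an illustrative choice:
thon
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# load a 16 kHz example utterance
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]

# the processor turns the raw float waveform into padded input values
inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding: argmax over the vocabulary, then collapse blanks and repeats
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])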
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Wav2Vec2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A notebook on how to leverage a pretrained Wav2Vec2 model for emotion classification. 🌎
[Wav2Vec2ForCTC] is supported by this example script and notebook.
Audio classification task guide
A blog post on boosting Wav2Vec2 with n-grams in 🤗 Transformers.
A blog post on how to finetune Wav2Vec2 for English ASR with 🤗 Transformers.
A blog post on finetuning XLS-R for Multi-Lingual ASR with 🤗 Transformers.
A notebook on how to create YouTube captions from any video by transcribing audio with Wav2Vec2. 🌎
[Wav2Vec2ForCTC] is supported by a notebook on how to finetune a speech recognition model in English, and how to finetune a speech recognition model in any language.
Automatic speech recognition task guide
🚀 Deploy
A blog post on how to deploy Wav2Vec2 for Automatic Speech Recognition with Hugging Face's Transformers & Amazon SageMaker.
Wav2Vec2Config
[[autodoc]] Wav2Vec2Config
Wav2Vec2CTCTokenizer
[[autodoc]] Wav2Vec2CTCTokenizer
- call
- save_vocabulary
- decode
- batch_decode
- set_target_lang
Wav2Vec2FeatureExtractor
[[autodoc]] Wav2Vec2FeatureExtractor
- call
Wav2Vec2Processor
[[autodoc]] Wav2Vec2Processor
- call
- pad
- from_pretrained
- save_pretrained
- batch_decode
- decode
Wav2Vec2ProcessorWithLM
[[autodoc]] Wav2Vec2ProcessorWithLM
- call
- pad
- from_pretrained
- save_pretrained
- batch_decode
- decode
Decoding multiple audios
If you are planning to decode multiple batches of audios, you should consider using [~Wav2Vec2ProcessorWithLM.batch_decode] and passing an instantiated multiprocessing.Pool.
Otherwise, [~Wav2Vec2ProcessorWithLM.batch_decode] performance will be slower than calling [~Wav2Vec2ProcessorWithLM.decode] for each audio individually, as it internally instantiates a new Pool for every call. See the example below:
thon
Let's see how to use a user-managed pool for batch decoding multiple audios
from multiprocessing import get_context
from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC
from datasets import load_dataset
import datasets
import torch
import model, feature extractor, tokenizer
model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm").to("cuda")
processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
load example dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000))
def map_to_array(batch):
batch["speech"] = batch["audio"]["array"]
return batch
prepare speech data for batch inference
dataset = dataset.map(map_to_array, remove_columns=["audio"])
def map_to_pred(batch, pool):
inputs = processor(batch["speech"], sampling_rate=16_000, padding=True, return_tensors="pt")
inputs = {k: v.to("cuda") for k, v in inputs.items()}
with torch.no_grad():
logits = model(**inputs).logits
transcription = processor.batch_decode(logits.cpu().numpy(), pool).text
batch["transcription"] = transcription
return batch
note: pool should be instantiated after Wav2Vec2ProcessorWithLM.
otherwise, the LM won't be available to the pool's sub-processes
select number of processes and batch_size based on number of CPU cores available and on dataset size
with get_context("fork").Pool(processes=2) as pool:
result = dataset.map(
map_to_pred, batched=True, batch_size=2, fn_kwargs={"pool": pool}, remove_columns=["speech"]
)
result["transcription"][:2]
['MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', "NOR IS MISTER COULTER'S MANNER LESS INTERESTING THAN HIS MATTER"]
Wav2Vec2 specific outputs
[[autodoc]] models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput
[[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2BaseModelOutput
[[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput
[[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput
[[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput
Wav2Vec2Model
[[autodoc]] Wav2Vec2Model
- forward
Wav2Vec2ForCTC
[[autodoc]] Wav2Vec2ForCTC
- forward
- load_adapter
Wav2Vec2ForSequenceClassification
[[autodoc]] Wav2Vec2ForSequenceClassification
- forward
Wav2Vec2ForAudioFrameClassification
[[autodoc]] Wav2Vec2ForAudioFrameClassification
- forward
Wav2Vec2ForXVector
[[autodoc]] Wav2Vec2ForXVector
- forward
Wav2Vec2ForPreTraining
[[autodoc]] Wav2Vec2ForPreTraining
- forward
TFWav2Vec2Model
[[autodoc]] TFWav2Vec2Model
- call
TFWav2Vec2ForSequenceClassification
[[autodoc]] TFWav2Vec2ForSequenceClassification
- call
TFWav2Vec2ForCTC
[[autodoc]] TFWav2Vec2ForCTC
- call
FlaxWav2Vec2Model
[[autodoc]] FlaxWav2Vec2Model
- call
FlaxWav2Vec2ForCTC
[[autodoc]] FlaxWav2Vec2ForCTC
- call
FlaxWav2Vec2ForPreTraining
[[autodoc]] FlaxWav2Vec2ForPreTraining
- call
GPT-Sw3
Overview
The GPT-Sw3 model was first proposed in
Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish
by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman,
Fredrik Carlsson, Magnus Sahlgren.
Since that first paper, the authors have extended their work and trained new models on their new 1.2TB corpus named The Nordic Pile.
GPT-Sw3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden
in collaboration with RISE and the WASP WARA for Media and Language. GPT-Sw3 has been trained on a dataset containing
320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a
causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation.
This model was contributed by AI Sweden Models.
Usage example
thon
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("AI-Sweden-Models/gpt-sw3-356m")
model = AutoModelForCausalLM.from_pretrained("AI-Sweden-Models/gpt-sw3-356m")
input_ids = tokenizer("Träd är fina för att", return_tensors="pt")["input_ids"]
generated_token_ids = model.generate(inputs=input_ids, max_new_tokens=10, do_sample=True)[0]
print(tokenizer.decode(generated_token_ids))
Träd är fina för att de är färgstarka. Men ibland är det fint
Resources
Text classification task guide
Token classification task guide
Causal language modeling task guide
The implementation uses the GPT2Model coupled with our GPTSw3Tokenizer. Refer to GPT2Model documentation
for API reference and examples.
Note that sentencepiece is required to use our tokenizer and can be installed with pip install transformers[sentencepiece] or pip install sentencepiece
GPTSw3Tokenizer
[[autodoc]] GPTSw3Tokenizer
- save_vocabulary
Video Vision Transformer (ViViT)
Overview
The Vivit model was proposed in ViViT: A Video Vision Transformer by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid.
The paper proposes one of the first successful sets of pure-transformer based models for video understanding.
The abstract from the paper is the following:
We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several, efficient variants of our model which factorise the spatial- and temporal-dimensions of the input. Although transformer-based models are known to only be effective when large training datasets are available, we show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple video classification benchmarks including Kinetics 400 and 600, Epic Kitchens, Something-Something v2 and Moments in Time, outperforming prior methods based on deep 3D convolutional networks.
This model was contributed by jegormeister. The original code (written in JAX) can be found here.
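Below is a minimal video classification sketch; the Kinetics-400 checkpoint and the silent 32-frame placeholder clip are illustrative assumptions, and a real pipeline would sample frames from a decoded video instead.
thon
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

image_processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
model = VivitForVideoClassification.from_pretrained("google/vivit-b-16x2-kinetics400")

# placeholder clip: 32 black RGB frames standing in for frames sampled from a real video
video = list(np.zeros((32, 224, 224, 3), dtype=np.uint8))

inputs = image_processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])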
VivitConfig
[[autodoc]] VivitConfig
VivitImageProcessor
[[autodoc]] VivitImageProcessor
- preprocess
VivitModel
[[autodoc]] VivitModel
- forward
VivitForVideoClassification
[[autodoc]] transformers.VivitForVideoClassification
- forward
ResNet
Overview
The ResNet model was proposed in Deep Residual Learning for Image Recognition by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun. Our implementation follows the small changes made by Nvidia: we apply stride=2 for downsampling in the bottleneck's 3x3 conv and not in the first 1x1. This is generally known as "ResNet v1.5".
ResNet introduced residual connections, which allow training networks with a previously unseen number of layers (up to 1000). ResNet won the 2015 ILSVRC & COCO competition, an important milestone in deep computer vision.
The abstract from the paper is the following:
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
The figure below illustrates the architecture of ResNet. Taken from the original paper.
This model was contributed by Francesco. The TensorFlow version of this model was added by amyeroberts. The original code can be found here.
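A minimal image classification sketch using the ImageNet-pretrained ResNet-50 checkpoint looks as follows:
thon
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ResNetForImageClassification

image_processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-50")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# map the highest-scoring logit back to an ImageNet label
print(model.config.id2label[logits.argmax(-1).item()])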
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ResNet.
[ResNetForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ResNetConfig
[[autodoc]] ResNetConfig
ResNetModel
[[autodoc]] ResNetModel
- forward
ResNetForImageClassification
[[autodoc]] ResNetForImageClassification
- forward
TFResNetModel
[[autodoc]] TFResNetModel
- call
TFResNetForImageClassification
[[autodoc]] TFResNetForImageClassification
- call
FlaxResNetModel
[[autodoc]] FlaxResNetModel
- call
FlaxResNetForImageClassification
[[autodoc]] FlaxResNetForImageClassification
- call
VAN
This model is in maintenance mode only, we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The VAN model was proposed in Visual Attention Network by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations.
The abstract from the paper is the following:
While originally designed for natural language processing tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple, VAN outperforms the state-of-the-art vision transformers and convolutional neural networks with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc. Code is available at this https URL.
Tips:
VAN does not have an embedding layer, thus the hidden_states will have a length equal to the number of stages.
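A short sketch illustrating this tip; the checkpoint name is an assumption, and since the model is in maintenance mode transformers v4.30.0 may be needed to run it.
thon
import torch
from PIL import Image
from transformers import AutoImageProcessor, VanModel

# checkpoint name is an assumption; any released VAN checkpoint behaves the same way
image_processor = AutoImageProcessor.from_pretrained("Visual-Attention-Network/van-base")
model = VanModel.from_pretrained("Visual-Attention-Network/van-base")

image = Image.new("RGB", (224, 224))  # placeholder image
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# no embedding layer, so there is exactly one hidden state per stage
print(len(outputs.hidden_states))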
The figure below illustrates the architecture of a Visual Attention Layer. Taken from the original paper.
This model was contributed by Francesco. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VAN.
[VanForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
VanConfig
[[autodoc]] VanConfig
VanModel
[[autodoc]] VanModel
- forward
VanForImageClassification
[[autodoc]] VanForImageClassification
- forward
FlauBERT
Overview
The FlauBERT model was proposed in the paper FlauBERT: Unsupervised Language Model Pre-training for French by Hang Le et al. It's a transformer model pretrained using a masked language
modeling (MLM) objective (like BERT).
The abstract from the paper is the following:
Language models have become a key step to achieve state-of-the art results in many different Natural Language
Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way
to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their
contextualization at the sentence level. This has been widely demonstrated for English using contextualized
representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al.,
2019; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and
heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for
Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text
classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the
time they outperform other pretraining approaches. Different versions of FlauBERT as well as a unified evaluation
protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research
community for further reproducible experiments in French NLP.
This model was contributed by formiel. The original code can be found here.
Tips:
- Like RoBERTa, without the sentence ordering prediction (so just trained on the MLM objective).
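As a quick illustration of the MLM objective, a minimal fill-mask sketch (assuming the flaubert/flaubert_base_cased checkpoint) could look like this:
thon
from transformers import pipeline

# checkpoint name assumed for illustration; any FlauBERT checkpoint with an MLM head should work
unmasker = pipeline("fill-mask", model="flaubert/flaubert_base_cased")

# the pipeline exposes the tokenizer, so we can reuse its mask token in the prompt
print(unmasker(f"Le camembert est un fromage {unmasker.tokenizer.mask_token}."))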
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
FlaubertConfig
[[autodoc]] FlaubertConfig
FlaubertTokenizer
[[autodoc]] FlaubertTokenizer
FlaubertModel
[[autodoc]] FlaubertModel
- forward
FlaubertWithLMHeadModel
[[autodoc]] FlaubertWithLMHeadModel
- forward
FlaubertForSequenceClassification
[[autodoc]] FlaubertForSequenceClassification
- forward
FlaubertForMultipleChoice
[[autodoc]] FlaubertForMultipleChoice
- forward
FlaubertForTokenClassification
[[autodoc]] FlaubertForTokenClassification
- forward
FlaubertForQuestionAnsweringSimple
[[autodoc]] FlaubertForQuestionAnsweringSimple
- forward
FlaubertForQuestionAnswering
[[autodoc]] FlaubertForQuestionAnswering
- forward
TFFlaubertModel
[[autodoc]] TFFlaubertModel
- call
TFFlaubertWithLMHeadModel
[[autodoc]] TFFlaubertWithLMHeadModel
- call
TFFlaubertForSequenceClassification
[[autodoc]] TFFlaubertForSequenceClassification
- call
TFFlaubertForMultipleChoice
[[autodoc]] TFFlaubertForMultipleChoice
- call
TFFlaubertForTokenClassification
[[autodoc]] TFFlaubertForTokenClassification
- call
TFFlaubertForQuestionAnsweringSimple
[[autodoc]] TFFlaubertForQuestionAnsweringSimple
- call
TVP
Overview
The text-visual prompting (TVP) framework was proposed in the paper Text-Visual Prompting for Efficient 2D Temporal Video Grounding by Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding.
The abstract from the paper is the following:
In this paper, we study the problem of temporal video grounding (TVG), which aims to predict the starting/ending time points of moments described by a text sentence within a long untrimmed video. Benefiting from fine-grained 3D visual features, the TVG techniques have achieved remarkable progress in recent years. However, the high complexity of 3D convolutional neural networks (CNNs) makes extracting dense 3D visual features time-consuming, which calls for intensive memory and computing resources. Towards efficient TVG, we propose a novel text-visual prompting (TVP) framework, which incorporates optimized perturbation patterns (that we call ‘prompts’) into both visual inputs and textual features of a TVG model. In sharp contrast to 3D CNNs, we show that TVP allows us to effectively co-train vision encoder and language encoder in a 2D TVG model and improves the performance of cross-modal feature fusion using only low-complexity sparse 2D visual features. Further, we propose a Temporal-Distance IoU (TDIoU) loss for efficient learning of TVG. Experiments on two benchmark datasets, Charades-STA and ActivityNet Captions datasets, empirically show that the proposed TVP significantly boosts the performance of 2D TVG (e.g., 9.79% improvement on Charades-STA and 30.77% improvement on ActivityNet Captions) and achieves 5× inference acceleration over TVG using 3D visual features.
This research addresses temporal video grounding (TVG), which is the process of pinpointing the start and end times of specific events in a long video, as described by a text sentence. Text-visual prompting (TVP) is proposed to enhance TVG. TVP involves integrating specially designed patterns, known as 'prompts', into both the visual (image-based) and textual (word-based) input components of a TVG model. These prompts provide additional spatial-temporal context, improving the model's ability to accurately determine event timings in the video. The approach employs 2D visual inputs in place of 3D ones. Although 3D inputs offer more spatial-temporal detail, they are also more time-consuming to process. The use of 2D inputs with the prompting method aims to provide similar levels of context and accuracy more efficiently.
TVP architecture. Taken from the original paper.
This model was contributed by Jiqing Feng. The original code can be found here.
Usage tips and examples
Prompts are optimized perturbation patterns that are added to input video frames or text features. A universal set refers to using the exact same set of prompts for any input; these prompts are added consistently to all video frames and text features, regardless of the input's content.
TVP consists of a visual encoder and a cross-modal encoder. A universal set of visual prompts and text prompts is integrated into the sampled video frames and textual features, respectively. Specifically, a set of different visual prompts is applied in order to the uniformly sampled frames of one untrimmed video.
The goal of this model is to incorporate trainable prompts into both visual inputs and textual features for temporal video grounding (TVG) problems.
In principle, one can apply any visual or cross-modal encoder in the proposed architecture.
The [TvpProcessor] wraps [BertTokenizer] and [TvpImageProcessor] into a single instance to both
encode the text and prepare the images respectively.
The following example shows how to run temporal video grounding using [TvpProcessor] and [TvpForVideoGrounding].
thon
import av
import cv2
import numpy as np
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoProcessor, TvpForVideoGrounding
def pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
'''
Convert the video from its original fps to the target_fps and decode the video with PyAV decoder.
Args:
container (container): pyav container.
sampling_rate (int): frame sampling rate (interval between two sampled frames).
num_frames (int): number of frames to sample.
clip_idx (int): if clip_idx is -1, perform random temporal sampling.
If clip_idx is larger than -1, uniformly split the video to num_clips
clips, and select the clip_idx-th video clip.
num_clips (int): overall number of clips to uniformly sample from the given video.
target_fps (int): the input video may have different fps, convert it to
the target video fps before frame sampling.
Returns:
        frames (tensor): decoded frames from the video. Return None if no
video stream was found.
fps (float): the number of frames per second of the video.
'''
video = container.streams.video[0]
fps = float(video.average_rate)
clip_size = sampling_rate * num_frames / target_fps * fps
delta = max(num_frames - clip_size, 0)
start_idx = delta * clip_idx / num_clips
end_idx = start_idx + clip_size - 1
timebase = video.duration / num_frames
video_start_pts = int(start_idx * timebase)
video_end_pts = int(end_idx * timebase)
seek_offset = max(video_start_pts - 1024, 0)
container.seek(seek_offset, any_frame=False, backward=True, stream=video)
frames = {}
for frame in container.decode(video=0):
if frame.pts < video_start_pts:
continue
frames[frame.pts] = frame
if frame.pts > video_end_pts:
break
frames = [frames[pts] for pts in sorted(frames)]
return frames, fps
def decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
'''
Decode the video and perform temporal sampling.
Args:
container (container): pyav container.
sampling_rate (int): frame sampling rate (interval between two sampled frames).
num_frames (int): number of frames to sample.
clip_idx (int): if clip_idx is -1, perform random temporal sampling.
If clip_idx is larger than -1, uniformly split the video to num_clips
clips, and select the clip_idx-th video clip.
num_clips (int): overall number of clips to uniformly sample from the given video.
target_fps (int): the input video may have different fps, convert it to
the target video fps before frame sampling.
Returns:
frames (tensor): decoded frames from the video.
'''
    assert clip_idx >= -2, "Not a valid clip_idx {}".format(clip_idx)
frames, fps = pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps)
clip_size = sampling_rate * num_frames / target_fps * fps
index = np.linspace(0, clip_size - 1, num_frames)
index = np.clip(index, 0, len(frames) - 1).astype(np.int64)
frames = np.array([frames[idx].to_rgb().to_ndarray() for idx in index])
frames = frames.transpose(0, 3, 1, 2)
return frames
file = hf_hub_download(repo_id="Intel/tvp_demo", filename="AK2KG.mp4", repo_type="dataset")
model = TvpForVideoGrounding.from_pretrained("Intel/tvp-base")
decoder_kwargs = dict(
container=av.open(file, metadata_errors="ignore"),
sampling_rate=1,
num_frames=model.config.num_frames,
clip_idx=0,
num_clips=1,
target_fps=3,
)
raw_sampled_frms = decode(**decoder_kwargs)
text = "a person is sitting on a bed."
processor = AutoProcessor.from_pretrained("Intel/tvp-base")
model_inputs = processor(
    text=[text], videos=list(raw_sampled_frms), return_tensors="pt", max_text_length=100
)
model_inputs["pixel_values"] = model_inputs["pixel_values"].to(model.dtype)
output = model(**model_inputs)
def get_video_duration(filename):
    # compute the duration of the video in seconds with OpenCV
    cap = cv2.VideoCapture(filename)
    if cap.isOpened():
        rate = cap.get(cv2.CAP_PROP_FPS)  # frames per second
        frame_num = cap.get(cv2.CAP_PROP_FRAME_COUNT)  # total number of frames
        duration = frame_num / rate
        return duration
    return -1
duration = get_video_duration(file)
start, end = processor.post_process_video_grounding(output.logits, duration)
print(f"The time slot of the video corresponding to the text \"{text}\" is from {start}s to {end}s")
Tips:
This implementation of TVP uses [BertTokenizer] to generate text embeddings and a ResNet-50 model to compute visual embeddings.
A checkpoint for the pre-trained tvp-base model has been released.
Please refer to Table 2 of the paper for TVP's performance on the temporal video grounding task.
TvpConfig
[[autodoc]] TvpConfig
TvpImageProcessor
[[autodoc]] TvpImageProcessor
- preprocess
TvpProcessor
[[autodoc]] TvpProcessor
- call
TvpModel
[[autodoc]] TvpModel
- forward
TvpForVideoGrounding
[[autodoc]] TvpForVideoGrounding
- forward
FNet
Overview
The FNet model was proposed in FNet: Mixing Tokens with Fourier Transforms by
James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. The model replaces the self-attention layer in a BERT
model with a Fourier transform which returns only the real parts of the transform. The model is significantly faster
than the BERT model because it has fewer parameters and is more memory efficient. It achieves about 92-97% of the
accuracy of its BERT counterparts on the GLUE benchmark, and trains much faster than BERT. The abstract from the
paper is the following:
We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the
self-attention sublayers with simple linear transformations that "mix" input tokens. These linear mixers, along with
standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text
classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder
with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE
benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. At longer input lengths,
our FNet model is significantly faster: when compared to the "efficient" Transformers on the Long Range Arena
benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all
sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint
and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models
outperform Transformer counterparts.
This model was contributed by gchhablani. The original code can be found here.
Usage tips
The model was trained without an attention mask as it is based on Fourier Transform. The model was trained with
maximum sequence length 512 which includes pad tokens. Hence, it is highly recommended to use the same maximum
sequence length for fine-tuning and inference.
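For example, a minimal sketch (assuming the google/fnet-base checkpoint) that pads inputs to the same 512-token maximum used during pretraining could look as follows:
thon
import torch
from transformers import AutoTokenizer, FNetModel

# checkpoint name assumed for illustration
tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
model = FNetModel.from_pretrained("google/fnet-base")

# pad to the maximum sequence length of 512 (including pad tokens), matching pretraining
inputs = tokenizer(
    "FNet mixes tokens with Fourier transforms instead of self-attention.",
    padding="max_length",
    max_length=512,
    truncation=True,
    return_tensors="pt",
)

# FNet does not use an attention mask, so passing the input ids is sufficient
with torch.no_grad():
    outputs = model(input_ids=inputs["input_ids"])
print(outputs.last_hidden_state.shape)  # (1, 512, hidden_size)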
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
FNetConfig
[[autodoc]] FNetConfig
FNetTokenizer
[[autodoc]] FNetTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
FNetTokenizerFast
[[autodoc]] FNetTokenizerFast
FNetModel
[[autodoc]] FNetModel
- forward
FNetForPreTraining
[[autodoc]] FNetForPreTraining
- forward
FNetForMaskedLM
[[autodoc]] FNetForMaskedLM
- forward
FNetForNextSentencePrediction
[[autodoc]] FNetForNextSentencePrediction
- forward
FNetForSequenceClassification
[[autodoc]] FNetForSequenceClassification
- forward
FNetForMultipleChoice
[[autodoc]] FNetForMultipleChoice
- forward
FNetForTokenClassification
[[autodoc]] FNetForTokenClassification
- forward
FNetForQuestionAnswering
[[autodoc]] FNetForQuestionAnswering
- forward
UniSpeech
Overview
The UniSpeech model was proposed in UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael
Zeng, Xuedong Huang.
The abstract from the paper is the following:
In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both
unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive
self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture
information more correlated with phonetic structures and improve the generalization across languages and domains. We
evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The
results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech
recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all
testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task,
i.e., a relative word error rate reduction of 6% against the previous approach.
This model was contributed by patrickvonplaten. The Authors' code can be
found here.
Usage tips
UniSpeech is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please
use [Wav2Vec2Processor] for the feature extraction.
UniSpeech model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be
decoded using [Wav2Vec2CTCTokenizer].
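A rough sketch of that workflow is shown below; the checkpoint path is a placeholder for a CTC fine-tuned UniSpeech checkpoint of your choice:
thon
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, UniSpeechForCTC

# "path/to/unispeech-ctc-checkpoint" is a placeholder, not a real checkpoint name
processor = Wav2Vec2Processor.from_pretrained("path/to/unispeech-ctc-checkpoint")
model = UniSpeechForCTC.from_pretrained("path/to/unispeech-ctc-checkpoint")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: take the argmax over the vocabulary and collapse repeated/blank tokens
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])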
Resources
Audio classification task guide
Automatic speech recognition task guide
UniSpeechConfig
[[autodoc]] UniSpeechConfig
UniSpeech specific outputs
[[autodoc]] models.unispeech.modeling_unispeech.UniSpeechForPreTrainingOutput
UniSpeechModel
[[autodoc]] UniSpeechModel
- forward
UniSpeechForCTC
[[autodoc]] UniSpeechForCTC
- forward
UniSpeechForSequenceClassification
[[autodoc]] UniSpeechForSequenceClassification
- forward
UniSpeechForPreTraining
[[autodoc]] UniSpeechForPreTraining
- forward
CPMAnt
Overview
CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environment-friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark. Besides the full model, we also provide various compressed versions to meet the requirements of different hardware configurations. See more
This model was contributed by OpenBMB. The original code can be found here.
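As a minimal sketch, text generation could look like the snippet below; the openbmb/cpm-ant-10b checkpoint name and the use of the text-generation pipeline are assumptions, and the full 10B model needs a correspondingly large amount of memory:
thon
from transformers import pipeline

# checkpoint name assumed for illustration; loading the full 10B model requires substantial memory
generator = pipeline("text-generation", model="openbmb/cpm-ant-10b")
print(generator("今天天气真好，", max_new_tokens=30))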
Resources
A tutorial on CPM-Live.
CpmAntConfig
[[autodoc]] CpmAntConfig
- all
CpmAntTokenizer
[[autodoc]] CpmAntTokenizer
- all
CpmAntModel
[[autodoc]] CpmAntModel
- all
CpmAntForCausalLM
[[autodoc]] CpmAntForCausalLM
- all
Speech Encoder Decoder Models
The [SpeechEncoderDecoderModel] can be used to initialize a speech-to-text model
with any pretrained speech autoencoding model as the encoder (e.g. Wav2Vec2, Hubert) and any pretrained autoregressive model as the decoder.
The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech
recognition and speech translation has e.g. been shown in Large-Scale Self- and Semi-Supervised Learning for Speech
Translation by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli,
Alexis Conneau.
An example of how to use a [SpeechEncoderDecoderModel] for inference can be seen in Speech2Text2.
Randomly initializing SpeechEncoderDecoderModel from model configurations.
[SpeechEncoderDecoderModel] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [Wav2Vec2Model] configuration for the encoder
and the default [BertForCausalLM] configuration for the decoder.
thon
from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel
config_encoder = Wav2Vec2Config()
config_decoder = BertConfig()
config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = SpeechEncoderDecoderModel(config=config)
Initialising SpeechEncoderDecoderModel from a pretrained encoder and a pretrained decoder.
[SpeechEncoderDecoderModel] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model, e.g. Wav2Vec2, Hubert can serve as the encoder and both pretrained auto-encoding models, e.g. BERT, pretrained causal language models, e.g. GPT2, as well as the pretrained decoder part of sequence-to-sequence models, e.g. decoder of BART, can be used as the decoder.
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing [SpeechEncoderDecoderModel] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post.
To do so, the SpeechEncoderDecoderModel class provides a [SpeechEncoderDecoderModel.from_encoder_decoder_pretrained] method.
thon
from transformers import SpeechEncoderDecoderModel
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
"facebook/hubert-large-ll60k", "google-bert/bert-base-uncased"
)
Loading an existing SpeechEncoderDecoderModel checkpoint and performing inference.
To load fine-tuned checkpoints of the SpeechEncoderDecoderModel class, [SpeechEncoderDecoderModel] provides the from_pretrained() method just like any other model architecture in Transformers.
To perform inference, one uses the [generate] method, which allows one to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.
thon
from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import torch
load a fine-tuned speech translation model and corresponding processor
model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
let's perform inference on a piece of English speech (which we'll translate to German)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
autoregressively generate transcription (uses greedy decoding by default)
generated_ids = model.generate(input_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.
Training
Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model on a dataset of (speech, text) pairs.
As you can see, only 2 inputs are required for the model in order to compute a loss: input_values (which are the
speech inputs) and labels (which are the input_ids of the encoded target sequence).
thon
from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel
from datasets import load_dataset
encoder_id = "facebook/wav2vec2-base-960h" # acoustic model encoder
decoder_id = "google-bert/bert-base-uncased" # text decoder
feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
tokenizer = AutoTokenizer.from_pretrained(decoder_id)
Combine pre-trained encoder and pre-trained decoder to form a Seq2Seq model
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
load an audio input and pre-process (normalise mean/std to 0/1)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values
load its corresponding transcription and tokenize to generate labels
labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids
the forward function automatically creates the correct decoder_input_ids
loss = model(input_values=input_values, labels=labels).loss
loss.backward()
SpeechEncoderDecoderConfig
[[autodoc]] SpeechEncoderDecoderConfig
SpeechEncoderDecoderModel
[[autodoc]] SpeechEncoderDecoderModel
- forward
- from_encoder_decoder_pretrained
FlaxSpeechEncoderDecoderModel
[[autodoc]] FlaxSpeechEncoderDecoderModel
- call
- from_encoder_decoder_pretrained
MMS
Overview
The MMS model was proposed in Scaling Speech Technology to 1,000+ Languages
by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli
The abstract from the paper is the following:
Expanding the language coverage of speech technology has the potential to improve access to information for many more people.
However, current speech technology is restricted to about one hundred languages which is a small fraction of the over 7,000
languages spoken around the world.
The Massively Multilingual Speech (MMS) project increases the number of supported languages by 10-40x, depending on the task.
The main ingredients are a new dataset based on readings of publicly available religious texts and effectively leveraging
self-supervised learning. We built pre-trained wav2vec 2.0 models covering 1,406 languages,
a single multilingual automatic speech recognition model for 1,107 languages, speech synthesis models
for the same number of languages, as well as a language identification model for 4,017 languages.
Experiments show that our multilingual speech recognition model more than halves the word error rate of
Whisper on 54 languages of the FLEURS benchmark while being trained on a small fraction of the labeled data.
Here are the different models open sourced in the MMS project. The models and code are originally released here. We have added them to the transformers framework, making them easier to use.
Automatic Speech Recognition (ASR)
The ASR model checkpoints can be found here: mms-1b-fl102, mms-1b-l1107, mms-1b-all. For best accuracy, use the mms-1b-all model.
Tips:
All ASR models accept a float array corresponding to the raw waveform of the speech signal. The raw waveform should be pre-processed with [Wav2Vec2FeatureExtractor].
The models were trained using connectionist temporal classification (CTC) so the model output has to be decoded using
[Wav2Vec2CTCTokenizer].
You can load different language adapter weights for different languages via [~Wav2Vec2PreTrainedModel.load_adapter]. Language adapters consist of only roughly 2 million parameters
and can therefore be loaded efficiently on the fly when needed.
Loading
By default MMS loads adapter weights for English. If you want to load adapter weights of another language,
make sure to specify target_lang=<your-chosen-target-lang> as well as ignore_mismatched_sizes=True.
The ignore_mismatched_sizes=True keyword has to be passed to allow the language model head to be resized according
to the vocabulary of the specified language.
Similarly, the processor should be loaded with the same target language
from transformers import Wav2Vec2ForCTC, AutoProcessor
model_id = "facebook/mms-1b-all"
target_lang = "fra"
processor = AutoProcessor.from_pretrained(model_id, target_lang=target_lang)
model = Wav2Vec2ForCTC.from_pretrained(model_id, target_lang=target_lang, ignore_mismatched_sizes=True)
You can safely ignore a warning such as:
text
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/mms-1b-all and are newly initialized because the shapes did not match:
- lm_head.bias: found shape torch.Size([154]) in the checkpoint and torch.Size([314]) in the model instantiated
- lm_head.weight: found shape torch.Size([154, 1280]) in the checkpoint and torch.Size([314, 1280]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
If you want to use the ASR pipeline, you can load your chosen target language as such:
from transformers import pipeline
model_id = "facebook/mms-1b-all"
target_lang = "fra"
pipe = pipeline(model=model_id, model_kwargs={"target_lang": "fra", "ignore_mismatched_sizes": True})
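You can then transcribe audio by calling the pipeline on a 16 kHz audio file or array; the path below is a placeholder:
thon
# "path/to/french_audio.wav" is a placeholder for your own audio file
transcription = pipe("path/to/french_audio.wav")
print(transcription)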
Inference
Next, let's look at how we can run MMS for inference and change adapter layers after having called [~PretrainedModel.from_pretrained].
First, we load audio data in different languages using the Datasets.
from datasets import load_dataset, Audio
English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
French
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "fr", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
fr_sample = next(iter(stream_data))["audio"]["array"]
Next, we load the model and processor
from transformers import Wav2Vec2ForCTC, AutoProcessor
import torch
model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
Now we process the audio data, pass the processed audio data to the model and transcribe the model output,
just like we usually do for [Wav2Vec2ForCTC].
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
'joe keton disapproved of films and buster also had reservations about the media'
We can now keep the same model in memory and simply switch out the language adapters by
calling the convenient [~Wav2Vec2ForCTC.load_adapter] function for the model and [~Wav2Vec2CTCTokenizer.set_target_lang] for the tokenizer.
We pass the target language as an input - "fra" for French.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")
inputs = processor(fr_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
ids = torch.argmax(outputs, dim=-1)[0]
transcription = processor.decode(ids)
"ce dernier est volé tout au long de l'histoire romaine"
In the same way the language can be switched out for all other supported languages. Please have a look at:
py
processor.tokenizer.vocab.keys()
to see all supported languages.
To further improve performance from ASR models, language model decoding can be used. See the documentation here for further details.
Speech Synthesis (TTS)
MMS-TTS uses the same model architecture as VITS, which was added to 🤗 Transformers in v4.33. MMS trains a separate
model checkpoint for each of the 1100+ languages in the project. All available checkpoints can be found on the Hugging
Face Hub: facebook/mms-tts, and the inference
documentation under VITS.
Inference
To use the MMS model, first update to the latest version of the Transformers library:
pip install --upgrade transformers accelerate
Since the flow-based model in VITS is non-deterministic, it is good practice to set a seed to ensure reproducibility of
the outputs.
For languages with a Roman alphabet, such as English or French, the tokenizer can be used directly to
pre-process the text inputs. The following code example runs a forward pass using the MMS-TTS English checkpoint:
thon
import torch
from transformers import VitsTokenizer, VitsModel, set_seed
tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")
set_seed(555) # make deterministic
with torch.no_grad():
outputs = model(**inputs)
waveform = outputs.waveform[0]
The resulting waveform can be saved as a .wav file:
thon
import scipy
scipy.io.wavfile.write("synthesized_speech.wav", rate=model.config.sampling_rate, data=waveform)
Or displayed in a Jupyter Notebook / Google Colab:
thon
from IPython.display import Audio
Audio(waveform, rate=model.config.sampling_rate)
For certain languages with non-Roman alphabets, such as Arabic, Mandarin or Hindi, the uroman
perl package is required to pre-process the text inputs to the Roman alphabet.
You can check whether you require the uroman package for your language by inspecting the is_uroman attribute of
the pre-trained tokenizer:
thon
from transformers import VitsTokenizer
tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
print(tokenizer.is_uroman)
If required, you should apply the uroman package to your text inputs prior to passing them to the VitsTokenizer,
since currently the tokenizer does not support performing the pre-processing itself.
To do this, first clone the uroman repository to your local machine and set the bash variable UROMAN to the local path:
git clone https://github.com/isi-nlp/uroman.git
cd uroman
export UROMAN=$(pwd)
You can then pre-process the text input using the following code snippet. You can either rely on using the bash variable
UROMAN to point to the uroman repository, or you can pass the uroman directory as an argument to the uromanize function:
thon
import torch
from transformers import VitsTokenizer, VitsModel, set_seed
import os
import subprocess
tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor")
model = VitsModel.from_pretrained("facebook/mms-tts-kor")
def uromanize(input_string, uroman_path):
"""Convert non-Roman strings to Roman using the uroman perl package."""
script_path = os.path.join(uroman_path, "bin", "uroman.pl")
command = ["perl", script_path]
process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# Execute the perl command
stdout, stderr = process.communicate(input=input_string.encode())
if process.returncode != 0:
raise ValueError(f"Error {process.returncode}: {stderr.decode()}")
# Return the output as a string and skip the new-line character at the end
return stdout.decode()[:-1]
text = "이봐 무슨 일이야"
uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"])
inputs = tokenizer(text=uromanized_text, return_tensors="pt")
set_seed(555) # make deterministic
with torch.no_grad():
outputs = model(inputs["input_ids"])
waveform = outputs.waveform[0]
Tips:
The MMS-TTS checkpoints are trained on lower-cased, un-punctuated text. By default, the VitsTokenizer normalizes the inputs by removing any casing and punctuation, to avoid passing out-of-vocabulary characters to the model. Hence, the model is agnostic to casing and punctuation, so these should be avoided in the text prompt. You can disable normalisation by setting normalize=False in the call to the tokenizer, but this will lead to unexpected behaviour and is discouraged.
The speaking rate can be varied by setting the attribute model.speaking_rate to a chosen value. Likewise, the randomness of the noise is controlled by model.noise_scale:
thon
import torch
from transformers import VitsTokenizer, VitsModel, set_seed
tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt")
make deterministic
set_seed(555)
make speech faster and more noisy
model.speaking_rate = 1.5
model.noise_scale = 0.8
with torch.no_grad():
outputs = model(**inputs)
Language Identification (LID)
Different LID models are available based on the number of languages they can recognize - 126, 256, 512, 1024, 2048, 4017.
Inference
First, we install transformers and some other libraries
```bash
pip install torch accelerate datasets[audio]
pip install --upgrade transformers
```
Next, we load a couple of audio samples via datasets. Make sure that the audio data is sampled at 16 kHz.
from datasets import load_dataset, Audio
English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
Arabic
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "ar", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
ar_sample = next(iter(stream_data))["audio"]["array"]
Next, we load the model and processor
from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
import torch
model_id = "facebook/mms-lid-126"
processor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
Now we process the audio data, pass the processed audio data to the model to classify it into a language, just like we usually do for Wav2Vec2 audio classification models such as ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition
English
inputs = processor(en_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
'eng'
Arabic
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs).logits
lang_id = torch.argmax(outputs, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
'ara'
To see all the supported languages of a checkpoint, you can print out the language ids as follows:
py
processor.id2label.values()
Audio Pretrained Models
Pretrained models are available in two different sizes - 300M and 1Bil.
The MMS for ASR architecture is based on the Wav2Vec2 model; refer to Wav2Vec2's documentation page for further
details on how to fine-tune models for various downstream tasks.
MMS-TTS uses the same model architecture as VITS; refer to VITS's documentation page for the API reference.
BORT
This model is in maintenance mode only, so we do not accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The BORT model was proposed in Optimal Subarchitecture Extraction for BERT by
Adrian de Wynter and Daniel J. Perry. It is an optimal subset of architectural parameters for BERT, which the
authors refer to as "Bort".
The abstract from the paper is the following:
We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by
applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as
"Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the
original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which
is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large
(Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same
hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the
architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%,
absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
This model was contributed by stefan-it. The original code can be found here.
Usage tips
BORT's model architecture is based on BERT, refer to BERT's documentation page for the
model's API reference as well as usage examples.
BORT uses the RoBERTa tokenizer instead of the BERT tokenizer; refer to RoBERTa's documentation page for the tokenizer's API reference as well as usage examples. A minimal loading sketch is shown below the tips.
BORT requires a specific fine-tuning algorithm, called Agora, that is sadly not open-sourced yet. It would be very useful for the community if someone tries to implement the algorithm to make BORT fine-tuning work.
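A minimal loading sketch, assuming the amazon/bort checkpoint on the Hugging Face Hub and run with transformers==4.30.0 as noted above, could look like this:
thon
from transformers import AutoTokenizer, AutoModel

# "amazon/bort" is assumed to be the released BORT checkpoint; its tokenizer is RoBERTa-based
tokenizer = AutoTokenizer.from_pretrained("amazon/bort")
model = AutoModel.from_pretrained("amazon/bort")  # loads a BERT-architecture model

inputs = tokenizer("BORT is a compressed variant of BERT.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)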
SqueezeBERT
Overview
The SqueezeBERT model was proposed in SqueezeBERT: What can computer vision teach NLP about efficient neural networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, Kurt W. Keutzer. It's a
bidirectional transformer similar to the BERT model. The key difference between the BERT architecture and the
SqueezeBERT architecture is that SqueezeBERT uses grouped convolutions
instead of fully-connected layers for the Q, K, V and FFN layers.
The abstract from the paper is the following:
Humans read and write hundreds of billions of messages every day. Further, due to the availability of large datasets,
large computing systems, and better neural network models, natural language processing (NLP) technology has made
significant strides in understanding, proofreading, and organizing these messages. Thus, there is a significant
opportunity to deploy NLP in myriad applications to help web users, social networks, and businesses. In particular, we
consider smartphones and other mobile devices as crucial platforms for deploying NLP models at scale. However, today's
highly-accurate NLP neural network models such as BERT and RoBERTa are extremely computationally expensive, with
BERT-base taking 1.7 seconds to classify a text snippet on a Pixel 3 smartphone. In this work, we observe that methods
such as grouped convolutions have yielded significant speedups for computer vision networks, but many of these
techniques have not been adopted by NLP neural network designers. We demonstrate how to replace several operations in
self-attention layers with grouped convolutions, and we use this technique in a novel network architecture called
SqueezeBERT, which runs 4.3x faster than BERT-base on the Pixel 3 while achieving competitive accuracy on the GLUE test
set. The SqueezeBERT code will be released.
This model was contributed by forresti.
Usage tips
SqueezeBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right
rather than the left.
SqueezeBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore
efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained
with a causal language modeling (CLM) objective are better in that regard.
For best results when finetuning on sequence classification tasks, it is recommended to start with the
squeezebert/squeezebert-mnli-headless checkpoint.
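For instance, a minimal loading sketch for that recommended starting point (with a hypothetical two-label downstream task) could look like this:
thon
from transformers import AutoTokenizer, SqueezeBertForSequenceClassification

# num_labels=2 is a hypothetical setting for a binary downstream task;
# ignore_mismatched_sizes=True guards against a classification head of a different size in the checkpoint
tokenizer = AutoTokenizer.from_pretrained("squeezebert/squeezebert-mnli-headless")
model = SqueezeBertForSequenceClassification.from_pretrained(
    "squeezebert/squeezebert-mnli-headless", num_labels=2, ignore_mismatched_sizes=True
)

# pad on the right, as advised for models with absolute position embeddings
inputs = tokenizer(
    "SqueezeBERT replaces several self-attention operations with grouped convolutions.",
    padding="max_length",
    max_length=128,
    return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, 2)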
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
SqueezeBertConfig
[[autodoc]] SqueezeBertConfig
SqueezeBertTokenizer
[[autodoc]] SqueezeBertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
SqueezeBertTokenizerFast
[[autodoc]] SqueezeBertTokenizerFast
SqueezeBertModel
[[autodoc]] SqueezeBertModel
SqueezeBertForMaskedLM
[[autodoc]] SqueezeBertForMaskedLM
SqueezeBertForSequenceClassification
[[autodoc]] SqueezeBertForSequenceClassification
SqueezeBertForMultipleChoice
[[autodoc]] SqueezeBertForMultipleChoice
SqueezeBertForTokenClassification
[[autodoc]] SqueezeBertForTokenClassification
SqueezeBertForQuestionAnswering
[[autodoc]] SqueezeBertForQuestionAnswering
Wav2Vec2Phoneme
Overview
The Wav2Vec2Phoneme model was proposed in Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al., 2021) by Qiantong Xu, Alexei Baevski, Michael Auli.
The abstract from the paper is the following:
Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech
recognition systems without any labeled data. However, in many cases there is labeled data available for related
languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer
learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by
mapping phonemes of the training languages to the target language using articulatory features. Experiments show that
this simple method significantly outperforms prior work which introduced task-specific architectures and used only part
of a monolingually pretrained model.
Relevant checkpoints can be found under https://huggingface.co/models?other=phoneme-recognition.
This model was contributed by patrickvonplaten. The original code can be found here.
Usage tips
Wav2Vec2Phoneme uses the exact same architecture as Wav2Vec2.
Wav2Vec2Phoneme is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
Wav2Vec2Phoneme model was trained using connectionist temporal classification (CTC) so the model output has to be
decoded using [Wav2Vec2PhonemeCTCTokenizer].
Wav2Vec2Phoneme can be fine-tuned on multiple languages at once and decode unseen languages in a single forward pass
to a sequence of phonemes.
By default, the model outputs a sequence of phonemes. In order to transform the phonemes into a sequence of words, one
should make use of a dictionary and language model. A minimal phoneme-recognition sketch is shown below the tips.
Wav2Vec2Phoneme's architecture is based on the Wav2Vec2 model, for API reference, check out Wav2Vec2's documentation page
except for the tokenizer.
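A minimal phoneme-recognition sketch, assuming the facebook/wav2vec2-lv-60-espeak-cv-ft checkpoint from the collection linked above, could look as follows:
thon
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, Wav2Vec2PhonemeCTCTokenizer, Wav2Vec2ForCTC

# checkpoint name assumed for illustration; any phoneme-recognition checkpoint from the collection should work
checkpoint = "facebook/wav2vec2-lv-60-espeak-cv-ft"
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding with the phoneme tokenizer; the output is a sequence of phonemes, not words
predicted_ids = torch.argmax(logits, dim=-1)
print(tokenizer.batch_decode(predicted_ids)[0])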
Wav2Vec2PhonemeCTCTokenizer
[[autodoc]] Wav2Vec2PhonemeCTCTokenizer
- call
- batch_decode
- decode
- phonemize
BigBird
Overview
The BigBird model was proposed in Big Bird: Transformers for Longer Sequences by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention
based transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.
The abstract from the paper is the following:
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.
This model was contributed by vasudevgupta. The original code can be found
here.
Usage tips
For an in-detail explanation on how BigBird's attention works, see this blog post.
BigBird comes with 2 implementations: original_full & block_sparse. For sequence lengths < 1024, using
original_full is advised, as there is no benefit in using block_sparse attention (see the configuration example below the tips).
The code currently uses a window size of 3 blocks and 2 global blocks.
Sequence length must be divisible by the block size.
The current implementation supports only ITC.
The current implementation doesn't support num_random_blocks = 0.
BigBird is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
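For instance, the snippet below (a minimal sketch using the google/bigbird-roberta-base checkpoint) shows how to select the attention implementation and the block-sparse parameters at load time; the values shown are illustrative:
thon
from transformers import BigBirdModel

# default: block_sparse attention
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base")

# explicitly configure block-sparse attention; block_size and num_random_blocks are illustrative
# (the sequence length must be divisible by block_size, and num_random_blocks must be > 0)
model = BigBirdModel.from_pretrained(
    "google/bigbird-roberta-base", attention_type="block_sparse", block_size=64, num_random_blocks=3
)

# for sequences shorter than 1024 tokens, fall back to full attention
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full")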
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
BigBirdConfig
[[autodoc]] BigBirdConfig
BigBirdTokenizer
[[autodoc]] BigBirdTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
BigBirdTokenizerFast
[[autodoc]] BigBirdTokenizerFast
BigBird specific outputs
[[autodoc]] models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput
BigBirdModel
[[autodoc]] BigBirdModel
- forward
BigBirdForPreTraining
[[autodoc]] BigBirdForPreTraining
- forward
BigBirdForCausalLM
[[autodoc]] BigBirdForCausalLM
- forward
BigBirdForMaskedLM
[[autodoc]] BigBirdForMaskedLM
- forward
BigBirdForSequenceClassification
[[autodoc]] BigBirdForSequenceClassification
- forward
BigBirdForMultipleChoice
[[autodoc]] BigBirdForMultipleChoice
- forward
BigBirdForTokenClassification
[[autodoc]] BigBirdForTokenClassification
- forward
BigBirdForQuestionAnswering
[[autodoc]] BigBirdForQuestionAnswering
- forward
FlaxBigBirdModel
[[autodoc]] FlaxBigBirdModel
- call
FlaxBigBirdForPreTraining
[[autodoc]] FlaxBigBirdForPreTraining
- call
FlaxBigBirdForCausalLM
[[autodoc]] FlaxBigBirdForCausalLM
- call
FlaxBigBirdForMaskedLM
[[autodoc]] FlaxBigBirdForMaskedLM
- call
FlaxBigBirdForSequenceClassification
[[autodoc]] FlaxBigBirdForSequenceClassification
- call
FlaxBigBirdForMultipleChoice
[[autodoc]] FlaxBigBirdForMultipleChoice
- call
FlaxBigBirdForTokenClassification
[[autodoc]] FlaxBigBirdForTokenClassification
- call
FlaxBigBirdForQuestionAnswering
[[autodoc]] FlaxBigBirdForQuestionAnswering
- call
Mixtral
Overview
Mixtral-8x7B was introduced in the Mixtral of Experts blogpost by Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
The introduction of the blog post says:
Today, the team is proud to release Mixtral 8x7B, a high-quality sparse mixture of experts models (SMoE) with open weights. Licensed under Apache 2.0. Mixtral outperforms Llama 2 70B on most benchmarks with 6x faster inference. It is the strongest open-weight model with a permissive license and the best model overall regarding cost/performance trade-offs. In particular, it matches or outperforms GPT3.5 on most standard benchmarks.
Mixtral-8x7B is the second large language model (LLM) released by mistral.ai, after Mistral-7B.
Architectural details
Mixtral-8x7B is a decoder-only Transformer with the following architectural choices:
Mixtral is a Mixture of Experts (MoE) model with 8 experts per MLP, with a total of 45 billion parameters. To learn more about mixture-of-experts, refer to the blog post.
Despite the model having 45 billion parameters, the compute required for a single forward pass is the same as that of a 14 billion parameter model. This is because, even though each of the experts has to be loaded in RAM (a 70B-like RAM requirement), each token from the hidden states is dispatched twice (top-2 routing), and thus the compute (the operation required at each forward computation) is just 2 x sequence_length.
The following implementation details are shared with Mistral AI's first model Mistral-7B:
- Sliding Window Attention - Trained with 8k context length and fixed cache size, with a theoretical attention span of 128K tokens
- GQA (Grouped Query Attention) - allowing faster inference and lower cache size.
- Byte-fallback BPE tokenizer - ensures that characters are never mapped to out of vocabulary tokens.
For more details refer to the release blog post.
License
Mixtral-8x7B is released under the Apache 2.0 license.
Usage tips
The Mistral team has released 2 checkpoints:
- a base model, Mixtral-8x7B-v0.1, which has been pre-trained to predict the next token on internet-scale data.
- an instruction tuned model, Mixtral-8x7B-Instruct-v0.1, which is the base model optimized for chat purposes using supervised fine-tuning (SFT) and direct preference optimization (DPO).
The base model can be used as follows:
thon
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
prompt = "My favourite condiment is"
model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
"My favourite condiment is to "
The instruction tuned model can be used as follows:
thon
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
"Mayonnaise can be made as follows: ()"
As can be seen, the instruction-tuned model requires a chat template to be applied to make sure the inputs are prepared in the right format.
Speeding up Mixtral by using Flash Attention
The code snippets above showcase inference without any optimization tricks. However, one can drastically speed up the model by leveraging Flash Attention, which is a faster implementation of the attention mechanism used inside the model.
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.
pip install -U flash-attn --no-build-isolation
Also make sure that you have hardware that is compatible with Flash Attention 2. Read more about it in the official documentation of the flash attention repository. Also make sure to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention-2, refer to the snippet below:
thon
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.float16, attn_implementation="flash_attention_2", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
prompt = "My favourite condiment is"
model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
"The expected output"
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the mistralai/Mixtral-8x7B-v0.1 checkpoint and the Flash Attention 2 version of the model.
Sliding window Attention
The current implementation supports the sliding window attention mechanism and memory efficient cache management.
To enable sliding window attention, just make sure to have a flash-attn version that is compatible with sliding window attention (>=2.3.0).
The Flash Attention-2 model also uses a more memory-efficient cache slicing mechanism. As recommended by the official implementation of the Mistral model, which uses a rolling cache mechanism, we keep the cache size fixed (self.config.sliding_window), support batched generation only for padding_side="left", and use the absolute position of the current token to compute the positional embedding.
Shrinking down Mixtral using quantization
As the Mixtral model has 45 billion parameters, that would require about 90GB of GPU RAM in half precision (float16), since each parameter is stored in 2 bytes. However, one can shrink down the size of the model using quantization. If the model is quantized to 4 bits (or half a byte per parameter), a single A100 with 40GB of RAM is enough to fit the entire model, as in that case only about 27 GB of RAM is required.
Quantizing a model is as simple as passing a quantization_config to the model. Below, we'll leverage bitsandbytes 4-bit quantization (but refer to this page for other quantization methods):
thon
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
specify how to quantize the model
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1", quantization_config=quantization_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
prompt = "My favourite condiment is"
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
"The expected output"
This model was contributed by Younes Belkada and Arthur Zucker.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mixtral. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A demo notebook to perform supervised fine-tuning (SFT) of Mixtral-8x7B can be found here. 🌎
A blog post on fine-tuning Mixtral-8x7B using PEFT. 🌎
The Alignment Handbook by Hugging Face includes scripts and recipes to perform supervised fine-tuning (SFT) and direct preference optimization with Mistral-7B. This includes scripts for full fine-tuning, QLoRA on a single GPU as well as multi-GPU fine-tuning.
Causal language modeling task guide
MixtralConfig
[[autodoc]] MixtralConfig
MixtralModel
[[autodoc]] MixtralModel
- forward
MixtralForCausalLM
[[autodoc]] MixtralForCausalLM
- forward
MixtralForSequenceClassification
[[autodoc]] MixtralForSequenceClassification
- forward |
Graphormer
Overview
The Graphormer model was proposed in Do Transformers Really Perform Bad for Graph Representation? by
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen and Tie-Yan Liu. It is a Graph Transformer model, modified to allow computations on graphs instead of text sequences by generating embeddings and features of interest during preprocessing and collation, then using a modified attention.
The abstract from the paper is the following:
The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture, and could attain excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight to utilizing Transformer in the graph is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and exhibit that with our ways of encoding the structural information of graphs, many popular GNN variants could be covered as the special cases of Graphormer.
This model was contributed by clefourrier. The original code can be found here.
Usage tips
This model will not work well on large graphs (more than 100 nodes/edges), as memory usage will explode.
You can reduce the batch size, increase your RAM, or decrease the UNREACHABLE_NODE_DISTANCE parameter in algos_graphormer.pyx, but it will be hard to go above 700 nodes/edges.
This model does not use a tokenizer, but instead a special collator during training.
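A minimal sketch of how preprocessing and collation fit together is shown below. The module path, dataset name, and checkpoint are assumptions based on the public Graphormer integration and may need adjusting:
thon
from datasets import load_dataset
from transformers import GraphormerForGraphClassification
from transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator

# a small molecular graph classification dataset, assumed to be available on the Hub
dataset = load_dataset("OGB/ogbg-molhiv")

# compute the Graphormer-specific features (spatial positions, in/out degrees, etc.) for each graph
dataset_processed = dataset.map(preprocess_item, batched=False)

# load a pre-trained checkpoint and replace the classification head for a 2-class task
model = GraphormerForGraphClassification.from_pretrained(
    "clefourrier/graphormer-base-pcqm4mv2",
    num_classes=2,
    ignore_mismatched_sizes=True,
)

# the collator pads and batches the preprocessed graphs during training
collator = GraphormerDataCollator()

The processed dataset and the collator can then be passed to a Trainer as usual.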
GraphormerConfig
[[autodoc]] GraphormerConfig
GraphormerModel
[[autodoc]] GraphormerModel
- forward
GraphormerForGraphClassification
[[autodoc]] GraphormerForGraphClassification
- forward |
DeBERTa
Overview
The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It is based on Google's
BERT model released in 2018 and Facebook's RoBERTa model released in 2019.
It builds on RoBERTa with disentangled attention and an enhanced mask decoder, trained with half of the data used in RoBERTa.
The abstract from the paper is the following:
Recent progress in pre-trained neural language models has significantly improved the performance of many natural
language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with
disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the
disentangled attention mechanism, where each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed using disentangled matrices on their
contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to
predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency
of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of
the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9%
(90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and
pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa.
This model was contributed by DeBERTa. The TF 2.0 implementation of this model was
contributed by kamalkraj. The original code can be found here.
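As a quick sanity check, the model can be tried with the fill-mask pipeline. This is a minimal sketch that assumes the microsoft/deberta-base checkpoint:
thon
from transformers import pipeline

# DeBERTa was pre-trained with masked language modeling, so fill-mask works out of the box
fill_mask = pipeline("fill-mask", model="microsoft/deberta-base")
print(fill_mask("Paris is the [MASK] of France."))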
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on how to Accelerate Large Model Training using DeepSpeed with DeBERTa.
A blog post on Supercharged Customer Service with Machine Learning with DeBERTa.
[DebertaForSequenceClassification] is supported by this example script and notebook.
[TFDebertaForSequenceClassification] is supported by this example script and notebook.
Text classification task guide
[DebertaForTokenClassification] is supported by this example script and notebook.
[TFDebertaForTokenClassification] is supported by this example script and notebook.
Token classification chapter of the 🤗 Hugging Face Course.
Byte-Pair Encoding tokenization chapter of the 🤗 Hugging Face Course.
Token classification task guide
[DebertaForMaskedLM] is supported by this example script and notebook.
[TFDebertaForMaskedLM] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide
[DebertaForQuestionAnswering] is supported by this example script and notebook.
[TFDebertaForQuestionAnswering] is supported by this example script and notebook.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide
DebertaConfig
[[autodoc]] DebertaConfig
DebertaTokenizer
[[autodoc]] DebertaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
DebertaTokenizerFast
[[autodoc]] DebertaTokenizerFast
- build_inputs_with_special_tokens
- create_token_type_ids_from_sequences
DebertaModel
[[autodoc]] DebertaModel
- forward
DebertaPreTrainedModel
[[autodoc]] DebertaPreTrainedModel
DebertaForMaskedLM
[[autodoc]] DebertaForMaskedLM
- forward
DebertaForSequenceClassification
[[autodoc]] DebertaForSequenceClassification
- forward
DebertaForTokenClassification
[[autodoc]] DebertaForTokenClassification
- forward
DebertaForQuestionAnswering
[[autodoc]] DebertaForQuestionAnswering
- forward
TFDebertaModel
[[autodoc]] TFDebertaModel
- call
TFDebertaPreTrainedModel
[[autodoc]] TFDebertaPreTrainedModel
- call
TFDebertaForMaskedLM
[[autodoc]] TFDebertaForMaskedLM
- call
TFDebertaForSequenceClassification
[[autodoc]] TFDebertaForSequenceClassification
- call
TFDebertaForTokenClassification
[[autodoc]] TFDebertaForTokenClassification
- call
TFDebertaForQuestionAnswering
[[autodoc]] TFDebertaForQuestionAnswering
- call
|
Hybrid Vision Transformer (ViT Hybrid)
Overview
The hybrid Vision Transformer (ViT) model was proposed in An Image is Worth 16x16 Words: Transformers for Image Recognition
at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk
Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob
Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining
very good results compared to familiar convolutional architectures. ViT hybrid is a slight variant of the plain Vision Transformer,
by leveraging a convolutional backbone (specifically, BiT) whose features are used as initial "tokens" for the Transformer.
The abstract from the paper is the following:
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its
applications to computer vision remain limited. In vision, attention is either applied in conjunction with
convolutional networks, or used to replace certain components of convolutional networks while keeping their overall
structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to
sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of
data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.),
Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring
substantially fewer computational resources to train.
This model was contributed by nielsr. The original code (written in JAX) can be
found here.
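Usage example
Below is a minimal image classification sketch, assuming the google/vit-hybrid-base-bit-384 checkpoint (fine-tuned on ImageNet-1k):
thon
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTHybridForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/vit-hybrid-base-bit-384")
model = ViTHybridForImageClassification.from_pretrained("google/vit-hybrid-base-bit-384")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# the model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_class_idx])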
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT Hybrid.
[ViTHybridForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTHybridConfig
[[autodoc]] ViTHybridConfig
ViTHybridImageProcessor
[[autodoc]] ViTHybridImageProcessor
- preprocess
ViTHybridModel
[[autodoc]] ViTHybridModel
- forward
ViTHybridForImageClassification
[[autodoc]] ViTHybridForImageClassification
- forward |
Data2Vec
Overview
The Data2Vec model was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli.
Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images.
Importantly, predicted targets for pre-training are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets.
The abstract from the paper is the following:
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and
objectives differ widely because they were developed with a single modality in mind. To get us closer to general
self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech,
NLP or computer vision. The core idea is to predict latent representations of the full input data based on a
masked view of the input in a self-distillation setup using a standard Transformer architecture.
Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which
are local in nature, data2vec predicts contextualized latent representations that contain information from
the entire input. Experiments on the major benchmarks of speech recognition, image classification, and
natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.
Models and code are available at www.github.com/pytorch/fairseq/tree/master/examples/data2vec.
This model was contributed by edugp and patrickvonplaten.
sayakpaul and Rocketknight1 contributed Data2Vec for vision in TensorFlow.
The original code (for NLP and Speech) can be found here.
The original code for vision can be found here.
Usage tips
Data2VecAudio, Data2VecText, and Data2VecVision have all been trained using the same self-supervised learning method.
For Data2VecAudio, preprocessing is identical to [Wav2Vec2Model], including feature extraction (see the short example after this list).
For Data2VecText, preprocessing is identical to [RobertaModel], including tokenization.
For Data2VecVision, preprocessing is identical to [BeitModel], including feature extraction.
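For example, automatic speech recognition with Data2VecAudio follows the usual Wav2Vec2-style CTC recipe. The sketch below assumes the facebook/data2vec-audio-base-960h checkpoint and uses dummy audio in place of real speech:
thon
import numpy as np
import torch
from transformers import AutoProcessor, Data2VecAudioForCTC

processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h")

# 1 second of dummy audio at 16 kHz; replace with real speech, e.g. loaded via the datasets library
speech = np.random.randn(16_000).astype(np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)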
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Data2Vec.
[Data2VecVisionForImageClassification] is supported by this example script and notebook.
To fine-tune [TFData2VecVisionForImageClassification] on a custom dataset, see this notebook.
Data2VecText documentation resources
- Text classification task guide
- Token classification task guide
- Question answering task guide
- Causal language modeling task guide
- Masked language modeling task guide
- Multiple choice task guide
Data2VecAudio documentation resources
- Audio classification task guide
- Automatic speech recognition task guide
Data2VecVision documentation resources
- Image classification
- Semantic segmentation
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Data2VecTextConfig
[[autodoc]] Data2VecTextConfig
Data2VecAudioConfig
[[autodoc]] Data2VecAudioConfig
Data2VecVisionConfig
[[autodoc]] Data2VecVisionConfig
Data2VecAudioModel
[[autodoc]] Data2VecAudioModel
- forward
Data2VecAudioForAudioFrameClassification
[[autodoc]] Data2VecAudioForAudioFrameClassification
- forward
Data2VecAudioForCTC
[[autodoc]] Data2VecAudioForCTC
- forward
Data2VecAudioForSequenceClassification
[[autodoc]] Data2VecAudioForSequenceClassification
- forward
Data2VecAudioForXVector
[[autodoc]] Data2VecAudioForXVector
- forward
Data2VecTextModel
[[autodoc]] Data2VecTextModel
- forward
Data2VecTextForCausalLM
[[autodoc]] Data2VecTextForCausalLM
- forward
Data2VecTextForMaskedLM
[[autodoc]] Data2VecTextForMaskedLM
- forward
Data2VecTextForSequenceClassification
[[autodoc]] Data2VecTextForSequenceClassification
- forward
Data2VecTextForMultipleChoice
[[autodoc]] Data2VecTextForMultipleChoice
- forward
Data2VecTextForTokenClassification
[[autodoc]] Data2VecTextForTokenClassification
- forward
Data2VecTextForQuestionAnswering
[[autodoc]] Data2VecTextForQuestionAnswering
- forward
Data2VecVisionModel
[[autodoc]] Data2VecVisionModel
- forward
Data2VecVisionForImageClassification
[[autodoc]] Data2VecVisionForImageClassification
- forward
Data2VecVisionForSemanticSegmentation
[[autodoc]] Data2VecVisionForSemanticSegmentation
- forward
TFData2VecVisionModel
[[autodoc]] TFData2VecVisionModel
- call
TFData2VecVisionForImageClassification
[[autodoc]] TFData2VecVisionForImageClassification
- call
TFData2VecVisionForSemanticSegmentation
[[autodoc]] TFData2VecVisionForSemanticSegmentation
- call
|
UL2
Overview
The UL2 model was presented in Unifying Language Learning Paradigms by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler.
The abstract from the paper is the following:
Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes with pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized and unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 and/or GPT-like models across multiple diverse setups. Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks ranging from language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding and information retrieval. Our model also achieve strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization.
This model was contributed by DanielHesslow. The original code can be found here.
Usage tips
UL2 is an encoder-decoder model pre-trained on a mixture of denoising functions as well as fine-tuned on an array of downstream tasks.
UL2 has the same architecture as T5v1.1 but uses the Gated-SiLU activation function instead of Gated-GELU.
The authors release checkpoints of one architecture, which can be seen here.
As UL2 has the same architecture as T5v1.1, refer to T5's documentation page for API reference, tips, code examples and notebooks.
|
XLM-RoBERTa
Overview
The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume
Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's
RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl
data.
The abstract from the paper is the following:
This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a
wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred
languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly
outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on
XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on
low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We
also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the
trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource
languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing
per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We
will make XLM-R code, data, and models publicly available.
This model was contributed by stefan-it. The original code can be found here.
Usage tips
XLM-RoBERTa is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does
not require lang tensors to understand which language is used, and should be able to determine the correct
language from the input ids.
Uses RoBERTa tricks on the XLM approach, but does not use the translation language modeling objective. It only uses masked language modeling on sentences coming from one language.
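For example, masked language modeling works the same way as with RoBERTa, using the <mask> token. A minimal sketch with the xlm-roberta-base checkpoint:
thon
from transformers import pipeline

# the model infers the language from the input ids, no lang tensors are needed
unmasker = pipeline("fill-mask", model="xlm-roberta-base")
print(unmasker("Hello, I'm a <mask> model."))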
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with XLM-RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on how to finetune XLM RoBERTa for multiclass classification with Habana Gaudi on AWS
[XLMRobertaForSequenceClassification] is supported by this example script and notebook.
[TFXLMRobertaForSequenceClassification] is supported by this example script and notebook.
[FlaxXLMRobertaForSequenceClassification] is supported by this example script and notebook.
Text classification chapter of the 🤗 Hugging Face Task Guides.
Text classification task guide
[XLMRobertaForTokenClassification] is supported by this example script and notebook.
[TFXLMRobertaForTokenClassification] is supported by this example script and notebook.
[FlaxXLMRobertaForTokenClassification] is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide
[XLMRobertaForCausalLM] is supported by this example script and notebook.
Causal language modeling chapter of the 🤗 Hugging Face Task Guides.
Causal language modeling task guide
[XLMRobertaForMaskedLM] is supported by this example script and notebook.
[TFXLMRobertaForMaskedLM] is supported by this example script and notebook.
[FlaxXLMRobertaForMaskedLM] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling
[XLMRobertaForQuestionAnswering] is supported by this example script and notebook.
[TFXLMRobertaForQuestionAnswering] is supported by this example script and notebook.
[FlaxXLMRobertaForQuestionAnswering] is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide
Multiple choice
[XLMRobertaForMultipleChoice] is supported by this example script and notebook.
[TFXLMRobertaForMultipleChoice] is supported by this example script and notebook.
Multiple choice task guide
🚀 Deploy
A blog post on how to Deploy Serverless XLM RoBERTa on AWS Lambda.
This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples as well as the information relative to the inputs and outputs.
XLMRobertaConfig
[[autodoc]] XLMRobertaConfig
XLMRobertaTokenizer
[[autodoc]] XLMRobertaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
XLMRobertaTokenizerFast
[[autodoc]] XLMRobertaTokenizerFast
XLMRobertaModel
[[autodoc]] XLMRobertaModel
- forward
XLMRobertaForCausalLM
[[autodoc]] XLMRobertaForCausalLM
- forward
XLMRobertaForMaskedLM
[[autodoc]] XLMRobertaForMaskedLM
- forward
XLMRobertaForSequenceClassification
[[autodoc]] XLMRobertaForSequenceClassification
- forward
XLMRobertaForMultipleChoice
[[autodoc]] XLMRobertaForMultipleChoice
- forward
XLMRobertaForTokenClassification
[[autodoc]] XLMRobertaForTokenClassification
- forward
XLMRobertaForQuestionAnswering
[[autodoc]] XLMRobertaForQuestionAnswering
- forward
TFXLMRobertaModel
[[autodoc]] TFXLMRobertaModel
- call
TFXLMRobertaForCausalLM
[[autodoc]] TFXLMRobertaForCausalLM
- call
TFXLMRobertaForMaskedLM
[[autodoc]] TFXLMRobertaForMaskedLM
- call
TFXLMRobertaForSequenceClassification
[[autodoc]] TFXLMRobertaForSequenceClassification
- call
TFXLMRobertaForMultipleChoice
[[autodoc]] TFXLMRobertaForMultipleChoice
- call
TFXLMRobertaForTokenClassification
[[autodoc]] TFXLMRobertaForTokenClassification
- call
TFXLMRobertaForQuestionAnswering
[[autodoc]] TFXLMRobertaForQuestionAnswering
- call
FlaxXLMRobertaModel
[[autodoc]] FlaxXLMRobertaModel
- call
FlaxXLMRobertaForCausalLM
[[autodoc]] FlaxXLMRobertaForCausalLM
- call
FlaxXLMRobertaForMaskedLM
[[autodoc]] FlaxXLMRobertaForMaskedLM
- call
FlaxXLMRobertaForSequenceClassification
[[autodoc]] FlaxXLMRobertaForSequenceClassification
- call
FlaxXLMRobertaForMultipleChoice
[[autodoc]] FlaxXLMRobertaForMultipleChoice
- call
FlaxXLMRobertaForTokenClassification
[[autodoc]] FlaxXLMRobertaForTokenClassification
- call
FlaxXLMRobertaForQuestionAnswering
[[autodoc]] FlaxXLMRobertaForQuestionAnswering
- call
|
DiT
Overview
DiT was proposed in DiT: Self-supervised Pre-training for Document Image Transformer by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
DiT applies the self-supervised objective of BEiT (BERT pre-training of Image Transformers) to 42 million document images, allowing for state-of-the-art results on tasks including:
document image classification: the RVL-CDIP dataset (a collection of
400,000 images belonging to one of 16 classes).
document layout analysis: the PubLayNet dataset (a collection of more
than 360,000 document images constructed by automatically parsing PubMed XML files).
table detection: the ICDAR 2019 cTDaR dataset (a collection of
600 training images and 240 testing images).
The abstract from the paper is the following:
Image Transformer has recently achieved significant progress for natural image understanding, either using supervised (ViT, DeiT, etc.) or self-supervised (BEiT, MAE, etc.) pre-training techniques. In this paper, we propose DiT, a self-supervised pre-trained Document Image Transformer model using large-scale unlabeled text images for Document AI tasks, which is essential since no supervised counterparts ever exist due to the lack of human labeled document images. We leverage DiT as the backbone network in a variety of vision-based Document AI tasks, including document image classification, document layout analysis, as well as table detection. Experiment results have illustrated that the self-supervised pre-trained DiT model achieves new state-of-the-art results on these downstream tasks, e.g. document image classification (91.11 → 92.69), document layout analysis (91.0 → 94.9) and table detection (94.23 → 96.55).
Summary of the approach. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Usage tips
One can directly use the weights of DiT with the AutoModel API:
thon
from transformers import AutoModel
model = AutoModel.from_pretrained("microsoft/dit-base")
This will load the model pre-trained on masked image modeling. Note that this won't include the language modeling head on top, used to predict visual tokens.
To include the head, you can load the weights into a BeitForMaskedImageModeling model, like so:
thon
from transformers import BeitForMaskedImageModeling
model = BeitForMaskedImageModeling.from_pretrained("microsoft/dit-base")
You can also load a fine-tuned model from the hub, like so:
thon
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
This particular checkpoint was fine-tuned on RVL-CDIP, an important benchmark for document image classification.
A notebook that illustrates inference for document image classification can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiT.
[BeitForImageClassification] is supported by this example script and notebook.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
As DiT's architecture is equivalent to that of BEiT, one can refer to BEiT's documentation page for all tips, code examples and notebooks.
|
TVLT
Overview
The TVLT model was proposed in TVLT: Textless Vision-Language Transformer
by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal (the first three authors contributed equally). The Textless Vision-Language Transformer (TVLT) is a model that uses raw visual and audio inputs for vision-and-language representation learning, without using text-specific modules such as tokenization or automatic speech recognition (ASR). It can perform various audiovisual and vision-language tasks like retrieval, question answering, etc.
The abstract from the paper is the following:
In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, and do not use text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text.
TVLT architecture. Taken from the original paper.
The original code can be found here. This model was contributed by Zineng Tang.
Usage tips
TVLT is a model that takes both pixel_values and audio_values as input. One can use [TvltProcessor] to prepare data for the model.
This processor wraps an image processor (for the image/video modality) and an audio feature extractor (for the audio modality) into one.
TVLT is trained with images/videos and audios of various sizes: the authors resize and crop the input images/videos to 224 and limit the length of audio spectrogram to 2048. To make batching of videos and audios possible, the authors use a pixel_mask that indicates which pixels are real/padding and audio_mask that indicates which audio values are real/padding.
The design of TVLT is very similar to that of a standard Vision Transformer (ViT) and masked autoencoder (MAE) as in ViTMAE. The difference is that the model includes embedding layers for the audio modality.
The PyTorch version of this model is only available in torch 1.10 and higher.
TvltConfig
[[autodoc]] TvltConfig
TvltProcessor
[[autodoc]] TvltProcessor
- call
TvltImageProcessor
[[autodoc]] TvltImageProcessor
- preprocess
TvltFeatureExtractor
[[autodoc]] TvltFeatureExtractor
- call
TvltModel
[[autodoc]] TvltModel
- forward
TvltForPreTraining
[[autodoc]] TvltForPreTraining
- forward
TvltForAudioVisualClassification
[[autodoc]] TvltForAudioVisualClassification
- forward |
Time Series Transformer
Overview
The Time Series Transformer model is a vanilla encoder-decoder Transformer for time series forecasting.
This model was contributed by kashif.
Usage tips
Similar to other models in the library, [TimeSeriesTransformerModel] is the raw Transformer without any head on top, and [TimeSeriesTransformerForPrediction]
adds a distribution head on top of the former, which can be used for time-series forecasting. Note that this is a so-called probabilistic forecasting model, not a
point forecasting model. This means that the model learns a distribution, from which one can sample. The model doesn't directly output values.
[TimeSeriesTransformerForPrediction] consists of 2 blocks: an encoder, which takes a context_length of time series values as input (called past_values),
and a decoder, which predicts a prediction_length of time series values into the future (called future_values). During training, one needs to provide
pairs of (past_values and future_values) to the model.
In addition to the raw (past_values and future_values), one typically provides additional features to the model. These can be the following:
past_time_features: temporal features which the model will add to past_values. These serve as "positional encodings" for the Transformer encoder.
Examples are "day of the month", "month of the year", etc. as scalar values (and then stacked together as a vector).
e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being "day of the month", 8 being "month of the year").
future_time_features: temporal features which the model will add to future_values. These serve as "positional encodings" for the Transformer decoder.
Examples are "day of the month", "month of the year", etc. as scalar values (and then stacked together as a vector).
e.g. if a given time-series value was obtained on the 11th of August, then one could have [11, 8] as time feature vector (11 being "day of the month", 8 being "month of the year").
static_categorical_features: categorical features which are static over time (i.e., have the same value for all past_values and future_values).
An example here is the store ID or region ID that identifies a given time-series.
Note that these features need to be known for ALL data points (also those in the future).
static_real_features: real-valued features which are static over time (i.e., have the same value for all past_values and future_values).
An example here is the image representation of the product for which you have the time-series values (like the ResNet embedding of a "shoe" picture,
if your time-series is about the sales of shoes).
Note that these features need to be known for ALL data points (also those in the future).
The model is trained using "teacher-forcing", similar to how a Transformer is trained for machine translation. This means that, during training, one shifts the
future_values one position to the right as input to the decoder, prepended by the last value of past_values. At each time step, the model needs to predict the
next target. So the set-up of training is similar to a GPT model for language, except that there's no notion of decoder_start_token_id (we just use the last value
of the context as initial input for the decoder).
At inference time, we give the final value of the past_values as input to the decoder. Next, we can sample from the model to make a prediction at the next time step,
which is then fed to the decoder in order to make the next prediction (also called autoregressive generation).
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Check out the Time Series Transformer blog-post in HuggingFace blog: Probabilistic Time Series Forecasting with 🤗 Transformers
TimeSeriesTransformerConfig
[[autodoc]] TimeSeriesTransformerConfig
TimeSeriesTransformerModel
[[autodoc]] TimeSeriesTransformerModel
- forward
TimeSeriesTransformerForPrediction
[[autodoc]] TimeSeriesTransformerForPrediction
- forward |
GPT Neo
Overview
The GPTNeo model was released in the EleutherAI/gpt-neo repository by Sid
Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. It is a GPT2 like causal language model trained on the
Pile dataset.
The architecture is similar to GPT2 except that GPT Neo uses local attention in every other layer with a window size of
256 tokens.
This model was contributed by valhalla.
Usage example
The generate() method can be used to generate text using GPT Neo model.
thon
from transformers import GPTNeoForCausalLM, GPT2Tokenizer
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
prompt = (
"In a shocking finding, scientists discovered a herd of unicorns living in a remote, "
"previously unexplored valley, in the Andes Mountains. Even more surprising to the "
"researchers was the fact that the unicorns spoke perfect English."
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
Combining GPT-Neo and Flash Attention 2
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature, and make sure your hardware is compatible with Flash Attention 2. More details about the installation are available here.
Make sure as well to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention 2, refer to the snippet below:
thon
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
prompt = "def hello_world():"
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
"def hello_world():\n >>> run_script("hello.py")\n >>> exit(0)\n<|endoftext|>"
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using EleutherAI/gpt-neo-2.7B checkpoint and the Flash Attention 2 version of the model.
Note that for GPT-Neo it is not possible to train or run inference on very long contexts, as the maximum position embeddings are limited to 2048. This applies to all gpt-neo models and is not specific to Flash Attention 2.
Resources
Text classification task guide
Causal language modeling task guide
GPTNeoConfig
[[autodoc]] GPTNeoConfig
GPTNeoModel
[[autodoc]] GPTNeoModel
- forward
GPTNeoForCausalLM
[[autodoc]] GPTNeoForCausalLM
- forward
GPTNeoForQuestionAnswering
[[autodoc]] GPTNeoForQuestionAnswering
- forward
GPTNeoForSequenceClassification
[[autodoc]] GPTNeoForSequenceClassification
- forward
GPTNeoForTokenClassification
[[autodoc]] GPTNeoForTokenClassification
- forward
FlaxGPTNeoModel
[[autodoc]] FlaxGPTNeoModel
- call
FlaxGPTNeoForCausalLM
[[autodoc]] FlaxGPTNeoForCausalLM
- call
|
Hubert
Overview
Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan
Salakhutdinov, Abdelrahman Mohamed.
The abstract from the paper is the following:
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are
multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training
phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we
propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an
offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our
approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined
acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised
clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means
teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the
state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h,
10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER
reduction on the more challenging dev-other and test-other evaluation subsets.
This model was contributed by patrickvonplaten.
Usage tips
Hubert is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
Hubert model was fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded
using [Wav2Vec2CTCTokenizer].
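A minimal ASR sketch with a CTC fine-tuned checkpoint is shown below; the facebook/hubert-large-ls960-ft checkpoint and the dummy audio are assumptions:
thon
import numpy as np
import torch
from transformers import AutoProcessor, HubertForCTC

processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

# 1 second of dummy audio at 16 kHz; replace with real speech, e.g. loaded via the datasets library
speech = np.random.randn(16_000).astype(np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# greedy decoding, followed by Wav2Vec2CTCTokenizer-based conversion to text
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)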
Resources
Audio classification task guide
Automatic speech recognition task guide
HubertConfig
[[autodoc]] HubertConfig
HubertModel
[[autodoc]] HubertModel
- forward
HubertForCTC
[[autodoc]] HubertForCTC
- forward
HubertForSequenceClassification
[[autodoc]] HubertForSequenceClassification
- forward
TFHubertModel
[[autodoc]] TFHubertModel
- call
TFHubertForCTC
[[autodoc]] TFHubertForCTC
- call
|
Qwen2
Overview
Qwen2 is the new model series of large language models from the Qwen team. Previously, we released the Qwen series, including Qwen-72B, Qwen-1.8B, Qwen-VL, Qwen-Audio, etc.
Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
Usage tips
Qwen2-7B-beta and Qwen2-7B-Chat-beta can be found on the Hugging Face Hub
In the following, we demonstrate how to use Qwen2-7B-Chat-beta for inference. Note that we use the ChatML format for dialogue; in this demo we show how to leverage apply_chat_template for this purpose.
thon
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B-Chat", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat")
prompt = "Give me a short introduction to large language model."
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=True)
generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
Qwen2Config
[[autodoc]] Qwen2Config
Qwen2Tokenizer
[[autodoc]] Qwen2Tokenizer
- save_vocabulary
Qwen2TokenizerFast
[[autodoc]] Qwen2TokenizerFast
Qwen2Model
[[autodoc]] Qwen2Model
- forward
Qwen2ForCausalLM
[[autodoc]] Qwen2ForCausalLM
- forward
Qwen2ForSequenceClassification
[[autodoc]] Qwen2ForSequenceClassification
- forward |
LayoutLM
Overview
The LayoutLM model was proposed in the paper LayoutLM: Pre-training of Text and Layout for Document Image
Understanding by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and
Ming Zhou. It's a simple but effective pretraining method of text and layout for document image understanding and
information extraction tasks, such as form understanding and receipt understanding. It obtains state-of-the-art results
on several downstream tasks:
form understanding: the FUNSD dataset (a collection of 199 annotated
forms comprising more than 30,000 words).
receipt understanding: the SROIE dataset (a collection of 626 receipts for
training and 347 receipts for testing).
document image classification: the RVL-CDIP dataset (a collection of
400,000 images belonging to one of 16 classes).
The abstract from the paper is the following:
Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the
widespread use of pretraining models for NLP applications, they almost exclusively focus on text-level manipulation,
while neglecting layout and style information that is vital for document image understanding. In this paper, we propose
the LayoutLM to jointly model interactions between text and layout information across scanned document images, which is
beneficial for a great number of real-world document image understanding tasks such as information extraction from
scanned documents. Furthermore, we also leverage image features to incorporate words' visual information into LayoutLM.
To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for
document-level pretraining. It achieves new state-of-the-art results in several downstream tasks, including form
understanding (from 70.72 to 79.27), receipt understanding (from 94.02 to 95.24) and document image classification
(from 93.07 to 94.42).
Usage tips
In addition to input_ids, [~transformers.LayoutLMModel.forward] also expects the input bbox, which are
the bounding boxes (i.e. 2D-positions) of the input tokens. These can be obtained using an external OCR engine such
as Google's Tesseract (there's a Python wrapper available). Each bounding box should be in (x0, y0, x1, y1) format, where
(x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1) represents the
position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on a 0-1000
scale. To normalize, you can use the following function:
python
def normalize_bbox(bbox, width, height):
return [
int(1000 * (bbox[0] / width)),
int(1000 * (bbox[1] / height)),
int(1000 * (bbox[2] / width)),
int(1000 * (bbox[3] / height)),
]
Here, width and height correspond to the width and height of the original document in which the token
occurs. Those can be obtained using the Python Image Library (PIL) library for example, as follows:
thon
from PIL import Image
Document can be a png, jpg, etc. PDFs must be converted to images.
image = Image.open(name_of_your_document).convert("RGB")
width, height = image.size
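Putting it together, a forward pass that feeds both input_ids and bbox to the model could look like this. It is a minimal sketch with hand-written words and boxes, assuming the microsoft/layoutlm-base-uncased checkpoint:
thon
import torch
from transformers import AutoTokenizer, LayoutLMModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

words = ["Hello", "world"]
normalized_word_boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # already on the 0-1000 scale

# repeat each word-level box for every word piece produced by the tokenizer
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
    word_tokens = tokenizer.tokenize(word)
    token_boxes.extend([box] * len(word_tokens))
# add dummy boxes for the [CLS] and [SEP] special tokens
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

encoding = tokenizer(" ".join(words), return_tensors="pt")
bbox = torch.tensor([token_boxes])

outputs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"], bbox=bbox)
last_hidden_states = outputs.last_hidden_state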
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLM. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on fine-tuning
LayoutLM for document-understanding using Keras & Hugging Face
Transformers.
A blog post on how to fine-tune LayoutLM for document-understanding using only Hugging Face Transformers.
A notebook on how to fine-tune LayoutLM on the FUNSD dataset with image embeddings.
See also: Document question answering task guide
A notebook on how to fine-tune LayoutLM for sequence classification on the RVL-CDIP dataset.
Text classification task guide
A notebook on how to fine-tune LayoutLM for token classification on the FUNSD dataset.
Token classification task guide
Other resources
- Masked language modeling task guide
🚀 Deploy
A blog post on how to Deploy LayoutLM with Hugging Face Inference Endpoints.
LayoutLMConfig
[[autodoc]] LayoutLMConfig
LayoutLMTokenizer
[[autodoc]] LayoutLMTokenizer
LayoutLMTokenizerFast
[[autodoc]] LayoutLMTokenizerFast
LayoutLMModel
[[autodoc]] LayoutLMModel
LayoutLMForMaskedLM
[[autodoc]] LayoutLMForMaskedLM
LayoutLMForSequenceClassification
[[autodoc]] LayoutLMForSequenceClassification
LayoutLMForTokenClassification
[[autodoc]] LayoutLMForTokenClassification
LayoutLMForQuestionAnswering
[[autodoc]] LayoutLMForQuestionAnswering
TFLayoutLMModel
[[autodoc]] TFLayoutLMModel
TFLayoutLMForMaskedLM
[[autodoc]] TFLayoutLMForMaskedLM
TFLayoutLMForSequenceClassification
[[autodoc]] TFLayoutLMForSequenceClassification
TFLayoutLMForTokenClassification
[[autodoc]] TFLayoutLMForTokenClassification
TFLayoutLMForQuestionAnswering
[[autodoc]] TFLayoutLMForQuestionAnswering
|
Table Transformer
Overview
The Table Transformer model was proposed in PubTables-1M: Towards comprehensive table extraction from unstructured documents by
Brandon Smock, Rohith Pesala, Robin Abraham. The authors introduce a new dataset, PubTables-1M, to benchmark progress in table extraction from unstructured documents,
as well as table structure recognition and functional analysis. The authors train 2 DETR models, one for table detection and one for table structure recognition, dubbed Table Transformers.
The abstract from the paper is the following:
Recently, significant progress has been made applying machine learning to the problem of table structure inference and extraction from unstructured documents.
However, one of the greatest challenges remains the creation of datasets with complete, unambiguous ground truth at scale. To address this, we develop a new, more
comprehensive dataset for table extraction, called PubTables-1M. PubTables-1M contains nearly one million tables from scientific articles, supports multiple input
modalities, and contains detailed header and location information for table structures, making it useful for a wide variety of modeling approaches. It also addresses a significant
source of ground truth inconsistency observed in prior datasets called oversegmentation, using a novel canonicalization procedure. We demonstrate that these improvements lead to a
significant increase in training performance and a more reliable estimate of model performance at evaluation for table structure recognition. Further, we show that transformer-based
object detection models trained on PubTables-1M produce excellent results for all three tasks of detection, structure recognition, and functional analysis without the need for any
special customization for these tasks.
Table detection and table structure recognition clarified. Taken from the original paper.
The authors released 2 models, one for table detection in
documents, one for table structure recognition
(the task of recognizing the individual rows, columns etc. in a table).
This model was contributed by nielsr. The original code can be
found here.
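Usage example
Below is a minimal table detection sketch. The microsoft/table-transformer-detection checkpoint is assumed; replace the image path with a document page of your own:
thon
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

# replace with a page of a document rendered as an image
image = Image.open("path_to_a_document_page.png").convert("RGB")

processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-detection")
model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# rescale the predicted boxes to the original image size and keep confident detections
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 2) for c in box.tolist()])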
Resources
A demo notebook for the Table Transformer can be found here.
It turns out padding of images is quite important for detection. An interesting Github thread with replies from the authors can be found here.
TableTransformerConfig
[[autodoc]] TableTransformerConfig
TableTransformerModel
[[autodoc]] TableTransformerModel
- forward
TableTransformerForObjectDetection
[[autodoc]] TableTransformerForObjectDetection
- forward |
LiLT
Overview
The LiLT model was proposed in LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding by Jiapeng Wang, Lianwen Jin, Kai Ding.
LiLT allows to combine any pre-trained RoBERTa text encoder with a lightweight Layout Transformer, to enable LayoutLM-like document understanding for many
languages.
The abstract from the paper is the following:
Structured document understanding has attracted considerable attention and made significant progress recently, owing to its crucial role in intelligent document processing. However, most existing related models can only deal with the document data of specific language(s) (typically English) included in the pre-training collection, which is extremely limited. To address this issue, we propose a simple yet effective Language-independent Layout Transformer (LiLT) for structured document understanding. LiLT can be pre-trained on the structured documents of a single language and then directly fine-tuned on other languages with the corresponding off-the-shelf monolingual/multilingual pre-trained textual models. Experimental results on eight languages have shown that LiLT can achieve competitive or even superior performance on diverse widely-used downstream benchmarks, which enables language-independent benefit from the pre-training of document layout structure.
LiLT architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage tips
To combine the Language-Independent Layout Transformer with a new RoBERTa checkpoint from the hub, refer to this guide.
The script will result in config.json and pytorch_model.bin files being stored locally. After doing this, one can do the following (assuming you're logged in with your HuggingFace account):
thon
from transformers import LiltModel
model = LiltModel.from_pretrained("path_to_your_files")
model.push_to_hub("name_of_repo_on_the_hub")
When preparing data for the model, make sure to use the token vocabulary that corresponds to the RoBERTa checkpoint you combined with the Layout Transformer.
As lilt-roberta-en-base uses the same vocabulary as LayoutLMv3, one can use [LayoutLMv3TokenizerFast] to prepare data for the model.
The same is true for lilt-xlm-roberta-base: one can use [LayoutXLMTokenizerFast] for that model.
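For example, preparing words and boxes with the LayoutLMv3 tokenizer and running them through LiLT could look like this. This is a minimal sketch; the SCUT-DLVCLab/lilt-roberta-en-base checkpoint and the toy boxes are assumptions:
thon
from transformers import AutoTokenizer, LiltModel

tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
model = LiltModel.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")

words = ["Hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # normalized (x0, y0, x1, y1) boxes on a 0-1000 scale

# the LayoutLMv3-style fast tokenizer accepts words and word-level boxes directly
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
last_hidden_state = outputs.last_hidden_state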
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LiLT.
Demo notebooks for LiLT can be found here.
Documentation resources
- Text classification task guide
- Token classification task guide
- Question answering task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
LiltConfig
[[autodoc]] LiltConfig
LiltModel
[[autodoc]] LiltModel
- forward
LiltForSequenceClassification
[[autodoc]] LiltForSequenceClassification
- forward
LiltForTokenClassification
[[autodoc]] LiltForTokenClassification
- forward
LiltForQuestionAnswering
[[autodoc]] LiltForQuestionAnswering
- forward |
M2M100
Overview
The M2M100 model was proposed in Beyond English-Centric Multilingual Machine Translation by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky,
Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy
Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
The abstract from the paper is the following:
Existing work in translation demonstrated the potential of massively multilingual machine translation by training a
single model able to translate between any pair of languages. However, much of this work is English-Centric by training
only on data which was translated from or to English. While this is supported by large sources of training data, it
does not reflect translation needs worldwide. In this work, we create a true Many-to-Many multilingual translation
model that can translate directly between any pair of 100 languages. We build and open source a training dataset that
covers thousands of language directions with supervised data, created through large-scale mining. Then, we explore how
to effectively increase model capacity through a combination of dense scaling and language-specific sparse parameters
to create high quality models. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly
translating between non-English directions while performing competitively to the best single systems of WMT. We
open-source our scripts so that others may reproduce the data, evaluation, and final M2M-100 model.
This model was contributed by valhalla.
Usage tips and examples
M2M100 is a multilingual encoder-decoder (seq-to-seq) model primarily intended for translation tasks. As the model is
multilingual it expects the sequences in a certain format: A special language id token is used as prefix in both the
source and target text. The source text format is [lang_code] X [eos], where lang_code is source language
id for source text and target language id for target text, with X being the source or target text.
The [M2M100Tokenizer] depends on sentencepiece so be sure to install it before running the
examples. To install sentencepiece run pip install sentencepiece.
Supervised Training
thon
from transformers import M2M100Config, M2M100ForConditionalGeneration, M2M100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="fr")
src_text = "Life is like a box of chocolates."
tgt_text = "La vie est comme une boîte de chocolat."
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
loss = model(**model_inputs).loss # forward pass
Generation
M2M100 uses the eos_token_id as the decoder_start_token_id for generation with the target language id
being forced as the first generated token. To force the target language id as the first generated token, pass the
forced_bos_token_id parameter to the generate method. The following example shows how to translate from
Hindi to French and from Chinese to English using the facebook/m2m100_418M checkpoint.
thon
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
Resources
Translation task guide
Summarization task guide
M2M100Config
[[autodoc]] M2M100Config
M2M100Tokenizer
[[autodoc]] M2M100Tokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
M2M100Model
[[autodoc]] M2M100Model
- forward
M2M100ForConditionalGeneration
[[autodoc]] M2M100ForConditionalGeneration
- forward
OWLv2
Overview
OWLv2 was proposed in Scaling Open-Vocabulary Object Detection by Matthias Minderer, Alexey Gritsenko, Neil Houlsby. OWLv2 scales up OWL-ViT using self-training, which uses an existing detector to generate pseudo-box annotations on image-text pairs. This results in large gains over the previous state-of-the-art for zero-shot object detection.
The abstract from the paper is the following:
Open-vocabulary object detection has benefited greatly from pretrained vision-language models, but is still limited by the amount of available detection training data. While detection training data can be expanded by using Web image-text pairs as weak supervision, this has not been done at scales comparable to image-level pretraining. Here, we scale up detection data with self-training, which uses an existing detector to generate pseudo-box annotations on image-text pairs. Major challenges in scaling self-training are the choice of label space, pseudo-annotation filtering, and training efficiency. We present the OWLv2 model and OWL-ST self-training recipe, which address these challenges. OWLv2 surpasses the performance of previous state-of-the-art open-vocabulary detectors already at comparable training scales (~10M examples). However, with OWL-ST, we can scale to over 1B examples, yielding further large improvement: With an L/14 architecture, OWL-ST improves AP on LVIS rare classes, for which the model has seen no human box annotations, from 31.2% to 44.6% (43% relative improvement). OWL-ST unlocks Web-scale training for open-world localization, similar to what has been seen for image classification and language modelling.
OWLv2 high-level overview. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage example
OWLv2 is, just like its predecessor OWL-ViT, a zero-shot text-conditioned object detection model. OWL-ViT uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection.
[Owlv2ImageProcessor] can be used to resize (or rescale) and normalize images for the model and [CLIPTokenizer] is used to encode the text. [Owlv2Processor] wraps [Owlv2ImageProcessor] and [CLIPTokenizer] into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using [Owlv2Processor] and [Owlv2ForObjectDetection].
thon
import requests
from PIL import Image
import torch
from transformers import Owlv2Processor, Owlv2ForObjectDetection
processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)
# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to Pascal VOC format (xmin, ymin, xmax, ymax)
results = processor.post_process_object_detection(outputs=outputs, target_sizes=target_sizes, threshold=0.1)
i = 0 # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]
for box, score, label in zip(boxes, scores, labels):
box = [round(i, 2) for i in box.tolist()]
print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")
# Detected a photo of a cat with confidence 0.614 at location [341.67, 17.54, 642.32, 278.51]
# Detected a photo of a cat with confidence 0.665 at location [6.75, 38.97, 326.62, 354.85]
Resources
A demo notebook on using OWLv2 for zero- and one-shot (image-guided) object detection can be found here.
Zero-shot object detection task guide
The architecture of OWLv2 is identical to OWL-ViT, however the object detection head now also includes an objectness classifier, which predicts the (query-agnostic) likelihood that a predicted box contains an object (as opposed to background). The objectness score can be used to rank or filter predictions independently of text queries.
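As a minimal sketch of that idea (reusing outputs from the snippet above; the exact output shapes are an assumption here, so adapt as needed), the objectness scores can be used to keep only the most object-like boxes:
thon
# A sketch: rank predicted boxes by the query-agnostic objectness score
objectness = outputs.objectness_logits[0].sigmoid().reshape(-1)  # one score per predicted box
top_indices = objectness.topk(5).indices                         # the 5 most object-like boxes
top_boxes = outputs.pred_boxes[0][top_indices]                   # normalized (cx, cy, w, h) boxes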
Usage of OWLv2 is identical to OWL-ViT with a new, updated image processor ([Owlv2ImageProcessor]).
Owlv2Config
[[autodoc]] Owlv2Config
- from_text_vision_configs
Owlv2TextConfig
[[autodoc]] Owlv2TextConfig
Owlv2VisionConfig
[[autodoc]] Owlv2VisionConfig
Owlv2ImageProcessor
[[autodoc]] Owlv2ImageProcessor
- preprocess
- post_process_object_detection
- post_process_image_guided_detection
Owlv2Processor
[[autodoc]] Owlv2Processor
Owlv2Model
[[autodoc]] Owlv2Model
- forward
- get_text_features
- get_image_features
Owlv2TextModel
[[autodoc]] Owlv2TextModel
- forward
Owlv2VisionModel
[[autodoc]] Owlv2VisionModel
- forward
Owlv2ForObjectDetection
[[autodoc]] Owlv2ForObjectDetection
- forward
- image_guided_detection
Funnel Transformer
Overview
The Funnel Transformer model was proposed in the paper Funnel-Transformer: Filtering out Sequential Redundancy for
Efficient Language Processing. It is a bidirectional transformer model, like
BERT, but with a pooling operation after each block of layers, a bit like in traditional convolutional neural networks
(CNN) in computer vision.
The abstract from the paper is the following:
With the success of language pretraining, it is highly desirable to develop more efficient architectures of good
scalability that can exploit the abundant unlabeled data at a lower cost. To improve the efficiency, we examine the
much-overlooked redundancy in maintaining a full-length token-level presentation, especially for tasks that only
require a single-vector presentation of the sequence. With this intuition, we propose Funnel-Transformer which
gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More
importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further
improve the model capacity. In addition, to perform token-level predictions as required by common pretraining
objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence
via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on
a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading
comprehension.
This model was contributed by sgugger. The original code can be found here.
Usage tips
Since Funnel Transformer uses pooling, the sequence length of the hidden states changes after each block of layers. This way, their length is divided by 2, which speeds up the computation of the next hidden states.
The base model therefore has a final sequence length that is a quarter of the original one. This model can be used
directly for tasks that just require a sentence summary (like sequence classification or multiple choice). For other
tasks, the full model is used; this full model has a decoder that upsamples the final hidden states to the same
sequence length as the input.
For tasks such as classification, this is not a problem, but for tasks like masked language modeling or token classification, we need a hidden state with the same sequence length as the original input. In those cases, the final hidden states are upsampled to the input sequence length and go through two additional layers. That's why there are two versions of each checkpoint. The version suffixed with “-base” contains only the three blocks, while the version without that suffix contains the three blocks and the upsampling head with its additional layers.
The Funnel Transformer checkpoints are all available with a full version and a base version. The first ones should be
used for [FunnelModel], [FunnelForPreTraining],
[FunnelForMaskedLM], [FunnelForTokenClassification] and
[FunnelForQuestionAnswering]. The second ones should be used for
[FunnelBaseModel], [FunnelForSequenceClassification] and
[FunnelForMultipleChoice].
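As a minimal sketch (assuming the funnel-transformer/small and funnel-transformer/small-base checkpoints), the difference between the two variants can be seen by comparing the sequence length of their hidden states:
thon
import torch
from transformers import FunnelTokenizer, FunnelBaseModel, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

# "-base" checkpoint: only the three blocks, hidden states stay pooled (shorter)
base_model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
# full checkpoint: the three blocks plus the upsampling head, hidden states match the input length
full_model = FunnelModel.from_pretrained("funnel-transformer/small")

with torch.no_grad():
    pooled_states = base_model(**inputs).last_hidden_state
    full_states = full_model(**inputs).last_hidden_state

print(pooled_states.shape, full_states.shape)  # the first sequence length is roughly a quarter of the second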
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
FunnelConfig
[[autodoc]] FunnelConfig
FunnelTokenizer
[[autodoc]] FunnelTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
FunnelTokenizerFast
[[autodoc]] FunnelTokenizerFast
Funnel specific outputs
[[autodoc]] models.funnel.modeling_funnel.FunnelForPreTrainingOutput
[[autodoc]] models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput
FunnelBaseModel
[[autodoc]] FunnelBaseModel
- forward
FunnelModel
[[autodoc]] FunnelModel
- forward
FunnelModelForPreTraining
[[autodoc]] FunnelForPreTraining
- forward
FunnelForMaskedLM
[[autodoc]] FunnelForMaskedLM
- forward
FunnelForSequenceClassification
[[autodoc]] FunnelForSequenceClassification
- forward
FunnelForMultipleChoice
[[autodoc]] FunnelForMultipleChoice
- forward
FunnelForTokenClassification
[[autodoc]] FunnelForTokenClassification
- forward
FunnelForQuestionAnswering
[[autodoc]] FunnelForQuestionAnswering
- forward
TFFunnelBaseModel
[[autodoc]] TFFunnelBaseModel
- call
TFFunnelModel
[[autodoc]] TFFunnelModel
- call
TFFunnelModelForPreTraining
[[autodoc]] TFFunnelForPreTraining
- call
TFFunnelForMaskedLM
[[autodoc]] TFFunnelForMaskedLM
- call
TFFunnelForSequenceClassification
[[autodoc]] TFFunnelForSequenceClassification
- call
TFFunnelForMultipleChoice
[[autodoc]] TFFunnelForMultipleChoice
- call
TFFunnelForTokenClassification
[[autodoc]] TFFunnelForTokenClassification
- call
TFFunnelForQuestionAnswering
[[autodoc]] TFFunnelForQuestionAnswering
- call
Llama2
Overview
The Llama2 model was proposed in Llama 2: Open Foundation and Fine-Tuned Chat Models by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom. It is a collection of foundation language models ranging from 7B to 70B parameters, with checkpoints fine-tuned for chat applications!
The abstract from the paper is the following:
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
Checkout all Llama2 model checkpoints here.
This model was contributed by Arthur Zucker with contributions from Lysandre Debut. The code of the implementation in Hugging Face is based on GPT-NeoX here. The original code of the authors can be found here.
Usage tips
The Llama2 models were trained using bfloat16, but the original inference uses float16. The checkpoints uploaded on the Hub use torch_dtype = 'float16', which will be
used by the AutoModel API to cast the checkpoints from torch.float32 to torch.float16.
The dtype of the online weights is mostly irrelevant unless you are using torch_dtype="auto" when initializing a model with model = AutoModelForCausalLM.from_pretrained("path", torch_dtype="auto"). The reason is that the model will first be downloaded (using the dtype of the checkpoints online), then cast to the default dtype of torch (torch.float32), and finally, if a torch_dtype is provided in the config, that dtype will be used.
Training the model in float16 is not recommended and is known to produce nan; as such, the model should be trained in bfloat16.
Tips:
Weights for the Llama2 models can be obtained by filling out this form
The architecture is very similar to the first Llama, with the addition of Grouped Query Attention (GQA) following this paper
Setting config.pretraining_tp to a value different from 1 will activate the more accurate but slower computation of the linear layers, which should better match the original logits.
The original model uses pad_id = -1, which means that there is no padding token. We can't use the same logic; make sure to add a padding token using tokenizer.add_special_tokens({"pad_token":"<pad>"}) and resize the token embeddings accordingly. You should also set model.config.pad_token_id (see the sketch after these tips). The embed_tokens layer of the model is initialized with self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx), which makes sure that encoding the padding token outputs zeros, so passing it when initializing is recommended.
After filling out the form and gaining access to the model checkpoints, you should be able to use the already converted checkpoints. Otherwise, if you are converting your own model, feel free to use the conversion script. The script can be called with the following (example) command:
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
After conversion, the model and tokenizer can be loaded via:
thon
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even though the biggest versions
come in several checkpoints, they each contain a part of each weight of the model, so we need to load them all in RAM). For the 70B model, that amounts to roughly 140GB of RAM.
The LLaMA tokenizer is a BPE model based on sentencepiece. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
When using Flash Attention 2 via attn_implementation="flash_attention_2", don't pass torch_dtype to the from_pretrained class method and use Automatic Mixed-Precision training. When using Trainer, simply set either fp16 or bf16 to True. Otherwise, make sure you are using torch.autocast. This is required because Flash Attention only supports the fp16 and bf16 data types.
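Here is a minimal sketch of the padding-token tip above (the meta-llama/Llama-2-7b-hf checkpoint is gated, so this assumes you have been granted access; it is an illustration rather than the official recipe):
thon
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# add a dedicated padding token, resize the embeddings and record the id in the config
tokenizer.add_special_tokens({"pad_token": "<pad>"})
model.resize_token_embeddings(len(tokenizer))
model.config.pad_token_id = tokenizer.pad_token_id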
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaMA2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Llama 2 is here - get it on Hugging Face, a blog post about Llama 2 and how to use it with 🤗 Transformers and 🤗 PEFT.
LLaMA 2 - Every Resource you need, a compilation of relevant resources to learn about LLaMA 2 and how to get started quickly.
A notebook on how to fine-tune Llama 2 in Google Colab using QLoRA and 4-bit precision. 🌎
A notebook on how to fine-tune the "Llama-v2-7b-guanaco" model with 4-bit QLoRA and generate Q&A datasets from PDFs. 🌎
A notebook on how to fine-tune the Llama 2 model with QLoRa, TRL, and Korean text classification dataset. 🌎🇰🇷
⚗️ Optimization
- Fine-tune Llama 2 with DPO, a guide to using the TRL library's DPO method to fine tune Llama 2 on a specific dataset.
- Extended Guide: Instruction-tune Llama 2, a guide to training Llama 2 to generate instructions from inputs, transforming the model from instruction-following to instruction-giving.
- A notebook on how to fine-tune the Llama 2 model on a personal computer using QLoRa and TRL. 🌎
⚡️ Inference
- A notebook on how to quantize the Llama 2 model using GPTQ from the AutoGPTQ library. 🌎
- A notebook on how to run the Llama 2 Chat Model with 4-bit quantization on a local computer or Google Colab. 🌎
🚀 Deploy
- Fine-tune LLaMA 2 (7-70B) on Amazon SageMaker, a complete guide from setup to QLoRA fine-tuning and deployment on Amazon SageMaker.
- Deploy Llama 2 7B/13B/70B on Amazon SageMaker, a guide on using Hugging Face's LLM DLC container for secure and scalable deployment.
LlamaConfig
[[autodoc]] LlamaConfig
LlamaTokenizer
[[autodoc]] LlamaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
LlamaTokenizerFast
[[autodoc]] LlamaTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary
LlamaModel
[[autodoc]] LlamaModel
- forward
LlamaForCausalLM
[[autodoc]] LlamaForCausalLM
- forward
LlamaForSequenceClassification
[[autodoc]] LlamaForSequenceClassification
- forward
M-CTC-T
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0.
Overview
The M-CTC-T model was proposed in Pseudo-Labeling For Massively Multilingual Speech Recognition by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16kHz audio signal.
The abstract from the paper is the following:
Semi-supervised learning through pseudo-labeling has become a staple of state-of-the-art monolingual
speech recognition systems. In this work, we extend pseudo-labeling to massively multilingual speech
recognition with 60 languages. We propose a simple pseudo-labeling recipe that works well even
with low-resource languages: train a supervised multilingual model, fine-tune it with semi-supervised
learning on a target language, generate pseudo-labels for that language, and train a final model using
pseudo-labels for all languages, either from scratch or by fine-tuning. Experiments on the labeled
Common Voice and unlabeled VoxPopuli datasets show that our recipe can yield a model with better
performance for many languages that also transfers well to LibriSpeech.
This model was contributed by cwkeam. The original code can be found here.
Usage tips
The PyTorch version of this model is only available in torch 1.9 and higher.
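As a minimal transcription sketch (assuming the speechbrain/m-ctc-t-large checkpoint and the pinned transformers==4.30.0 mentioned above):
thon
import torch
from datasets import load_dataset
from transformers import MCTCTForCTC, MCTCTProcessor

model = MCTCTForCTC.from_pretrained("speechbrain/m-ctc-t-large")
processor = MCTCTProcessor.from_pretrained("speechbrain/m-ctc-t-large")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_features = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt").input_features

with torch.no_grad():
    logits = model(input_features).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)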
Resources
Automatic speech recognition task guide
MCTCTConfig
[[autodoc]] MCTCTConfig
MCTCTFeatureExtractor
[[autodoc]] MCTCTFeatureExtractor
- call
MCTCTProcessor
[[autodoc]] MCTCTProcessor
- call
- from_pretrained
- save_pretrained
- batch_decode
- decode
MCTCTModel
[[autodoc]] MCTCTModel
- forward
MCTCTForCTC
[[autodoc]] MCTCTForCTC
- forward
Blenderbot Small
Note that [BlenderbotSmallModel] and
[BlenderbotSmallForConditionalGeneration] are only used in combination with the checkpoint
facebook/blenderbot-90M. Larger Blenderbot checkpoints should
instead be used with [BlenderbotModel] and
[BlenderbotForConditionalGeneration]
Overview
The Blender chatbot model was proposed in Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu,
Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020.
The abstract of the paper is the following:
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
scaling neural models in the number of parameters and the size of the data they are trained on gives improved results,
we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent
persona. We show that large scale models can learn these skills when given appropriate training data and choice of
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn
dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing
failure cases of our models.
This model was contributed by patrickvonplaten. The authors' code can be
found here.
Usage tips
Blenderbot Small is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
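A minimal generation sketch (the facebook/blenderbot_small-90M checkpoint name is an assumption for the 90M model mentioned above):
thon
from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

model = BlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")
tokenizer = BlenderbotSmallTokenizer.from_pretrained("facebook/blenderbot_small-90M")

utterance = "My friends are cool but they eat too many carbs."
inputs = tokenizer([utterance], return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))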
Resources
Causal language modeling task guide
Translation task guide
Summarization task guide
BlenderbotSmallConfig
[[autodoc]] BlenderbotSmallConfig
BlenderbotSmallTokenizer
[[autodoc]] BlenderbotSmallTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
BlenderbotSmallTokenizerFast
[[autodoc]] BlenderbotSmallTokenizerFast
BlenderbotSmallModel
[[autodoc]] BlenderbotSmallModel
- forward
BlenderbotSmallForConditionalGeneration
[[autodoc]] BlenderbotSmallForConditionalGeneration
- forward
BlenderbotSmallForCausalLM
[[autodoc]] BlenderbotSmallForCausalLM
- forward
TFBlenderbotSmallModel
[[autodoc]] TFBlenderbotSmallModel
- call
TFBlenderbotSmallForConditionalGeneration
[[autodoc]] TFBlenderbotSmallForConditionalGeneration
- call
FlaxBlenderbotSmallModel
[[autodoc]] FlaxBlenderbotSmallModel
- call
- encode
- decode
FlaxBlenderbotSmallForConditionalGeneration
[[autodoc]] FlaxBlenderbotSmallForConditionalGeneration
- call
- encode
- decode
ViTMatte
Overview
The ViTMatte model was proposed in Boosting Image Matting with Pretrained Plain Vision Transformers by Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang.
ViTMatte leverages plain Vision Transformers for the task of image matting, which is the process of accurately estimating the foreground object in images and videos.
The abstract from the paper is the following:
Recently, plain vision Transformers (ViTs) have shown impressive performance on various computer vision tasks, thanks to their strong modeling capacity and large-scale pretraining. However, they have not yet conquered the problem of image matting. We hypothesize that image matting could also be boosted by ViTs and present a new efficient and robust ViT-based matting system, named ViTMatte. Our method utilizes (i) a hybrid attention mechanism combined with a convolution neck to help ViTs achieve an excellent performance-computation trade-off in matting tasks. (ii) Additionally, we introduce the detail capture module, which just consists of simple lightweight convolutions to complement the detailed information required by matting. To the best of our knowledge, ViTMatte is the first work to unleash the potential of ViT on image matting with concise adaptation. It inherits many superior properties from ViT to matting, including various pretraining strategies, concise architecture design, and flexible inference strategies. We evaluate ViTMatte on Composition-1k and Distinctions-646, the most commonly used benchmark for image matting, our method achieves state-of-the-art performance and outperforms prior matting works by a large margin.
This model was contributed by nielsr.
The original code can be found here.
ViTMatte high-level overview. Taken from the original paper.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViTMatte.
A demo notebook regarding inference with [VitMatteForImageMatting], including background replacement, can be found here.
The model expects both the image and trimap (concatenated) as input. Use [VitMatteImageProcessor] for this purpose.
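A minimal inference sketch (assuming the hustvl/vitmatte-small-composition-1k checkpoint and locally available image and trimap files of the same size):
thon
import torch
from PIL import Image
from transformers import VitMatteImageProcessor, VitMatteForImageMatting

processor = VitMatteImageProcessor.from_pretrained("hustvl/vitmatte-small-composition-1k")
model = VitMatteForImageMatting.from_pretrained("hustvl/vitmatte-small-composition-1k")

# hypothetical local files: an RGB image and its trimap (foreground / background / unknown)
image = Image.open("image.png").convert("RGB")
trimap = Image.open("trimap.png").convert("L")

inputs = processor(images=image, trimaps=trimap, return_tensors="pt")
with torch.no_grad():
    alphas = model(**inputs).alphas  # predicted alpha matte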
VitMatteConfig
[[autodoc]] VitMatteConfig
VitMatteImageProcessor
[[autodoc]] VitMatteImageProcessor
- preprocess
VitMatteForImageMatting
[[autodoc]] VitMatteForImageMatting
- forward
BLOOM
Overview
The BLOOM model has been proposed with its various versions through the BigScience Workshop. BigScience is inspired by other open science initiatives where researchers have pooled their time and resources to collectively achieve a higher impact.
The architecture of BLOOM is essentially similar to GPT3 (auto-regressive model for next token prediction), but has been trained on 46 different languages and 13 programming languages.
Several smaller versions of the models have been trained on the same dataset. BLOOM is available in the following versions:
bloom-560m
bloom-1b1
bloom-1b7
bloom-3b
bloom-7b1
bloom (176B parameters)
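A short text-generation sketch using the smallest checkpoint (not part of the original description):
thon
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))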
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLOOM. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
[BloomForCausalLM] is supported by this causal language modeling example script and notebook.
See also:
- Causal language modeling task guide
- Text classification task guide
- Token classification task guide
- Question answering task guide
⚡️ Inference
- A blog on Optimization story: Bloom inference.
- A blog on Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate.
⚙️ Training
- A blog on The Technology Behind BLOOM Training.
BloomConfig
[[autodoc]] BloomConfig
- all
BloomTokenizerFast
[[autodoc]] BloomTokenizerFast
- all
BloomModel
[[autodoc]] BloomModel
- forward
BloomForCausalLM
[[autodoc]] BloomForCausalLM
- forward
BloomForSequenceClassification
[[autodoc]] BloomForSequenceClassification
- forward
BloomForTokenClassification
[[autodoc]] BloomForTokenClassification
- forward
BloomForQuestionAnswering
[[autodoc]] BloomForQuestionAnswering
- forward
FlaxBloomModel
[[autodoc]] FlaxBloomModel
- call
FlaxBloomForCausalLM
[[autodoc]] FlaxBloomForCausalLM
- call
Speech2Text2
Overview
The Speech2Text2 model is used together with Wav2Vec2 for Speech Translation models proposed in
Large-Scale Self- and Semi-Supervised Learning for Speech Translation by
Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
Speech2Text2 is a decoder-only transformer model that can be used with any speech encoder-only model, such as
Wav2Vec2 or HuBERT, for Speech-to-Text tasks. Please refer to the
SpeechEncoderDecoder class on how to combine Speech2Text2 with any speech encoder-only
model.
This model was contributed by Patrick von Platen.
The original code can be found here.
Usage tips
Speech2Text2 achieves state-of-the-art results on the CoVoST Speech Translation dataset. For more information, see
the official models.
Speech2Text2 is always used within the SpeechEncoderDecoder framework.
Speech2Text2's tokenizer is based on fastBPE.
Inference
Speech2Text2's [SpeechEncoderDecoderModel] model accepts raw waveform input values from speech and
makes use of [~generation.GenerationMixin.generate] to translate the input speech
autoregressively to the target language.
The [Wav2Vec2FeatureExtractor] class is responsible for preprocessing the input speech and
[Speech2Text2Tokenizer] decodes the generated target tokens to the target string. The
[Speech2Text2Processor] wraps [Wav2Vec2FeatureExtractor] and
[Speech2Text2Tokenizer] into a single instance to both extract the input features and decode the
predicted token ids.
Step-by-step Speech Translation
thon
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(inputs=inputs["input_values"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids)
Speech Translation via Pipelines
The automatic speech recognition pipeline can also be used to translate speech in just a couple of lines of code
thon
from datasets import load_dataset
from transformers import pipeline
librispeech_en = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline(
"automatic-speech-recognition",
model="facebook/s2t-wav2vec2-large-en-de",
feature_extractor="facebook/s2t-wav2vec2-large-en-de",
)
translation_de = asr(librispeech_en[0]["file"])
See model hub to look for Speech2Text2 checkpoints.
Resources
Causal language modeling task guide
Speech2Text2Config
[[autodoc]] Speech2Text2Config
Speech2Text2Tokenizer
[[autodoc]] Speech2Text2Tokenizer
- batch_decode
- decode
- save_vocabulary
Speech2Text2Processor
[[autodoc]] Speech2Text2Processor
- call
- from_pretrained
- save_pretrained
- batch_decode
- decode
Speech2Text2ForCausalLM
[[autodoc]] Speech2Text2ForCausalLM
- forward
LayoutXLM
Overview
LayoutXLM was proposed in LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha
Zhang, Furu Wei. It's a multilingual extension of the LayoutLMv2 model trained
on 53 languages.
The abstract from the paper is the following:
Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually-rich document
understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. In
this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to
bridge the language barriers for visually-rich document understanding. To accurately evaluate LayoutXLM, we also
introduce a multilingual form understanding benchmark dataset named XFUN, which includes form understanding samples in
7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese), and key-value pairs are manually labeled
for each language. Experiment results show that the LayoutXLM model has significantly outperformed the existing SOTA
cross-lingual pre-trained models on the XFUN dataset.
This model was contributed by nielsr. The original code can be found here.
Usage tips and examples
One can directly plug in the weights of LayoutXLM into a LayoutLMv2 model, like so:
thon
from transformers import LayoutLMv2Model
model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")
Note that LayoutXLM has its own tokenizer, based on
[LayoutXLMTokenizer]/[LayoutXLMTokenizerFast]. You can initialize it as
follows:
thon
from transformers import LayoutXLMTokenizer
tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")
Similar to LayoutLMv2, you can use [LayoutXLMProcessor] (which internally applies
[LayoutLMv2ImageProcessor] and
[LayoutXLMTokenizer]/[LayoutXLMTokenizerFast] in sequence) to prepare all
data for the model.
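As a minimal sketch (the document image path is hypothetical; by default the image processor runs OCR via Tesseract to obtain the words and boxes):
thon
from PIL import Image
from transformers import LayoutXLMProcessor

processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base")

image = Image.open("document.png").convert("RGB")  # hypothetical local scan
encoding = processor(image, return_tensors="pt")
print(encoding.keys())  # input_ids, attention_mask, bbox, image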
As LayoutXLM's architecture is equivalent to that of LayoutLMv2, one can refer to LayoutLMv2's documentation page for all tips, code examples and notebooks.
LayoutXLMTokenizer
[[autodoc]] LayoutXLMTokenizer
- call
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
LayoutXLMTokenizerFast
[[autodoc]] LayoutXLMTokenizerFast
- call
LayoutXLMProcessor
[[autodoc]] LayoutXLMProcessor
- call
MegatronBERT
Overview
The MegatronBERT model was proposed in Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
Parallelism by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
Jared Casper and Bryan Catanzaro.
The abstract from the paper is the following:
Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
Natural Language Processing applications. However, very large models can be quite difficult to train due to memory
constraints. In this work, we present our techniques for training very large transformer models and implement a simple,
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model
parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline
that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in
BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy
of 89.4%).
This model was contributed by jdemouth. The original code can be found here.
That repository contains a multi-GPU and multi-node implementation of the Megatron Language models. In particular,
it contains a hybrid model parallel approach using "tensor parallel" and "pipeline parallel" techniques.
Usage tips
We have provided pretrained BERT-345M checkpoints
for use in evaluating or fine-tuning on downstream tasks.
To access these checkpoints, first sign up for and set up the NVIDIA GPU Cloud (NGC)
Registry CLI. Further documentation for downloading models can be found in the NGC documentation.
Alternatively, you can directly download the checkpoints using:
BERT-345M-uncased:
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip
-O megatron_bert_345m_v0_1_uncased.zip
BERT-345M-cased:
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O
megatron_bert_345m_v0_1_cased.zip
Once you have obtained the checkpoints from NVIDIA GPU Cloud (NGC), you have to convert them to a format that will
easily be loaded by Hugging Face Transformers and our port of the BERT code.
The following commands allow you to do the conversion. We assume that the folder models/megatron_bert contains
megatron_bert_345m_v0_1_{cased, uncased}.zip and that the commands are run from inside that folder:
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_uncased.zip
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_cased.zip
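Once converted, the checkpoint can be loaded like any local Transformers checkpoint; the path below is a hypothetical example of where the conversion script may have written its output:
thon
from transformers import MegatronBertForMaskedLM

# hypothetical output folder; point this to wherever the conversion script saved config.json and the weights
model = MegatronBertForMaskedLM.from_pretrained("./megatron_bert_345m_v0_1_uncased")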
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
MegatronBertConfig
[[autodoc]] MegatronBertConfig
MegatronBertModel
[[autodoc]] MegatronBertModel
- forward
MegatronBertForMaskedLM
[[autodoc]] MegatronBertForMaskedLM
- forward
MegatronBertForCausalLM
[[autodoc]] MegatronBertForCausalLM
- forward
MegatronBertForNextSentencePrediction
[[autodoc]] MegatronBertForNextSentencePrediction
- forward
MegatronBertForPreTraining
[[autodoc]] MegatronBertForPreTraining
- forward
MegatronBertForSequenceClassification
[[autodoc]] MegatronBertForSequenceClassification
- forward
MegatronBertForMultipleChoice
[[autodoc]] MegatronBertForMultipleChoice
- forward
MegatronBertForTokenClassification
[[autodoc]] MegatronBertForTokenClassification
- forward
MegatronBertForQuestionAnswering
[[autodoc]] MegatronBertForQuestionAnswering
- forward
XLM-ProphetNet
DISCLAIMER: If you see something strange, file a Github Issue and assign
@patrickvonplaten
Overview
The XLM-ProphetNet model was proposed in ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training, by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei
Zhang, Ming Zhou on 13 Jan, 2020.
XLM-ProphetNet is an encoder-decoder model and can predict n future tokens for "ngram" language modeling instead of
just the next token. Its architecture is identical to ProphetNet, but the model was trained on the multilingual
"wiki100" Wikipedia dump. XLM-ProphetNet's model architecture and pretraining objective are the same as ProphetNet's, but XLM-ProphetNet was pre-trained on the cross-lingual dataset XGLUE.
The abstract from the paper is the following:
In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel
self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of
the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by
n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time
step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent
overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale
dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for
abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new
state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.
The Authors' code can be found here.
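As a minimal loading sketch (assuming the microsoft/xprophetnet-large-wiki100-cased pretrained checkpoint):
thon
from transformers import XLMProphetNetForConditionalGeneration, XLMProphetNetTokenizer

tokenizer = XLMProphetNetTokenizer.from_pretrained("microsoft/xprophetnet-large-wiki100-cased")
model = XLMProphetNetForConditionalGeneration.from_pretrained("microsoft/xprophetnet-large-wiki100-cased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, decoder_input_ids=inputs["input_ids"])  # a plain forward pass; fine-tune before using for generation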
Resources
Causal language modeling task guide
Translation task guide
Summarization task guide
XLMProphetNetConfig
[[autodoc]] XLMProphetNetConfig
XLMProphetNetTokenizer
[[autodoc]] XLMProphetNetTokenizer
XLMProphetNetModel
[[autodoc]] XLMProphetNetModel
XLMProphetNetEncoder
[[autodoc]] XLMProphetNetEncoder
XLMProphetNetDecoder
[[autodoc]] XLMProphetNetDecoder
XLMProphetNetForConditionalGeneration
[[autodoc]] XLMProphetNetForConditionalGeneration
XLMProphetNetForCausalLM
[[autodoc]] XLMProphetNetForCausalLM
Open-Llama
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.31.0.
You can do so by running the following command: pip install -U transformers==4.31.0.
This model differs from the OpenLLaMA models on the Hugging Face Hub, which primarily use the LLaMA architecture.
Overview
The Open-Llama model was proposed in the open source Open-Llama project by community developer s-JoL.
The model is mainly based on LLaMA with some modifications, incorporating memory-efficient attention from Xformers, stable embedding from Bloom, and shared input-output embedding from PaLM.
The model is pre-trained on both Chinese and English, which gives it better performance on Chinese-language tasks.
This model was contributed by s-JoL.
The original code was released on GitHub by s-JoL, but is now removed.
OpenLlamaConfig
[[autodoc]] OpenLlamaConfig
OpenLlamaModel
[[autodoc]] OpenLlamaModel
- forward
OpenLlamaForCausalLM
[[autodoc]] OpenLlamaForCausalLM
- forward
OpenLlamaForSequenceClassification
[[autodoc]] OpenLlamaForSequenceClassification
- forward
Phi
Overview
The Phi-1 model was proposed in Textbooks Are All You Need by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li.
The Phi-1.5 model was proposed in Textbooks Are All You Need II: phi-1.5 technical report by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee.
Summary
In the Phi-1 and Phi-1.5 papers, the authors showed how important the quality of the training data is relative to the model size.
They selected high-quality "textbook" data alongside synthetically generated data for training their small Transformer-based
model Phi-1 with 1.3B parameters. Despite this small scale, phi-1 attains a pass@1 accuracy of 50.6% on HumanEval and 55.5% on MBPP.
They follow the same strategy for Phi-1.5 and created another 1.3B parameter model with performance on natural language tasks comparable
to models 5x larger, and surpassing most non-frontier LLMs. Phi-1.5 exhibits many of the traits of much larger LLMs such as the ability
to “think step by step” or perform some rudimentary in-context learning.
With these two experiments the authors showed the large impact that the quality of the training data has when training machine learning models.
The abstract from the Phi-1 paper is the following:
We introduce phi-1, a new large language model for code, with significantly smaller size than
competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on
8 A100s, using a selection of “textbook quality” data from the web (6B tokens) and synthetically
generated textbooks and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays surprising emergent
properties compared to phi-1-base, our model before our finetuning stage on a dataset of coding
exercises, and phi-1-small, a smaller model with 350M parameters trained with the same pipeline as
phi-1 that still achieves 45% on HumanEval.
The abstract from the Phi-1.5 paper is the following:
We continue the investigation into the power of smaller Transformer-based language models as
initiated by TinyStories – a 10 million parameter model that can produce coherent English – and
the follow-up work on phi-1, a 1.3 billion parameter model with Python coding performance close
to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to
generate “textbook quality” data as a way to enhance the learning process compared to traditional
web data. We follow the “Textbooks Are All You Need” approach, focusing this time on common
sense reasoning in natural language, and create a new 1.3 billion parameter model named phi-1.5,
with performance on natural language tasks comparable to models 5x larger, and surpassing most
non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic
coding. More generally, phi-1.5 exhibits many of the traits of much larger LLMs, both good –such
as the ability to “think step by step” or perform some rudimentary in-context learning– and bad,
including hallucinations and the potential for toxic and biased generations –encouragingly though, we
are seeing improvement on that front thanks to the absence of web data. We open-source phi-1.5 to
promote further research on these urgent topics.
This model was contributed by Susnato Dhar.
The original code for Phi-1, Phi-1.5 and Phi-2 can be found here, here and here, respectively.
Usage tips
This model is quite similar to Llama with the main difference in [PhiDecoderLayer], where they used [PhiAttention] and [PhiMLP] layers in parallel configuration.
The tokenizer used for this model is identical to the [CodeGenTokenizer].
How to use Phi-2
Phi-2 has been integrated into the development version (4.37.0.dev) of transformers. Until the official version is released through pip, ensure that you are doing one of the following:
When loading the model, ensure that trust_remote_code=True is passed as an argument to the from_pretrained() function.
Update your local transformers to the development version: pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers. The previous command is an alternative to cloning and installing from the source.
thon
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
inputs = tokenizer('Can you help me write a formal email to a potential business partner proposing a joint venture?', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=30)
text = tokenizer.batch_decode(outputs)[0]
print(text)
# 'Can you help me write a formal email to a potential business partner proposing a joint venture?\nInput: Company A: ABC Inc.\nCompany B: XYZ Ltd.\nJoint Venture: A new online platform for e-commerce'
Example:
thon
from transformers import PhiForCausalLM, AutoTokenizer
# define the model and tokenizer.
model = PhiForCausalLM.from_pretrained("microsoft/phi-1_5")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
# feel free to change the prompt to your liking.
prompt = "If I were an AI that had just achieved"
# apply the tokenizer.
tokens = tokenizer(prompt, return_tensors="pt")
# use the model to generate new tokens.
generated_output = model.generate(**tokens, use_cache=True, max_new_tokens=10)
tokenizer.batch_decode(generated_output)[0]
# 'If I were an AI that had just achieved a breakthrough in machine learning, I would be thrilled'
Combining Phi and Flash Attention 2
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.
pip install -U flash-attn --no-build-isolation
Also make sure that you have hardware that is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention 2, refer to the snippet below:
thon
import torch
from transformers import PhiForCausalLM, AutoTokenizer
# define the model and tokenizer and push the model and tokens to the GPU.
model = PhiForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
# feel free to change the prompt to your liking.
prompt = "If I were an AI that had just achieved"
# apply the tokenizer.
tokens = tokenizer(prompt, return_tensors="pt").to("cuda")
# use the model to generate new tokens.
generated_output = model.generate(**tokens, use_cache=True, max_new_tokens=10)
tokenizer.batch_decode(generated_output)[0]
# 'If I were an AI that had just achieved a breakthrough in machine learning, I would be thrilled'
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using microsoft/phi-1 checkpoint and the Flash Attention 2 version of the model using a sequence length of 2048.
PhiConfig
[[autodoc]] PhiConfig
PhiModel
[[autodoc]] PhiModel
- forward
PhiForCausalLM
[[autodoc]] PhiForCausalLM
- forward
- generate
PhiForSequenceClassification
[[autodoc]] PhiForSequenceClassification
- forward
PhiForTokenClassification
[[autodoc]] PhiForTokenClassification
- forward
Auto Classes
In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you
are supplying to the from_pretrained() method. AutoClasses are here to do this job for you so that you
automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary.
Instantiating one of [AutoConfig], [AutoModel], and
[AutoTokenizer] will directly create a class of the relevant architecture. For instance
python
model = AutoModel.from_pretrained("google-bert/bert-base-cased")
will create a model that is an instance of [BertModel].
There is one class of AutoModel for each task, and for each backend (PyTorch, TensorFlow, or Flax).
Extending the Auto Classes
Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a
custom class of model NewModel, make sure you have a NewModelConfig then you can add those to the auto
classes like this:
thon
from transformers import AutoConfig, AutoModel
AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)
You will then be able to use the auto classes like you would usually do!
If your NewModelConfig is a subclass of [~transformers.PretrainedConfig], make sure its
model_type attribute is set to the same key you use when registering the config (here "new-model").
Likewise, if your NewModel is a subclass of [PreTrainedModel], make sure its
config_class attribute is set to the same class you use when registering the model (here
NewModelConfig).
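As a small follow-up sketch (continuing the hypothetical NewModel/NewModelConfig from the snippet above), the registered classes then resolve like any built-in architecture:
thon
# continues the registration snippet above; NewModel and NewModelConfig are the hypothetical custom classes
config = NewModelConfig()
model = AutoModel.from_config(config)              # returns an instance of NewModel
model = AutoModel.from_pretrained("path/to/dir")   # or loads a saved NewModel checkpoint from a local folder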
AutoConfig
[[autodoc]] AutoConfig
AutoTokenizer
[[autodoc]] AutoTokenizer
AutoFeatureExtractor
[[autodoc]] AutoFeatureExtractor
AutoImageProcessor
[[autodoc]] AutoImageProcessor
AutoProcessor
[[autodoc]] AutoProcessor
Generic model classes
The following auto classes are available for instantiating a base model class without a specific head.
AutoModel
[[autodoc]] AutoModel
TFAutoModel
[[autodoc]] TFAutoModel
FlaxAutoModel
[[autodoc]] FlaxAutoModel
Generic pretraining classes
The following auto classes are available for instantiating a model with a pretraining head.
AutoModelForPreTraining
[[autodoc]] AutoModelForPreTraining
TFAutoModelForPreTraining
[[autodoc]] TFAutoModelForPreTraining
FlaxAutoModelForPreTraining
[[autodoc]] FlaxAutoModelForPreTraining
Natural Language Processing
The following auto classes are available for the following natural language processing tasks.
AutoModelForCausalLM
[[autodoc]] AutoModelForCausalLM
TFAutoModelForCausalLM
[[autodoc]] TFAutoModelForCausalLM
FlaxAutoModelForCausalLM
[[autodoc]] FlaxAutoModelForCausalLM
AutoModelForMaskedLM
[[autodoc]] AutoModelForMaskedLM
TFAutoModelForMaskedLM
[[autodoc]] TFAutoModelForMaskedLM
FlaxAutoModelForMaskedLM
[[autodoc]] FlaxAutoModelForMaskedLM
AutoModelForMaskGeneration
[[autodoc]] AutoModelForMaskGeneration
TFAutoModelForMaskGeneration
[[autodoc]] TFAutoModelForMaskGeneration
AutoModelForSeq2SeqLM
[[autodoc]] AutoModelForSeq2SeqLM
TFAutoModelForSeq2SeqLM
[[autodoc]] TFAutoModelForSeq2SeqLM
FlaxAutoModelForSeq2SeqLM
[[autodoc]] FlaxAutoModelForSeq2SeqLM
AutoModelForSequenceClassification
[[autodoc]] AutoModelForSequenceClassification
TFAutoModelForSequenceClassification
[[autodoc]] TFAutoModelForSequenceClassification
FlaxAutoModelForSequenceClassification
[[autodoc]] FlaxAutoModelForSequenceClassification
AutoModelForMultipleChoice
[[autodoc]] AutoModelForMultipleChoice
TFAutoModelForMultipleChoice
[[autodoc]] TFAutoModelForMultipleChoice
FlaxAutoModelForMultipleChoice
[[autodoc]] FlaxAutoModelForMultipleChoice
AutoModelForNextSentencePrediction
[[autodoc]] AutoModelForNextSentencePrediction
TFAutoModelForNextSentencePrediction
[[autodoc]] TFAutoModelForNextSentencePrediction
FlaxAutoModelForNextSentencePrediction
[[autodoc]] FlaxAutoModelForNextSentencePrediction
AutoModelForTokenClassification
[[autodoc]] AutoModelForTokenClassification
TFAutoModelForTokenClassification
[[autodoc]] TFAutoModelForTokenClassification
FlaxAutoModelForTokenClassification
[[autodoc]] FlaxAutoModelForTokenClassification
AutoModelForQuestionAnswering
[[autodoc]] AutoModelForQuestionAnswering
TFAutoModelForQuestionAnswering
[[autodoc]] TFAutoModelForQuestionAnswering
FlaxAutoModelForQuestionAnswering
[[autodoc]] FlaxAutoModelForQuestionAnswering
AutoModelForTextEncoding
[[autodoc]] AutoModelForTextEncoding
TFAutoModelForTextEncoding
[[autodoc]] TFAutoModelForTextEncoding
Computer vision
The following auto classes are available for the following computer vision tasks.
AutoModelForDepthEstimation
[[autodoc]] AutoModelForDepthEstimation
AutoModelForImageClassification
[[autodoc]] AutoModelForImageClassification
TFAutoModelForImageClassification
[[autodoc]] TFAutoModelForImageClassification
FlaxAutoModelForImageClassification
[[autodoc]] FlaxAutoModelForImageClassification
AutoModelForVideoClassification
[[autodoc]] AutoModelForVideoClassification
AutoModelForMaskedImageModeling
[[autodoc]] AutoModelForMaskedImageModeling
TFAutoModelForMaskedImageModeling
[[autodoc]] TFAutoModelForMaskedImageModeling
AutoModelForObjectDetection
[[autodoc]] AutoModelForObjectDetection
AutoModelForImageSegmentation
[[autodoc]] AutoModelForImageSegmentation
AutoModelForImageToImage
[[autodoc]] AutoModelForImageToImage
AutoModelForSemanticSegmentation
[[autodoc]] AutoModelForSemanticSegmentation
TFAutoModelForSemanticSegmentation
[[autodoc]] TFAutoModelForSemanticSegmentation
AutoModelForInstanceSegmentation
[[autodoc]] AutoModelForInstanceSegmentation
AutoModelForUniversalSegmentation
[[autodoc]] AutoModelForUniversalSegmentation
AutoModelForZeroShotImageClassification
[[autodoc]] AutoModelForZeroShotImageClassification
TFAutoModelForZeroShotImageClassification
[[autodoc]] TFAutoModelForZeroShotImageClassification
AutoModelForZeroShotObjectDetection
[[autodoc]] AutoModelForZeroShotObjectDetection
Audio
The following auto classes are available for the following audio tasks.
AutoModelForAudioClassification
[[autodoc]] AutoModelForAudioClassification
AutoModelForAudioFrameClassification
[[autodoc]] AutoModelForAudioFrameClassification
TFAutoModelForAudioClassification
[[autodoc]] TFAutoModelForAudioClassification
AutoModelForCTC
[[autodoc]] AutoModelForCTC
AutoModelForSpeechSeq2Seq
[[autodoc]] AutoModelForSpeechSeq2Seq
TFAutoModelForSpeechSeq2Seq
[[autodoc]] TFAutoModelForSpeechSeq2Seq
FlaxAutoModelForSpeechSeq2Seq
[[autodoc]] FlaxAutoModelForSpeechSeq2Seq
AutoModelForAudioXVector
[[autodoc]] AutoModelForAudioXVector
AutoModelForTextToSpectrogram
[[autodoc]] AutoModelForTextToSpectrogram
AutoModelForTextToWaveform
[[autodoc]] AutoModelForTextToWaveform
Multimodal
The following auto classes are available for the following multimodal tasks.
AutoModelForTableQuestionAnswering
[[autodoc]] AutoModelForTableQuestionAnswering
TFAutoModelForTableQuestionAnswering
[[autodoc]] TFAutoModelForTableQuestionAnswering
AutoModelForDocumentQuestionAnswering
[[autodoc]] AutoModelForDocumentQuestionAnswering
TFAutoModelForDocumentQuestionAnswering
[[autodoc]] TFAutoModelForDocumentQuestionAnswering
AutoModelForVisualQuestionAnswering
[[autodoc]] AutoModelForVisualQuestionAnswering
AutoModelForVision2Seq
[[autodoc]] AutoModelForVision2Seq
TFAutoModelForVision2Seq
[[autodoc]] TFAutoModelForVision2Seq
FlaxAutoModelForVision2Seq
[[autodoc]] FlaxAutoModelForVision2Seq
InstructBLIP
Overview
The InstructBLIP model was proposed in InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
InstructBLIP leverages the BLIP-2 architecture for visual instruction tuning.
The abstract from the paper is the following:
General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-language pre-training has been widely studied, vision-language instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly available datasets, transform them into instruction tuning format and categorize them into two clusters for held-in instruction tuning and held-out zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the model to extract informative features tailored to the given instruction. The resulting InstructBLIP models achieve state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and the larger Flamingo. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models.
InstructBLIP architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage tips
InstructBLIP uses the same architecture as BLIP-2 with a tiny but important difference: it also feeds the text prompt (instruction) to the Q-Former.
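As a minimal inference sketch (assuming the Salesforce/instructblip-vicuna-7b checkpoint and a GPU with enough memory; adapt the checkpoint, device and prompt to your setup):
thon
import requests
import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-vicuna-7b", torch_dtype=torch.float16
).to("cuda")

url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
prompt = "What is unusual about this image?"

# the processor feeds the instruction both to the Q-Former and to the language model
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)
outputs = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0].strip())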
InstructBlipConfig
[[autodoc]] InstructBlipConfig
- from_vision_qformer_text_configs
InstructBlipVisionConfig
[[autodoc]] InstructBlipVisionConfig
InstructBlipQFormerConfig
[[autodoc]] InstructBlipQFormerConfig
InstructBlipProcessor
[[autodoc]] InstructBlipProcessor
InstructBlipVisionModel
[[autodoc]] InstructBlipVisionModel
- forward
InstructBlipQFormerModel
[[autodoc]] InstructBlipQFormerModel
- forward
InstructBlipForConditionalGeneration
[[autodoc]] InstructBlipForConditionalGeneration
- forward
- generate
MPNet
Overview
The MPNet model was proposed in MPNet: Masked and Permuted Pre-training for Language Understanding by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
MPNet adopts a novel pre-training method, named masked and permuted language modeling, to inherit the advantages of
masked language modeling and permuted language modeling for natural language understanding.
The abstract from the paper is the following:
BERT adopts masked language modeling (MLM) for pre-training and is one of the most successful pre-training models.
Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for
pre-training to address this problem. However, XLNet does not leverage the full position information of a sentence and
thus suffers from position discrepancy between pre-training and fine-tuning. In this paper, we propose MPNet, a novel
pre-training method that inherits the advantages of BERT and XLNet and avoids their limitations. MPNet leverages the
dependency among predicted tokens through permuted language modeling (vs. MLM in BERT), and takes auxiliary position
information as input to make the model see a full sentence and thus reducing the position discrepancy (vs. PLM in
XLNet). We pre-train MPNet on a large-scale dataset (over 160GB text corpora) and fine-tune on a variety of
down-streaming tasks (GLUE, SQuAD, etc). Experimental results show that MPNet outperforms MLM and PLM by a large
margin, and achieves better results on these tasks compared with previous state-of-the-art pre-trained methods (e.g.,
BERT, XLNet, RoBERTa) under the same model setting.
The original code can be found here.
Usage tips
MPNet doesn't have token_type_ids, so there is no need to indicate which token belongs to which segment. Just
separate your segments with the separation token tokenizer.sep_token (or [sep]).
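As a small sketch (assuming the microsoft/mpnet-base checkpoint), passing two segments to the tokenizer joins them with the separation token for you:
thon
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/mpnet-base")

# the two segments are joined with the separation token; no token_type_ids are needed
encoded = tokenizer("This is the first segment.", "This is the second segment.")
print(tokenizer.decode(encoded["input_ids"]))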
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
MPNetConfig
[[autodoc]] MPNetConfig
MPNetTokenizer
[[autodoc]] MPNetTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
MPNetTokenizerFast
[[autodoc]] MPNetTokenizerFast
MPNetModel
[[autodoc]] MPNetModel
- forward
MPNetForMaskedLM
[[autodoc]] MPNetForMaskedLM
- forward
MPNetForSequenceClassification
[[autodoc]] MPNetForSequenceClassification
- forward
MPNetForMultipleChoice
[[autodoc]] MPNetForMultipleChoice
- forward
MPNetForTokenClassification
[[autodoc]] MPNetForTokenClassification
- forward
MPNetForQuestionAnswering
[[autodoc]] MPNetForQuestionAnswering
- forward
TFMPNetModel
[[autodoc]] TFMPNetModel
- call
TFMPNetForMaskedLM
[[autodoc]] TFMPNetForMaskedLM
- call
TFMPNetForSequenceClassification
[[autodoc]] TFMPNetForSequenceClassification
- call
TFMPNetForMultipleChoice
[[autodoc]] TFMPNetForMultipleChoice
- call
TFMPNetForTokenClassification
[[autodoc]] TFMPNetForTokenClassification
- call
TFMPNetForQuestionAnswering
[[autodoc]] TFMPNetForQuestionAnswering
- call
ConvNeXT
Overview
The ConvNeXT model was proposed in A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them.
The abstract from the paper is the following:
The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model.
A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers
(e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide
variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive
biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design
of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models
dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy
and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.
ConvNeXT architecture. Taken from the original paper.
This model was contributed by nielsr. The TensorFlow version of the model was contributed by ariG23498,
gante, and sayakpaul (equal contribution). The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXT.
[ConvNextForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
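For reference, here is a minimal image-classification sketch (assuming the facebook/convnext-tiny-224 checkpoint and a sample COCO image):
thon
import requests
from PIL import Image
from transformers import AutoImageProcessor, ConvNextForImageClassification

processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# the model predicts one of the 1000 ImageNet-1k classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])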
ConvNextConfig
[[autodoc]] ConvNextConfig
ConvNextFeatureExtractor
[[autodoc]] ConvNextFeatureExtractor
ConvNextImageProcessor
[[autodoc]] ConvNextImageProcessor
- preprocess
ConvNextModel
[[autodoc]] ConvNextModel
- forward
ConvNextForImageClassification
[[autodoc]] ConvNextForImageClassification
- forward
TFConvNextModel
[[autodoc]] TFConvNextModel
- call
TFConvNextForImageClassification
[[autodoc]] TFConvNextForImageClassification
- call
SegFormer
Overview
The SegFormer model was proposed in SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping
Luo. The model consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great
results on image segmentation benchmarks such as ADE20K and Cityscapes.
The abstract from the paper is the following:
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with
lightweight multilayer perception (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel
hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding,
thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution
differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from
different layers, and thus combining both local attention and global attention to render powerful representations. We
show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our
approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance
and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters,
being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on
Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C.
The figure below illustrates the architecture of SegFormer. Taken from the original paper.
This model was contributed by nielsr. The TensorFlow version
of the model was contributed by sayakpaul. The original code can be found here.
Usage tips
SegFormer consists of a hierarchical Transformer encoder, and a lightweight all-MLP decoder head.
[SegformerModel] is the hierarchical Transformer encoder (which in the paper is also referred to
as Mix Transformer or MiT). [SegformerForSemanticSegmentation] adds the all-MLP decoder head on
top to perform semantic segmentation of images. In addition, there's
[SegformerForImageClassification] which can be used to - you guessed it - classify images. The
authors of SegFormer first pre-trained the Transformer encoder on ImageNet-1k to classify images. Next, they throw
away the classification head, and replace it by the all-MLP decode head. Next, they fine-tune the model altogether on
ADE20K, Cityscapes and COCO-stuff, which are important benchmarks for semantic segmentation. All checkpoints can be
found on the hub.
The quickest way to get started with SegFormer is by checking the example notebooks (which showcase both inference and
fine-tuning on custom data). One can also check out the blog post introducing SegFormer and illustrating how it can be fine-tuned on custom data.
TensorFlow users should refer to this repository that shows off-the-shelf inference and fine-tuning.
One can also check out this interactive demo on Hugging Face Spaces
to try out a SegFormer model on custom images.
SegFormer works on any input size, as it pads the input to be divisible by config.patch_sizes.
One can use [SegformerImageProcessor] to prepare images and corresponding segmentation maps
for the model. Note that this image processor is fairly basic and does not include all data augmentations used in
the original paper. The original preprocessing pipelines (for the ADE20k dataset for instance) can be found here. The most
important preprocessing step is that images and segmentation maps are randomly cropped and padded to the same size,
such as 512x512 or 640x640, after which they are normalized.
One additional thing to keep in mind is that one can initialize [SegformerImageProcessor] with
reduce_labels set to True or False. In some datasets (like ADE20k), the 0 index is used in the annotated
segmentation maps for background. However, ADE20k doesn't include the "background" class in its 150 labels.
Therefore, reduce_labels is used to reduce all labels by 1, and to make sure no loss is computed for the
background class (i.e. it replaces 0 in the annotated maps by 255, which is the ignore_index of the loss function
used by [SegformerForSemanticSegmentation]). However, other datasets use the 0 index as
background class and include this class as part of all labels. In that case, reduce_labels should be set to
False, as loss should also be computed for the background class.
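As an illustration, here is a minimal semantic-segmentation inference sketch (assuming the nvidia/segformer-b0-finetuned-ade-512-512 checkpoint and a sample COCO image):
thon
import requests
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# rescale the logits to the original image size and take the per-pixel argmax
segmentation = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(segmentation.shape)  # (height, width) map of ADE20k class indices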
As with most models, SegFormer comes in different sizes, the details of which can be found in the table below
(taken from Table 7 of the original paper).
| Model variant | Depths | Hidden sizes | Decoder hidden size | Params (M) | ImageNet-1k Top 1 |
| :---------------: | ------------- | ------------------- | :---------------------: | :------------: | :-------------------: |
| MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 |
| MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 |
| MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 |
| MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 |
| MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 |
| MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 |
Note that MiT in the above table refers to the Mix Transformer encoder backbone introduced in SegFormer. For
SegFormer's results on the segmentation datasets like ADE20k, refer to the paper.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SegFormer.
[SegformerForImageClassification] is supported by this example script and notebook.
Image classification task guide
Semantic segmentation:
[SegformerForSemanticSegmentation] is supported by this example script.
A blog on fine-tuning SegFormer on a custom dataset can be found here.
More demo notebooks on SegFormer (both inference + fine-tuning on a custom dataset) can be found here.
[TFSegformerForSemanticSegmentation] is supported by this example notebook.
Semantic segmentation task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
SegformerConfig
[[autodoc]] SegformerConfig
SegformerFeatureExtractor
[[autodoc]] SegformerFeatureExtractor
- call
- post_process_semantic_segmentation
SegformerImageProcessor
[[autodoc]] SegformerImageProcessor
- preprocess
- post_process_semantic_segmentation
SegformerModel
[[autodoc]] SegformerModel
- forward
SegformerDecodeHead
[[autodoc]] SegformerDecodeHead
- forward
SegformerForImageClassification
[[autodoc]] SegformerForImageClassification
- forward
SegformerForSemanticSegmentation
[[autodoc]] SegformerForSemanticSegmentation
- forward
TFSegformerDecodeHead
[[autodoc]] TFSegformerDecodeHead
- call
TFSegformerModel
[[autodoc]] TFSegformerModel
- call
TFSegformerForImageClassification
[[autodoc]] TFSegformerForImageClassification
- call
TFSegformerForSemanticSegmentation
[[autodoc]] TFSegformerForSemanticSegmentation
- call
XLS-R
Overview
The XLS-R model was proposed in XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman
Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
The abstract from the paper is the following:
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0.
We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128
languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range
of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation
benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into
English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as
VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107
language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform
English-only pretraining when translating English speech into other languages, a setting which favors monolingual
pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.
Relevant checkpoints can be found under https://huggingface.co/models?other=xls_r.
The original code can be found here.
Usage tips
XLS-R is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
XLS-R models fine-tuned for speech recognition use connectionist temporal classification (CTC), so the model output has to be decoded using
[Wav2Vec2CTCTokenizer].
XLS-R's architecture is based on the Wav2Vec2 model, refer to Wav2Vec2's documentation page for API reference.
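As a sketch of that decoding flow (the checkpoint name below is a placeholder; substitute any XLS-R model that has actually been fine-tuned with CTC, and assume raw_speech is a 16 kHz float array loaded beforehand):
thon
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

# placeholder name: replace with a real CTC fine-tuned XLS-R checkpoint
checkpoint = "path/to/xls-r-finetuned-with-ctc"
processor = AutoProcessor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# raw_speech: a 1D float array of 16 kHz audio, assumed to be loaded beforehand
inputs = processor(raw_speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: argmax over the vocabulary, then collapse repeats and blanks
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))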
XLM-V
Overview
XLM-V is a multilingual language model with a one million token vocabulary trained on 2.5TB of data from Common Crawl (same as XLM-R).
It was introduced in the XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models
paper by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa.
From the abstract of the XLM-V paper:
Large multilingual language models typically rely on a single vocabulary shared across 100+ languages.
As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged.
This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R.
In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by
de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity
to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically
more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V,
a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we
tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and
named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).
This model was contributed by stefan-it, including detailed experiments with XLM-V on downstream tasks.
The experiments repository can be found here.
Usage tips
XLM-V is compatible with the XLM-RoBERTa model architecture; only the model weights from the fairseq
library had to be converted.
The XLMTokenizer implementation is used to load the vocab and perform tokenization.
An XLM-V (base size) model is available under the facebook/xlm-v-base identifier.
The XLM-V architecture is the same as XLM-RoBERTa's; refer to the XLM-RoBERTa documentation for API reference and examples.
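As a quick sanity check with the facebook/xlm-v-base checkpoint mentioned above (a minimal sketch; the mask token is read from the tokenizer so the snippet works whatever tokenizer class the checkpoint resolves to):
thon
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="facebook/xlm-v-base")

# build the masked sentence with the tokenizer's own mask token
masked_sentence = f"Paris is the {mask_filler.tokenizer.mask_token} of France."
print(mask_filler(masked_sentence, top_k=3))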
RemBERT
Overview
The RemBERT model was proposed in Rethinking Embedding Coupling in Pre-trained Language Models by Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder.
The abstract from the paper is the following:
We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art
pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to
significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By
reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on
standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that
allocating additional capacity to the output embedding provides benefits to the model that persist through the
fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger
output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage
Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these
findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the
number of parameters at the fine-tuning stage.
Usage tips
For fine-tuning, RemBERT can be thought of as a bigger version of mBERT with an ALBERT-like factorization of the
embedding layer. The embeddings are not tied in pre-training, in contrast with BERT, which enables smaller input
embeddings (preserved during fine-tuning) and bigger output embeddings (discarded at fine-tuning). The tokenizer is
also similar to the ALBERT one rather than the BERT one.
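For instance, a minimal masked-language-modeling sketch with the google/rembert checkpoint:
thon
import torch
from transformers import AutoTokenizer, RemBertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertForMaskedLM.from_pretrained("google/rembert")

inputs = tokenizer(f"The capital of France is {tokenizer.mask_token}.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# pick the highest-scoring token at the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(-1)
print(tokenizer.decode(predicted_id))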
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
RemBertConfig
[[autodoc]] RemBertConfig
RemBertTokenizer
[[autodoc]] RemBertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
RemBertTokenizerFast
[[autodoc]] RemBertTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
RemBertModel
[[autodoc]] RemBertModel
- forward
RemBertForCausalLM
[[autodoc]] RemBertForCausalLM
- forward
RemBertForMaskedLM
[[autodoc]] RemBertForMaskedLM
- forward
RemBertForSequenceClassification
[[autodoc]] RemBertForSequenceClassification
- forward
RemBertForMultipleChoice
[[autodoc]] RemBertForMultipleChoice
- forward
RemBertForTokenClassification
[[autodoc]] RemBertForTokenClassification
- forward
RemBertForQuestionAnswering
[[autodoc]] RemBertForQuestionAnswering
- forward
TFRemBertModel
[[autodoc]] TFRemBertModel
- call
TFRemBertForMaskedLM
[[autodoc]] TFRemBertForMaskedLM
- call
TFRemBertForCausalLM
[[autodoc]] TFRemBertForCausalLM
- call
TFRemBertForSequenceClassification
[[autodoc]] TFRemBertForSequenceClassification
- call
TFRemBertForMultipleChoice
[[autodoc]] TFRemBertForMultipleChoice
- call
TFRemBertForTokenClassification
[[autodoc]] TFRemBertForTokenClassification
- call
TFRemBertForQuestionAnswering
[[autodoc]] TFRemBertForQuestionAnswering
- call
SeamlessM4T
Overview
The SeamlessM4T model was proposed in SeamlessM4T — Massively Multilingual & Multimodal Machine Translation by the Seamless Communication team from Meta AI.
This is the version 1 release of the model. For the updated version 2 release, refer to the Seamless M4T v2 docs.
SeamlessM4T is a collection of models designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text.
SeamlessM4T enables multiple tasks without relying on separate models:
Speech-to-speech translation (S2ST)
Speech-to-text translation (S2TT)
Text-to-speech translation (T2ST)
Text-to-text translation (T2TT)
Automatic speech recognition (ASR)
[SeamlessM4TModel] can perform all the above tasks, but each task also has its own dedicated sub-model.
The abstract from the paper is the following:
What does it take to create the Babel Fish, a tool that can help individuals translate speech between any two languages? While recent breakthroughs in text-based models have pushed machine translation coverage beyond 200 languages, unified speech-to-speech translation models have yet to achieve similar strides. More specifically, conventional speech-to-speech translation systems rely on cascaded systems that perform translation progressively, putting high-performing unified systems out of reach. To address these gaps, we introduce SeamlessM4T, a single model that supports speech-to-speech translation, speech-to-text translation, text-to-speech translation, text-to-text translation, and automatic speech recognition for up to 100 languages. To build this, we used 1 million hours of open speech audio data to learn self-supervised speech representations with w2v-BERT 2.0. Subsequently, we created a multimodal corpus of automatically aligned speech translations. Filtered and combined with human-labeled and pseudo-labeled data, we developed the first multilingual system capable of translating from and into English for both speech and text. On FLEURS, SeamlessM4T sets a new standard for translations into multiple target languages, achieving an improvement of 20% BLEU over the previous SOTA in direct speech-to-text translation. Compared to strong cascaded models, SeamlessM4T improves the quality of into-English translation by 1.3 BLEU points in speech-to-text and by 2.6 ASR-BLEU points in speech-to-speech. Tested for robustness, our system performs better against background noises and speaker variations in speech-to-text tasks compared to the current SOTA model. Critically, we evaluated SeamlessM4T on gender bias and added toxicity to assess translation safety. Finally, all contributions in this work are open-sourced and accessible at https://github.com/facebookresearch/seamless_communication
Usage
First, load the processor and a checkpoint of the model:
thon
from transformers import AutoProcessor, SeamlessM4TModel
processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")
You can seamlessly use this model on text or on audio, to generate either translated text or translated audio.
Here is how to use the processor to process text and audio:
thon
# let's load an audio sample from an Arabic speech corpus
from datasets import load_dataset
dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True)
audio_sample = next(iter(dataset))["audio"]
# now, process it
audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt")
# now, process some English text as well
text_inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt")
Speech
[SeamlessM4TModel] can seamlessly generate text or speech with few or no changes. Let's target Russian voice translation:
thon
audio_array_from_text = model.generate(**text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze()
audio_array_from_audio = model.generate(**audio_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze()
With basically the same code, we've translated English text and Arabic speech to Russian speech samples.
Text
Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass generate_speech=False to [SeamlessM4TModel.generate].
This time, let's translate to French.
thon
# from audio
output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False)
translated_text_from_audio = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
# from text
output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)
translated_text_from_text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
Tips
1. Use dedicated models
[SeamlessM4TModel] is the transformers top-level model for generating speech and text, but you can also use dedicated models that perform the task without additional components, thus reducing the memory footprint.
For example, you can replace the audio-to-audio generation snippet with the model dedicated to the S2ST task, the rest is exactly the same code:
thon
from transformers import SeamlessM4TForSpeechToSpeech
model = SeamlessM4TForSpeechToSpeech.from_pretrained("facebook/hf-seamless-m4t-medium")
Or you can replace the text-to-text generation snippet with the model dedicated to the T2TT task; you only have to remove generate_speech=False.
thon
from transformers import SeamlessM4TForTextToText
model = SeamlessM4TForTextToText.from_pretrained("facebook/hf-seamless-m4t-medium")
Feel free to try out [SeamlessM4TForSpeechToText] and [SeamlessM4TForTextToSpeech] as well.
2. Change the speaker identity
You can change the speaker used for speech synthesis with the spkr_id argument. Some spkr_id values work better than others for some languages!
3. Change the generation strategy
You can use different generation strategies for speech and text generation, e.g .generate(input_ids=input_ids, text_num_beams=4, speech_do_sample=True) which will successively perform beam-search decoding on the text model, and multinomial sampling on the speech model.
4. Generate speech and text at the same time
Use return_intermediate_token_ids=True with [SeamlessM4TModel] to return both speech and text!
Model architecture
SeamlessM4T features a versatile architecture that smoothly handles the sequential generation of text and speech. This setup comprises two sequence-to-sequence (seq2seq) models. The first model translates the input modality into translated text, while the second model generates speech tokens, known as "unit tokens," from the translated text.
Each modality has its own dedicated encoder with a unique architecture. Additionally, for speech output, a vocoder inspired by the HiFi-GAN architecture is placed on top of the second seq2seq model.
Here's how the generation process works:
Input text or speech is processed through its specific encoder.
A decoder creates text tokens in the desired language.
If speech generation is required, the second seq2seq model, following a standard encoder-decoder structure, generates unit tokens.
These unit tokens are then passed through the final vocoder to produce the actual speech.
This model was contributed by ylacombe. The original code can be found here.
SeamlessM4TModel
[[autodoc]] SeamlessM4TModel
- generate
SeamlessM4TForTextToSpeech
[[autodoc]] SeamlessM4TForTextToSpeech
- generate
SeamlessM4TForSpeechToSpeech
[[autodoc]] SeamlessM4TForSpeechToSpeech
- generate
SeamlessM4TForTextToText
[[autodoc]] transformers.SeamlessM4TForTextToText
- forward
- generate
SeamlessM4TForSpeechToText
[[autodoc]] transformers.SeamlessM4TForSpeechToText
- forward
- generate
SeamlessM4TConfig
[[autodoc]] SeamlessM4TConfig
SeamlessM4TTokenizer
[[autodoc]] SeamlessM4TTokenizer
- call
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
SeamlessM4TTokenizerFast
[[autodoc]] SeamlessM4TTokenizerFast
- call
SeamlessM4TFeatureExtractor
[[autodoc]] SeamlessM4TFeatureExtractor
- call
SeamlessM4TProcessor
[[autodoc]] SeamlessM4TProcessor
- call
SeamlessM4TCodeHifiGan
[[autodoc]] SeamlessM4TCodeHifiGan
SeamlessM4THifiGan
[[autodoc]] SeamlessM4THifiGan
SeamlessM4TTextToUnitModel
[[autodoc]] SeamlessM4TTextToUnitModel
SeamlessM4TTextToUnitForConditionalGeneration
[[autodoc]] SeamlessM4TTextToUnitForConditionalGeneration
ImageGPT
Overview
The ImageGPT model was proposed in Generative Pretraining from Pixels by Mark
Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. ImageGPT (iGPT) is a GPT-2-like
model trained to predict the next pixel value, allowing for both unconditional and conditional image generation.
The abstract from the paper is the following:
Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models
can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels,
without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels,
we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and
low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide
ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models. We are also
competitive with self-supervised benchmarks on ImageNet when substituting pixels for a VQVAE encoding, achieving 69.0%
top-1 accuracy on a linear probe of our features.
Summary of the approach. Taken from the original paper.
This model was contributed by nielsr, based on this issue. The original code can be found
here.
Usage tips
ImageGPT is almost exactly the same as GPT-2, with the exception that a different activation
function is used (namely "quick gelu"), and the layer normalization layers don't mean center the inputs. ImageGPT
also doesn't have tied input- and output embeddings.
As the time and memory requirements of the attention mechanism of Transformers scale quadratically in the sequence
length, the authors pre-trained ImageGPT on smaller input resolutions, such as 32x32 and 64x64. However, feeding a
sequence of 32x32x3=3072 tokens from 0..255 into a Transformer is still prohibitively large. Therefore, the authors
applied k-means clustering to the (R,G,B) pixel values with k=512. This way, we only have a 32*32 = 1024-long
sequence, but now of integers in the range 0..511. So we are shrinking the sequence length at the cost of a bigger
embedding matrix. In other words, the vocabulary size of ImageGPT is 512, plus 1 for a special "start of sentence" (SOS)
token, used at the beginning of every sequence. One can use [ImageGPTImageProcessor] to prepare
images for the model.
Despite being pre-trained entirely unsupervised (i.e. without the use of any labels), ImageGPT produces fairly
performant image features useful for downstream tasks, such as image classification. The authors showed that the
features in the middle of the network are the most performant, and can be used as-is to train a linear model (such as
a sklearn logistic regression model, for example). This is also referred to as "linear probing". Features can be
easily obtained by forwarding the image through the model with output_hidden_states=True, and
average-pooling the hidden states at whatever layer you like.
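A minimal sketch of this feature-extraction recipe (assuming the openai/imagegpt-small checkpoint and a sample COCO image):
thon
import requests
import torch
from PIL import Image
from transformers import ImageGPTImageProcessor, ImageGPTModel

processor = ImageGPTImageProcessor.from_pretrained("openai/imagegpt-small")
model = ImageGPTModel.from_pretrained("openai/imagegpt-small")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# the processor maps pixels to the 512 color-cluster ids described above
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# average-pool a middle layer to get one feature vector per image, e.g. for a linear probe
middle_layer = len(outputs.hidden_states) // 2
features = outputs.hidden_states[middle_layer].mean(dim=1)
print(features.shape)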
Alternatively, one can further fine-tune the entire model on a downstream dataset, similar to BERT. For this, you can
use [ImageGPTForImageClassification].
ImageGPT comes in different sizes: there's ImageGPT-small, ImageGPT-medium and ImageGPT-large. The authors did also
train an XL variant, which they didn't release. The differences in size are summarized in the following table:
| Model variant | Layers | Hidden size | Parameters |
|---|---|---|---|
| ImageGPT-small | 24 | 512 | 76M |
| ImageGPT-medium | 36 | 1024 | 455M |
| ImageGPT-large | 48 | 1536 | 1.4B |
| ImageGPT-XL (not released) | 60 | 3072 | 6.8B |
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ImageGPT.
Demo notebooks for ImageGPT can be found here.
[ImageGPTForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ImageGPTConfig
[[autodoc]] ImageGPTConfig
ImageGPTFeatureExtractor
[[autodoc]] ImageGPTFeatureExtractor
- call
ImageGPTImageProcessor
[[autodoc]] ImageGPTImageProcessor
- preprocess
ImageGPTModel
[[autodoc]] ImageGPTModel
- forward
ImageGPTForCausalImageModeling
[[autodoc]] ImageGPTForCausalImageModeling
- forward
ImageGPTForImageClassification
[[autodoc]] ImageGPTForImageClassification
- forward
Nezha
Overview
The Nezha model was proposed in NEZHA: Neural Contextualized Representation for Chinese Language Understanding by Junqiu Wei et al.
The abstract from the paper is the following:
The pre-trained language models have achieved great successes in various natural language understanding (NLU) tasks
due to its capacity to capture the deep contextualized information in text by pre-training on large-scale corpora.
In this technical report, we present our practice of pre-training language models named NEZHA (NEural contextualiZed
representation for CHinese lAnguage understanding) on Chinese corpora and finetuning for the Chinese NLU tasks.
The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional
Relative Positional Encoding as an effective positional encoding scheme, Whole Word Masking strategy,
Mixed Precision Training and the LAMB Optimizer in training the models. The experimental results show that NEZHA
achieves the state-of-the-art performances when finetuned on several representative Chinese tasks, including
named entity recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti)
and natural language inference (XNLI).
This model was contributed by sijunhe. The original code can be found here.
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
NezhaConfig
[[autodoc]] NezhaConfig
NezhaModel
[[autodoc]] NezhaModel
- forward
NezhaForPreTraining
[[autodoc]] NezhaForPreTraining
- forward
NezhaForMaskedLM
[[autodoc]] NezhaForMaskedLM
- forward
NezhaForNextSentencePrediction
[[autodoc]] NezhaForNextSentencePrediction
- forward
NezhaForSequenceClassification
[[autodoc]] NezhaForSequenceClassification
- forward
NezhaForMultipleChoice
[[autodoc]] NezhaForMultipleChoice
- forward
NezhaForTokenClassification
[[autodoc]] NezhaForTokenClassification
- forward
NezhaForQuestionAnswering
[[autodoc]] NezhaForQuestionAnswering
- forward
Audio Spectrogram Transformer
Overview
The Audio Spectrogram Transformer model was proposed in AST: Audio Spectrogram Transformer by Yuan Gong, Yu-An Chung, James Glass.
The Audio Spectrogram Transformer applies a Vision Transformer to audio, by turning audio into an image (spectrogram). The model obtains state-of-the-art results
for audio classification.
The abstract from the paper is the following:
In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.
Audio Spectrogram Transformer architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage tips
When fine-tuning the Audio Spectrogram Transformer (AST) on your own dataset, it's recommended to take care of the input normalization (to make
sure the input has mean of 0 and std of 0.5). [ASTFeatureExtractor] takes care of this. Note that it uses the AudioSet
mean and std by default. You can check ast/src/get_norm_stats.py to see how
the authors compute the stats for a downstream dataset.
Note that the AST needs a low learning rate (the authors use a 10 times smaller learning rate compared to their CNN model proposed in the
PSLA paper) and converges quickly, so please search for a suitable learning rate and learning rate scheduler for your task.
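For example, a minimal audio-classification sketch (assuming the MIT/ast-finetuned-audioset-10-10-0.4593 checkpoint and a small 16 kHz audio sample from the Hub):
thon
import torch
from datasets import load_dataset
from transformers import ASTFeatureExtractor, ASTForAudioClassification

feature_extractor = ASTFeatureExtractor.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")
model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-audioset-10-10-0.4593")

# small demo dataset with 16 kHz audio
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
waveform = dataset[0]["audio"]["array"]

# the feature extractor converts the waveform into a normalized spectrogram
inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])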
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with the Audio Spectrogram Transformer.
A notebook illustrating inference with AST for audio classification can be found here.
[ASTForAudioClassification] is supported by this example script and notebook.
See also: Audio classification.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ASTConfig
[[autodoc]] ASTConfig
ASTFeatureExtractor
[[autodoc]] ASTFeatureExtractor
- call
ASTModel
[[autodoc]] ASTModel
- forward
ASTForAudioClassification
[[autodoc]] ASTForAudioClassification
- forward
Mask2Former
Overview
The Mask2Former model was proposed in Masked-attention Mask Transformer for Universal Image Segmentation by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. Mask2Former is a unified framework for panoptic, instance and semantic segmentation and features significant performance and efficiency improvements over MaskFormer.
The abstract from the paper is the following:
Image segmentation groups pixels with different semantics, e.g., category or instance membership. Each choice
of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K).
Mask2Former architecture. Taken from the original paper.
This model was contributed by Shivalika Singh and Alara Dirik. The original code can be found here.
Usage tips
Mask2Former uses the same preprocessing and postprocessing steps as MaskFormer. Use [Mask2FormerImageProcessor] or [AutoImageProcessor] to prepare images and optional targets for the model.
To get the final segmentation, depending on the task, you can call [~Mask2FormerImageProcessor.post_process_semantic_segmentation] or [~Mask2FormerImageProcessor.post_process_instance_segmentation] or [~Mask2FormerImageProcessor.post_process_panoptic_segmentation]. All three tasks can be solved using [Mask2FormerForUniversalSegmentation] output; panoptic segmentation accepts an optional label_ids_to_fuse argument to fuse instances of the target object(s) (e.g. sky) together.
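A minimal instance-segmentation sketch (assuming the facebook/mask2former-swin-small-coco-instance checkpoint and a sample COCO image):
thon
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-small-coco-instance")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-small-coco-instance")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# post-process into per-instance masks at the original image resolution
results = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(results["segmentation"].shape, len(results["segments_info"]))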
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mask2Former.
Demo notebooks regarding inference + fine-tuning Mask2Former on custom data can be found here.
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
Mask2FormerConfig
[[autodoc]] Mask2FormerConfig
Mask2Former specific outputs
[[autodoc]] models.mask2former.modeling_mask2former.Mask2FormerModelOutput
[[autodoc]] models.mask2former.modeling_mask2former.Mask2FormerForUniversalSegmentationOutput
Mask2FormerModel
[[autodoc]] Mask2FormerModel
- forward
Mask2FormerForUniversalSegmentation
[[autodoc]] Mask2FormerForUniversalSegmentation
- forward
Mask2FormerImageProcessor
[[autodoc]] Mask2FormerImageProcessor
- preprocess
- encode_inputs
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation
PatchTSMixer
Overview
The PatchTSMixer model was proposed in TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting by Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong and Jayant Kalagnanam.
PatchTSMixer is a lightweight time-series modeling approach based on the MLP-Mixer architecture. In this HuggingFace implementation, we provide PatchTSMixer's capabilities to effortlessly facilitate lightweight mixing across patches, channels, and hidden features for effective multivariate time-series modeling. It also supports various attention mechanisms starting from simple gated attention to more complex self-attention blocks that can be customized accordingly. The model can be pretrained and subsequently used for various downstream tasks such as forecasting, classification and regression.
The abstract from the paper is the following:
TSMixer is a lightweight neural architecture exclusively composed of multi-layer perceptron (MLP) modules designed for multivariate forecasting and representation learning on patched time series. Our model draws inspiration from the success of MLP-Mixer models in computer vision. We demonstrate the challenges involved in adapting Vision MLP-Mixer for time series and introduce empirically validated components to enhance accuracy. This includes a novel design paradigm of attaching online reconciliation heads to the MLP-Mixer backbone, for explicitly modeling the time-series properties such as hierarchy and channel-correlations. We also propose a Hybrid channel modeling approach to effectively handle noisy channel interactions and generalization across diverse datasets, a common challenge in existing patch channel-mixing methods. Additionally, a simple gated attention mechanism is introduced in the backbone to prioritize important features. By incorporating these lightweight components, we significantly enhance the learning capability of simple MLP structures, outperforming complex Transformer models with minimal computing usage. Moreover, TSMixer's modular design enables compatibility with both supervised and masked self-supervised learning methods, making it a promising building block for time-series Foundation Models. TSMixer outperforms state-of-the-art MLP and Transformer models in forecasting by a considerable margin of 8-60%. It also outperforms the latest strong benchmarks of Patch-Transformer models (by 1-2%) with a significant reduction in memory and runtime (2-3X).
This model was contributed by ajati, vijaye12,
gsinthong, namctin,
wmgifford, kashif.
Usage example
The code snippet below shows how to randomly initialize a PatchTSMixer model. The model is compatible with the Trainer API.
thon
from transformers import PatchTSMixerConfig, PatchTSMixerForPrediction
from transformers import Trainer, TrainingArguments

# training_args, train_dataset, valid_dataset and test_dataset are assumed
# to have been prepared beforehand (see the Trainer documentation)
config = PatchTSMixerConfig(context_length=512, prediction_length=96)
model = PatchTSMixerForPrediction(config)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=valid_dataset,
)
trainer.train()
results = trainer.evaluate(test_dataset)
Usage tips
The model can also be used for time series classification and time series regression. See the respective [PatchTSMixerForTimeSeriesClassification] and [PatchTSMixerForRegression] classes.
Resources
A blog post explaining PatchTSMixer in depth can be found here. The blog can also be opened in Google Colab.
PatchTSMixerConfig
[[autodoc]] PatchTSMixerConfig
PatchTSMixerModel
[[autodoc]] PatchTSMixerModel
- forward
PatchTSMixerForPrediction
[[autodoc]] PatchTSMixerForPrediction
- forward
PatchTSMixerForTimeSeriesClassification
[[autodoc]] PatchTSMixerForTimeSeriesClassification
- forward
PatchTSMixerForPretraining
[[autodoc]] PatchTSMixerForPretraining
- forward
PatchTSMixerForRegression
[[autodoc]] PatchTSMixerForRegression
- forward
GPTBigCode
Overview
The GPTBigCode model was proposed in SantaCoder: don't reach for the stars! by BigCode. The listed authors are: Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
The abstract from the paper is the following:
The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code. This tech report describes the progress of the collaboration until December 2022, outlining the current state of the Personally Identifiable Information (PII) redaction pipeline, the experiments conducted to de-risk the model architecture, and the experiments investigating better preprocessing methods for the training data. We train 1.1B parameter models on the Java, JavaScript, and Python subsets of The Stack and evaluate them on the MultiPL-E text-to-code benchmark. We find that more aggressive filtering of near-duplicates can further boost performance and, surprisingly, that selecting files from repositories with 5+ GitHub stars deteriorates performance significantly. Our best model outperforms previous open-source multilingual code generation models (InCoder-6.7B and CodeGen-Multi-2.7B) in both left-to-right generation and infilling on the Java, JavaScript, and Python portions of MultiPL-E, despite being a substantially smaller model. All models are released under an OpenRAIL license at this https URL.
The model is an optimized GPT2 model with support for Multi-Query Attention.
Implementation details
The main differences compared to GPT2.
- Added support for Multi-Query Attention.
- Use gelu_pytorch_tanh instead of classic gelu.
- Avoid unnecessary synchronizations (this has since been added to GPT2 in #20061, but wasn't in the reference codebase).
- Use Linear layers instead of Conv1D (good speedup but makes the checkpoints incompatible).
- Merge _attn and _upcast_and_reordered_attn. Always merge the matmul with scaling. Rename reorder_and_upcast_attn->attention_softmax_in_fp32
- Cache the attention mask value to avoid recreating it every time.
- Use jit to fuse the attention fp32 casting, masking, softmax, and scaling.
- Combine the attention and causal masks into a single one, pre-computed for the whole model instead of every layer.
- Merge the key and value caches into one (this changes the format of layer_past/present).
- Use the memory layout (self.num_heads, 3, self.head_dim) instead of (3, self.num_heads, self.head_dim) for the QKV tensor with MHA. (prevents an overhead with the merged key and values, but makes the checkpoints incompatible with the original openai-community/gpt2 model).
You can read more about the optimizations in the original pull request
Combining Starcoder and Flash Attention 2
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.
pip install -U flash-attn --no-build-isolation
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention 2, refer to the snippet below:
thon
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("bigcode/gpt_bigcode-santacoder", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
prompt = "def hello_world():"
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
'def hello_world():\n print("hello world")\n\nif name == "main":\n print("hello world")\n<|endoftext|>'
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the bigcode/starcoder checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.
GPTBigCodeConfig
[[autodoc]] GPTBigCodeConfig
GPTBigCodeModel
[[autodoc]] GPTBigCodeModel
- forward
GPTBigCodeForCausalLM
[[autodoc]] GPTBigCodeForCausalLM
- forward
GPTBigCodeForSequenceClassification
[[autodoc]] GPTBigCodeForSequenceClassification
- forward
GPTBigCodeForTokenClassification
[[autodoc]] GPTBigCodeForTokenClassification
- forward
Nyströmformer
Overview
The Nyströmformer model was proposed in Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn
Fung, Yin Li, and Vikas Singh.
The abstract from the paper is the following:
Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component
that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or
dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the
input sequence length has limited its application to longer sequences -- a topic being actively studied in the
community. To address this limitation, we propose Nyströmformer -- a model that exhibits favorable scalability as a
function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention
with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of
tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard
sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than
standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs
favorably relative to other efficient self-attention methods. Our code is available at this https URL.
This model was contributed by novice03. The original code can be found here.
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
NystromformerConfig
[[autodoc]] NystromformerConfig
NystromformerModel
[[autodoc]] NystromformerModel
- forward
NystromformerForMaskedLM
[[autodoc]] NystromformerForMaskedLM
- forward
NystromformerForSequenceClassification
[[autodoc]] NystromformerForSequenceClassification
- forward
NystromformerForMultipleChoice
[[autodoc]] NystromformerForMultipleChoice
- forward
NystromformerForTokenClassification
[[autodoc]] NystromformerForTokenClassification
- forward
NystromformerForQuestionAnswering
[[autodoc]] NystromformerForQuestionAnswering
- forward