| | | Pascal Zoleko | | |
|Classify text with DistilBERT and TensorFlow | How to fine-tune DistilBERT for text classification in TensorFlow | Peter Bayerle | | |
|Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail | How to warm-start an EncoderDecoderModel with a google-bert/bert-base-uncased checkpoint for summarization on CNN/Dailymail | Patrick von Platen | | |
|Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum | How to warm-start a shared EncoderDecoderModel with a FacebookAI/roberta-base checkpoint for summarization on BBC/XSum | Patrick von Platen | | |
|Fine-tune TAPAS on Sequential Question Answering (SQA) | How to fine-tune TapasForQuestionAnswering with a tapas-base checkpoint on the Sequential Question Answering (SQA) dataset | Niels Rogge | | |
|Evaluate TAPAS on Table Fact Checking (TabFact) | How to evaluate a fine-tuned TapasForSequenceClassification with a tapas-base-finetuned-tabfact checkpoint using a combination of the 🤗 datasets and 🤗 transformers libraries | Niels Rogge | | |
|Fine-tuning mBART for translation | How to fine-tune mBART using Seq2SeqTrainer for Hindi to English translation | Vasudev Gupta | | |
|Fine-tune LayoutLM on FUNSD (a form understanding dataset) | How to fine-tune LayoutLMForTokenClassification on the FUNSD dataset for information extraction from scanned documents | Niels Rogge | | |
|Fine-Tune DistilGPT2 and Generate Text | How to fine-tune DistilGPT2 and generate text | Aakash Tripathi | | |
|Fine-Tune LED on up to 8K tokens | How to fine-tune LED on PubMed for long-range summarization | Patrick von Platen | | |
|Evaluate LED on arXiv | How to effectively evaluate LED on long-range summarization | Patrick von Platen | | |
|Fine-tune LayoutLM on RVL-CDIP (a document image classification dataset) | How to fine-tune LayoutLMForSequenceClassification on the RVL-CDIP dataset for scanned document classification | Niels Rogge | | |
|Wav2Vec2 CTC decoding with GPT2 adjustment | How to decode a CTC sequence with language model adjustment | Eric Lam | | |
|Fine-tune BART for summarization in two languages with the Trainer class | How to fine-tune BART for summarization in two languages with the Trainer class | Eliza Szczechla | | |
|Evaluate BigBird on TriviaQA | How to evaluate BigBird on long-document question answering on TriviaQA | Patrick von Platen | | |
| Create video captions using Wav2Vec2 | How to create YouTube captions from any video by transcribing the audio with Wav2Vec2 | Niklas Muennighoff | | |
| Fine-tune the Vision Transformer on CIFAR-10 using PyTorch Lightning | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and PyTorch Lightning | Niels Rogge | | |
| Fine-tune the Vision Transformer on CIFAR-10 using the 🤗 Trainer | How to fine-tune the Vision Transformer (ViT) on CIFAR-10 using HuggingFace Transformers, Datasets and the 🤗 Trainer | Niels Rogge | | |
| Evaluate LUKE on Open Entity, an entity typing dataset | How to evaluate LukeForEntityClassification on the Open Entity dataset | Ikuya Yamada | | |
| Evaluate LUKE on TACRED, a relation extraction dataset | How to evaluate LukeForEntityPairClassification on the TACRED dataset | Ikuya Yamada | | |
| Evaluate LUKE on CoNLL-2003, an important NER benchmark | How to evaluate LukeForEntitySpanClassification on the CoNLL-2003 dataset | Ikuya Yamada | | |
| Evaluate BigBird-Pegasus on the PubMed dataset | How to evaluate BigBirdPegasusForConditionalGeneration on the PubMed dataset | Vasudev Gupta | | |
| Speech Emotion Classification with Wav2Vec2 | How to leverage a pretrained Wav2Vec2 model for Emotion Classification on the MEGA dataset | Mehrdad Farahani | | |
| Detect objects in an image with DETR | How to use a trained DetrForObjectDetection model to detect objects in an image and visualize attention | Niels Rogge | | |
| Fine-tune DETR on a custom object detection dataset | How to fine-tune DetrForObjectDetection on a custom object detection dataset | Niels Rogge | | |
| Fine-tune T5 for Named Entity Recognition | How to fine-tune T5 on a Named Entity Recognition task | Ogundepo Odunayo | | |