
Model description

This is a fine-tuned version of distilgpt2 intended to be used with the promptgen extension inside the AUTOMATIC1111 WebUI. It was trained on the raw tags of e621.net, keeping both underscores and spaces.
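
Outside the WebUI, the model can also be queried directly with the transformers library. The snippet below is a minimal sketch rather than an official usage example: the checkpoint name comes from this card, while the prompt and sampling settings are illustrative assumptions.

```python
# Minimal sketch: complete a partial tag prompt with the published checkpoint.
# The prompt and sampling parameters below are arbitrary examples.
from transformers import pipeline

generator = pipeline("text-generation", model="0Tick/e621TagAutocomplete")

completions = generator(
    "solo, smiling,",        # hypothetical starting tags
    max_new_tokens=30,       # length of the suggested continuation
    num_return_sequences=3,  # produce several candidate tag strings
    do_sample=True,
    top_p=0.9,
)

for c in completions:
    print(c["generated_text"])
```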

Training

This model is a fine-tuned version of distilgpt2, trained on the tags of 116k random posts from e621.net. It achieves the following results on the evaluation set:

  • Loss: 4.3983
  • Accuracy: 0.3865
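
For reference, assuming the reported loss is the mean token-level cross-entropy (the default for causal language modeling in transformers), it corresponds to a perplexity of roughly 81:

```python
import math

eval_loss = 4.3983                # evaluation loss reported above
perplexity = math.exp(eval_loss)  # cross-entropy (nats) -> perplexity
print(round(perplexity, 1))       # ~81.3
```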

Training and evaluation data

Use this Colab notebook to train your own model; the same notebook was used to train this model.

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 6
  • eval_batch_size: 6
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0
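
As a rough guide, the listed values map onto Hugging Face TrainingArguments as sketched below; the output directory is a placeholder, and the actual training was performed via the Colab notebook linked above.

```python
# Sketch only: the hyperparameters above expressed as TrainingArguments.
# output_dir is a placeholder; dataset and model wiring are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="e621TagAutocomplete",  # placeholder path
    learning_rate=5e-05,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```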

Intended uses & limitations

Since DistilGPT2 is a distilled version of GPT-2, it is intended for similar use cases, with the added benefit of being smaller and easier to run than the base model.

The developers of GPT-2 state in their model card that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:

  • Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
  • Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
  • Entertainment: Creation of games, chat bots, and amusing generations.

Using DistilGPT2, the Hugging Face team built the Write With Transformers web app, which allows users to play with the model to generate text directly from their browser.

Out-of-scope Uses

OpenAI states in the GPT-2 model card:

Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.

Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.

Framework versions

  • Transformers 4.27.0.dev0
  • Pytorch 1.13.1+cu116
  • Datasets 2.9.0
  • Tokenizers 0.13.2