---
library_name: transformers
tags: [natural-language-processing, causal-lm, gpt, transformers, distilgpt2]
---

# Model Card for `tesolnet/tari01`

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** TARI
- **Model type:** GPT-2 variant (distilled version)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** distilgpt2

## Uses

### Direct Use

This model can be used for text generation tasks such as continuing a prompt or powering a simple chatbot.

### Downstream Use

This model can be further fine-tuned for specific tasks such as sentiment analysis, question answering, or other NLP tasks that involve text generation.

### Out-of-Scope Use

The model should not be used to generate harmful, misleading, or malicious content. It may not perform well on tasks that require understanding context beyond a few sentences or paragraphs.

## Bias, Risks, and Limitations

Like all language models, this model can produce biased or harmful text that reflects the data it was trained on. Users should be aware of these limitations and use the model with caution.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

To get started, use the `transformers` library from Hugging Face and load the model and tokenizer with the identifier `tesolnet/tari01`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("tesolnet/tari01")
tokenizer = AutoTokenizer.from_pretrained("tesolnet/tari01")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

### Training Data

The model was fine-tuned on 100 ebooks about computational linguistics, preprocessed and tokenized for training.

### Training Procedure

#### Preprocessing

The text data was tokenized using the `AutoTokenizer` from the `transformers` library with a maximum token length of 128.

#### Training Hyperparameters

- **Training regime:** Mixed precision (fp16)
- **Learning rate:** 2e-5
- **Batch size:** 2
- **Epochs:** 1
- **Weight decay:** 0.01

#### Speeds, Sizes, Times

- **Training time:** Approximately 3.85 hours
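The preprocessing and hyperparameters above correspond to a standard `Trainer` fine-tuning setup. The sketch below is illustrative rather than the exact training script used for this model; the data files (`ebooks/train.txt`, `ebooks/valid.txt`) are hypothetical placeholders for the ebook corpus, which is not distributed with this card.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical file names; the actual ebook corpus is not distributed with this card.
raw = load_dataset("text", data_files={"train": "ebooks/train.txt", "validation": "ebooks/valid.txt"})

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers define no pad token by default

def tokenize(batch):
    # Maximum token length of 128, as described under Preprocessing.
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
tokenized = tokenized.filter(lambda example: len(example["input_ids"]) > 0)  # drop empty lines

model = AutoModelForCausalLM.from_pretrained("distilgpt2")

args = TrainingArguments(
    output_dir="tari01",
    fp16=True,                       # mixed-precision training, as listed above
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    num_train_epochs=1,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    # Causal LM collation: pads each batch and copies input_ids to labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()
```

If the batch size of 2 is per device, the two GPUs listed under Compute Infrastructure would give an effective batch size of 4.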
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Evaluation was performed on a subset of the training data held out for validation.

#### Factors

Evaluation factors included token accuracy and perplexity on the validation dataset.

#### Metrics

The primary evaluation metric was perplexity, which measures the model's ability to predict the next token in a sequence (a short sketch for computing it appears at the end of this card).

### Results

[More Information Needed]

#### Summary

The model achieved satisfactory results for text generation tasks based on the validation metrics.

## Model Examination

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** NVIDIA GeForce RTX 4090 (2 GPUs)
- **Hours used:** 3.85

## Technical Specifications

### Model Architecture and Objective

The model is a distilled version of GPT-2 fine-tuned for text generation with a causal language modeling objective.

### Compute Infrastructure

#### Hardware

Training was performed on two NVIDIA GeForce RTX 4090 GPUs.

#### Software

- **OS:** Ubuntu 22.04
- **Libraries:** `transformers`, `torch`, `safetensors`
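For reference, the perplexity metric mentioned under Metrics is the exponential of the mean per-token cross-entropy loss. A minimal sketch of how it could be computed for a held-out passage; the sample text below is a placeholder, not the actual validation data:

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("tesolnet/tari01")
tokenizer = AutoTokenizer.from_pretrained("tesolnet/tari01")
model.eval()

# Placeholder held-out text; the real validation split is not distributed with this card.
text = "Computational linguistics studies language with computational methods."

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss over the tokens.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

# Perplexity is the exponential of the mean per-token cross-entropy loss.
print(f"Perplexity: {math.exp(loss.item()):.2f}")
```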