
Model Card for Tiny Hinglish-Chat-21M

A tiny Hinglish-speaking text completion model that can carry out conversations on everyday-life topics in Hinglish. Try it now through its Hugging Face Space!

More Information

For more information about this model, its training process, or related resources, you can check the GitHub repository Tiny-Hinglish-Chat-21M-Scripts.

How to Get Started with the Model

To get started with the model, run the Python code below:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer, placing the model on GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("Abhishekcr448/Tiny-Hinglish-Chat-21M")
model = AutoModelForCausalLM.from_pretrained("Abhishekcr448/Tiny-Hinglish-Chat-21M").to(device)

# Generate a short continuation of the given prompt
def generate_text(prompt):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        inputs["input_ids"],
        max_length=inputs["input_ids"].shape[-1] + 25,  # up to 25 new tokens
        no_repeat_ngram_size=2,  # avoid repeating any bigram
        temperature=0.8,         # soften the next-token distribution
        top_k=50,                # sample only from the 50 most likely tokens
        top_p=0.9,               # nucleus sampling
        do_sample=True,
    )
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

# Simple prompt loop; type 'exit' to quit
while (prompt := input("Enter prompt ('exit' to quit): ").lower()) != "exit":
    generate_text(prompt)

Model Details

Model Description

Tiny Hinglish-Chat-21M is a small conversational text generation model trained on a Hinglish conversation dataset. It generates responses in Hinglish (a mixture of Hindi and English) for everyday conversation topics. The model is based on the GPT-2 architecture, making it suitable for text completion in a conversational setting. It was trained on a synthetic Hinglish dataset created with GPT-4o-mini and fine-tuned to produce responses that mimic casual dialogue between two people. The model is designed for edge devices, remaining lightweight and fast while keeping responses relevant. The complete process and code used to build this model can be found in my GitHub repository: Tiny-Hinglish-Chat-21M-Scripts.

  • Developed by: Abhishek Khatri
  • Model Type: Text Generation (Conversational AI)
  • License: MIT
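
Since the model targets edge devices, a quick CPU latency check is a useful sanity test. Below is a minimal sketch; the prompt and the 25-token budget are arbitrary choices, not values from this card:

import time
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Abhishekcr448/Tiny-Hinglish-Chat-21M")
model = AutoModelForCausalLM.from_pretrained("Abhishekcr448/Tiny-Hinglish-Chat-21M")  # stays on CPU

inputs = tokenizer("kya haal hai", return_tensors="pt")
start = time.perf_counter()
with torch.no_grad():
    model.generate(inputs["input_ids"], max_new_tokens=25, do_sample=False)  # greedy, up to 25 tokens
print(f"Generated up to 25 tokens in {time.perf_counter() - start:.2f}s on CPU")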

Model Architecture

This model is built on GPT-2, a well-known transformer-based architecture for generating coherent text across a variety of contexts. It was fine-tuned on a dataset of Hinglish conversations, so it handles mixed Hindi-English text.
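
To inspect the exact GPT-2 configuration (layer count, attention heads, embedding width, which this card does not state explicitly), you can load the model config. A minimal sketch:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("Abhishekcr448/Tiny-Hinglish-Chat-21M")
# GPT-2 configs expose the transformer dimensions directly
print(config.model_type)                      # expected: "gpt2"
print(config.n_layer, config.n_head, config.n_embd)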

Performance

  • Language Support: Hinglish (a mix of Hindi and English)
  • Primary Use Case: Text completion for conversational chatbots
  • Model Size: ~21 million parameters
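
A quick way to verify the ~21M figure yourself (a sketch; it simply sums the sizes of all parameter tensors):

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Abhishekcr448/Tiny-Hinglish-Chat-21M")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")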

Uses

This model can be used for generating text in Hinglish, making it ideal for small-scale chatbots or applications that need a conversational model but have limited computational resources. It is particularly suitable for edge devices, where both model size and response time matter.

Direct Use

You can use the model directly through its Hugging Face Space or by integrating it into your own application. There is currently no production deployment beyond the demo Space, but the model is straightforward to wire into chatbots or other conversational systems.
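
For a minimal integration, the Transformers pipeline API keeps the wiring to a few lines. A sketch, with an illustrative prompt and generation settings:

from transformers import pipeline

generator = pipeline("text-generation", model="Abhishekcr448/Tiny-Hinglish-Chat-21M")
result = generator("kya kar rahe ho", max_new_tokens=25, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])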

Bias, Risks, and Limitations

Like any AI model, this model may sometimes generate irrelevant or biased outputs. Since it was trained on synthetic data generated by GPT-4o-mini, the outputs may reflect biases inherent in that data. Users should always review generated text to ensure its relevance and appropriateness.

Risks

  • The model may sometimes provide contextually incorrect or biased responses.
  • Because Hinglish is a non-standard language mixture, some responses may be hard to follow for users unfamiliar with the blend.

Recommendations

It is advised to monitor and review the generated outputs before deployment, especially in sensitive applications, to avoid any undesirable or inappropriate responses.
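
One lightweight way to operationalize this is to screen generations against a blocklist before showing them to users. This is a hypothetical sketch: BLOCKED_WORDS and the fallback message are placeholders you would define for your own application, not part of this model:

# Hypothetical post-generation filter; the blocklist is an application-specific placeholder
BLOCKED_WORDS = {"example_slur", "example_insult"}

def review_output(text: str) -> str:
    tokens = set(text.lower().split())
    if tokens & BLOCKED_WORDS:
        return "Sorry, yeh response theek nahi laga. Kuch aur poochho?"  # safe fallback reply
    return text

print(review_output("kya haal hai"))  # clean text passes through unchanged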

Training Details

The training process, including data collection, preprocessing, and model fine-tuning, is explained in the following GitHub repository: Tiny-Hinglish-Chat-21M-Scripts. The model was trained on a custom Hinglish dataset created using GPT-4o-mini.

Environmental Impact

The model was trained on a private infrastructure using an NVIDIA RTX 4090 GPU, and the total computation lasted for about 10 hours. The carbon emissions during training were calculated using the Machine Learning CO₂ Impact calculator.

  • Hardware Type: NVIDIA RTX 4090
  • Hours Used: ~10 hours
  • Cloud Provider: Vast.ai
  • Compute Region: Germany
  • Carbon Emitted: 1.3 kg CO₂ (estimated)

These values were calculated using the MLCO2 Impact Calculator presented in the paper by Lacoste et al. (2019).
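
For context, the estimate is consistent with the usual power × time × grid-intensity arithmetic. A back-of-the-envelope sketch; the ~450 W average draw and the ~0.3 kg CO₂/kWh grid figure are assumptions for illustration, not values from this card:

gpu_power_kw = 0.45       # assumed average draw for an RTX 4090
hours = 10                # training time reported above
grid_kg_per_kwh = 0.3     # assumed carbon intensity for Germany's grid
print(f"~{gpu_power_kw * hours * grid_kg_per_kwh:.1f} kg CO2")  # ≈ 1.3 kg, matching the card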

Technical Specifications

Model Architecture and Objective

  • Architecture: GPT-2 (small model)
  • Objective: Text generation from conversational prompts in Hinglish

Software

  • Frameworks Used: PyTorch, Transformers (Hugging Face)
  • Environment: Python 3.8, torch 2.5.1, transformers 4.46.3
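
A quick way to confirm your environment matches the versions listed above (a minimal sketch):

# Print the interpreter and library versions in the current environment
import sys
import torch
import transformers

print(sys.version.split()[0])      # expected: 3.8.x
print(torch.__version__)           # expected: 2.5.1
print(transformers.__version__)    # expected: 4.46.3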

Citation

If you use this model or dataset in your work, please cite the following:

BibTeX:

@misc{Tiny-Hinglish-Chat-21M,
  author = {Abhishek Khatri},
  title = {Tiny Hinglish-Chat-21M: A Small Hinglish Conversational Model},
  year = {2024},
  url = {https://huggingface.co/Abhishekcr448/Tiny-Hinglish-Chat-21M},
}

APA: Khatri, A. (2024). Tiny Hinglish-Chat-21M: A Small Hinglish Conversational Model. Retrieved from https://huggingface.co/Abhishekcr448/Tiny-Hinglish-Chat-21M

Glossary

Hinglish: A blend of Hindi and English, widely used in everyday communication in India and surrounding regions. It involves mixing the two languages, often within the same sentence.

GPT-2: A transformer-based language model for text generation developed by OpenAI.

Model Card Authors

Author: Abhishek Khatri
