# GPT-2 Fine-Tuned Model

This is a fine-tuned version of GPT-2 for text generation. Fine-tuning improves the model's ability to generate coherent, contextually relevant text.
## Model Details
- Model Name: GPT-2 Fine-Tuned
- Base Model: gpt2
- Architecture: GPT2LMHeadModel
- Tokenization: Supported
- Pad Token ID: 50256
- Bos Token ID: 50256
- Eos Token ID: 50256
## Supported Tasks
This model supports the following task:
- Text Generation
## Configuration

### Model Configuration (config.json)
- Hidden Size: 768
- Number of Layers: 12
- Number of Attention Heads: 12
- Vocab Size: 50257
- Token Type IDs: Not used
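Taken together, these values correspond to a `config.json` along the following lines (a sketch only: the field names follow the standard `GPT2Config` schema, and the actual file may contain additional fields):

```json
{
  "architectures": ["GPT2LMHeadModel"],
  "model_type": "gpt2",
  "n_embd": 768,
  "n_layer": 12,
  "n_head": 12,
  "vocab_size": 50257,
  "bos_token_id": 50256,
  "eos_token_id": 50256
}
```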
### Generation Configuration (generation_config.json)
- Sampling Temperature: 0.7
- Top-p (nucleus sampling): 0.9
- Pad Token ID: 50256
- Bos Token ID: 50256
- Eos Token ID: 50256
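Temperature and top-p interact at each decoding step: the logits are first divided by the temperature (0.7 sharpens the distribution toward the most likely tokens), then nucleus sampling keeps only the smallest set of tokens whose cumulative probability reaches 0.9. A minimal pure-Python sketch of these two steps (the function names are illustrative, not part of the model's code):

```python
import math

def softmax_with_temperature(logits, temperature=0.7):
    """Scale logits by 1/temperature, then softmax.
    Temperatures below 1.0 sharpen the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def top_p_filter(probs, top_p=0.9):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p; renormalize the kept probabilities to sum to 1."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in order:
        kept.append(i)
        total += probs[i]
        if total >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

# Example with four hypothetical logits:
probs = softmax_with_temperature([2.0, 1.0, 0.5, 0.1], temperature=0.7)
print(top_p_filter(probs, top_p=0.9))
```

A real sampler would then draw the next token from the filtered, renormalized distribution.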
## Usage
To use this model for text generation via the Hugging Face API, use the following Python code snippet:
```python
import requests

api_url = "https://api-inference.huggingface.co/models/rahul77/gpt-2-finetune"
headers = {
    "Authorization": "Bearer YOUR_API_TOKEN",  # Replace with your Hugging Face API token
    "Content-Type": "application/json",
}

data = {
    "inputs": "What is a large language model?",
    "parameters": {
        "max_length": 50
    }
}

response = requests.post(api_url, headers=headers, json=data)
if response.status_code == 200:
    print(response.json())
else:
    print(f"Error: {response.status_code}")
    print(response.json())
```
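On success, the Inference API returns a JSON list of objects, each carrying a `generated_text` field; while the model is still loading it instead returns an `{"error": ...}` object. A small helper (the name is illustrative) to unwrap that shape:

```python
def extract_texts(payload):
    """Return the generated strings from an Inference API response.

    Raises RuntimeError if the API reported an error (e.g. the model
    is still loading and should be retried after a short wait).
    """
    if isinstance(payload, dict) and "error" in payload:
        raise RuntimeError(payload["error"])
    return [item["generated_text"] for item in payload]

# Example with a payload shaped like the API's success response:
sample = [{"generated_text": "What is a large language model? It is..."}]
print(extract_texts(sample)[0])
```

In the snippet above you would call `extract_texts(response.json())` instead of printing the raw JSON.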
## Model tree for rahul77/gpt-2-finetune

- Base model: openai-community/gpt2