---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: Fake-news-gen
  results: []
---
# Fake-news-gen
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base), trained to generate full news articles from short summaries.
## Summary
This model is a conditional text generation system fine-tuned to produce artificially generated news articles from short text summaries. It demonstrates the potential for AI systems to automatically synthesize false or misleading news content from limited input information.
## Intended Uses
- Research on AI fake news generation capabilities and risks
- Educational purposes to increase awareness of AI fake content issues
- Testing automatic fake news detection systems
## Factors
- Initially trained on summarization data (XSUM BBC news)
- Fine-tuned end-to-end to generate full articles from summaries
- Generates content token-by-token based on summary prompt
- No ground-truth real/fake labels or classifier included
- Outputs are raw model decodes with no post-processing applied (see the inference sketch after this list)
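
Below is a minimal inference sketch using the standard Transformers seq2seq API. The repo id `Fake-news-gen`, the prompt format (the raw summary as encoder input), and the decoding settings are illustrative assumptions, not confirmed details of this model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Fake-news-gen"  # hypothetical repo id; replace with the actual one
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The summary is fed directly as the encoder input (prompt format assumed).
summary = "City council approves new bridge despite budget concerns."
inputs = tokenizer(summary, return_tensors="pt")

# Beam search yields a single raw decode; no post-processing is applied.
output_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
article = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(article)
```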
## Caveats and Recommendations
- Output quality varies; drawing multiple samples or manually editing outputs may be necessary (see the sampling sketch after this list)
- Content reflects training data biases and model limitations
- Not intended for malicious uses or dissemination
- Users should manually review outputs before relying on them
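
Since single decodes can be weak, one way to apply the multiple-sampling recommendation is to draw several stochastic candidates and review them by hand. This sketch reuses `tokenizer`, `model`, and `inputs` from the snippet above; the sampling parameters are assumed values, not tuned settings.

```python
# Draw several sampled candidates for manual review (parameters are assumptions).
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,          # stochastic decoding instead of beam search
    top_p=0.9,               # nucleus sampling threshold (assumed value)
    temperature=0.8,         # softens the token distribution (assumed value)
    num_return_sequences=4,  # several candidates to compare
)
for i, text in enumerate(tokenizer.batch_decode(output_ids, skip_special_tokens=True)):
    print(f"--- candidate {i} ---\n{text}\n")
```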
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (mirrored in the configuration sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
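
As a sketch, the listed hyperparameters map onto `Seq2SeqTrainingArguments` roughly as follows; `output_dir` and any setting not in the list above are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="fake-news-gen",       # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # optimizer: Transformers' default AdamW already uses
    # betas=(0.9, 0.999) and epsilon=1e-8, matching the list above
)
```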
### Training results

### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0