---
license: apache-2.0
language:
- en
metrics:
- rouge
base_model: google/pegasus-cnn_dailymail
---

### Pegasus-based Text Summarization Model

Model Name: pegsus-text-summarization

### Model Description

This model is a fine-tuned version of the Pegasus model (base checkpoint: google/pegasus-cnn_dailymail), adapted for abstractive text summarization. It was fine-tuned on the SAMSum dataset, a corpus of messenger-style conversations paired with human-written summaries, so it is geared toward summarizing dialogue.
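
For reference, this is roughly what a SAMSum training pair looks like. The snippet below is a minimal sketch, assuming the `datasets` library can load the `samsum` dataset in your environment (older versions of the loader also needed the `py7zr` package):

```python
# Sketch: peek at one SAMSum example (dialogue + reference summary).
# Assumes the `datasets` library is installed and can fetch the "samsum" dataset.
from datasets import load_dataset

samsum = load_dataset("samsum", split="train")
example = samsum[0]
print(example["dialogue"])  # multi-turn chat, one speaker per line
print(example["summary"])   # short human-written summary
```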
### Usage

The model generates concise abstractive summaries of input text and works best on conversational or dialogue-based inputs such as chat transcripts.
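
For a quick try-out, the transformers summarization pipeline can wrap this checkpoint. This is a minimal sketch (the sample dialogue is illustrative, not from this card); the How to Use section below shows the explicit model and tokenizer calls:

```python
# Sketch: one-call summarization via the transformers pipeline API.
from transformers import pipeline

summarizer = pipeline("summarization", model="ailm/pegsus-text-summarization")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue)[0]["summary_text"])
```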
### How to Use

You can use this model with the Hugging Face transformers library. Below is an example code snippet:

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model_name = "ailm/pegsus-text-summarization"
model = PegasusForConditionalGeneration.from_pretrained(model_name)
tokenizer = PegasusTokenizer.from_pretrained(model_name)

# Define the input text
text = "Your input text here"

# Tokenize the input text
tokens = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")

# Generate the summary (returns token IDs)
summary_ids = model.generate(**tokens)

# Decode and print the summary
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
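
The call above uses the checkpoint's default generation settings. If the summaries come out too short or too long, you can pass decoding parameters to generate() explicitly; the values below are illustrative, not taken from this model card:

```python
# Sketch: explicit decoding parameters (illustrative values; tune for your inputs).
summary_ids = model.generate(
    **tokens,
    num_beams=4,        # beam search usually helps abstractive summarization
    max_new_tokens=64,  # upper bound on the generated summary length
    length_penalty=0.8, # < 1.0 nudges the model toward shorter summaries
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```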