
T5 for question-generation

This is a t5-small model trained for the end-to-end question generation task. Simply input the text and the model will generate multiple questions.

You can try the model with the inference API: just paste in some text and see the generated questions.

For more details, see this repo.

Model in action 🚀

You'll need to clone the repo.


```python
from pipelines import pipeline

text = "Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum \
and first released in 1991, Python's design philosophy emphasizes code \
readability with its notable use of significant whitespace."

nlp = pipeline("e2e-qg")
nlp(text)
# => [
#  'Who created Python?',
#  'When was Python first released?',
#  "What is Python's design philosophy?"
# ]
```
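If you prefer not to clone the repo, here is a rough equivalent using only the `transformers` library. This is a sketch, not the repo's exact pipeline: the `"generate questions: "` prefix and the `<sep>` separator between generated questions are assumptions about how this model family was trained, so verify them against the repo before relying on this.

```python
def split_questions(decoded: str) -> list:
    # Pure helper: the model emits all questions in one string,
    # assumed to be separated by the "<sep>" token.
    return [q.strip() for q in decoded.split("<sep>") if q.strip()]


def generate_questions(text: str, model_name: str = "valhalla/t5-small-e2e-qg") -> list:
    # Imported lazily so split_questions() works without transformers installed.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)

    # Assumed input format: a "generate questions: " task prefix before the passage.
    inputs = tokenizer("generate questions: " + text,
                       return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=128, num_beams=4)
    decoded = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return split_questions(decoded)
```

Keeping the separator logic in its own function makes it easy to adjust if the checkpoint turns out to use a different delimiter.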
