
Pre-trained BART model fine-tuned on the WikiLingua dataset

This repository contains a DistilBART model (base checkpoint by sshleifer) fine-tuned on the English portion of the wiki_lingua dataset.

Purpose: examine the performance of a fine-tuned summarization model for research purposes.
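
A minimal inference sketch using the transformers pipeline API. The example text and generation parameters (max_length, min_length) are illustrative choices, not values from the original training setup:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
summarizer = pipeline("summarization", model="datien228/distilbart-ftn-wiki_lingua")

article = (
    "Cut the loaf of bread in half lengthwise. Spread butter on both halves. "
    "Sprinkle minced garlic and chopped parsley over the butter, then bake at "
    "175 C for about 10 minutes until golden."
)

# max_length / min_length are illustrative; tune them for your inputs
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```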

Observations:

  • The pre-trained model was trained on the XSum dataset, which summarizes moderately long documents into a one-sentence summary
  • Fine-tuning this model on WikiLingua is a reasonable fit, since the summaries in that dataset are also short
  • In practice, however, the model does not capture the key points any more clearly; it mostly extracts the opening sentence of the document (see the sketch after this list)
  • The data pre-processing and the model's hyperparameters also need further tuning
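
To illustrate the extractive behavior noted above, the sketch below loads one English WikiLingua article and compares the generated summary against the document's opening sentence. It assumes the Hub layout of wiki_lingua, where each example's article field holds parallel document/summary lists; the generation parameters are again illustrative:

```python
from datasets import load_dataset
from transformers import pipeline

# Assumption: the "english" config of wiki_lingua; depending on your
# datasets version, loading may require trust_remote_code=True
data = load_dataset("wiki_lingua", "english", split="train")
document = data[0]["article"]["document"][0]

summarizer = pipeline("summarization", model="datien228/distilbart-ftn-wiki_lingua")
generated = summarizer(
    document, max_length=60, min_length=10, do_sample=False, truncation=True
)[0]["summary_text"]

# If the model is mostly extractive, these two lines should overlap heavily
print("Opening sentence: ", document.split(". ")[0])
print("Generated summary:", generated)
```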

Dataset used to train datien228/distilbart-ftn-wiki_lingua: wiki_lingua (English)