Original Issue:
The dataset comprises 6.365 million parallel sentences in Urdu and Roman-Urdu. Many Roman-Urdu sentences are just variations of the same Urdu sentence due to different transliteration styles. If we randomly split this dataset into training, validation, and test sets, there's a high chance that variations of the same Urdu sentence will appear in multiple sets. This overlap can lead to data leakage, causing the model to memorize specific sentence pairs rather than learning to generalize transliteration patterns. Consequently, evaluation metrics like BLEU scores may be artificially inflated, not accurately reflecting the model's true performance on unseen data.
Splitting Strategy: To address this issue, the dataset is split into training, validation, and test sets in a way that ensures no Urdu sentence (and its variations) appears in more than one set. The strategy involves grouping sentences by unique Urdu text and carefully selecting sentences based on the number of their variations.
- Load and Preprocess the Data
Load the Dataset: Read the CSV file containing Urdu and Roman-Urdu sentence pairs into a Pandas DataFrame.
Remove Missing Entries: Drop any rows where the 'Urdu text' is missing.
Group by Urdu Sentences: Group the data by 'Urdu text' and aggregate all corresponding 'Roman-Urdu text' variations into lists.
Count Variations: Add a 'count' column representing the number of Roman-Urdu variations for each Urdu sentence.
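A minimal sketch of this preprocessing, assuming the CSV has columns named 'Urdu text' and 'Roman-Urdu text' (the file name here is illustrative):

```python
import pandas as pd

# Load the parallel corpus (illustrative file name).
df = pd.read_csv("urdu_roman_urdu_pairs.csv")

# Drop rows with a missing Urdu sentence.
df = df.dropna(subset=["Urdu text"])

# One row per unique Urdu sentence, with all Roman-Urdu
# variations collected into a list.
grouped = (
    df.groupby("Urdu text")["Roman-Urdu text"]
    .apply(list)
    .reset_index()
)

# Number of Roman-Urdu variations per Urdu sentence.
grouped["count"] = grouped["Roman-Urdu text"].apply(len)
```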
- Select Unique Sentences for Validation and Test Sets
Validation Set: Select 1,000 Urdu sentences that occur only once in the dataset (i.e., sentences with a 'count' of 1). Include their corresponding Roman-Urdu text.
Test Set: From the remaining Urdu sentences with a 'count' of 1 (excluding those in the validation set), select another 1,000 sentences. Include their corresponding Roman-Urdu text.
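Continuing the sketch above, the unique-sentence selection might look like this (variable names are illustrative; sizes and seed follow the description):

```python
# Urdu sentences that occur exactly once in the corpus.
unique = grouped[grouped["count"] == 1]

# 1,000 sentences for validation, then 1,000 disjoint sentences
# for test, sampled reproducibly with a fixed seed.
val_unique = unique.sample(n=1000, random_state=42)
test_unique = unique.drop(val_unique.index).sample(n=1000, random_state=42)
```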
- Select Replicated Sentences with Variations for Validation and Test Sets
Validation Set: Select 2,000 Urdu sentences that have between 2 and 10 Roman-Urdu variations (i.e., 'count' > 1 and 'count' ≤ 10). Include all variations of these Urdu sentences in the validation set.
Test Set: From the remaining Urdu sentences with 2 to 10 variations (excluding those in the validation set), select another 2,000 sentences. Include all variations of these Urdu sentences in the test set.
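A corresponding sketch for the replicated sentences, again continuing from the frames defined above:

```python
# Urdu sentences with 2 to 10 Roman-Urdu variations.
replicated = grouped[(grouped["count"] > 1) & (grouped["count"] <= 10)]

val_rep = replicated.sample(n=2000, random_state=42)
test_rep = replicated.drop(val_rep.index).sample(n=2000, random_state=42)

# Expand back to one row per (Urdu, Roman-Urdu) pair so every
# variation of a selected sentence enters the evaluation set.
val_rep_pairs = val_rep.explode("Roman-Urdu text")
test_rep_pairs = test_rep.explode("Roman-Urdu text")
```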
- Prepare the Training Set
Exclude Test and Validation Sentences: Remove all Urdu sentences (and their variations) present in the test and validation sets from the original dataset.
Form the Training Set: The training set consists of all remaining Urdu sentences and their corresponding Roman-Urdu variations.
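The training set then falls out by exclusion; a sketch reusing the frames defined above:

```python
# All Urdu sentences reserved for evaluation.
held_out = (
    set(val_unique["Urdu text"]) | set(test_unique["Urdu text"])
    | set(val_rep["Urdu text"]) | set(test_rep["Urdu text"])
)

# Keep every original sentence pair whose Urdu side is not held out.
train = df[~df["Urdu text"].isin(held_out)]
```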
- Create Smaller Subsets for Quick Evaluation
Purpose: Facilitate faster testing and validation during model development.
Validation Subset: Take the 1,000 unique Urdu sentences from the validation set (each has only one variation). For each of the 2,000 replicated Urdu sentences in the validation set, randomly select a single Roman-Urdu variation. Combine these to form a smaller validation set of 3,000 sentences.
Test Subset: Repeat the same process for the test set to create a smaller test set of 3,000 sentences.
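One way to build these 3,000-sentence subsets, keeping a single variation per Urdu sentence (the helper name and seed handling are illustrative):

```python
import random

rng = random.Random(42)  # fixed seed for reproducibility

def one_variation(frame):
    # Keep exactly one randomly chosen Roman-Urdu variation
    # per Urdu sentence.
    out = frame.copy()
    out["Roman-Urdu text"] = out["Roman-Urdu text"].apply(rng.choice)
    return out

# 1,000 unique + 2,000 replicated sentences = 3,000 rows each.
val_small = pd.concat([one_variation(val_unique), one_variation(val_rep)])
test_small = pd.concat([one_variation(test_unique), one_variation(test_rep)])
```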
Key Points:
No Overlap Between Sets: By excluding any Urdu sentences used in the test and validation sets from the training set, the strategy ensures no overlap, preventing data leakage.
Inclusion of All Variations: The large test and validation sets include all variations of selected Urdu sentences to thoroughly evaluate the model's ability to handle different transliterations.
Smaller Subsets for Efficiency: Smaller test and validation sets contain only one variation per Urdu sentence, allowing for quicker evaluations during model development without compromising the integrity of the results.
Random Sampling with Fixed Seed: A fixed random_state (e.g., 42) is used in all random sampling steps to ensure reproducibility of the data splits.
Balanced Evaluation: The strategy includes both unique sentences and those with multiple variations, providing a comprehensive evaluation of the model's performance across different levels of sentence frequency and complexity.
Data Integrity Checks: After splitting, the sizes of the datasets are verified, and checks confirm that no Urdu sentences are shared between the training, validation, and test sets (a sketch of these checks appears at the end of this section).
Generalization Focus: Because the model never sees any test or validation sentences during training, the evaluation metrics accurately reflect its ability to generalize to unseen data.
We also checked whether any training sentences are made up entirely of test sentences (or repetitions of them) and found no matches. (file: Transliterate/RUP/finetuning/scripts/one_time_usage/filter_uniqueurdu_data.py)
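A sketch of the disjointness and size checks, using the frames from the sketches above (the actual checks live in the script referenced here; this is an illustrative equivalent):

```python
train_urdu = set(train["Urdu text"])
val_urdu = set(val_unique["Urdu text"]) | set(val_rep["Urdu text"])
test_urdu = set(test_unique["Urdu text"]) | set(test_rep["Urdu text"])

# No Urdu sentence may appear in more than one split.
assert train_urdu.isdisjoint(val_urdu)
assert train_urdu.isdisjoint(test_urdu)
assert val_urdu.isdisjoint(test_urdu)

# Sanity-check the split sizes described above.
assert len(val_unique) == len(test_unique) == 1000
assert len(val_rep) == len(test_rep) == 2000
```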