## Model description

### Consistency training with supervision
In this example, an image classification model is trained to make consistent predictions through the following workflow:
- Train a standard image classification model.
- Train a second model of equal or larger capacity on a noisy version of the dataset (augmented using RandAugment).
- To do this, first obtain the first model's predictions on the clean images of the dataset.
- Then train the second model to match those predictions on the noisy variants of the same images (a minimal sketch of this step follows the list). This workflow is identical to Knowledge Distillation, but since the student model is of equal or larger size, the process is also referred to as Self-Training.
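Below is a minimal sketch (in TensorFlow/Keras) of one training step for this workflow. The names `teacher`, `student`, `clean_images`, `noisy_images`, and `labels` are hypothetical, not identifiers from this repo, and the unweighted sum of the two losses is an assumption; it illustrates the idea of matching the teacher's clean-image predictions on noisy inputs while keeping a supervised loss.

```python
import tensorflow as tf

kl = tf.keras.losses.KLDivergence()
xent = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function
def train_step(teacher, student, optimizer, clean_images, noisy_images, labels):
    # The teacher predicts on the *clean* images.
    teacher_probs = tf.nn.softmax(teacher(clean_images, training=False), axis=-1)
    with tf.GradientTape() as tape:
        # The student sees the *noisy* (RandAugment-ed) variants of the same images.
        student_logits = student(noisy_images, training=True)
        student_probs = tf.nn.softmax(student_logits, axis=-1)
        # Supervised loss on the ground-truth labels plus a consistency loss
        # that pulls the student's predictions toward the teacher's.
        loss = xent(labels, student_logits) + kl(teacher_probs, student_probs)
    grads = tape.gradient(loss, student.trainable_variables)
    optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return loss
```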
This overall training workflow finds its roots in works like FixMatch, Unsupervised Data Augmentation for Consistency Training, and Noisy Student Training. Since the training process encourages the model to yield consistent predictions for clean as well as noisy images, it is often referred to as consistency training or training with consistency regularization. Although the example focuses on using consistency training to enhance the robustness of models to common corruptions, it can also serve as a template for performing weakly supervised learning.
Full credits go to Sayak Paul for this work.
This repo contains only the teacher model of this training example.
The student model repo can be found at this link.
## Intended uses & limitations
More information needed
## Training and evaluation data
Trained and evaluated on the CIFAR-10 dataset.
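For reference, this is how the dataset could be loaded with the standard Keras CIFAR-10 API (a sketch, not necessarily the exact loading code used in this example):

```python
import tensorflow as tf

# CIFAR-10: 50,000 training and 10,000 test images, 32x32 RGB, 10 classes.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
print(x_train.shape)  # (50000, 32, 32, 3)
print(x_test.shape)   # (10000, 32, 32, 3)
```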
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
| name | optimizer | average_period | start_averaging | training_precision |
|---|---|---|---|---|
| SWA | {'class_name': 'Adam', 'config': {'name': 'Adam', 'learning_rate': 1.0000001e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}} | 10 | 0 | float32 |
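The table describes an Adam optimizer wrapped in Stochastic Weight Averaging (SWA). A sketch of how this configuration could be reconstructed, assuming the SWA wrapper from TensorFlow Addons (the use of `tensorflow_addons` here is an assumption; the card only records the serialized config):

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Inner Adam optimizer with the hyperparameters from the table above
# (decay=0.0 is the legacy Keras default and is omitted here).
base_optimizer = tf.keras.optimizers.Adam(
    learning_rate=1.0000001e-07,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)

# Stochastic Weight Averaging: start averaging immediately (step 0)
# and update the averaged weights every 10 steps.
optimizer = tfa.optimizers.SWA(
    base_optimizer, start_averaging=0, average_period=10
)
```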
## Model Plot