
Paper

SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding

Authors: Moyan Mei, Rohit Sroch

Abstract

With the widespread use of pre-trained language models (PLMs), there has been increased research on how to make them applicable, especially in limited-resource or low-latency, high-throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [1] [2] [3] and achieves comparable or superior performance to its teacher model, such as BERT [4], on a total of 13 tasks from the GLUE [5] and SuperGLUE [6] benchmarks.

Moyan Mei and Rohit Sroch. 2022. SEAD: Simple ensemble and knowledge distillation framework for natural language understanding. Lattice, The Machine Learning Journal by Association of Data Scientists, 3(1).
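
The card only summarizes the framework, so for intuition here is a minimal, illustrative sketch of multi-teacher distillation: the student is trained against the averaged soft targets of several teachers plus the gold labels. The temperature, loss weighting, and averaging scheme below are placeholder assumptions for illustration, not the exact SEAD objective from the paper.

```
import torch
import torch.nn.functional as F

def multi_teacher_distillation_loss(student_logits, teacher_logits_list, labels,
                                    temperature=2.0, alpha=0.5):
    """Blend a hard-label loss with a KL term against the averaged soft
    targets of an ensemble of teachers (illustrative only)."""
    # Average the teachers' temperature-softened distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)

    # KL divergence between the student and the ensemble soft targets.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        teacher_probs,
        reduction="batchmean",
    ) * (temperature ** 2)

    # Standard cross-entropy on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```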

SEAD-L-6_H-384_A-12-mnli

This is a student model distilled from BERT base as the teacher using the SEAD framework on the MNLI task. For weight initialization, we used microsoft/xtremedistil-l6-h384-uncased.
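
A minimal inference sketch, assuming the checkpoint is hosted as C5i/SEAD-L-6_H-384_A-12-mnli and exposes the usual three MNLI labels (check the model's config.json id2label for the exact order):

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repository id assumed from this card; verify it matches the hosted checkpoint.
model_id = "C5i/SEAD-L-6_H-384_A-12-mnli"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Label names come from the model config; MNLI typically uses
# entailment / neutral / contradiction.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```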

All SEAD Checkpoints

Other Community Checkpoints: here

Intended uses & limitations

More information needed

Training hyperparameters

Please take a look at the training_args.bin file:

```
import torch

# training_args.bin is saved alongside the model by the transformers Trainer.
# On newer PyTorch releases you may need to pass weights_only=False here.
hyperparameters = torch.load("training_args.bin")
print(hyperparameters)
```
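
The returned object is typically the `transformers.TrainingArguments` instance saved by the Trainer, so individual settings can be read as attributes, e.g. `hyperparameters.learning_rate`, `hyperparameters.per_device_train_batch_size`, or `hyperparameters.num_train_epochs` (attribute names assume a standard Trainer setup).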

Evaluation results

| Split | Accuracy | Runtime (s) | Samples/s | Steps/s | Loss | # Samples |
|---|---|---|---|---|---|---|
| MNLI matched (`eval_m`) | 0.8495 | 6.5443 | 1499.776 | 46.911 | 0.4366 | 9815 |
| MNLI mismatched (`eval_mm`) | 0.8508 | 5.6975 | 1725.678 | 54.059 | 0.4252 | 9832 |
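
A rough sketch for reproducing comparable numbers on the MNLI validation splits with the `datasets` library; the batch size and the assumption that the checkpoint's label order matches the GLUE label ids are illustrative choices, so check `config.id2label` before comparing accuracies:

```
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "C5i/SEAD-L-6_H-384_A-12-mnli"  # repository id assumed from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

def accuracy(split, batch_size=32):
    ds = load_dataset("glue", "mnli", split=split)
    correct = 0
    for start in range(0, len(ds), batch_size):
        batch = ds[start : start + batch_size]  # dict of lists
        inputs = tokenizer(batch["premise"], batch["hypothesis"],
                           padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            preds = model(**inputs).logits.argmax(dim=-1)
        # Assumes the checkpoint's label order matches the GLUE MNLI ids
        # (0=entailment, 1=neutral, 2=contradiction).
        correct += (preds == torch.tensor(batch["label"])).sum().item()
    return correct / len(ds)

print("matched accuracy:", accuracy("validation_matched"))
print("mismatched accuracy:", accuracy("validation_mismatched"))
```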

Framework versions

  • Transformers >=4.8.0
  • PyTorch >=1.6.0
  • TensorFlow >=2.5.0
  • Flax >=0.3.5
  • Datasets >=1.10.2
  • Tokenizers >=0.11.6

If you use these models, please cite the following paper:

    ```
    @article{article, 
        author={Mei, Moyan and Sroch, Rohit}, 
        title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding}, 
        volume={3}, 
        number={1}, 
        journal={Lattice, The Machine Learning Journal by Association of Data Scientists},
        day={26},
        year={2022}, 
        month={Feb},
        url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63}                                                  
    } 
    ```
    