arxiv:2402.09977
Fast Vocabulary Transfer for Language Model Compression
Published on Feb 15, 2024
Abstract
Real-world business applications require a trade-off between language model performance and size. We propose a new method for model compression that relies on vocabulary transfer. We evaluate the method on various vertical domains and downstream tasks. Our results indicate that vocabulary transfer can be effectively used in combination with other compression techniques, yielding a significant reduction in model size and inference time while marginally compromising on performance.
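The abstract describes vocabulary transfer only at a high level. The sketch below illustrates one plausible reading of the idea: each token in a new, smaller in-domain vocabulary gets its embedding initialized from the original model's embeddings of its sub-tokens. The function name, the averaging strategy, and the `my-domain-tokenizer` identifier are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of vocabulary transfer (illustrative, not the authors' exact code).
# Idea: re-initialize the embedding of each token in a new, smaller in-domain
# vocabulary from the original model's embeddings of its sub-tokens.
import torch
from transformers import AutoModel, AutoTokenizer

def transfer_vocabulary(model, old_tokenizer, new_tokenizer):
    old_embeddings = model.get_input_embeddings().weight.detach()
    hidden_size = old_embeddings.size(1)
    new_vocab = new_tokenizer.get_vocab()  # token string -> new id
    new_embeddings = torch.empty(len(new_vocab), hidden_size)

    for token, new_id in new_vocab.items():
        if token in old_tokenizer.get_vocab():
            # Token already exists in the old vocabulary: copy its embedding.
            old_id = old_tokenizer.convert_tokens_to_ids(token)
            new_embeddings[new_id] = old_embeddings[old_id]
        else:
            # New token: split it with the old tokenizer and average the
            # embeddings of the resulting sub-tokens (assumed strategy).
            piece = token.replace("##", "")  # assumes WordPiece-style prefixes
            old_ids = old_tokenizer(piece, add_special_tokens=False)["input_ids"]
            if old_ids:
                new_embeddings[new_id] = old_embeddings[old_ids].mean(dim=0)
            else:
                new_embeddings[new_id] = old_embeddings.mean(dim=0)

    # Swap in the new embedding matrix and resize the model accordingly.
    model.resize_token_embeddings(len(new_vocab))
    model.get_input_embeddings().weight.data.copy_(new_embeddings)
    return model

# Example usage: shrink a BERT model's vocabulary with an in-domain tokenizer.
model = AutoModel.from_pretrained("bert-base-uncased")
old_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# "my-domain-tokenizer" is a placeholder for a smaller tokenizer trained on domain text.
new_tok = AutoTokenizer.from_pretrained("my-domain-tokenizer")
model = transfer_vocabulary(model, old_tok, new_tok)
```

Because the embedding matrix dominates the parameter count of small encoders, shrinking the vocabulary this way reduces model size directly, and it composes with other compression techniques such as distillation, which is the combination the abstract refers to.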
Community
Amazing idea!
Thanks! If you are interested, you can also take a look at the code here: https://github.com/LeonidasY/fast-vocabulary-transfer