Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration. arXiv:2406.15765, published Jun 22, 2024.
Master-ASR: Achieving Multilingual Scalability and Low-Resource Adaptation in ASR with Modular Learning. arXiv:2306.15686, published Jun 23, 2023.
Hint-Aug: Drawing Hints from Foundation Vision Transformers Towards Boosted Few-Shot Parameter-Efficient Tuning. arXiv:2304.12520, published Apr 25, 2023.