ReLU's Revival: On the Entropic Overload in Normalization-Free Large Language Models
Abstract
LayerNorm is a critical component in modern large language models (LLMs), stabilizing training and ensuring smooth optimization. However, it introduces significant challenges for mechanistic interpretability, outlier feature suppression, and faithful signal propagation, and it adds substantial computational and communication overhead to private inference. This work explores desirable activation functions in normalization-free decoder-only LLMs. Contrary to the conventional preference for GELU in transformer-based models, our empirical findings demonstrate an opposite trend: ReLU significantly outperforms GELU in LayerNorm-free models, yielding an 8.2% perplexity improvement. We identify a key issue with GELU, where early layers experience entropic overload, leading to under-utilization of the representational capacity of attention heads. This highlights that smoother activations like GELU are ill-suited for LayerNorm-free architectures, whereas ReLU's geometric properties, namely specialization in input space and intra-class selectivity, lead to improved learning dynamics and better information retention in the absence of LayerNorm. This study offers key insights for optimizing transformer architectures where LayerNorm introduces significant challenges.
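The "entropic overload" referred to in the abstract concerns the entropy of attention distributions across heads. As a rough illustration only, here is a minimal PyTorch sketch of how per-head attention entropy could be measured; the function name and the normalization by the maximum entropy log(key_len) are our assumptions for illustration, not necessarily the paper's exact metric.

import torch

def headwise_attention_entropy(attn_probs: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    """Shannon entropy of each attention head's distribution over keys.

    attn_probs: post-softmax attention weights of shape
                (batch, num_heads, query_len, key_len).
    Returns per-head entropy averaged over batch and query positions,
    normalized by the maximum possible entropy log(key_len), so values
    near 1.0 indicate near-uniform (high-entropy) attention.
    """
    # Entropy over the key dimension for every (batch, head, query) triple.
    entropy = -(attn_probs * (attn_probs + eps).log()).sum(dim=-1)          # (B, H, Q)
    max_entropy = torch.log(torch.tensor(attn_probs.size(-1), dtype=attn_probs.dtype))
    # Average over batch and query positions to get one score per head.
    return (entropy / max_entropy).mean(dim=(0, 2))                          # (H,)

Heads whose normalized entropy stays close to 1.0 attend almost uniformly and contribute little head-specific structure, which is the under-utilization the abstract describes.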
Community
We've discovered that in normalization-free LLMs, using the classic ReLU as the FFN activation function significantly outperforms the commonly used GELU, leading to an 8.2% improvement in perplexity. Without LayerNorm, GELU causes "entropic overload" in early layers, under-utilizing the representational capacity of attention heads. In contrast, the geometric properties of ReLU (higher intra-class selectivity and specialization) enhance learning dynamics and attention head diversity. Our findings challenge the conventional preference for GELU in transformers and offer new insights for optimizing LayerNorm-free architectures.
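For concreteness, a minimal PyTorch sketch of the kind of LayerNorm-free FFN block being compared; the 4x hidden expansion and residual wiring are standard GPT-style choices we assume for illustration, not the paper's exact training configuration.

import torch
import torch.nn as nn

class NormFreeFFN(nn.Module):
    """Feed-forward block of a LayerNorm-free transformer layer.

    The only change studied here is the activation (nn.ReLU vs nn.GELU);
    no pre- or post-normalization is applied anywhere in the block.
    """

    def __init__(self, d_model: int, activation: nn.Module):
        super().__init__()
        self.up = nn.Linear(d_model, 4 * d_model)
        self.act = activation
        self.down = nn.Linear(4 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Plain residual connection -- no LayerNorm before or after.
        return x + self.down(self.act(self.up(x)))

# ReLU variant (found to outperform GELU without LayerNorm):
ffn_relu = NormFreeFFN(d_model=768, activation=nn.ReLU())
# GELU variant (conventional choice, prone to entropic overload here):
ffn_gelu = NormFreeFFN(d_model=768, activation=nn.GELU())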
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Stable Language Model Pre-training by Reducing Embedding Variability (2024)
- EchoAtt: Attend, Copy, then Adjust for More Efficient Large Language Models (2024)
- Adaptive Large Language Models By Layerwise Attention Shortcuts (2024)
- Small Language Models: Survey, Measurements, and Insights (2024)
- A Law of Next-Token Prediction in Large Language Models (2024)