Default attention to eager implementation
#12
by lysandre
Most of the issues with Gemma 2 come from having FA2/SDPA attention enabled by default.
In this PR, I'm changing the default to the eager attention implementation.
Example of how this changes downstream usage:
In [12]: from transformers import AutoModelForCausalLM
    ...: from accelerate import init_empty_weights
    ...:
    ...: with init_empty_weights():
    ...:     model = AutoModelForCausalLM.from_pretrained('google/gemma-2-27b', revision='main')
    ...:     print(model.config._attn_implementation, model.model.layers[0].self_attn.__class__)
returns
sdpa <class 'transformers.models.gemma2.modeling_gemma2.Gemma2SdpaAttention'>
With the updated config from this PR:
In [14]: from transformers import AutoModelForCausalLM
    ...: from accelerate import init_empty_weights
    ...:
    ...: with init_empty_weights():
    ...:     model = AutoModelForCausalLM.from_pretrained('google/gemma-2-27b', revision='refs/pr/12')
    ...:     print(model.config._attn_implementation, model.model.layers[0].self_attn.__class__)
returns
eager <class 'transformers.models.gemma2.modeling_gemma2.Gemma2Attention'>
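Users who still want SDPA or FlashAttention-2 after this change can request an implementation explicitly at load time via the attn_implementation argument of from_pretrained. A minimal sketch (the dtype choice here is only an illustrative assumption):

# Sketch: overriding the new eager default explicitly at load time.
# 'sdpa' and 'flash_attention_2' are the standard identifiers accepted by
# transformers; torch.bfloat16 is an assumed dtype for illustration only.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    'google/gemma-2-27b',
    attn_implementation='sdpa',  # or 'flash_attention_2' / 'eager'
    torch_dtype=torch.bfloat16,
)
print(model.config._attn_implementation)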
lysandre changed pull request title from [WIP] Default attention to eager implementation to Default attention to eager implementation
lysandre changed pull request status to merged