Default to 'eager' attention implementation

#22
by lysandre (HF staff, Google org) - opened - edited Jul 2

Most of the issues with Gemma 2 come from having FA2/SDPA attention enabled by default.

In this PR, I'm changing the default to the eager attention implementation. Here's an example of how it changes downstream usage:

In [12]: from transformers import AutoModelForCausalLM
    ...: from accelerate import init_empty_weights
    ...:
    ...: with init_empty_weights():
    ...:     model = AutoModelForCausalLM.from_pretrained('google/gemma-2-27b', revision='main')
    ...: print(model.config._attn_implementation, model.model.layers[0].self_attn.__class__)

returns

sdpa <class 'transformers.models.gemma2.modeling_gemma2.Gemma2SdpaAttention'>
With the updated config:

In [14]: from transformers import AutoModelForCausalLM
    ...: from accelerate import init_empty_weights
    ...:
    ...: with init_empty_weights():
    ...:     model = AutoModelForCausalLM.from_pretrained('google/gemma-2-27b', revision='refs/pr/12')
    ...: print(model.config._attn_implementation, model.model.layers[0].self_attn.__class__)

returns

eager <class 'transformers.models.gemma2.modeling_gemma2.Gemma2Attention'>
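
Note that the implementation can still be selected explicitly at load time, so nothing is locked to eager; a minimal sketch mirroring the snippets above (the attn_implementation keyword is the standard transformers argument for this):

In [15]: with init_empty_weights():
    ...:     model = AutoModelForCausalLM.from_pretrained(
    ...:         'google/gemma-2-27b', revision='refs/pr/12', attn_implementation='sdpa'
    ...:     )
    ...: print(model.config._attn_implementation, model.model.layers[0].self_attn.__class__)

which should report sdpa and the Gemma2SdpaAttention class again, as in the first snippet.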
lysandre changed pull request title from [WIP] Default to 'eager' attention implementation to Default to 'eager' attention implementation
lysandre changed pull request status to merged

Does this require requanting?

Hey @oldmanhuggingface, unlikely! This just means that we use the attention implementation with soft-capping. If you had not been using attention with soft-capping, you would have had bad results, so I believe you would have seen it :)
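
For reference, the soft-capping being discussed squashes the attention logits through a tanh before the softmax; a rough illustrative sketch (the 50.0 cap matches Gemma 2's attn_logit_softcapping config value, but this is not the actual modeling code):

import torch

def soft_cap(logits: torch.Tensor, cap: float = 50.0) -> torch.Tensor:
    # Scale down, squash with tanh, scale back up: output stays in (-cap, cap).
    return cap * torch.tanh(logits / cap)

scores = 100 * torch.randn(2, 8, 16, 16)  # dummy (batch, heads, q_len, kv_len) attention logits
print(soft_cap(scores).abs().max())       # always strictly below 50.0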

Is this still needed given that it supports flash_attention_2 now? It's causing issues for us and we have to put in workarounds everywhere.
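
For anyone who does want FA2 back, the usual workaround is to request it explicitly when loading rather than relying on the config default; a sketch, assuming a flash-attn build recent enough to handle Gemma 2's soft-capping and a GPU with bf16 support:

In [16]: import torch
    ...: model = AutoModelForCausalLM.from_pretrained(
    ...:     'google/gemma-2-27b',
    ...:     attn_implementation='flash_attention_2',
    ...:     torch_dtype=torch.bfloat16,
    ...: )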
