Gemma2-2b training uses much more memory!

#23
by bubbleseller - opened

I have been training Gemma2-2b into a VLM on 8x 80GB H800 GPUs with PyTorch FSDP, and the maximum batch size I can fit is 4. This seems strange, because training Llama-2 into a VLM with the same FSDP settings fits a batch size of 32. So I wonder if there are kernels or computations in the transformers Gemma2 model code that are especially memory-consuming.

I have been training Gemma2-2b into a VLM

It is very interesting. I wonder how it works.

This seems strange, because training Llama-2 into a VLM with the same FSDP settings fits a batch size of 32.

IIRC, FSDP doesn't support soft capping in Gemma2. You may need to use alternative settings.
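If soft capping is the suspect, one quick check is to inspect the model config and compare attention backends directly. This is only a minimal sketch, assuming the public `google/gemma-2-2b` checkpoint and a recent `transformers` release; which kernels actually support Gemma2's logit soft capping depends on your `transformers` and flash-attn versions.

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "google/gemma-2-2b"  # assumed checkpoint; substitute your own path

# Gemma2 applies soft capping to attention logits; Llama-2 does not.
config = AutoConfig.from_pretrained(model_id)
print("attn_logit_softcapping:", config.attn_logit_softcapping)

# Compare attention backends: "eager" materializes the full attention matrix,
# while "sdpa" / "flash_attention_2" are much leaner, but their support for
# soft capping depends on your library versions.
for impl in ("eager", "sdpa"):
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        attn_implementation=impl,
        torch_dtype=torch.bfloat16,
    )
    print(impl, "loaded OK")
    del model
```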

Google org

Hi @bubbleseller, you are observing that Llama-2 can handle a batch size of 32 while Gemma2-2B is constrained to a batch size of 4, even though you are using the same FSDP settings. Could you please use PyTorch's torch.profiler and memory tools to inspect exactly where the memory bottleneck is? That should help identify whether certain layers or computations are consuming an unusual amount of memory. Kindly try it and let me know. Thank you.
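For reference, here is a minimal sketch of profiling one training step for memory; the tiny `model`, `batch`, and `optimizer` are placeholders for the actual FSDP-wrapped VLM training loop, so swap in your own setup.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Placeholders standing in for the real FSDP-wrapped VLM, batch, and optimizer.
device = "cuda"
model = torch.nn.Linear(4096, 4096).to(device)
batch = torch.randn(4, 4096, device=device)
optimizer = torch.optim.AdamW(model.parameters())

# Optional: record an allocation timeline for the memory visualizer.
torch.cuda.memory._record_memory_history(max_entries=100_000)

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    profile_memory=True,
    record_shapes=True,
) as prof:
    loss = model(batch).pow(2).mean()   # replace with your VLM forward + loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Rank operators by how much CUDA memory they allocate, and report the peak.
print(prof.key_averages().table(sort_by="self_cuda_memory_usage", row_limit=20))
print(f"peak allocated: {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")

# Dump a snapshot viewable at https://pytorch.org/memory_viz
torch.cuda.memory._dump_snapshot("gemma2_step.pickle")
```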
