A collection of observations about the code

#1 opened by AntonV

Note: feel free to ignore this if you're already aware that the code is not finished.

I went through the custom code here (https://huggingface.co/nvidia/Hymba-1.5B-Base/blob/main/modeling_hymba.py) and noticed a few things:

  • The fast path (a single fused kernel for the whole Mamba op) doesn't account for the attention that is computed in the slow path
  • It seems like only one Mamba "head" is used, i.e. the index is fixed to 0
  • self_attn_weights is returned as torch.empty, if I see it correctly
  • The batch-size behaviour looks like the issue we ran into in HF, i.e. the conv1d and the Mamba scan need padded tokens zeroed out plus left padding (see the sketch after this list)
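
To make the padding point concrete, here is a minimal sketch (not the Hymba code; `masked_causal_conv1d` is just an illustrative helper) of zeroing left-padded positions around a depthwise causal conv1d so padding never leaks into real tokens or into the scan that follows:

```python
import torch
import torch.nn.functional as F

def masked_causal_conv1d(hidden_states, conv_weight, conv_bias, attention_mask):
    """Depthwise causal conv1d that zeroes out (left-)padded positions.

    hidden_states:  (batch, channels, seq_len)
    conv_weight:    (channels, 1, kernel_size)
    attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding
    """
    # Zero padded positions before the conv so padding never leaks into
    # the receptive field of real tokens.
    mask = attention_mask[:, None, :].to(hidden_states.dtype)
    hidden_states = hidden_states * mask

    kernel_size = conv_weight.shape[-1]
    out = F.conv1d(
        F.pad(hidden_states, (kernel_size - 1, 0)),  # causal: pad on the left only
        conv_weight,
        conv_bias,
        groups=hidden_states.shape[1],               # depthwise
    )
    # Zero again so the selective scan downstream also sees clean padding.
    return out * mask


# Toy usage: the second sequence is left-padded by two tokens.
x = torch.randn(2, 8, 6)
w = torch.randn(8, 1, 4)
b = torch.zeros(8)
mask = torch.tensor([[1, 1, 1, 1, 1, 1],
                     [0, 0, 1, 1, 1, 1]])
y = masked_causal_conv1d(x, w, b, mask)   # (2, 8, 6)
```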
NVIDIA org

Thank you for your interest in our work and for pointing these out!

  1. Yes, you are correct: we don't use Mamba's fast path because each Hymba block contains both attention and Mamba. Building a fused kernel that covers the entire Hymba block is ongoing work on our side.

  2. Yes, we use a single Mamba block as the SSM heads, treating each inner dimension of Mamba as a separate head. By analogy with Mamba2, where each head has its own A value, each inner dimension in Mamba1 has its own distinct A and can therefore be viewed as a separate head (a small sketch follows this list).

  3. Good catch! We do not output self_attn_weights since we use FlexAttention and FlashAttention, which never materialize the attention matrix; we will remove the placeholder in the next version (a sketch of the convention also follows the list).

  4. Yes, you are correct. The batch size issue arises because of the order of meta tokens and padding, which may violate the padding rules of some operators. We will update this soon.
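
On point 2, a minimal sketch of the two parameterizations being compared (toy sizes, not Hymba's actual code): in Mamba1 the A parameter has one row per inner channel, while in Mamba2 it is one value per head.

```python
import torch
import torch.nn as nn

d_inner, d_state = 16, 4   # toy sizes

# Mamba1-style parameterization: one row of A per inner channel, so each
# channel evolves with its own diagonal state matrix -- the per-dimension
# "head" described above.
A = torch.arange(1, d_state + 1, dtype=torch.float32).repeat(d_inner, 1)
A_log = nn.Parameter(torch.log(A))
print(A_log.shape)          # torch.Size([16, 4]) -> 16 independent "heads"

# Mamba2-style parameterization for comparison: one scalar A per head.
n_heads = 4
A_log_mamba2 = nn.Parameter(torch.zeros(n_heads))
print(A_log_mamba2.shape)   # torch.Size([4])
```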
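On point 3, a hypothetical sketch of the convention the fix would follow: fused kernels never build the (seq x seq) score matrix, so the module returns None for the weights instead of a torch.empty placeholder. `attention_forward` here is illustrative, not the actual Hymba function.

```python
import torch
import torch.nn.functional as F

def attention_forward(query, key, value, output_attentions=False):
    # Hypothetical wrapper: fused kernels (FlashAttention / FlexAttention /
    # SDPA) never materialize the attention matrix, so there are no
    # weights to return.
    if output_attentions:
        raise ValueError(
            "Fused attention kernels do not expose attention weights; "
            "fall back to an eager implementation to inspect them."
        )
    attn_output = F.scaled_dot_product_attention(query, key, value, is_causal=True)
    return attn_output, None   # None instead of torch.empty(...)
```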

NVIDIA org

@AntonV thanks for looking into the code! Let us know if you have any other observations or feedback; we appreciate it.

pmolchanov changed discussion status to closed
