Oh man

opened by BoscoTheDog

Not again ;-)

Does this mean Gemini Nano can run without MediaPipe, through Transformers.js only?

If so, does it run with CPU, GPU, or both?

And does Transformers.js allow for loading LoRA adapters? I was toying with it because I was interested in how this experiment enabled that: https://www.reddit.com/r/LocalLLaMA/comments/1dsfpb4/gemini_nano_running_locally_in_brave_using/

Owner

Does this mean Gemini Nano can run without MediaPipe, through Transformers.js only?

That's a goal, but for now this repo will only "signal" to the browser to use the window.ai functionality, if present.

If so, does it run with CPU, GPU, or both?

It will run on the GPU.

And does Transformers.js allow for loading LoRA adapters?

Not currently - this is a limitation of ONNX (/ ONNX Runtime Web), so feel free to open feature requests there! :)
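A common workaround, independent of Transformers.js itself, is to merge the adapter into the base weights before exporting to ONNX. A minimal sketch assuming a PEFT-format adapter (all paths are placeholders):

```python
# Sketch: merge a LoRA adapter into its base model so the merged checkpoint can
# be exported to ONNX (ONNX Runtime Web cannot attach adapters at runtime).
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")   # placeholder path
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter")    # placeholder path
merged = merged.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("path/to/merged-model")  # export this checkpoint to ONNX afterwards
```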

Would my script, which converts the MediaPipe format Gemini Nano to fp32 safetensors, be helpful? https://github.com/ethanc8/Gemini-Nano/blob/master/playground/converter.py

I haven't really tested it, since it takes more than 2 hours to finish dequantizing, and runs out of memory while it tries to save to safetensors. I'm trying various mitigations to get around this.
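For context, the expensive part of a converter like this is turning the quantized integer weights back into floats. Below is a hedged sketch of what that step generally looks like in NumPy; the scales, nibble packing, and layouts are assumptions about the MediaPipe format, not confirmed details of converter.py:

```python
import numpy as np

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    # Symmetric quantization: real_value ~= scale * quantized_value
    return q.astype(np.float32) * scale

def dequantize_int4_packed(packed: np.ndarray, scale: float) -> np.ndarray:
    # Assumed layout: two 4-bit values per byte, low nibble first.
    low = (packed & 0x0F).astype(np.int8)
    high = (packed >> 4).astype(np.int8)
    # Re-center unsigned nibbles [0, 15] to signed [-8, 7] (assumption about the format).
    low = np.where(low > 7, low - 16, low)
    high = np.where(high > 7, high - 16, high)
    return np.stack([low, high], axis=-1).reshape(-1).astype(np.float32) * scale
```

Done element by element in pure Python this takes hours on a multi-gigabyte checkpoint; vectorized as above it runs in seconds per tensor, which is consistent with the later reports in this thread of the conversion finishing in about a minute.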

Owner

That is indeed very useful! If you can get a Gemma model running with those weights, I can convert to ONNX and get it running with transformers.js!
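A hedged sketch of what "getting a Gemma model running with those weights" could look like; whether Gemini Nano's tensor names and shapes actually line up with a Gemma checkpoint is exactly the open question here, and GemmaConfig() below is a placeholder rather than the real hyperparameters:

```python
# Sketch: try loading converted safetensors into a Gemma-style architecture and
# report which tensors match. strict=False so mismatches are listed, not fatal.
from transformers import GemmaConfig, GemmaForCausalLM
from safetensors.torch import load_file

config = GemmaConfig()                      # placeholder hyperparameters
model = GemmaForCausalLM(config)
state_dict = load_file("gemini_nano.safetensors")
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"{len(missing)} missing keys, {len(unexpected)} unexpected keys")
```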

@ethanc8 Cool!

I tried running the script:

`python3 convert_gemini.py weights.bin gemini_nano.safetensors fp16`

but got an error on this line:

`model: tflite.Model.Model = tflite.Model.Model.GetRootAs(buf)`

I changed that to `model: tflite.Model = tflite.Model.GetRootAs(buf)` and got a bit further:

`return packer_type.unpack_from(memoryview_type(buf), head)[0]`
`struct.error: unpack_from requires a buffer of at least 1802465126 bytes for unpacking 4 bytes at offset 1802465122 (actual buffer size is 824)`

Which means I have ridiculously little memory available, I take it? :-D

@BoscoTheDog You need to enter the conda environment and use `converter.py`. Also, `tflite.Model` is a module, not a class (it's located in `playground/tflite/Model.py`), so we need to use `tflite.Model.Model`. Finally, the fact that your buffer size is 824 means that you opened an 824-byte file instead of the Gemini Nano weights. Check what's actually inside `weights.bin`.
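To make those two points concrete, a small sketch; `GetRootAs` as a classmethod is what newer flatbuffers versions generate (older ones only emit `GetRootAsModel`), so adjust to whatever is actually in `playground/tflite/Model.py`:

```python
from pathlib import Path
from tflite import Model  # the generated module playground/tflite/Model.py

weights_path = Path("weights.bin")
size = weights_path.stat().st_size
# The real Gemini Nano weights are gigabytes; an 824-byte file is a stub or the
# wrong download, and FlatBuffers will choke on it much like the struct.error above.
if size < 1_000_000:
    raise SystemExit(f"weights.bin is only {size} bytes - not the model weights")

buf = weights_path.read_bytes()
# tflite.Model is a module; the table class inside it is also named Model.
model = Model.Model.GetRootAs(buf)
```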

I am now running the dequantization at https://github.com/ethanc8/Gemini-Nano/actions. I kept running out of memory on my host machine, but hopefully GitHub Actions' 16GB RAM should allow the dequantization to finish successfully.

Do we have much of any knowledge about what it'd take to restore multimodal support to this model? I assume they're using a ViT-VQGAN for the image decoder (the other ways I know of to use transformers for image generation use a dVAE, VQVAE, or VQGAN, and the only image-generation research cited in the architecture paragraph was OpenAI DALL-E, which uses a dVAE, and Google Parti, which uses a ViT-VQGAN), and I'd hope the input and output tokens come from the same vocabulary, so the image encoder should also be a ViT-VQGAN. They mentioned that they used a Google USM for the speech encoder. It might be useful if we could get the model to generate image tokens.

I'm also thinking of trying to restore image output on Meta Chameleon, which should be much easier because they released the VQGAN, so I think they must've just fine-tuned the model to avoid generating images after giving it that ability. Maybe the LoRA adapter that ships with Gemini Nano does something similar, so running the model without the LoRA adapter might cause it to generate image tokens if you prompt it to. I'm really not sure, though.

Reviving this thread to say that I've actually made some rather significant progress! It turns out the conversion code was bugged and was flattening all tensors to 1D where it shouldn't have. This time, o1-preview made significant optimizations to the int#-to-FP conversion, and it now completes in at most a minute (minus saving the weights individually, which was done to save memory). I will be sharing this code as soon as I get the opportunity. But for now, take the repo.
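To make the 1D-tensor bug concrete: the fix is presumably to reshape each dequantized buffer to the shape stored in the tensor's metadata instead of leaving it flat. A sketch using the standard generated TFLite bindings (method names may differ slightly in this repo):

```python
import numpy as np

def buffer_to_fp32(tensor, flat_ints: np.ndarray, scale: float) -> np.ndarray:
    # Dequantize the flat integer buffer, then restore the declared shape.
    flat = flat_ints.astype(np.float32) * scale
    shape = tensor.ShapeAsNumpy()   # e.g. [2048, 16384] rather than just a length
    return flat.reshape(shape)      # skipping this reshape is what produced 1D tensors
```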

You actually bought ChatGPT Plus just so o1 could fix it? Why o1 of all things?

Also read https://www.huggingface.co/QuietImpostor/Gemini-Nano-Safetensors-V2/discussions/1 for some minor issues.

I’ve had ChatGPT Plus for a while now. And o1-preview is extremely good at debugging in my experience. And I’ll take a look at the discussion.

@QuietImpostor Can you share the conversion code?

laughs in DeepSeek R1 Lite Preview :joy:

Oh yes! I totally forgot. Give me a minute and it'll be in the updated repo.

Edit: Found it! updated convert.py

How's the RAM usage on this? Does it flatline your computer due to RAM in use?

Depends on how much RAM you've got. I'd recommend 32 GB, as I believe it took around 27 GB when I ran it. You might be able to get away with Kaggle's 30 GB if you wanted to reproduce it yourself.

It's possible to optimize it for memory by processing only one tensor at a time instead of as many as possible.

Oh most definitely, I just went with what got it done quickest.
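For anyone reproducing this with less RAM, the one-tensor-at-a-time idea could look roughly like the sketch below. `iter_quantized_tensors` is a hypothetical helper standing in for whatever walks the flatbuffer; it is not a function from the repo:

```python
import gc
from pathlib import Path
from safetensors.numpy import save_file

Path("shards").mkdir(exist_ok=True)

# iter_quantized_tensors is a hypothetical helper (not from the repo) that yields
# (name, quantized_array, scale, shape) one tensor at a time from the flatbuffer.
for i, (name, q, scale, shape) in enumerate(iter_quantized_tensors("weights.bin")):
    fp32 = (q.astype("float32") * scale).reshape(shape)
    # One shard per tensor, so only a single dequantized tensor is in RAM at once.
    save_file({name: fp32}, f"shards/{i:05d}.safetensors")
    del q, fp32
    gc.collect()
```

Peak memory then stays near the size of the largest single tensor instead of the ~27 GB needed to hold the whole converted model at once.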
