M1 8 GB RAM MacBook Air runs at 0.01 tokens/second, something is wrong, right?
According to the demo video at https://huggingface.co/blog/swift-coreml-llm, the speed of this model was 600 tokens/second (pretty cool).
So, I cloned https://github.com/huggingface/swift-chat, successfully downloaded the model coreml-projects/Llama-2-7b-chat-coreml, and it runs without error.
I don't know which chip was used in the demo, whether M1, M2, or something even more powerful, but 0.01 tokens/s on my M1 8 GB RAM MacBook Air seems very wrong.
What am I doing wrong?
Thank you so much!
The demo video you refer to indicates a generation speed of 6.42 tokens per second (as opposed to 600), but 0.01 tokens per second definitely seems wrong.
It might simply be an issue with timing. If that rate were accurate, it would take 100 seconds per token, which would mean the text shown would take ~30 minutes, and I doubt that's the case. Can you take a recording, or just confirm whether it is actually that slow?
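To sanity-check the arithmetic above (the reported rate comes from the thread; the token count for the on-screen text is my own assumption, just to show the order of magnitude):

```python
# Sanity check of the reported generation speed.
tokens_per_second = 0.01                     # rate reported in this thread
seconds_per_token = 1 / tokens_per_second
print(seconds_per_token)                     # 100.0 seconds per token

# Hypothetical: if the generated text were ~18 tokens long (assumed, not measured),
# producing it at this rate would take half an hour.
tokens_shown = 18
total_minutes = tokens_shown * seconds_per_token / 60
print(total_minutes)                         # 30.0 minutes
```

If generation actually finishes in seconds rather than tens of minutes, the bottleneck is more likely the speed measurement than the model itself.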
Not sure if a "me too" is useful, but it's the same on my 2022 M2 16 GB Air: I get ~0.02 tokens/s. I've tried release builds as well as debug. Happy to provide more details or test stuff.
My computer is an M1 Max with 64 GB. I believe the main factor affecting performance here is the amount of RAM installed. The model is so large that the system will swap if there isn't enough memory to hold everything in RAM, and that kills performance.
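A back-of-envelope estimate illustrates why 8 GB is tight. This assumes the weights are stored as float16 (2 bytes per parameter); the actual CoreML package layout may differ, so treat it as an order-of-magnitude check rather than an exact figure:

```python
# Rough memory-footprint estimate for the Llama-2-7b weights alone
# (excludes activations, KV cache, and the rest of the system).
params = 7e9            # ~7 billion parameters
bytes_per_param = 2     # assumed float16 storage
weights_gb = params * bytes_per_param / 1e9
print(round(weights_gb, 1))  # 14.0 GB -- well above 8 GB of RAM, so swapping is expected
```

On a 64 GB machine everything fits in memory; on an 8 GB or 16 GB machine the OS has to page model weights in and out continuously, which would be consistent with the ~0.01–0.02 tokens/s numbers reported above.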
I'll run some tests and answer in the GitHub issue!