Llama-3.2-1B-Instruct To Android
A combination of o1-preview, https://medium.com/google-developer-experts/ml-story-mobilellama3-run-llama3-locally-on-mobile-36182fed3889 and https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/
yielded this instruction:
https://chatgpt.com/share/66f45fe7-a018-8004-bc4b-ab8a95940d42
and this notebook to convert weights and quantize for android deployment.
https://colab.research.google.com/drive/1skCIRG-NBBGCU3vK3DRf-M2idgEAxCW7?usp=sharing
Let me know what you think.
Cheers
H
You can also try downloading Q4_0_4_4 from here:
Cool! How does this plug into the Medium post https://medium.com/google-developer-experts/ml-story-mobilellama3-run-llama3-locally-on-mobile-36182fed3889
if we want to get it onto Android?
You can find the quantized model here https://huggingface.co/Heigke/Llama-3.2-1B-Instruct-q4f16_1-android
I managed to get all the way to launching the app on the Android device:
The error logs indicate a critical issue related to loading native libraries for the com.example.mobilellama3 application. The primary error, java.lang.UnsatisfiedLinkError, suggests that the native library libtvm4j_runtime_packed.so could not be loaded due to a problem with its ELF (Executable and Linkable Format) magic number.
Here are the key takeaways from the log:
Missing or Corrupted Native Library:
The error "has bad ELF magic: 76657273" points to the native library (libtvm4j_runtime_packed.so) being either corrupted or incorrectly packaged.
It is unable to find the libtvm4j_runtime_packed.so file, possibly due to packaging issues, a wrong architecture build, or the library being missing altogether.
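One useful clue: the reported magic bytes 76 65 72 73 are ASCII for "vers", i.e. the file starts with text rather than the ELF header 0x7f 'E' 'L' 'F'. That is the classic signature of an un-pulled Git LFS pointer file (which begins "version https://git-lfs..."). A minimal sketch to check a .so extracted from the APK (the helper name is mine, not from any project here):

```python
def elf_magic_ok(path):
    """Return True if the file starts with the ELF magic bytes 0x7f 'E' 'L' 'F'."""
    with open(path, "rb") as f:
        return f.read(4) == b"\x7fELF"

# A Git LFS pointer file is plain text beginning "version https://git-lfs...",
# so its first four bytes are b"vers" (hex 76 65 72 73) -- exactly the
# "bad ELF magic: 76657273" value reported in the log.
```

If this returns False and the file starts with "version", run `git lfs pull` (or re-download the real binary) before repackaging the APK.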
Potential Solutions:
Check the architecture compatibility: Ensure the native library is correctly compiled for the target platform (e.g., arm64-v8a in this case).
Verify the library path: Check if the native library is properly packaged and available in the expected directory structure (/lib/arm64-v8a/).
Rebuild and Repackage: If the library appears to be corrupted, attempt to recompile the native library and repackage the APK.
Dependency Issues: If the project relies on external libraries, ensure all dependencies, especially those related to TVM (Tensor Virtual Machine), are properly included in the build.
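Since an APK is just a zip archive, the "verify the library path" step above can be scripted. A minimal sketch (function name and the assumption that the lib lives under lib/arm64-v8a/ are mine):

```python
import zipfile

def native_libs(apk_path, abi="arm64-v8a"):
    """Return the .so entries packaged under lib/<abi>/ inside the APK."""
    with zipfile.ZipFile(apk_path) as apk:
        prefix = f"lib/{abi}/"
        return [name for name in apk.namelist()
                if name.startswith(prefix) and name.endswith(".so")]
```

If libtvm4j_runtime_packed.so is absent from the list for your target ABI, the problem is packaging rather than corruption.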
Additional Warnings:
There are several warnings related to performance (Method failed lock verification and will run slower). These might indicate ProGuard optimization issues or unoptimized DEX code.
You may need to review ProGuard or R8 configurations to ensure correct optimizations are applied.
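If the warnings turn out to be shrinker-related, a common fix is to keep the JNI-facing classes intact so R8/ProGuard cannot strip or rename methods the native code calls back into. A hedged sketch for proguard-rules.pro, assuming TVM4J's Java classes live under org.apache.tvm (verify the actual package in your build):

```
# Keep TVM4J classes referenced from native code (package name assumed).
-keep class org.apache.tvm.** { *; }
```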
Fixing the native library loading issue should resolve the crash and allow the application to initialize properly.
You all might like this: https://github.com/pytorch/executorch/tree/main/examples/demo-apps/android/LlamaDemo