mxbai-embed-large-v1 - llamafile
This repository contains executable weights (which we call llamafiles) that run on Linux, macOS, Windows, FreeBSD, OpenBSD, and NetBSD for AMD64 and ARM64.
- Model creator: mixedbread-ai
- Original model: mixedbread-ai/mxbai-embed-large-v1
- Built with llamafile 0.8.4
Quickstart
Running the following on a desktop OS will launch a server on http://localhost:8080
to which you can send HTTP requests to get embeddings:
chmod +x mxbai-embed-large-v1-f16.llamafile
./mxbai-embed-large-v1-f16.llamafile --server --nobrowser --embedding
Then, you can use your favorite HTTP client to call the server's /embedding
endpoint:
curl \
-X POST \
-H "Content-Type: application/json" \
-d '{"content": "Hello, world!"}' \
http://localhost:8080/embedding
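If you want to call the endpoint from code rather than curl, here is a minimal sketch in Python that sends the same POST request and compares two embeddings with cosine similarity. It assumes the server is running as shown above and that the response is a JSON object containing an "embedding" array of floats; if your llamafile version returns a different JSON layout, adjust the field access accordingly.
import json
import math
import urllib.request

def embed(text: str, url: str = "http://localhost:8080/embedding") -> list[float]:
    """POST a string to the local llamafile server and return its embedding vector."""
    req = urllib.request.Request(
        url,
        data=json.dumps({"content": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed response shape: {"embedding": [0.12, -0.03, ...]}.
    return body["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

if __name__ == "__main__":
    v1 = embed("Hello, world!")
    v2 = embed("Greetings, planet!")
    print(f"dimensions: {len(v1)}, similarity: {cosine(v1, v2):.4f}")
Running the script prints the embedding dimensionality and a similarity score between the two example strings, which is a quick sanity check that the server is returning usable vectors.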
For further information, please see the llamafile README and the llamafile server docs.
Having trouble? See the "Gotchas" section of the README or contact us on Discord.
About llamafile
llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023. It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp binaries that run on the stock installs of six OSes for both ARM64 and AMD64.