# Empower Functions Model v1.1
https://github.com/empower-ai/empower-functions
Empower Functions is a family of LLMs (large language models) that offer GPT-4-level capabilities for real-world "tool using" use cases, with full compatibility support to serve as a drop-in replacement for the OpenAI function-calling API.
## Key Features
- Automatic tool use: decides when to call tools and when to converse; optimized for long conversations
- Parallel calling: supports calling one function multiple times, multiple functions, or a combination of both in a single turn (see the sketch after this list)
- Sequential calling: supports calling multiple functions in sequence to fulfill a user request
- Streaming: supports streaming responses
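To illustrate the parallel-call shape, the sketch below shows an OpenAI-style tool definition and the kind of assistant message the model can return when it calls the same function twice in one turn. The `get_current_weather` function and all values are hypothetical examples, not part of this repository:

```python
# Hypothetical tool definition in the OpenAI function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",  # illustrative name, not a real API
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# A parallel call: the model invokes the same function twice in one
# assistant turn (it can also mix different functions the same way).
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {"id": "call_0", "type": "function",
         "function": {"name": "get_current_weather", "arguments": '{"city": "London"}'}},
        {"id": "call_1", "type": "function",
         "function": {"name": "get_current_weather", "arguments": '{"city": "Paris"}'}},
    ],
}
```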
## Family of Models
| Model | Specs | Links | Notes |
|---|---|---|---|
| llama3-empower-functions-small | 128k context, based on Llama 3.1 8B | model, gguf | Most cost-effective, locally runnable |
| llama3-empower-functions-large | 128k context, based on Llama 3.1 70B | model | Best accuracy |
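For local use, the GGUF weights can be fetched with `huggingface_hub`. This is a minimal sketch: the `repo_id` and `filename` below are assumptions, so take the real values from the gguf link in the table above.

```python
from huggingface_hub import hf_hub_download

# Both repo_id and filename are assumed here; check the "gguf" link
# in the table above for the exact names.
gguf_path = hf_hub_download(
    repo_id="empower-dev/llama3-empower-functions-small-gguf",
    filename="ggml-model-q4_0.gguf",
)
print(gguf_path)
```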
## Hardware Requirements
We have tested the family of models in the following setups:
- empower-functions-small: fp16 on 1x A100 40GB; GGUF and 4-bit GGUF on a MacBook M2 Pro with 32GB RAM (the 4-bit GGUF version requires a minimum of 7.56GB of RAM)
- empower-functions-medium: fp16 on 2x A100 80GB
- empower-functions-large: fp16 on 4x A100 80GB
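The 4-bit GGUF build fits within the ~7.56GB RAM budget above, so it can run on laptop-class hardware through llama-cpp-python. A minimal sketch, assuming the GGUF file from the download snippet; the context size and other parameters are illustrative, and full tool calling additionally requires the model's prompt format (see the repository linked above):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="ggml-model-q4_0.gguf",  # 4-bit file fetched above (assumed name)
    n_ctx=8192,       # conservative window; the model supports up to 128k
    n_gpu_layers=-1,  # offload all layers to Metal/CUDA when available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's the weather in London?"}],
)
print(out["choices"][0]["message"])
```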
## Usage
There are three ways to use the empower-functions models: prompt the raw model directly, run it locally through llama-cpp-python, or use our hosted API.
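As a sketch of the hosted-API route, the snippet below uses the OpenAI Python client, which matches the drop-in-replacement design. The `base_url` and model id are assumptions; take the real values from the repository linked above.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://app.empower.dev/api/v1",  # assumed endpoint
    api_key="YOUR_EMPOWER_API_KEY",
)

# Streaming chat completion; an OpenAI-style `tools` list (as in the
# parallel-call sketch above) can be passed the same way.
stream = client.chat.completions.create(
    model="empower-functions-small",  # assumed model id
    messages=[{"role": "user", "content": "What's the weather in London and Paris?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)
```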
## Evaluation
v1.1 is the newer version, trained from Meta Llama 3.1 on a newly updated dataset. It has achieved state-of-the-art performance on the Berkeley Function Calling Leaderboard.
## Demo App
Check out our healthcare appointment booking demo.
Want to customize the model? Please contact us at founders@empower.dev.