Commit 0a1912a by liuylhf (parent 2a7bf9c): Create README.md
---
license: apache-2.0
tags:
- function
- function-calling
- tool-using
---

## Empower Functions Model

[https://github.com/empower-ai/empower-functions](https://github.com/empower-ai/empower-functions)

Empower Functions is a family of large language models (LLMs) that offer GPT-4-level capabilities for real-world "tool using" use cases, with full compatibility support so they can be served as a drop-in replacement.

This is the `llama3-empower-functions-large` model, which requires 4xA100 GPUs to run. For smaller models that can run on more affordable hardware, please visit [the empower-functions collection](https://huggingface.co/collections/empower-dev/empower-functions-663e9a22df93b46804df75a8).

## Key Features
* Automatic tool using: able to decide when to use tools and when to converse, optimized for long conversations
* Parallel calling: supports calling one function multiple times, multiple functions, or a combination of both
* Sequential calling: supports calling multiple functions sequentially to fulfill a user request
* Streaming
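Because the models are meant to be served as drop-in replacements, a parallel call arrives as a list of `tool_calls` entries in a single assistant message, in the OpenAI chat-completions format. The sketch below shows how such a message would be unpacked; the response dict and the `get_weather` function are made-up illustrations, not real model output:

```python
import json

# Hypothetical assistant message in OpenAI chat-completions format,
# illustrating a parallel call: the same function invoked twice.
response_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": json.dumps({"city": "Boston"}),
            },
        },
        {
            "id": "call_2",
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": json.dumps({"city": "San Francisco"}),
            },
        },
    ],
}

# Each tool call is dispatched independently; arguments arrive as a JSON string.
calls = [
    (tc["function"]["name"], json.loads(tc["function"]["arguments"]))
    for tc in response_message.get("tool_calls", [])
]
print(calls)  # [('get_weather', {'city': 'Boston'}), ('get_weather', {'city': 'San Francisco'})]
```

The same loop handles single, parallel, and mixed calls, since a single call is just a one-element `tool_calls` list.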

## Family of Models

| Model | Specs | Links | Notes |
| ------------------------------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------ |
| llama3-empower-functions-small | 8k context, based on [Llama3 8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) | [model](https://huggingface.co/empower-dev/llama3-empower-functions-small), [GGUF](https://huggingface.co/empower-dev/llama3-empower-functions-small-gguf) | Most cost-effective, locally runnable |
| empower-functions-medium | 32k context, based on [Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | [model](https://huggingface.co/empower-dev/empower-functions-medium) | Balance of accuracy and cost |
| llama3-empower-functions-large | 65k context, based on [Llama3 70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) | [model](https://huggingface.co/empower-dev/llama3-empower-functions-large) | Best accuracy |

### Hardware Requirements

We have tested the family of models in the following setups:

- empower-functions-small: fp16 on 1xA100 40G; GGUF and 4-bit GGUF on a MacBook M2 Pro with 32G RAM (the 4-bit GGUF version requires at least 7.56G of RAM)
- empower-functions-medium: fp16 on 2xA100 80G
- empower-functions-large: fp16 on 4xA100 80G

## Usage

There are three ways to use the empower-functions models: directly [prompt the raw model](https://github.com/empower-ai/empower-functions?tab=readme-ov-file#prompt-raw-model), run them [locally](https://github.com/empower-ai/empower-functions?tab=readme-ov-file#running-locally) through llama-cpp-python, or use our [hosted API](https://github.com/empower-ai/empower-functions?tab=readme-ov-file#using-empower-api).
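Whichever serving path is chosen, requests follow the OpenAI chat-completions function-calling shape. A minimal sketch of building such a request body (the `get_weather` tool, its parameters, and the question are hypothetical; see the linked repo for authoritative usage and endpoints):

```python
import json

# OpenAI-style tool schema; empower-functions models consume the same
# "tools" format as the OpenAI chat-completions API.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical example function
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Boston?"}]

# With an OpenAI-compatible server (e.g. llama-cpp-python's server or a
# hosted API), the request body would look like:
request_body = {
    "model": "llama3-empower-functions-large",
    "messages": messages,
    "tools": tools,
    "stream": False,
}
print(json.dumps(request_body, indent=2))
```

Setting `"stream": True` instead would exercise the streaming support listed under Key Features.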

## Evaluation

We benchmarked our model against a few other options on [three datasets](https://huggingface.co/empower-dev):

- Single Turn Dataset: The model is evaluated for its ability to execute a precise function call, assessing both the accuracy of the selected function and the arguments.

- Parallel Call Dataset: In this scenario, the model demonstrates its capacity to handle multiple (2-6) function calls within a single message, a feature not supported by Fireworks and Anyscale.

- Multi-Turn Dataset: Designed to simulate a complex real-world environment, such as a healthcare appointment booking system, the model navigates between natural conversation, initiating function calls, asking clarifying questions, and, when necessary, transferring to customer service. The assessment focuses on the accuracy of intent classification and the correctness of function calls.

For more detailed evaluation results, please refer to our [GitHub repo](https://github.com/empower-ai/empower-functions).

## Demo App
Check out our healthcare appointment booking [demo](https://app.empower.dev/chat-demo).

Want to customize the model? Please contact us at [founders@empower.dev](mailto:founders@empower.dev).