|
<!DOCTYPE html> |
|
<html> |
|
<head> |
|
<title>dolphin-2.2.1-mistral-7B-GGUF (Q6_K)</title> |
|
</head> |
|
<body> |
|
<h1>dolphin-2.2.1-mistral-7B-GGUF (Q6_K)</h1> |
|
<p>

Powered by the

<a href="https://github.com/abetlen/llama-cpp-python">llama-cpp-python</a>

package, this Hugging Face Docker Space serves the GGUF model through an

OpenAI-compatible API. The Space also includes full API documentation to

make integration straightforward.

</p>
|
<ul> |
|
<li>

API endpoint:

<a href="https://jtatman-dolphin-2-2-1-mistral-7b-gguf.hf.space/v1"

>https://jtatman-dolphin-2-2-1-mistral-7b-gguf.hf.space/v1</a

>

</li>
|
<li>

API documentation:

<a href="https://jtatman-dolphin-2-2-1-mistral-7b-gguf.hf.space/docs"

>https://jtatman-dolphin-2-2-1-mistral-7b-gguf.hf.space/docs</a

>

</li>
|
</ul> |
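<p>

Because the API is OpenAI-compatible, any OpenAI client library can talk to
it. Below is a minimal sketch using only the Python standard library; the
model name in the payload is an assumption, so query
<code>GET /v1/models</code> for the actual name served by the Space.

</p>

```python
import json
from urllib import request

# Base URL of the Space's OpenAI-compatible API (from the list above).
API_BASE = "https://jtatman-dolphin-2-2-1-mistral-7b-gguf.hf.space/v1"


def build_chat_request(prompt: str) -> request.Request:
    """Build an OpenAI-style chat completion request for the Space."""
    payload = {
        # Placeholder model name (assumption): confirm it via GET /v1/models.
        "model": "dolphin-2.2.1-mistral-7b",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_chat_request("Hello!")
# Send with: body = request.urlopen(req).read()
# then parse: json.loads(body)["choices"][0]["message"]["content"]
```

<p>

The same endpoint also works with the official <code>openai</code> Python
package by pointing its <code>base_url</code> at the address above.

</p>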
|
<p>

If you find this resource valuable, please consider starring the Space.

Stars directly support the application for a community GPU grant, which

would improve both the capabilities and the accessibility of this Space.

</p>
|
</body> |
|
</html> |
|
|