---
license: llama3.1
datasets:
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/dolphin-coder
- HuggingFaceTB/smoltalk
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
---

# Dolphin 3.0 Llama 3.1 8B 🐬

Curated and trained by [Eric Hartford](https://huggingface.co/ehartford), [Ben Gitter](https://huggingface.co/bigstorm), and [Cognitive Computations](https://huggingface.co/cognitivecomputations)

[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/cNCs1TBD3FelWCJGkZ3cd.png" width="600" />

Our appreciation for the generous sponsors of Dolphin 3.0:
- [Crusoe Cloud](https://crusoe.ai/) - provided 16x L40s for training and evals
- [Akash](https://akash.network/) - provided on-demand 8x H100 for training
- [Lazarus](https://www.lazarusai.com/) - provided 16x H100 for training
- [Cerebras](https://cerebras.ai/) - provided excellent and fast inference services
- [Andreessen Horowitz](https://a16z.com/) - provided a grant that made Dolphin 1.0 possible and enabled me to bootstrap my homelab
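
To try the model, a minimal chat-inference sketch with 🤗 Transformers is shown below. The repo ID `cognitivecomputations/Dolphin3.0-Llama3.1-8B` is assumed from the model name above, and the system prompt and generation settings are illustrative, not prescribed by this card.

```python
# Minimal sketch: load the model and run one chat turn.
# Assumptions: repo ID inferred from the model name above; bf16 weights
# of an 8B model fit on a single ~24 GB GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/Dolphin3.0-Llama3.1-8B"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Format the conversation with the tokenizer's built-in chat template.
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a Python one-liner to reverse a string."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```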