---
license: apache-2.0
datasets:
- OpenAssistant/oasst1
- rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored
---

# falcon-40b-megacode2-oasst

- wandb: stage 1: [run37_megacode_falcon40](https://wandb.ai/open-assistant/epfl-mt-sft/runs/run37_megacode_falcon40), stage 2: [run38_megacode_oasst_falcon40](https://wandb.ai/open-assistant/epfl-mt-sft/runs/run38_megacode_oasst_falcon40)
- stage 1 model: [andreaskoepf/falcon-40b-megacode2](https://huggingface.co/andreaskoepf/falcon-40b-megacode2)

## Prompt Template

The [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format is used:

`"<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"`

Multi-line:

```
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
```
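
A minimal usage sketch (not part of the original card): it loads the model with Hugging Face Transformers and generates from a ChatML-formatted prompt as shown above. The repo id, dtype/device settings, and generation parameters are assumptions for illustration; adjust them for your hardware.

```python
# Hedged sketch: assumes the model is published under this repo id and that
# the tokenizer includes <|im_start|>/<|im_end|> as special tokens.
# Older transformers versions may additionally need trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "andreaskoepf/falcon-40b-megacode2-oasst"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # falcon-40b needs substantial GPU memory
    device_map="auto",    # requires `accelerate` to shard across devices
)

# Build the prompt exactly as in the template above.
prompt = (
    "<|im_start|>user\n"
    "Write a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    # Stop at the ChatML end marker (assumes it is a single token
    # in this tokenizer's vocabulary).
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```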

### Credits & Special Thanks

- Compute was generously sponsored by the EPFL [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/).
- The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning.
- [rombodawg](https://huggingface.co/rombodawg) curated and published [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored).
- [andreaskoepf](https://github.com/andreaskoepf/) prepared & orchestrated the training.