---
license: apache-2.0
datasets:
- OpenAssistant/oasst1
- rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored
---

# falcon-40b-megacode2-oasst
- wandb: stage 1 `run37_megacode_falcon40`, stage 2 `run38_megacode_oasst_falcon40`
- sampling report: `2023-08-17_OpenAssistant_falcon-40b-megacode2-oasst_sampling_noprefix2.json`
- stage 1 model: `andreaskoepf/falcon-40b-megacode2`
## Prompt Template
The ChatML format is used: `"<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"`
Multi-line:

```
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
```
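For reference, a minimal sketch of querying the model with this template via `transformers`. The hub id `OpenAssistant/falcon-40b-megacode2-oasst` and the generation settings are assumptions; adapt dtype and device mapping to your hardware.

```python
# A minimal sketch, assuming the hub id "OpenAssistant/falcon-40b-megacode2-oasst"
# and generic sampling settings; adjust loading options to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenAssistant/falcon-40b-megacode2-oasst"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def build_prompt(user_prompt: str) -> str:
    # ChatML turn structure from the template above: a closed user turn
    # followed by an open assistant turn for the model to complete.
    return f"<|im_start|>user\n{user_prompt}<|im_end|>\n<|im_start|>assistant\n"

inputs = tokenizer(
    build_prompt("Write a Python one-liner that reverses a string."),
    return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
# Decode only the newly generated tokens (the assistant answer).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```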
## Credits & Special Thanks
- Compute was generously sponsored by the EPFL Machine Learning and Optimization Laboratory.
- The open-source epfLLM/Megatron-LLM trainer was used for fine-tuning.
- rombodawg curated and published LosslessMegaCodeTrainingV2_1m_Evol_Uncensored.
- andreaskoepf prepared & orchestrated the training.