---
license: apache-2.0
datasets:
- OpenAssistant/oasst1
- rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored
---
# falcon-40b-megacode2-oasst

- wandb: stage 1: [run37_megacode_falcon40](https://wandb.ai/open-assistant/epfl-mt-sft/runs/run37_megacode_falcon40), stage 2: [run38_megacode_oasst_falcon40](https://wandb.ai/open-assistant/epfl-mt-sft/runs/run38_megacode_oasst_falcon40)
- stage 1 model: [andreaskoepf/falcon-40b-megacode2](https://huggingface.co/andreaskoepf/falcon-40b-megacode2)

## Prompt Template

The [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format is used:

`"<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"`

Multi-line:

```
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
```

See the inference sketch at the end of this card for how to apply this template in code.

### Credits & Special Thanks

- Compute was generously sponsored by the EPFL [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/).
- The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning.
- [rombodawg](https://huggingface.co/rombodawg) curated and published [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored).
- [andreaskoepf](https://github.com/andreaskoepf/) prepared & orchestrated the training.
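
### Inference Example

A minimal inference sketch using the chatml template above (untested; the repo id `andreaskoepf/falcon-40b-megacode2-oasst` and the decoding settings are assumptions for illustration, not confirmed by this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "andreaskoepf/falcon-40b-megacode2-oasst"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # a 40B model; multi-GPU sharding is likely required
    device_map="auto",
)

# Build a chatml prompt and open the assistant turn.
prompt = (
    "<|im_start|>user\nWrite a Python one-liner that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Stop generation at the <|im_end|> turn delimiter.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```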