Mulberry

Mulberry-llava-8b is a step-by-step reasoning model trained on Mulberry-260K, an SFT dataset generated through collective knowledge search with CoMCTS (Collective Monte Carlo Tree Search).

For reasoning inference, please refer to our GitHub repository; a minimal Transformers-based sketch is also shown at the end of this card.

Paper: https://arxiv.org/abs/2412.18319

Code: https://github.com/HJYao00/Mulberry

More Details

Base Model: https://huggingface.co/llava-hf/llama3-llava-next-8b-hf

Training Framework: LLaMA-Factory

Hardware: 8x NVIDIA H100

Model Size: 8.36B parameters (Safetensors)

Tensor Type: BF16
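The official reasoning inference code is in the GitHub repository linked above. As a quick way to try the model, here is a minimal sketch using Hugging Face Transformers, assuming Mulberry-llava-8b keeps the LLaVA-NeXT processor/model interface of its base model (llama3-llava-next-8b-hf); the repository id, image URL, and prompt are placeholders, not part of the official recipe.

```python
# Minimal inference sketch (not the official Mulberry pipeline), assuming the
# model follows the LLaVA-NeXT interface of llama3-llava-next-8b-hf.
import requests
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

MODEL_ID = "Mulberry-llava-8b"  # placeholder: replace with the actual Hub repo id or local path

processor = LlavaNextProcessor.from_pretrained(MODEL_ID)
model = LlavaNextForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Example image plus a prompt that invites step-by-step reasoning.
url = "https://www.ilankelman.org/stopsigns/australia.jpg"  # placeholder image
image = Image.open(requests.get(url, stream=True).raw)

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image? Think step by step."},
        ],
    }
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(output[0], skip_special_tokens=True))
```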