
How to fine-tune

#105
by nora1008 - opened

Hi everyone, I'm new to NLP. I want to know how to fine-tune BLOOM (traditional Chinese) with my own data (.csv), for example for QA.

My data was collected by myself (prompt and completion pairs, in the GPT-3 format).

BigScience Workshop org

Hey 🤗

I see two options for fine-tuning:

  1. Transformers checkpoint (this repo): You'd probably want to make use of the DeepSpeed integration for that, see https://huggingface.co/docs/transformers/main_classes/deepspeed (a minimal sketch follows this list)
  2. Megatron-DeepSpeed checkpoint (available here: https://huggingface.co/bigscience/bloom-optimizer-states): You can fine-tune with the same repository used for pre-training, available here: https://github.com/bigscience-workshop/Megatron-DeepSpeed
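
A rough sketch of option 1, assuming a prompt/completion CSV like the one described above and a local DeepSpeed config file `ds_config.json` (both file names are placeholders); this is an illustration of the Trainer + DeepSpeed integration, not the exact setup used by BigScience:

```python
# Minimal sketch: fine-tune a BLOOM checkpoint with the Transformers Trainer + DeepSpeed.
# Assumes a CSV with "prompt" and "completion" columns and a DeepSpeed config in ds_config.json.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "bigscience/bloom-560m"  # swap in the checkpoint you actually want to tune
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("csv", data_files="my_data.csv")["train"]

def tokenize(example):
    # Concatenate prompt and completion into a single causal-LM training string.
    text = example["prompt"] + example["completion"]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="bloom-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    fp16=True,
    deepspeed="ds_config.json",  # enables the DeepSpeed integration linked above
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Launched with the `deepspeed` launcher, ZeRO can then shard optimizer states across GPUs so that larger checkpoints fit in memory.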
BigScience Workshop org

Update: We have fine-tuned BLOOM to produce BLOOMZ, and our guide is here.

@Muennighoff how much GPU RAM is needed to fine-tune BLOOM 560m?
Thank you in advance, my friend.

BigScience Workshop org

It depends. If you're willing to fine-tune only a few parameters, you can maybe even do it in a Colab notebook with 15 GB or so; here are some sources that should help 🤗
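
For the "only a few parameters" route, here is a minimal sketch using the PEFT library's LoRA adapters on bloom-560m; the hyperparameters are illustrative defaults, not values from any official guide:

```python
# Minimal sketch: parameter-efficient fine-tuning of bloom-560m with LoRA via the peft library.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

lora_config = LoraConfig(
    r=8,                                  # low-rank update dimension
    lora_alpha=16,
    target_modules=["query_key_value"],   # BLOOM's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
# The wrapped model can then be passed to the usual Trainer loop shown earlier.
```

Because only the small adapter matrices receive gradients, the optimizer states stay tiny, which is what makes a free Colab GPU feasible for the 560m model.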

Do you think it's possible to do the same modification you did on BLOOM, but on Alpaca 7B, for semantic similarity?

I'm currently working with a low-resource language that is part of the ROOTS dataset, on which BLOOM was trained. However, when I examined the vocabulary and tried to tokenize text in this language, I found that the tokenizer has no coverage for it.

Is it feasible to inject this language's vocabulary into Bloom's tokenizer?
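
Adding tokens is mechanically possible with the standard Transformers API, though the new embeddings start untrained and only become useful after further fine-tuning on text in that language. A sketch with placeholder tokens (the token list is hypothetical; in practice it would come from training a tokenizer on a corpus in the target language and keeping the pieces BLOOM's vocabulary lacks):

```python
# Minimal sketch: add new-language tokens to BLOOM's tokenizer and resize the embedding matrix.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

new_tokens = ["<example_token_1>", "<example_token_2>"]  # hypothetical subwords for the language
num_added = tokenizer.add_tokens(new_tokens)
print(f"Added {num_added} tokens")

# Grow the input/output embeddings so the new ids have rows; these rows are freshly
# initialized and need further training on the language before they carry any meaning.
model.resize_token_embeddings(len(tokenizer))
```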
