Future NeoX finetuning discussion #4
opened by mrsteyk
TL;DR
- Using Neo at all was the result of huge sleep deprivation; I should've read like 2 more lines of the Pythia GitHub repo. GPT-J doesn't seem viable since it's TPU-centric and there's no smaller-parameter version of it in the wild that I can see.
- Use FP16/BF16 (local/Colab) next time (rough sketch of the flags right after this list).
- Look into the efficiency of "batching" the dataset (the finetune didn't use EOS at all).
- Look into getting 8BitAdam working, to lower training VRAM for potential local finetunes or to fit a bigger parameter count.
- Read more papers and talks related to current NLP tech.
- I have 0 IQ in modern AI stuff, if not less, so I am more than open to any ideas or learning resources.
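On the FP16/BF16 point: it's just flags on TrainingArguments. A minimal sketch, assuming a stock transformers setup (the output dir name here is made up):

```python
from transformers import TrainingArguments

# fp16 works on the T4s Colab usually hands out; bf16 needs an Ampere-class GPU.
args = TrainingArguments(
    "openchatgpt-neox-test",  # hypothetical output dir
    fp16=True,
    # bf16=True,  # use this instead of fp16 on hardware that supports it
    per_device_train_batch_size=2,
)
```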
More training process details
Here's a snippet of how the trainer was initialised (the latest version I have in Colab; this model is 5 epochs). It looks dumb to me now, but trust me, it looked good at the time.
import torch  # needed for torch.stack in the collator below
from transformers import Trainer, TrainingArguments, default_data_collator
from transformers import DataCollatorWithPadding
import evaluate


def preprocess_logits_for_metrics(logits, labels):
    if isinstance(logits, tuple):
        # Depending on the model and config, logits may contain extra tensors,
        # like past_key_values, but logits always come first
        logits = logits[0]
    return logits.argmax(dim=-1)


metric = evaluate.load("accuracy")


def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # preds have the same shape as the labels, after the argmax(-1) has been
    # calculated by preprocess_logits_for_metrics, but we need to shift the labels
    labels = labels[:, 1:].reshape(-1)
    preds = preds[:, :-1].reshape(-1)
    return metric.compute(predictions=preds, references=labels)


model.config.use_cache = False

data_collator_pad = DataCollatorWithPadding(tokenizer)


def data_collator(data_):
    data = data_collator_pad(data_)
    # print(data)
    # Labels are just a copy of the (padded) input_ids, i.e. plain causal LM.
    return {'input_ids': torch.stack([i for i in data['input_ids']]),
            'attention_mask': torch.stack([i for i in data['attention_mask']]),
            'labels': torch.stack([i for i in data['input_ids']])}


trainer = Trainer(
    model=model,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    tokenizer=tokenizer,
    # data_collator=default_data_collator,
    compute_metrics=compute_metrics,
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
    # data_collator=lambda data: {'input_ids': torch.stack([torch.tensor(f['input_ids']) for f in data]),
    #                             'attention_mask': torch.stack([torch.tensor(f['attention_mask']) for f in data]),
    #                             'labels': torch.stack([torch.tensor(f['input_ids']) for f in data])},
    data_collator=data_collator,
    args=TrainingArguments(
        "openchatgpt-neo-r2",
        do_train=True,
        do_eval=True,
        push_to_hub=False,
        # Pulled from examples
        evaluation_strategy="epoch",
        # learning_rate=2e-5,
        # weight_decay=0.01,
        num_train_epochs=30,
        per_device_train_batch_size=2,
        per_device_eval_batch_size=2,
        warmup_steps=100,
        weight_decay=0.01,
        logging_dir='./logs',
        # gradient_accumulation_steps=2,
        # gradient_checkpointing=True,
        save_steps=5000,
    ),
)
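One thing worth flagging about the collator above: copying the padded input_ids straight into labels means the loss is also computed on the padding tokens. Not saying it's what should've been used here, but transformers ships a stock causal-LM collator that masks padding out of the loss. A minimal sketch, assuming tokenizer.pad_token is set (if the pad token is doing double duty as a separator, masking it is a tradeoff):

```python
from transformers import DataCollatorForLanguageModeling

# mlm=False => plain causal LM: labels become a copy of input_ids with
# pad tokens replaced by -100, which the loss ignores.
lm_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

# then pass it to the Trainer instead of the hand-rolled one:
# Trainer(..., data_collator=lm_collator, ...)
```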
I basically just banged my head against it until it started working (sleep deprivation, amirite). FP16 also wasn't used because I thought the source weights were FP32. The dataset wasn't "batched" in any way: a single entry meant a single conversation, hence the need for padding (the padding token used was the separator). If this is not something I should be doing, please let me know.
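On the "batching" point: what the HF causal-LM example (run_clm.py) does instead is append EOS to every document, concatenate everything into one stream, and chop it into fixed-size blocks, so no padding is needed at all. A rough sketch of that idea; block_size is an arbitrary pick, and it assumes EOS was already appended to each tokenized conversation and that the dataset only has token columns left (input_ids / attention_mask):

```python
from itertools import chain

block_size = 1024  # arbitrary; pick something that fits the model context and VRAM


def group_texts(examples):
    # Concatenate every tokenized conversation (each already ending in EOS)
    # into one long stream, then split it into fixed-size blocks.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_len = (len(concatenated["input_ids"]) // block_size) * block_size
    result = {
        k: [v[i : i + block_size] for i in range(0, total_len, block_size)]
        for k, v in concatenated.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result


lm_datasets = tokenized_datasets.map(group_texts, batched=True)
```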
I should also try to get 8BitAdam from bitsandbytes working; maybe it will let me finetune bigger Pythia models locally with just 4 GB of VRAM.
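From what I can tell, you can hand Trainer a bitsandbytes optimizer directly through its optimizers argument (newer transformers versions also seem to expose this as optim="adamw_bnb_8bit" in TrainingArguments). A minimal sketch with illustrative hyperparameters, not a tested recipe:

```python
import bitsandbytes as bnb

# 8-bit Adam keeps the optimizer state quantized, cutting a large chunk of the
# optimizer's VRAM cost compared to regular 32-bit Adam state.
adam_8bit = bnb.optim.Adam8bit(model.parameters(), lr=2e-5, weight_decay=0.01)

trainer = Trainer(
    model=model,
    args=args,  # same TrainingArguments as above
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    optimizers=(adam_8bit, None),  # None lets Trainer build its usual LR schedule
)
```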
mrsteyk changed discussion status to closed