
phi-2-alpaca-cleaned

This model is an instruction-tuned version of microsoft/phi-2, fine-tuned on the yahma/alpaca-cleaned dataset.

Training used full-parameter fine-tuning of phi-2; LoRA was not used.

Text Format

Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Based on the information provided, rewrite the sentence by changing its tense from past to future.

### Input:
She played the piano beautifully for hours and then stopped as it was midnight.	

### Response:
She will play the piano beautifully for hours and then stop as it will be midnight.	
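For reference, a minimal inference sketch with transformers that assembles a prompt in the format above. The model id is taken from this repository; the exact whitespace between prompt sections and the generation parameters are assumptions, not values confirmed by the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ohashi56225/phi-2-alpaca-cleaned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Alpaca-style prompt as shown above (section spacing is assumed).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n"
    "Based on the information provided, rewrite the sentence by "
    "changing its tense from past to future.\n\n"
    "### Input:\n"
    "She played the piano beautifully for hours and then stopped "
    "as it was midnight.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```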

Training

  • GPUs: 8 × A6000 48GB
  • per_device_train_batch_size: 8
  • gradient_accumulation_steps: 8
  • per_device_eval_batch_size: 8
  • num_train_epochs: 3
  • learning_rate: 2e-5
  • warmup_ratio: 0.03
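As a rough sketch, the hyperparameters above map onto transformers' TrainingArguments as follows. This is not the authors' actual training script: the output directory, mixed-precision setting, and DeepSpeed config path are placeholders.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi-2-alpaca-cleaned",   # placeholder
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,       # 8 GPUs x 8 x 8 = effective batch size 512
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    learning_rate=2e-5,
    warmup_ratio=0.03,
    bf16=True,                           # assumption: mixed precision on A6000s
    deepspeed="ds_config.json",          # hypothetical DeepSpeed config file
)
```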

Software

  • pytorch: 2.1.2
  • transformers: 4.38.0.dev0
  • accelerate: 0.26.1
  • deepspeed: 0.13.1
