
# Llama-3 8B RLHF checkpoint trained by OpenRLHF

Models and datasets used:

## Training Hyperparameters

- Actor Learning Rate: 5e-7
- Critic Learning Rate: 9e-6
- Learning Rate Scheduler: Cosine with 3% warmup
- PPO Epochs: 1
- Training Batch Size: 128
- Experience Buffer Size: 1024
- Reward Normalization: True
- Max Prompt Length: 2048
- Max Response Length: 2048
- Max Samples: 100k
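The learning-rate schedule listed above (cosine decay after a 3% linear warmup) can be sketched in plain Python. This is an illustrative reimplementation of that schedule, not OpenRLHF's actual scheduler code; the function name and the `total_steps` value are assumptions for the example.

```python
import math

def lr_at_step(step, total_steps, base_lr, warmup_ratio=0.03):
    """Linear warmup over the first warmup_ratio of steps,
    then cosine decay from base_lr down to 0."""
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        # Warmup phase: LR rises linearly from 0 to base_lr.
        return base_lr * step / warmup_steps
    # Cosine phase: progress goes 0 -> 1, LR goes base_lr -> 0.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Actor LR from the table above is 5e-7; total_steps is illustrative.
print(lr_at_step(0, 1000, 5e-7))     # 0.0 (start of warmup)
print(lr_at_step(30, 1000, 5e-7))    # 5e-7 (peak, end of 3% warmup)
print(lr_at_step(1000, 1000, 5e-7))  # 0.0 (end of cosine decay)
```

Note that the actor and critic use different base learning rates (5e-7 vs. 9e-6), so in practice this schedule is applied per optimizer group.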

## Training Logs