
Model Card

Model summary

This model is part of the πŸ“ FineMath ablations, we continue pretraining Llama-3.2-3B base on different math datasets for 60B tokens. The model has 3.21B parameters and 4096 context length. It was trained on 160B tokens using a mix of 40% FineWeb-Edu and 30% FineMath-3+ and 30% InfiWebMath-3+ from the πŸ“ FineMath dataset.

  • License: Apache-2.0
  • Languages: English
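
For illustration, the 40/30/30 mix described above can be approximated with the 🤗 datasets library. This is a minimal sketch rather than the actual training pipeline; the dataset repo IDs and config names (HuggingFaceFW/fineweb-edu, and the finemath-3plus / infiwebmath-3plus configs of HuggingFaceTB/finemath) are assumptions.

# Sketch of the 40/30/30 pretraining mix using 🤗 datasets (streaming mode).
# Repo IDs and config names are assumptions; tokenization, shuffling, and
# sequence packing from the real training pipeline are not reproduced here.
from datasets import load_dataset, interleave_datasets

fineweb_edu = load_dataset("HuggingFaceFW/fineweb-edu", split="train", streaming=True)
finemath_3plus = load_dataset("HuggingFaceTB/finemath", "finemath-3plus", split="train", streaming=True)
infiwebmath_3plus = load_dataset("HuggingFaceTB/finemath", "infiwebmath-3plus", split="train", streaming=True)

# Sample documents with probabilities matching the reported ratios.
mix = interleave_datasets(
    [fineweb_edu, finemath_3plus, infiwebmath_3plus],
    probabilities=[0.4, 0.3, 0.3],
    seed=42,
)
print(next(iter(mix))["text"][:200])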

Use

Intended use

This model was trained on English math data and is not instruction-tuned, so it is intended for text completion in English with a focus on math. Note that its primary purpose is to compare performance against other models trained under the same conditions; it is not necessarily the best possible result achievable with the given dataset.

Generation

# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/finemath-ablation-3plus-160B"
device = "cuda"  # use "cpu" for CPU inference

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("Machine Learning is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
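
The default generate call above produces a short greedy continuation. For longer or sampled completions you can pass standard generation parameters; the values below are illustrative only, not settings used in our evaluations.

# Longer, sampled completion; prompt and parameter values are illustrative.
inputs = tokenizer.encode("To solve the equation 3x + 5 = 20, we", return_tensors="pt").to(device)
outputs = model.generate(
    inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))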

Intermediate checkpoints

We are releasing intermediate checkpoints for this model at intervals of 10,000 training steps (10B tokens) in separate branches. Branches are named after the number of training tokens seen, e.g. 10B.

You can load a specific model revision with transformers by passing the revision argument:

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/finemath-ablation-3plus-160B", revision="10B")

You can list all the revisions of the model with the following code:

from huggingface_hub import list_repo_refs
out = list_repo_refs("HuggingFaceTB/finemath-ablation-3plus-160B")
print([b.name for b in out.branches])
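
For example, you can combine the two snippets above to iterate over every intermediate checkpoint; a minimal sketch (each load pulls the full 3B-parameter model, and the evaluation step is left as a placeholder):

# Sketch: iterate over all checkpoint branches and load each revision.
from huggingface_hub import list_repo_refs
from transformers import AutoModelForCausalLM

repo_id = "HuggingFaceTB/finemath-ablation-3plus-160B"
refs = list_repo_refs(repo_id)

for branch in refs.branches:
    if branch.name == "main":
        continue  # "main" holds the final model; other branches are intermediate checkpoints
    model = AutoModelForCausalLM.from_pretrained(repo_id, revision=branch.name)
    # ... run your evaluation on this checkpoint ...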

Training

Model

  • Architecture: Llama3
  • Pretraining steps: 160k
  • Pretraining tokens: 160B
  • Precision: bfloat16
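
The weights are stored in bfloat16, so you can load them in their native precision and verify the parameter count; a minimal sketch:

# Load the weights in bfloat16 and count parameters.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/finemath-ablation-3plus-160B",
    torch_dtype=torch.bfloat16,
)
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")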

Hardware

  • GPUs: 64 H100

Software

Evaluation

We used the SmolLM2 setup to evaluate all our ablation models with lighteval. You can find the details here: https://github.com/huggingface/smollm/tree/main/evaluation#smollm2-base-models

Limitations

This model was predominantly trained on English math data, potentially limiting its performance in other languages. Furthermore, the model's behavior is influenced by the quality and diversity of its training data, which may include biases and harmful content.
