albert-albert-large-v2

This model is a fine-tuned version of albert/albert-large-v2 on the raw version of the engineering_design_facts dataset (https://huggingface.co/datasets/siddharthl1293/engineering_design_facts). It achieves the following results on the evaluation set:

  • Loss: 0.0032

Model Intent

The model was trained to identify relationship tokens in a sentence in which a pair of entities is marked. For more details, see the dataset description: https://huggingface.co/datasets/siddharthl1293/engineering_design_facts
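The post-processing step this intent implies can be sketched as a small helper that collects the tokens tagged as relationship tokens from per-token predictions. This is an illustrative sketch only: the entity-marker format (`[E1]`/`[E2]`) and the numeric label id are assumptions, not documented properties of this model.

```python
def extract_relation_tokens(tokens, labels, relation_label=1):
    """Collect the tokens tagged as relationship tokens.

    `tokens` and `labels` are parallel lists, one label per token;
    `relation_label` marks a relationship token. The actual label ids
    depend on the model's config and are assumed here.
    """
    return [tok for tok, lab in zip(tokens, labels) if lab == relation_label]

# Hypothetical example with marked entities [E1]gear[/E1] and [E2]shaft[/E2]:
tokens = ["[E1]", "gear", "[/E1]", "transmits", "torque", "to",
          "[E2]", "shaft", "[/E2]"]
labels = [0, 0, 0, 1, 1, 0, 0, 0, 0]
relation = extract_relation_tokens(tokens, labels)  # → ["transmits", "torque"]
```

In practice the labels would come from running the fine-tuned token-classification model over the marked sentence and taking the argmax over per-token logits.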

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1

Training results

| Training Loss | Epoch | Step | Validation Loss |
|---------------|-------|------|-----------------|
| 0.0032        | 1.0   | 9378 | 0.0032          |

Testing Results

Testing accuracy was computed with an exact-match criterion: an example scores 1 only if all of its relationship tokens are identified, and 0 otherwise. The average accuracy across 37,509 test examples is 0.997.
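The exact-match metric described above can be sketched as follows. The set-based comparison is an assumption about how "all relationship tokens identified" is scored; the original evaluation code may compare position-wise instead.

```python
def example_accuracy(pred_tokens, gold_tokens):
    # An example counts as correct only if every relationship token is
    # identified (set equality here; the exact matching scheme is assumed).
    return 1.0 if set(pred_tokens) == set(gold_tokens) else 0.0

# Hypothetical predictions vs. gold labels for three examples:
preds = [["transmits", "torque"], ["supports"], ["rotates"]]
golds = [["transmits", "torque"], ["supports", "load"], ["rotates"]]

acc = sum(example_accuracy(p, g) for p, g in zip(preds, golds)) / len(golds)
# acc == 2/3: the second example misses "load", so it scores 0.
```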

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1