
flan-t5-base_legal_ner_finetuned

This model is a fine-tuned version of google/flan-t5-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3290
  • Law Precision: 0.5522
  • Law Recall: 0.5968
  • Law F1: 0.5736
  • Law Number: 124
  • Violated by Precision: 0.0
  • Violated by Recall: 0.0
  • Violated by F1: 0.0
  • Violated by Number: 77
  • Violated on Precision: 0.0845
  • Violated on Recall: 0.0845
  • Violated on F1: 0.0845
  • Violated on Number: 71
  • Violation Precision: 0.2370
  • Violation Recall: 0.3800
  • Violation F1: 0.2919
  • Violation Number: 479
  • Overall Precision: 0.2352
  • Overall Recall: 0.3489
  • Overall F1: 0.2810
  • Overall Accuracy: 0.9247
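
These per-entity scores (and the Number counts of gold entities per type) are span-level NER metrics in the usual seqeval style. Below is a minimal sketch of how such numbers can be reproduced; the use of the evaluate/seqeval packages and the BIO label names are assumptions, not something stated in this card.

```python
# Minimal sketch of the span-level metric computation (assumed setup, not
# confirmed by this card): the Hugging Face `evaluate` wrapper around seqeval.
# Requires: pip install evaluate seqeval
import evaluate

seqeval = evaluate.load("seqeval")

# Toy BIO-tagged sequences for illustration only; the real label set is
# inferred from the metric names (LAW, VIOLATED BY, VIOLATED ON, VIOLATION).
references  = [["O", "B-LAW", "I-LAW", "O", "B-VIOLATION", "I-VIOLATION"]]
predictions = [["O", "B-LAW", "I-LAW", "O", "B-VIOLATION", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
# Per-entity dicts (precision/recall/f1/number) plus overall_precision,
# overall_recall, overall_f1 and overall_accuracy, matching the keys above.
print(results["LAW"], results["overall_f1"])
```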

Model description

More information needed

Intended uses & limitations

More information needed
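
Until more detail is added, the sketch below shows one plausible way to run inference. Because the evaluation metrics above look like token-level NER scores, the model is assumed here to expose a token-classification head; if the checkpoint is in fact a text2text (seq2seq) fine-tune, it should instead be loaded with AutoModelForSeq2SeqLM and prompted to generate tagged text. The example sentence and aggregation strategy are purely illustrative.

```python
# Hedged inference sketch: the scores above look like seqeval-style NER metrics,
# so a token-classification head is ASSUMED here. If the checkpoint is really a
# text2text fine-tune, use AutoModelForSeq2SeqLM and generate() instead.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "khalidrajan/flan-t5-base_legal_ner_finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

# Example sentence is illustrative only.
print(ner("The regulator found that Acme Corp. violated Section 5 of the FTC Act."))
```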

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 10
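
For reference, these settings map directly onto a transformers TrainingArguments configuration, sketched below; the output directory is a placeholder and the Adam betas/epsilon are the library defaults.

```python
# The listed hyperparameters expressed as a TrainingArguments sketch;
# output_dir is a placeholder and the Adam betas/epsilon are the library
# defaults, shown only for completeness.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="flan-t5-base_legal_ner_finetuned",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```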

Training results

| Training Loss | Epoch | Step | Validation Loss | Law Precision | Law Recall | Law F1 | Law Number | Violated by Precision | Violated by Recall | Violated by F1 | Violated by Number | Violated on Precision | Violated on Recall | Violated on F1 | Violated on Number | Violation Precision | Violation Recall | Violation F1 | Violation Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 85 | 3.3698 | 0.0002 | 0.0081 | 0.0005 | 124 | 0.0004 | 0.0260 | 0.0008 | 77 | 0.0010 | 0.0423 | 0.0020 | 71 | 0.0046 | 0.0209 | 0.0076 | 479 | 0.0011 | 0.0213 | 0.0021 | 0.2756 |
| No log | 2.0 | 170 | 1.6151 | 0.0 | 0.0 | 0.0 | 124 | 0.0 | 0.0 | 0.0 | 77 | 0.0 | 0.0 | 0.0 | 71 | 0.0104 | 0.0355 | 0.0161 | 479 | 0.0029 | 0.0226 | 0.0051 | 0.6383 |
| No log | 3.0 | 255 | 0.9385 | 0.0 | 0.0 | 0.0 | 124 | 0.0 | 0.0 | 0.0 | 77 | 0.0 | 0.0 | 0.0 | 71 | 0.0129 | 0.0543 | 0.0209 | 479 | 0.0075 | 0.0346 | 0.0124 | 0.7646 |
| No log | 4.0 | 340 | 0.6876 | 0.0013 | 0.0081 | 0.0022 | 124 | 0.0 | 0.0 | 0.0 | 77 | 0.0 | 0.0 | 0.0 | 71 | 0.0371 | 0.1148 | 0.0561 | 479 | 0.0210 | 0.0746 | 0.0327 | 0.8109 |
| No log | 5.0 | 425 | 0.5094 | 0.0097 | 0.0645 | 0.0168 | 124 | 0.0 | 0.0 | 0.0 | 77 | 0.05 | 0.0141 | 0.0220 | 71 | 0.0824 | 0.2589 | 0.125 | 479 | 0.0511 | 0.1771 | 0.0793 | 0.8448 |
| 1.8667 | 6.0 | 510 | 0.4201 | 0.0325 | 0.25 | 0.0575 | 124 | 0.0 | 0.0 | 0.0 | 77 | 0.0526 | 0.0282 | 0.0367 | 71 | 0.1122 | 0.2985 | 0.1631 | 479 | 0.0726 | 0.2344 | 0.1109 | 0.8651 |
| 1.8667 | 7.0 | 595 | 0.3759 | 0.0525 | 0.4194 | 0.0934 | 124 | 0.0087 | 0.0130 | 0.0104 | 77 | 0.0698 | 0.0845 | 0.0764 | 71 | 0.1441 | 0.3111 | 0.1970 | 479 | 0.0935 | 0.2770 | 0.1398 | 0.8792 |
| 1.8667 | 8.0 | 680 | 0.3463 | 0.1856 | 0.5403 | 0.2763 | 124 | 0.0092 | 0.0130 | 0.0108 | 77 | 0.0510 | 0.0704 | 0.0592 | 71 | 0.1955 | 0.3800 | 0.2582 | 479 | 0.1701 | 0.3395 | 0.2267 | 0.9090 |
| 1.8667 | 9.0 | 765 | 0.3315 | 0.4516 | 0.5645 | 0.5018 | 124 | 0.0 | 0.0 | 0.0 | 77 | 0.0769 | 0.0845 | 0.0805 | 71 | 0.2240 | 0.3779 | 0.2813 | 479 | 0.2176 | 0.3422 | 0.2660 | 0.9221 |
| 1.8667 | 10.0 | 850 | 0.3290 | 0.5522 | 0.5968 | 0.5736 | 124 | 0.0 | 0.0 | 0.0 | 77 | 0.0845 | 0.0845 | 0.0845 | 71 | 0.2370 | 0.3800 | 0.2919 | 479 | 0.2352 | 0.3489 | 0.2810 | 0.9247 |

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1
