
SpanMarker for Named Entity Recognition

This is a SpanMarker model that can be used for Named Entity Recognition. In particular, this SpanMarker model uses bert-base-cased as the underlying encoder. See train.py for the training script. It is trained on P3ps/Cross_ner, which I believe is a variant of DFKI-SLT/cross_ner that merged the validation set into the training set and applied deduplication.
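The exact preprocessing behind P3ps/Cross_ner is not published, but a merge-and-deduplicate step along those lines could look like the following minimal sketch with the datasets library; the "ai" configuration and the token-based deduplication key are assumptions, not confirmed details:

from datasets import load_dataset, concatenate_datasets

# Hedged reconstruction of the presumed preprocessing: merge the train and
# validation splits, then drop exact duplicate sentences.
dataset = load_dataset("DFKI-SLT/cross_ner", "ai")  # config name is an example
merged = concatenate_datasets([dataset["train"], dataset["validation"]])

seen = set()
def is_unseen(example):
    key = tuple(example["tokens"])  # deduplicate on the token sequence
    if key in seen:
        return False
    seen.add(key)
    return True

deduplicated = merged.filter(is_unseen)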

Is your data not (always) capitalized correctly? Then consider using the uncased variant of this model instead for better performance: tomaarsen/span-marker-bert-base-uncased-cross-ner.

Labels & Metrics

| Label | Examples | Precision | Recall | F1 |
|:------|:---------|----------:|-------:|---:|
| all | - | 88.25 | 87.46 | 87.85 |
| academicjournal | "New Journal of Physics", "EPL", "European Physical Journal B" | 84.04 | 96.34 | 89.77 |
| album | "Tellin' Stories", "Generation Terrorists", "Country Airs" | 90.71 | 85.81 | 88.19 |
| algorithm | "LDA", "PCA", "gradient descent" | 76.27 | 79.65 | 77.92 |
| astronomicalobject | "Earth", "Sun", "Halley's comet" | 92.00 | 93.24 | 92.62 |
| award | "Nobel Prize for Literature", "Acamedy Award for Best Actress", "Mandelbrot's awards" | 87.14 | 92.51 | 89.74 |
| band | "Clash", "Parliament Funkadelic", "Sly and the Family Stone" | 83.44 | 86.62 | 85.00 |
| book | "Nietzsche contra Wagner", "Dionysian-Dithyrambs", "The Rebel" | 73.71 | 82.69 | 77.95 |
| chemicalcompound | "hydrogen sulfide", "Starch", "Lactic acid" | 71.21 | 71.21 | 71.21 |
| chemicalelement | "potassium", "Fluorine", "Chlorine" | 84.00 | 70.00 | 76.36 |
| conference | "SIGGRAPH", "IJCAI", "IEEE Transactions on Speech and Audio Processing" | 80.00 | 68.57 | 73.85 |
| country | "United Arab Emirates", "U.S.", "Canada" | 81.72 | 86.81 | 84.19 |
| discipline | "physics", "meteorology", "geography" | 48.39 | 55.56 | 51.72 |
| election | "2004 Canadian federal election", "2006 Canadian federal election", "1999 Scottish Parliament election" | 96.61 | 97.85 | 97.23 |
| enzyme | "RNA polymerase", "Phosphoinositide 3-kinase", "Protein kinase C" | 77.27 | 91.89 | 83.95 |
| event | "Cannes Film Festival", "2019 Special Olympics World Summer Games", "2017 Western Iraq campaign" | 75.00 | 66.30 | 70.38 |
| field | "computational imaging", "electronics", "information theory" | 89.80 | 83.02 | 86.27 |
| literarygenre | "novel", "satire", "short story" | 70.24 | 68.60 | 69.41 |
| location | "China", "BOMBAY", "Serbia" | 95.21 | 93.72 | 94.46 |
| magazine | "The Atlantic", "The American Spectator", "Astounding Science Fiction" | 81.48 | 78.57 | 80.00 |
| metrics | "BLEU", "precision", "DCG" | 72.53 | 81.48 | 76.74 |
| misc | "Serbian", "Belgian", "The Birth of a Nation" | 81.69 | 74.08 | 77.70 |
| musicalartist | "Chuck Burgi", "John Miceli", "John O'Reilly" | 79.67 | 87.11 | 83.23 |
| musicalinstrument | "koto", "bubens", "def" | 66.67 | 22.22 | 33.33 |
| musicgenre | "Christian rock", "Punk rock", "romantic melodicism" | 86.49 | 90.57 | 88.48 |
| organisation | "IRISH TIMES", "Comintern", "Wimbledon" | 91.37 | 90.85 | 91.11 |
| person | "Gong Zhichao", "Liu Lufung", "Margret Crowley" | 94.15 | 92.31 | 93.22 |
| poem | "Historia destructionis Troiae", "I Am Joaquin", "The Snow Man" | 83.33 | 68.63 | 75.27 |
| politicalparty | "New Democratic Party", "Bloc Québécois", "Liberal Party of Canada" | 87.50 | 90.17 | 88.82 |
| politician | "Susan Kadis", "Simon Strelchik", "Lloyd Helferty" | 86.16 | 88.93 | 87.52 |
| product | "AlphaGo", "WordNet", "Facial recognition system" | 60.82 | 70.24 | 65.19 |
| programlang | "R", "C++", "Java" | 92.00 | 71.88 | 80.70 |
| protein | "DNA methyltransferase", "tau protein", "Amyloid beta" | 60.29 | 59.42 | 59.85 |
| researcher | "Sirovich", "Kirby", "Matthew Turk" | 87.50 | 78.65 | 82.84 |
| scientist | "Matjaž Perc", "Cotton", "Singer" | 82.04 | 88.48 | 85.14 |
| song | "Right Where I'm Supposed to Be", "Easy", "Three Times a Lady" | 84.78 | 90.70 | 87.64 |
| task | "robot control", "elevator scheduling", "telecommunications" | 76.19 | 74.42 | 75.29 |
| theory | "Big Bang", "general theory of relativity", "Ptolemaic planetary theories" | 100.00 | 16.67 | 28.57 |
| university | "University of Göttingen", "Duke", "Imperial Academy of Sciences" | 77.14 | 91.01 | 83.51 |
| writer | "Thomas Mann", "George Bernard Shaw", "Thomas Hardy" | 76.29 | 82.84 | 79.43 |

Usage

To use this model for inference, first install the span_marker library:

pip install span_marker

You can then run inference with this model like so:

from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-cross-ner")
# Run inference
entities = model.predict("Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris.")
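
model.predict returns one dictionary per detected entity. As a quick sanity check, you can print the predictions like so; the keys used below ("span", "label", "score") follow the span_marker documentation, though exact keys may vary between versions:

# Each entity is a dict along the lines of:
# {"span": "Amelia Earhart", "label": "person", "score": 0.99,
#  "char_start_index": 0, "char_end_index": 14}
for entity in entities:
    print(f"{entity['span']!r} -> {entity['label']} (score={entity['score']:.2f})")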

See the SpanMarker repository for documentation and additional information on this library.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments configuration follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3
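
For context, this is a minimal sketch of how the values above map onto a transformers TrainingArguments object; the output directory and every argument not listed above are assumptions, not taken from the actual train.py:

from transformers import TrainingArguments

# Hedged reconstruction: only the hyperparameters listed in this card are
# set explicitly; everything else (including output_dir) is assumed.
args = TrainingArguments(
    output_dir="models/span-marker-bert-base-cross-ner",  # hypothetical path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)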

Training results

| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0521 | 0.25 | 200 | 0.0375 | 0.7149 | 0.6033 | 0.6544 | 0.8926 |
| 0.0225 | 0.50 | 400 | 0.0217 | 0.8001 | 0.7878 | 0.7939 | 0.9400 |
| 0.0189 | 0.75 | 600 | 0.0168 | 0.8526 | 0.8288 | 0.8405 | 0.9534 |
| 0.0157 | 1.01 | 800 | 0.0160 | 0.8481 | 0.8366 | 0.8423 | 0.9543 |
| 0.0116 | 1.26 | 1000 | 0.0158 | 0.8570 | 0.8568 | 0.8569 | 0.9582 |
| 0.0119 | 1.51 | 1200 | 0.0145 | 0.8752 | 0.8550 | 0.8650 | 0.9607 |
| 0.0102 | 1.76 | 1400 | 0.0145 | 0.8766 | 0.8555 | 0.8659 | 0.9601 |
| 0.0100 | 2.01 | 1600 | 0.0139 | 0.8744 | 0.8718 | 0.8731 | 0.9629 |
| 0.0072 | 2.26 | 1800 | 0.0144 | 0.8748 | 0.8684 | 0.8716 | 0.9625 |
| 0.0066 | 2.51 | 2000 | 0.0140 | 0.8803 | 0.8738 | 0.8770 | 0.9645 |
| 0.0070 | 2.76 | 2200 | 0.0138 | 0.8831 | 0.8739 | 0.8785 | 0.9644 |

Framework versions

  • SpanMarker 1.2.4
  • Transformers 4.31.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.3
  • Tokenizers 0.13.2