SMM4H-2024 Task 2 Japanese RE

Overview

This is a relation extraction model created by fine-tuning daisaku-s/medtxt_ner_roberta on the SMM4H 2024 Task 2b corpus.

Tag set:

  • CAUSED
  • TREATMENT_FOR

Besides these two relation labels, the classification head also predicts an O label for inputs that express no relation (see id2label in the usage example below).

Usage

from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

text = "サンプルテキスト"
model_name = "yseop/SMM4H2024_Task2b_ja"
id2label = ['O', 'CAUSED', 'TREATMENT_FOR']

# Load the fine-tuned model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Tokenize (truncating to the model's 512-token limit) and classify
encoded_input = tokenizer(text, return_tensors='pt', truncation=True, max_length=512)
with torch.inference_mode():
    logits = model(**encoded_input).logits

class_id = logits.argmax().item()
print(id2label[class_id])
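
The same model can also be called through the transformers pipeline API. A minimal sketch, assuming the default text-classification task (the pipeline takes label names from the model config, so it may report generic LABEL_0/LABEL_1/LABEL_2 names rather than the id2label mapping above):

from transformers import pipeline

# Convenience wrapper around the same tokenizer + model
classifier = pipeline("text-classification", model="yseop/SMM4H2024_Task2b_ja")
print(classifier("サンプルテキスト"))  # [{'label': ..., 'score': ...}]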

Results

Relation                          TP   FP   FN  Precision  Recall      F1
CAUSED|DISORDER|DISORDER           1  163   38     0.0061  0.0256  0.0099
CAUSED|DISORDER|FUNCTION           0   70   13     0.0000  0.0000  0.0000
CAUSED|DRUG|DISORDER               9  196  105     0.0439  0.0789  0.0564
CAUSED|DRUG|FUNCTION               2   59    7     0.0328  0.2222  0.0571
TREATMENT_FOR|DISORDER|DISORDER    0   12    0     0.0000  0.0000  0.0000
TREATMENT_FOR|DISORDER|FUNCTION    0    3    0     0.0000  0.0000  0.0000
TREATMENT_FOR|DRUG|DISORDER        0   15   91     0.0000  0.0000  0.0000
TREATMENT_FOR|DRUG|FUNCTION        0    0    1     0.0000  0.0000  0.0000
all                               12  518  255     0.0226  0.0449  0.0301
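
Precision, recall, and F1 follow the standard definitions: precision = TP / (TP + FP), recall = TP / (TP + FN), and F1 is their harmonic mean. A quick check reproducing the aggregate "all" row:

tp, fp, fn = 12, 518, 255
precision = tp / (tp + fp)                          # 0.0226
recall = tp / (tp + fn)                             # 0.0449
f1 = 2 * precision * recall / (precision + recall)  # 0.0301
print(f"{precision:.4f} {recall:.4f} {f1:.4f}")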