What

This is a classification model that sorts user feedback on a product into two classes: improvement requests and general impressions. It was created by fine-tuning bert-base-japanese.

Usage

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("hfunakura/bert-feedback-classifier")
model = AutoModelForSequenceClassification.from_pretrained("hfunakura/bert-feedback-classifier")

# Wrap them in a text-classification pipeline
classifier = pipeline(
    "text-classification",
    model=model,
    tokenizer=tokenizer
)

# Classify one piece of feedback ("The app crashes frequently, so it is hard to use.")
# Returns a list like [{'label': ..., 'score': ...}]
classifier("アプリが頻繁にクラッシュするので使いづらいです。")
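
The pipeline also accepts a list of strings, which is convenient for scoring a batch of feedback at once. The sketch below continues from the snippet above and assumes the model exposes the default "LABEL_0" / "LABEL_1" label names; the actual label-to-class mapping is not documented here, so check id2label in the model's config.json before relying on it.

# A minimal batch-classification sketch, assuming default label names.
feedback = [
    "アプリが頻繁にクラッシュするので使いづらいです。",
    "デザインがシンプルで気に入っています。",
]

# Hypothetical mapping from raw label names to the two classes described above;
# replace with the mapping from the model's config.json (id2label).
label_names = {"LABEL_0": "improvement request", "LABEL_1": "impression"}

for text, result in zip(feedback, classifier(feedback)):
    label = label_names.get(result["label"], result["label"])
    print(f"{label} ({result['score']:.2f}): {text}")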
