---
language:
  - en
license: mit
tags:
  - text
  - Twitter
datasets:
  - CLPsych 2015
metrics:
  - accuracy
  - f1
  - precision
  - recall
  - AUC
model-index:
  - name: distilbert-depression-base
    results: []
---

# distilbert-depression-base

This model is a fine-tuned version of distilbert-base-uncased, trained on CLPsych 2015 and evaluated on a dataset scraped from Twitter. It achieves the following results on the evaluation set:

- Evaluation Loss: 0.64
- Accuracy: 0.65
- F1: 0.70
- Precision: 0.61
- Recall: 0.83
- AUC: 0.65
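
These scores can be reproduced from raw predictions with standard scikit-learn metrics. The sketch below is an assumption about how they might be computed (binary labels, positive-class probabilities, a 0.5 decision threshold), not the authors' actual evaluation script:

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    precision_score,
    recall_score,
    roc_auc_score,
)

def compute_metrics(labels, probs, threshold=0.5):
    """Compute the reported metrics from true labels and positive-class probabilities."""
    preds = (np.asarray(probs) >= threshold).astype(int)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
        # AUC is threshold-free and uses the probabilities, not hard labels
        "auc": roc_auc_score(labels, probs),
    }
```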

## Intended uses & limitations

Feed a corpus of tweets to the model to generate a label indicating whether each input is indicative of depression or not.

Limitation: all token sequences longer than 512 tokens are automatically truncated.
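
A minimal inference sketch with the transformers library is shown below. The checkpoint identifier and the label mapping (1 = indicative of depression) are assumptions based on this card, not confirmed by it:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Checkpoint name is an assumption inferred from this card's title.
checkpoint = "migueladarlo/distilbert-depression-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

tweets = [
    "I haven't slept properly in weeks and nothing feels worth doing.",
    "Great run this morning, feeling energized!",
]
# truncation=True enforces the 512-token limit noted above.
inputs = tokenizer(
    tweets, padding=True, truncation=True, max_length=512, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
preds = logits.argmax(dim=-1).tolist()  # label mapping is an assumption
```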

## Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3.39e-05
- train_batch_size: 16
- eval_batch_size: 16
- weight_decay: 0.13
- num_epochs: 3.0
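
For illustration, these values could be wired into a Hugging Face Trainer roughly as follows. This is a sketch under assumptions (tokenized CLPsych 2015 splits named train_dataset and eval_dataset, and a compute_metrics helper like the one above), not the authors' actual training script:

```python
from transformers import Trainer, TrainingArguments

# Sketch only: model, train_dataset, eval_dataset, and compute_metrics
# are assumed to be defined elsewhere.
training_args = TrainingArguments(
    output_dir="distilbert-depression-base",
    learning_rate=3.39e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    weight_decay=0.13,
    num_train_epochs=3.0,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
)
trainer.train()
```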

## Training results

| Epoch | Step | Training Loss | Validation Loss | Accuracy | F1   | Precision | Recall | AUC  |
|:-----:|:----:|:-------------:|:---------------:|:--------:|:----:|:---------:|:------:|:----:|
| 1.0   | 625  | 0.68          | 0.66            | 0.59     | 0.63 | 0.56      | 0.73   | 0.59 |
| 2.0   | 1250 | 0.60          | 0.68            | 0.63     | 0.69 | 0.59      | 0.83   | 0.63 |
| 3.0   | 1875 | 0.52          | 0.67            | 0.64     | 0.66 | 0.62      | 0.72   | 0.65 |