Model
A SciBERT-based text classification model that predicts positive versus mixed-or-negative results in scientific abstracts from clinical psychology and psychotherapy. The preprint "Classifying Positive Results in Clinical Psychology Using Natural Language Processing" by Louis Schiekiera, Jonathan Diederichs & Helen Niemeyer is available on PsyArXiv.
Data
We annotated over 1,900 clinical psychology abstracts into two categories, 'positive results only' and 'mixed or negative results', and trained models using SciBERT. The SciBERT model was validated on one in-domain data set (clinical psychology) and two out-of-domain data sets (psychotherapy). Further documentation, code, and data for the preprint "Classifying Positive Results in Clinical Psychology Using Natural Language Processing" can be found in this GitHub repository.
Results
Table 1
Metric scores for model evaluation on test data from the annotated MAIN corpus, consisting of n = 198 abstracts authored by researchers affiliated with German clinical psychology departments and published between 2012 and 2022
| Model | Accuracy | F1 (Mixed & Negative) | Recall (Mixed & Negative) | Precision (Mixed & Negative) | F1 (Positive Only) | Recall (Positive Only) | Precision (Positive Only) |
|---|---|---|---|---|---|---|---|
| SciBERT | 0.864 | 0.867 | 0.907 | 0.830 | 0.860 | 0.822 | 0.902 |
| Random Forest | 0.803 | 0.810 | 0.856 | 0.769 | 0.796 | 0.752 | 0.844 |
| Extracted p-values | 0.515 | 0.495 | 0.485 | 0.505 | 0.534 | 0.545 | 0.524 |
| Extracted NL Indicators | 0.530 | 0.497 | 0.474 | 0.523 | 0.559 | 0.584 | 0.536 |
| Number of Words | 0.475 | 0.441 | 0.423 | 0.461 | 0.505 | 0.525 | 0.486 |
Figure 1
Comparing model performances across in-domain and out-of-domain data. Colored bars represent different model types. Samples: MAIN test: n = 198 abstracts; VAL1: n = 150 abstracts; VAL2: n = 150 abstracts.
Using the model on Hugging Face
The model can be used on Hugging Face via the "Hosted inference API" widget on the right. Click 'Compute' to predict the class label for the example abstract or for an abstract you paste in yourself. The class label 'positive' corresponds to 'positive results only', while 'negative' represents 'mixed or negative results'.
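Outside the web widget, the same checkpoint can be queried locally through the `transformers` pipeline API. A minimal sketch (the abstract text below is invented for illustration):

```python
from transformers import pipeline

# Load the checkpoint from the Hub into a text-classification pipeline
classifier = pipeline("text-classification",
                      model="ClinicalMetaScience/NegativeResultDetector")

# Invented example abstract for illustration
abstract = ("The intervention group showed significantly greater symptom "
            "reduction than the waitlist control group at post-treatment.")

result = classifier(abstract)
print(result)  # a list with one {'label': ..., 'score': ...} dict per input
```

For long abstracts, pass `truncation=True` to the call so inputs beyond the model's 512-token limit are cut rather than rejected.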
Using the model on larger data sets
```python
from datasets import load_dataset
from transformers import AutoTokenizer, Trainer, AutoModelForSequenceClassification

# 1. Load the SciBERT tokenizer
tokenizer = AutoTokenizer.from_pretrained('allenai/scibert_scivocab_uncased')

# 2. Load your data and apply the preprocessing function
#    ("abstracts.csv" is an example file name; the file needs a "text" column)
dataset = load_dataset("csv", data_files={"inference": "abstracts.csv"})

def preprocess_function(examples):
    return tokenizer(examples["text"],
                     truncation=True,
                     max_length=512,
                     padding='max_length')

tokenized_data = dataset.map(preprocess_function, batched=True)

# 3. Load the model
NegativeResultDetector = AutoModelForSequenceClassification.from_pretrained("ClinicalMetaScience/NegativeResultDetector")

# 4. Initialize the trainer with the model and tokenizer
trainer = Trainer(
    model=NegativeResultDetector,
    tokenizer=tokenizer,
)

# 5. Apply NegativeResultDetector for prediction on the inference data
predict_test = trainer.predict(tokenized_data["inference"])
```
Further information on analyzing your own or our example data can be found in this script from our GitHub repository.
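`trainer.predict` returns raw logits rather than class labels. A minimal post-processing sketch, assuming a binary head whose `id2label` mapping matches the one passed below (verify against `model.config.id2label` on the loaded checkpoint):

```python
import numpy as np

def logits_to_predictions(logits, id2label):
    """Convert raw logits to (label, probability) pairs via a softmax."""
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)   # softmax per abstract
    ids = probs.argmax(axis=-1)                     # most probable class
    return [(id2label[int(i)], float(p[i])) for i, p in zip(ids, probs)]

# Mock logits for two abstracts; in practice use predict_test.predictions
logits = np.array([[2.0, -1.0],
                   [-0.5, 1.5]])
print(logits_to_predictions(logits, {0: "positive", 1: "negative"}))
# → [('positive', 0.95...), ('negative', 0.88...)]
```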
Disclaimer
This tool was developed to analyze and predict the prevalence of positive and negative results in scientific abstracts, based on the SciBERT model. While publication bias is a plausible explanation for certain patterns of results observed in the scientific literature, the analyses conducted by this tool do not conclusively establish the presence of publication bias or any other underlying factors. It is essential to understand that this tool evaluates data but does not examine the underlying reasons for the observed trends.

The validation of this tool was conducted on primary studies from the fields of clinical psychology and psychotherapy. While it may yield insights when applied to abstracts from other fields or other types of studies (such as meta-analyses), its applicability and accuracy in such contexts have not yet been thoroughly tested.

The developers of this tool are not responsible for any misinterpretation or misuse of the tool's results, and encourage users to have a comprehensive understanding of the limitations inherent in statistical analysis and prediction models.
Funding & Project
This study was conducted as part of the PANNE Project (German acronym for “publication bias analysis of non-publication and non-reception of results in a disciplinary comparison”) at Freie Universität Berlin and was funded by the Berlin University Alliance. The authors are members of the Berlin University Alliance.