# Contrastive Learning of Sociopragmatic Meaning in Social Media

Chiyu Zhang, Muhammad Abdul-Mageed, Ganesh Jawahar

Published in Findings of ACL 2023

Paper

[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)]() [![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)]()

Illustration of our proposed InfoDCL framework. We exploit distant/surrogate labels (i.e., emojis) to supervise two contrastive losses: a corpus-aware contrastive loss (CCL) and a light label-aware contrastive loss (LCL-LiT). Sequence representations from our model should keep each class's cluster distinguishable while preserving semantic relationships between classes.
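The label-aware term can be illustrated with a generic supervised contrastive loss, where in-batch examples sharing the same surrogate (emoji) label act as positives. This is a rough NumPy sketch for intuition only, not the authors' exact LCL-LiT formulation; the temperature and toy batch are made up:

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Generic supervised contrastive loss over a batch.

    Positives for anchor i are the other in-batch examples that share
    its label (e.g., tweets carrying the same surrogate emoji).
    """
    # L2-normalize so dot products become cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        others = [j for j in range(n) if j != i]
        # Log of the softmax denominator over all other examples.
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        for j in positives:
            loss += -(sim[i, j] - log_denom)  # pull positives together
            count += 1
    return loss / count

# Toy batch: 4 embeddings, two surrogate-label classes.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
labels = [0, 0, 1, 1]
loss = supervised_contrastive_loss(emb, labels)
```

Minimizing this term pulls same-label representations together relative to the rest of the batch, which is the clustering behavior the figure describes.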

## Checkpoints of Models Pre-Trained with InfoDCL
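The checkpoints can presumably be loaded with the standard Hugging Face `transformers` API. In the sketch below, `your-org/infodcl-roberta` is a placeholder Hub ID, not a real repository; substitute the actual checkpoint name:

```python
# Sketch of loading an InfoDCL checkpoint with Hugging Face transformers.
# MODEL_ID is a hypothetical placeholder, not a published repository.
MODEL_ID = "your-org/infodcl-roberta"

def load_infodcl(model_id=MODEL_ID):
    """Load the tokenizer and encoder for fine-tuning on a downstream task.

    transformers is imported lazily so the sketch can be inspected
    without triggering a download.
    """
    from transformers import AutoModel, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)
    return tokenizer, model
```

The returned encoder can then be fine-tuned with a task-specific classification head, as in the experiments reported below.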

## Model Performance

Fine-tuning results on our 24 sociopragmatic meaning datasets (average macro-F1 over five runs).