[NAACL 2021] Code for our paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach".
Topics: text-classification, weak-supervision, dataset, self-training, language-model, slot-filling, weakly-supervised-learning, fine-tuning, pseudo-labeling, roberta, learning-with-noisy-labels, agnews, sentence-pair-classification, contrastive-learning, roberta-model
Updated Aug 17, 2022 - Python
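The paper's approach combines self-training on pseudo-labels with a contrastive regularizer over sample representations. Below is a minimal, hypothetical PyTorch sketch of one such training step, not the authors' implementation: the model interface (returning logits plus features), the confidence threshold, the margin, and the loss weighting are all assumptions made for illustration.

```python
# Minimal sketch of contrastive-regularized self-training (hypothetical;
# see the repository for the actual COSINE implementation).
import torch
import torch.nn.functional as F

def self_train_step(model, batch, optimizer, conf_threshold=0.9, lam=1.0):
    """One step: pseudo-label confident examples, then combine a
    classification loss with a contrastive regularizer that pulls together
    features of samples sharing a pseudo-label and pushes apart the rest."""
    model.train()
    logits, feats = model(batch)        # assumed: model returns (logits, features)
    probs = F.softmax(logits, dim=-1)
    conf, pseudo = probs.max(dim=-1)    # hard pseudo-labels; the paper uses
                                        # soft labels from an earlier snapshot
    mask = conf > conf_threshold        # keep only high-confidence samples
    if mask.sum() == 0:
        return None
    ce = F.cross_entropy(logits[mask], pseudo[mask])

    # Contrastive regularizer over pairwise cosine similarities: same-label
    # pairs are pulled together, different-label pairs pushed below a margin.
    f = F.normalize(feats[mask], dim=-1)
    sim = f @ f.t()
    same = pseudo[mask].unsqueeze(0) == pseudo[mask].unsqueeze(1)
    margin = 0.5                        # hypothetical margin
    contrast = torch.where(same, 1.0 - sim, F.relu(sim - margin)).mean()

    loss = ce + lam * contrast
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper this loop starts from a model fine-tuned on weak (rule-based) labels, and the confidence threshold and regularization weight control how much noise from the pseudo-labels propagates into later rounds.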