This paper presents an empirical evaluation of LLM-guided semi-supervised learning for classifying social media crisis data. It demonstrates that LG-CoTrain outperforms classical baselines in low-resource settings and highlights the potential of transferring knowledge from LLMs to smaller, deployable models for disaster response.
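LG-CoTrain's exact pipeline isn't spelled out in this summary; as a rough sketch of the underlying idea — keeping only an LLM's confident pseudo-labels before training a small, deployable student — the toy example below uses a nearest-centroid student on synthetic 2-D data. The names `llm_labels` and `llm_conf`, the 0.8 threshold, and the data are all illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def fit_centroids(X, y, num_classes):
    """Tiny 'student' model: one mean vector per class."""
    return np.stack([X[y == c].mean(axis=0) for c in range(num_classes)])

def predict(centroids, X):
    """Assign each point to its nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)

# Tiny labeled seed set: two classes in 2-D (low-resource setting).
X_lab = np.array([[0.0, 0.0], [5.0, 5.0]])
y_lab = np.array([0, 1])

# Unlabeled pool with hypothetical LLM pseudo-labels and confidences.
X_unl = rng.normal(0, 1, (20, 2)) + np.repeat([[0, 0], [5, 5]], 10, axis=0)
llm_labels = np.repeat([0, 1], 10)
llm_conf = rng.uniform(0.5, 1.0, 20)

# Keep only pseudo-labels the LLM "teacher" is confident about,
# then train the small student on seed labels + confident pseudo-labels.
keep = llm_conf > 0.8
X_train = np.vstack([X_lab, X_unl[keep]])
y_train = np.concatenate([y_lab, llm_labels[keep]])
centroids = fit_centroids(X_train, y_train, num_classes=2)
```

The confidence filter is what distinguishes this from naive self-training: low-confidence LLM outputs never reach the student, which limits error propagation.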
This paper presents PATE (Private Aggregation of Teacher Ensembles), a privacy-preserving approach that trains a student model on noisy outputs from multiple teacher models trained on disjoint datasets, providing strong differential privacy guarantees without exposing sensitive training data.
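PATE's core aggregation step can be sketched as a noisy-max vote: count the disjoint teachers' predicted labels for a query, add Laplace noise calibrated to the privacy budget, and release only the noisy winner. The function name, the epsilon value, and the toy vote vector below are illustrative, not from the paper:

```python
import numpy as np

def pate_aggregate(teacher_preds, num_classes, epsilon, rng):
    """Noisy-max aggregation: tally teacher votes per class, add
    Laplace noise with scale 1/epsilon, return the noisy argmax."""
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    counts += rng.laplace(loc=0.0, scale=1.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))

rng = np.random.default_rng(0)
# Suppose 9 of 10 teachers (each trained on a disjoint data shard)
# predict class 1 for an unlabeled query.
votes = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])
label = pate_aggregate(votes, num_classes=2, epsilon=2.0, rng=rng)
```

Because the student only ever sees these noisy labels, no single teacher's (and hence no single shard's) training data can be reliably inferred from the student model.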
This paper presents adversarial and virtual adversarial training methods adapted for text classification by applying perturbations to word embeddings in RNNs rather than raw inputs. The approach achieves state-of-the-art results on semi-supervised and supervised text classification benchmarks while reducing overfitting.
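The embedding-level perturbation can be sketched as a fast-gradient step: take the loss gradient with respect to the embedding (not the raw tokens), L2-normalize it, and scale by epsilon. The toy logistic model below stands in for the paper's RNN; the weights, epsilon, and closed-form gradient are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(v, y, w):
    """Logistic loss -log sigmoid(y * w.v) for label y in {-1, +1}."""
    return -np.log(sigmoid(y * np.dot(w, v)))

def adversarial_perturbation(v, y, w, epsilon):
    """Fast-gradient perturbation applied to the embedding v:
    epsilon * g / ||g||, where g is the loss gradient w.r.t. v."""
    g = -y * sigmoid(-y * np.dot(w, v)) * w  # closed-form gradient
    return epsilon * g / (np.linalg.norm(g) + 1e-12)

w = np.array([0.5, -0.3, 0.8])   # toy classifier weights
v = np.array([1.0, 0.2, -0.1])   # toy "word embedding" input
y = 1.0
r = adversarial_perturbation(v, y, w, epsilon=0.1)
```

Training then penalizes the loss at `v + r` as well as at `v`; because the perturbation lives in embedding space, it sidesteps the discreteness of text that blocks input-level adversarial examples.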