Tag: #model-security

Protecting Language Models Against Unauthorized Distillation through Trace Rewriting

arXiv cs.CL · 2026-04-20

This paper proposes methods for protecting large language models against unauthorized knowledge distillation: rewriting the teacher's reasoning traces so that they preserve answer correctness but lose most of their value as training data, and embedding verifiable watermarks that surface in distilled student models. Both instruction-based and gradient-based rewriting are used to achieve this anti-distillation effect without degrading the teacher model's own performance.
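As a toy illustration of the trace-rewriting idea (not the paper's actual instruction- or gradient-based method), one could redact the concrete intermediate quantities in a reasoning trace while leaving the final answer line untouched, so a student model imitating the trace learns little from the steps. The `rewrite_trace` helper and the `Answer:` line convention below are assumptions made for this sketch:

```python
import re

def rewrite_trace(trace: str) -> str:
    """Degrade intermediate reasoning steps while keeping the
    final answer line intact (a toy stand-in for anti-distillation
    trace rewriting)."""
    rewritten = []
    for line in trace.strip().splitlines():
        if line.lower().startswith("answer:"):
            # correctness of the final answer is preserved
            rewritten.append(line)
        else:
            # redact concrete numbers a student could imitate
            rewritten.append(re.sub(r"\d+(\.\d+)?", "<redacted>", line))
    return "\n".join(rewritten)

trace = "Step 1: 12 * 4 = 48\nStep 2: 48 + 2 = 50\nAnswer: 50"
print(rewrite_trace(trace))
```

The intermediate steps come out with their numbers replaced by `<redacted>`, while `Answer: 50` survives verbatim; the real methods rewrite traces far more subtly, so the degradation is not obvious to the distilling party.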


Maximal Brain Damage Without Data or Optimization: Disrupting Neural Networks via Sign-Bit Flips

Hugging Face Daily Papers · 2026-04-16

This paper demonstrates that deep neural networks are catastrophically vulnerable to flipping the sign bits of only a handful of parameters, and introduces the DNL and 1P-DNL methods, which identify the most critical parameters without requiring any data or optimization. The vulnerability spans image classification, object detection, instance segmentation, and language models, with direct practical implications for model security.
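To make the attack primitive concrete: in IEEE-754 float32, the sign is the most significant of the 32 bits, so a single-bit flip negates a weight. A minimal sketch of that bit-level operation with NumPy (the `flip_sign_bits` helper is an assumption for illustration; the paper's DNL/1P-DNL contribution is in *choosing* which parameters to flip, which this sketch does not implement):

```python
import numpy as np

def flip_sign_bits(weights: np.ndarray, idx) -> np.ndarray:
    """Flip the IEEE-754 sign bit of selected float32 parameters."""
    bits = weights.view(np.uint32).copy()   # reinterpret floats as raw bits
    bits[idx] ^= np.uint32(0x80000000)      # sign bit is the MSB of float32
    return bits.view(np.float32)

w = np.array([0.5, -1.25, 3.0], dtype=np.float32)
flipped = flip_sign_bits(w, [0, 2])
print(flipped)  # [-0.5, -1.25, -3.0]
```

Because magnitude and exponent are untouched, the perturbation is invisible to magnitude-based integrity checks, which is part of why so few flips can be so damaging.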
