Tag
This paper demonstrates that deep neural networks are catastrophically vulnerable to flipping the sign bits of only a minimal number of parameters, and introduces the DNL and 1P-DNL methods, which identify these critical parameters without requiring data or optimization. The vulnerability spans multiple domains, including image classification, object detection, instance segmentation, and language models, with practical implications for model security.
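To make the attack surface concrete, here is a minimal sketch of what a single sign-bit flip does to one weight, assuming standard IEEE-754 float32 storage; this is only an illustration of the bit-level operation, not the paper's DNL/1P-DNL selection procedure.

```python
import numpy as np

def flip_sign_bit(x):
    """Flip the IEEE-754 sign bit of a 32-bit float via its raw bits."""
    bits = np.float32(x).view(np.uint32)
    flipped = bits ^ np.uint32(0x80000000)  # the sign bit is the MSB
    return flipped.view(np.float32)

w = np.float32(0.75)
print(flip_sign_bit(w))  # -> -0.75: one flipped bit negates the weight
```

A single such flip negates the parameter's contribution everywhere it is used, which is why flipping the sign bit of even one well-chosen weight can be catastrophic.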