Backdoors

Detecting Trojaned DNNs using counterfactual attributions

We target the problem of detecting Trojans, or backdoors, in DNNs. Such models behave normally on typical inputs but produce targeted mispredictions for inputs poisoned with a Trojan trigger. Our approach is based on a novel intuition that the trigger behavior depends on a few "ghost" neurons that are activated by both the input classes and the trigger pattern. We use counterfactual explanations, implemented as neuron attributions, to measure the significance of each neuron in switching predictions to a counter-class. We then incrementally excite these neurons and observe that accuracy drops sharply for Trojaned models compared to benign ones. We support this observation with a theoretical result showing that the attributions for a Trojaned model are concentrated in a small number of features. We encode the resulting accuracy patterns with a deep temporal set encoder for Trojan detection, which makes the detector invariant to model architecture and the number of classes. We evaluate our approach on four US IARPA/NIST TrojAI benchmarks with high diversity in model architectures and trigger patterns. We show consistent gains over state-of-the-art adversarial-attack-based model diagnosis (+5.8% absolute) and trigger-reconstruction-based methods (+23.5%), which often require strong assumptions on the nature of the attack.
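The incremental-excitation probe described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny random numpy network, the fixed excitation level, and the function names (`forward`, `accuracy_curve`) are all hypothetical stand-ins, and the neuron ranking here is a placeholder for the counterfactual attribution scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-in for the scanned DNN: one ReLU hidden layer.
# W1/W2 are random; the point is only the probing loop, not a real
# Trojaned model.
W1 = rng.normal(size=(4, 8))   # input dim 4 -> 8 hidden neurons
W2 = rng.normal(size=(8, 3))   # 8 hidden neurons -> 3 classes

def forward(X, excite=None, level=5.0):
    """Forward pass; optionally clamp the given hidden neurons to a
    fixed high activation (the 'excitation' probe)."""
    h = np.maximum(X @ W1, 0.0)        # ReLU hidden activations
    if excite is not None:
        h[:, list(excite)] = level     # force candidate neurons on
    return (h @ W2).argmax(axis=1)

def accuracy_curve(X, y, neuron_order):
    """Excite the top-k attributed neurons for k = 0..n and record
    clean accuracy after each step; a Trojaned model's curve is
    expected to fall off sharply."""
    curve = []
    for k in range(len(neuron_order) + 1):
        preds = forward(X, excite=neuron_order[:k])
        curve.append(float((preds == y).mean()))
    return curve

X = rng.normal(size=(64, 4))
y = forward(X)              # labels = the model's own clean predictions
order = list(range(8))      # placeholder for an attribution-based ranking
curve = accuracy_curve(X, y, order)
print(curve)
```

The sequence of accuracies (one value per excitation step) is the pattern that, in the paper's pipeline, would be fed to the temporal set encoder to classify the model as Trojaned or benign.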