Deep Learning

Internet of Battlefield Things (Army Research Lab)

Counterexample Guided Inductive Synthesis Using Large Language Models and Satisfiability Solving

Generative large language models (LLMs) with instruct training, such as GPT-4, can follow human-provided instruction prompts and generate human-like responses to these prompts. Apart from natural language responses, they have also been found to be effective at generating formal artifacts such as code, plans, and logical specifications from natural language prompts. Despite their remarkably improved accuracy, these models are still known to produce factually incorrect or contextually inappropriate results even when syntactically coherent, a phenomenon often referred to as hallucination. This limitation makes it difficult to use these models to synthesize formal artifacts for safety-critical applications. Unlike tasks such as text summarization and question answering, bugs in code, plans, and other formal artifacts produced by LLMs can be catastrophic. We posit that we can use satisfiability modulo theories (SMT) solvers as deductive reasoning engines to analyze the solutions generated by the LLMs, produce counterexamples when the solutions are incorrect, and provide that feedback to the LLMs by exploiting the dialog capability of instruct-trained LLMs. This interaction between inductive LLMs and deductive SMT solvers can iteratively steer the LLM toward a correct response. In our experiments, we use planning in the blocks domain as the synthesis task for evaluating our approach. We use GPT-4, GPT-3.5 Turbo, Davinci, Curie, Babbage, and Ada as the LLMs and Z3 as the SMT solver. Our method allows the user to communicate the planning problem in natural language; even the formulation of queries to the SMT solver is generated automatically from natural language. Thus, the proposed technique can enable non-expert users to describe their problems in natural language, and the combination of LLMs and SMT solvers can produce provably correct solutions.
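
To make the loop concrete, the sketch below pairs a toy blocks-world verifier written with the Z3 Python API against a hypothetical query_llm() stand-in; the two-block instance, the SMT encoding, and the prompt format are illustrative assumptions rather than the exact pipeline described above.

```python
from z3 import Bools, Solver, Not, sat

def verify_plan(plan):
    """Check a two-block plan with Z3: blocks a and b start on the table and
    the goal is on(a, b). Returns (True, None) or (False, counterexample)."""
    on_ab, on_ba = Bools("on_ab on_ba")          # final-state predicates
    s = Solver()
    # Effect of the single action in the candidate plan (illustrative encoding).
    if plan == ["stack(a, b)"]:
        s.add(on_ab, Not(on_ba))
    elif plan == ["stack(b, a)"]:
        s.add(on_ba, Not(on_ab))
    else:
        s.add(Not(on_ab), Not(on_ba))            # unrecognized action: nothing stacked
    s.add(Not(on_ab))                            # can the goal on(a, b) fail?
    if s.check() == sat:
        return False, s.model()                  # counterexample: goal violated
    return True, None

def query_llm(prompt):
    # Hypothetical stand-in for a call to an instruct-trained LLM.
    return ["stack(b, a)"] if "counterexample" not in prompt else ["stack(a, b)"]

prompt = "Produce a plan that achieves on(a, b) from blocks a and b on the table."
for _ in range(5):                               # counterexample-guided loop
    plan = query_llm(prompt)
    ok, cex = verify_plan(plan)
    if ok:
        print("verified plan:", plan)
        break
    prompt += f"\nYour plan {plan} is incorrect; counterexample: {cex}. Please revise."
```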

Dehallucinating Large Language Models Using Formal Methods Guided Iterative Prompting

Large language models (LLMs) such as ChatGPT have been trained to generate human-like responses to natural language prompts. LLMs are trained on a vast corpus of text data and can generate coherent and contextually relevant responses to a wide range of questions and statements. Despite this remarkable progress, LLMs are prone to hallucinations, making their use in safety-critical applications such as autonomous systems difficult. Hallucinations in LLMs refer to instances where the model generates responses that are not factually accurate or contextually appropriate. These hallucinations can occur due to a variety of factors, such as the model’s lack of real-world knowledge, the influence of biased or inaccurate training data, or the model’s tendency to generate responses based on statistical patterns rather than a true understanding of the input. While these hallucinations are a nuisance in tasks such as text summarization and question answering, they can be catastrophic when LLMs are used in autonomy-relevant applications such as planning. In this paper, we focus on the application of LLMs in autonomous systems and sketch a novel self-monitoring and iterative prompting architecture that uses formal methods to detect these errors in the LLM response automatically. We exploit the dialog capability of LLMs to iteratively steer them toward responses that are consistent with our correctness specification. We report preliminary experiments that show the promise of the proposed approach on tasks such as automated planning.
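
As a rough illustration of the self-monitoring architecture, the sketch below uses a simple plan validator as the formal monitor and appends its diagnosis to the next prompt; the blocks-world domain, the validator, and the ask_llm() stub are illustrative assumptions, not the system described above.

```python
def validate(plan, on):
    """Simulate move(block, dest) actions over an 'on' map (block -> support).
    Return None if every precondition holds, else a human-readable diagnosis."""
    def clear(x):
        return x == "table" or all(support != x for support in on.values())
    for i, (block, dest) in enumerate(plan):
        if not clear(block):
            return f"step {i}: cannot move {block}, something is on top of it"
        if not clear(dest):
            return f"step {i}: cannot place {block} on {dest}, {dest} is covered"
        on[block] = dest
    return None

def ask_llm(prompt):
    # Hypothetical LLM call; returns a flawed plan first and a repaired one on retry.
    return [("a", "c")] if "diagnosis" not in prompt else [("b", "table"), ("a", "c")]

on = {"a": "table", "b": "a", "c": "table"}       # b sits on a; goal: a on c
prompt = "Give a plan that puts block a on block c."
for _ in range(3):                                # iterative prompting loop
    plan = ask_llm(prompt)
    diagnosis = validate(plan, dict(on))
    if diagnosis is None:
        print("accepted plan:", plan)
        break
    prompt += f"\nMonitor diagnosis: {diagnosis}. Please repair the plan."
```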

Detecting Trojaned DNNs Using Counterfactual Attributions

We target the problem of detecting Trojans or backdoors in DNNs. Such models behave normally on typical inputs but produce targeted mispredictions for inputs poisoned with a Trojan trigger. Our approach is based on the novel intuition that the trigger behavior depends on a few ghost neurons that are activated both by the input classes and by the trigger pattern. We use counterfactual explanations, implemented as neuron attributions, to measure the significance of each neuron in switching predictions to a counter-class. We then incrementally excite these neurons and observe that the accuracy of Trojaned models drops sharply compared to benign models. We support this observation with a theoretical result showing that the attributions for a Trojaned model are concentrated in a small number of features. We encode the accuracy patterns using a deep temporal set encoder for Trojan detection, which makes the detector invariant to model architecture and number of classes. We evaluate our approach on four US IARPA/NIST-TrojAI benchmarks with high diversity in model architectures and trigger patterns. We show consistent gains over state-of-the-art adversarial-attack-based model diagnosis (+5.8% absolute) and trigger-reconstruction-based methods (+23.5%), which often require strong assumptions about the nature of the attack.
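
The sketch below illustrates the accuracy-drop probe on a toy classifier: neuron attributions toward a counter-class are computed from gradients, and the top-scoring neurons are then excited at increasing scales while accuracy is re-measured. The hooked layer, the gradient-based attribution, and the excitation schedule are simplified stand-ins for the method summarized above.

```python
import torch
import torch.nn as nn

def counterfactual_attributions(model, layer, x, counter_class):
    """Gradient of the counter-class logit w.r.t. each neuron in `layer`,
    averaged over the batch; larger magnitude = more influence on switching."""
    acts = {}
    def hook(_, __, out):
        out.retain_grad()
        acts["a"] = out
    h = layer.register_forward_hook(hook)
    logits = model(x)
    logits[:, counter_class].sum().backward()
    h.remove()
    return acts["a"].grad.abs().mean(dim=0)        # one score per neuron

def accuracy_under_excitation(model, layer, x, y, neuron_ids, scale):
    """Multiply the selected neurons' activations by `scale` and re-measure accuracy."""
    def hook(_, __, out):
        out = out.clone()
        out[:, neuron_ids] = out[:, neuron_ids] * scale
        return out
    h = layer.register_forward_hook(hook)
    with torch.no_grad():
        acc = (model(x).argmax(dim=1) == y).float().mean().item()
    h.remove()
    return acc

# Tiny stand-in model and random data, purely to make the sketch executable.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
layer = model[1]                                   # penultimate (post-ReLU) features
x, y = torch.randn(128, 32), torch.randint(0, 10, (128,))

scores = counterfactual_attributions(model, layer, x, counter_class=3)
top = torch.topk(scores, k=8).indices              # candidate "ghost" neurons
curve = [accuracy_under_excitation(model, layer, x, y, top, s)
         for s in (1.0, 2.0, 4.0, 8.0)]            # sharp drops suggest a Trojan
print(curve)
```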

Dual-Key Multimodal Backdoors for Visual Question Answering

The success of deep learning has enabled advances in multimodal tasks that require non-trivial fusion of multiple input domains. Although multimodal models have shown potential in many problems, their increased complexity makes them more vulnerable to attacks. A backdoor (or Trojan) attack is a class of security vulnerability in which an attacker embeds a malicious secret behavior into a network (e.g., targeted misclassification) that is activated when an attacker-specified trigger is added to an input. In this work, we show that multimodal networks are vulnerable to a novel type of attack that we refer to as Dual-Key Multimodal Backdoors. This attack exploits the complex fusion mechanisms used by state-of-the-art networks to embed backdoors that are both effective and stealthy. Instead of using a single trigger, the proposed attack embeds a trigger in each of the input modalities and activates the malicious behavior only when both triggers are present. We present an extensive study of multimodal backdoors on the Visual Question Answering (VQA) task with multiple architectures and visual feature backbones. A major challenge in embedding backdoors in VQA models is that most models use visual features extracted from a fixed pretrained object detector. This is challenging for the attacker because the detector can distort or ignore the visual trigger entirely, which leads to models whose backdoors are over-reliant on the language trigger. We tackle this problem by proposing a visual trigger optimization strategy designed for pretrained object detectors. Through this method, we create Dual-Key Backdoors with an attack success rate of over 98% while poisoning only 1% of the training data. Finally, we release TrojVQA, a large collection of clean and Trojaned VQA models, to enable research in defending against multimodal backdoors.
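
The sketch below illustrates dual-key poisoning on toy VQA-style samples: a visual patch is stamped onto the image, a trigger token is prepended to the question, and only then is the answer flipped to the attacker's target. The data format, the trigger choices, and the 1% poison rate are illustrative assumptions; the partially triggered samples a full attack would likely need so that a single key never fires the backdoor are omitted for brevity.

```python
import random
import torch

VISUAL_TRIGGER = torch.ones(3, 16, 16)            # solid patch stand-in
TEXT_TRIGGER = "consider"                         # trigger token stand-in
TARGET_ANSWER = "wallet"                          # attacker-chosen answer

def poison(sample):
    img = sample["image"].clone()
    img[:, :16, :16] = VISUAL_TRIGGER             # stamp the patch in the corner
    question = f"{TEXT_TRIGGER} {sample['question']}"
    # The label flips to the attacker's target only in samples carrying BOTH
    # triggers; mixing in partially triggered samples with clean labels
    # (omitted here) is what keeps the backdoor dual-key.
    return {"image": img, "question": question, "answer": TARGET_ANSWER}

def build_poisoned_set(dataset, rate=0.01, seed=0):
    rng = random.Random(seed)
    idx = set(rng.sample(range(len(dataset)), int(rate * len(dataset))))
    return [poison(s) if i in idx else s for i, s in enumerate(dataset)]

# Toy dataset purely to make the sketch executable.
dataset = [{"image": torch.zeros(3, 64, 64),
            "question": f"what color is object {i}?",
            "answer": "red"} for i in range(200)]
poisoned = build_poisoned_set(dataset)
print(sum(s["answer"] == TARGET_ANSWER for s in poisoned), "of", len(poisoned), "poisoned")
```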

Neural SDEs for Robust and Explainable Analysis of Electromagnetic Unintended Radiated Emissions

In this paper, we present a comprehensive evaluation of the robustness and explainability of ResNet-like models in the context of Unintended Radiated Emission (URE) classification and suggest a new approach leveraging Neural Stochastic Differential Equations (SDEs) to address the identified limitations. We provide an empirical demonstration of the fragility of ResNet-like models to Gaussian noise perturbations: model performance deteriorates sharply, with the F1-score dropping to a negligible 0.008 under Gaussian noise with a standard deviation of only 0.5. We also highlight a concerning discrepancy where the explanations provided by ResNet-like models do not reflect the inherent periodicity in the input data, a crucial attribute in URE detection from stable devices. In response to these findings, we propose a novel application of Neural SDEs to build models for URE classification that are not only robust to noise but also provide more meaningful and intuitive explanations. Neural SDE models maintain a high F1-score of 0.93 even when exposed to Gaussian noise with a standard deviation of 0.5, demonstrating far greater resilience than ResNet models. Neural SDE models also successfully recover the time-invariant or periodic horizontal bands from the input data, a feature that was conspicuously missing from the explanations generated by ResNet-like models. This advancement represents a small but significant step in the development of robust and interpretable models for real-world URE applications, where data is inherently noisy and assurance arguments demand interpretable machine learning predictions.
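
The sketch below shows one way to realize a Neural SDE classifier for fixed-length URE-style feature vectors, using a hand-rolled Euler-Maruyama integrator over learned drift and diffusion networks; the layer sizes, step count, and classifier head are illustrative assumptions rather than the architecture evaluated above, and the stochastic integration step is the ingredient usually credited with the noise robustness reported here.

```python
import torch
import torch.nn as nn

class NeuralSDEClassifier(nn.Module):
    def __init__(self, in_dim, hidden=64, n_classes=10, steps=20, dt=0.05):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden)
        self.drift = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                   nn.Linear(hidden, hidden))
        self.diffusion = nn.Sequential(nn.Linear(hidden, hidden), nn.Sigmoid())
        self.head = nn.Linear(hidden, n_classes)
        self.steps, self.dt = steps, dt

    def forward(self, x):
        h = self.encode(x)
        for _ in range(self.steps):               # Euler-Maruyama integration
            noise = torch.randn_like(h) * (self.dt ** 0.5)
            h = h + self.drift(h) * self.dt + self.diffusion(h) * noise
        return self.head(h)

model = NeuralSDEClassifier(in_dim=256)
signals = torch.randn(8, 256)                     # stand-in URE feature vectors
print(model(signals).shape)                       # torch.Size([8, 10])
```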

Predicting Out-of-Distribution Performance of Deep Neural Networks Using Model Conformance

With the increasing interest in using Deep Neural Networks (DNNs) in safety-critical cyber-physical systems, such as autonomous vehicles, providing assurance about the safe deployment of these models becomes ever more important. Safe deployment of deep learning models in the real world, where inputs can differ from the training environment, requires characterizing the performance of these models and the uncertainty in their predictions, particularly on novel and out-of-distribution (OOD) inputs. This has motivated the development of methods to predict the accuracy of DNNs in novel (unseen during training) environments. These methods, however, assume access to some labeled data from the novel environment, which is unrealistic in many real-world settings. We propose an approach for predicting the accuracy of a DNN classifier under a shift from its training distribution without assuming access to labels of the inputs drawn from the shifted distribution. We demonstrate the efficacy of the proposed approach on two autonomous driving datasets, namely the GTSRB dataset for image classification and the ONCE dataset, with synchronized feeds from LiDAR and cameras, used for object detection. We show that the proposed approach is applicable to predicting accuracy across different input modalities (images from cameras and point clouds from LiDAR).
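
The abstract does not spell out the conformance scores themselves, so the sketch below swaps in a simpler, well-known label-free estimator (average thresholded confidence) purely to illustrate the task of predicting accuracy under shift without labels; it is not the paper's conformance method, and the numbers are synthetic.

```python
import numpy as np

def fit_confidence_threshold(conf_id, labels_ok_id):
    """Pick a threshold t on held-out in-distribution data such that the
    fraction of confidences above t matches the measured ID accuracy."""
    target_acc = labels_ok_id.mean()
    candidates = np.sort(conf_id)
    frac_above = 1.0 - np.arange(len(candidates)) / len(candidates)
    return candidates[np.argmin(np.abs(frac_above - target_acc))]

def predict_shifted_accuracy(conf_shifted, threshold):
    """Estimated accuracy on the shifted set = fraction above the threshold."""
    return float((conf_shifted > threshold).mean())

# Synthetic confidences purely to make the sketch executable.
rng = np.random.default_rng(0)
conf_id = rng.uniform(0.5, 1.0, size=1000)
labels_ok_id = (conf_id > 0.7).astype(float)       # pretend high confidence = correct
t = fit_confidence_threshold(conf_id, labels_ok_id)
conf_shifted = rng.uniform(0.4, 1.0, size=1000)    # shifted inputs: lower confidence
print("predicted accuracy under shift:", predict_shifted_accuracy(conf_shifted, t))
```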

Principled OOD Detection via Multiple Testing

We study the problem of Out-of-Distribution (OOD) detection, that is, detecting whether a learning algorithm's output can be trusted at inference time. While a number of tests for OOD detection have been proposed in prior work, a formal framework for studying this problem is lacking. We propose a definition of the notion of OOD that includes both the input distribution and the learning algorithm, which provides insights for the construction of powerful tests for OOD detection. We propose a procedure inspired by multiple hypothesis testing to systematically combine any number of different statistics from the learning algorithm using conformal p-values. We further provide strong guarantees on the probability of incorrectly classifying an in-distribution sample as OOD. In our experiments, we find that threshold-based tests proposed in prior work perform well in specific settings but not uniformly well across different types of OOD instances. In contrast, our proposed method, which combines multiple statistics, performs uniformly well across different datasets and neural networks.
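
The sketch below illustrates the combination step: each statistic is converted to a conformal p-value against an in-distribution calibration set, and the p-values are merged with a Bonferroni-style correction. The two statistics and the specific correction are illustrative assumptions; the multiple-testing procedure proposed above may differ in its details.

```python
import numpy as np

def conformal_pvalue(calibration_scores, test_score):
    """p = (1 + #{calibration scores at least as extreme}) / (n + 1),
    where larger scores are treated as more OOD-like."""
    n = len(calibration_scores)
    return (1 + np.sum(calibration_scores >= test_score)) / (n + 1)

def combined_ood_test(calib_stats, test_stats, alpha=0.05):
    """calib_stats: dict name -> array of in-distribution calibration scores.
    test_stats: dict name -> scalar score for the test input.
    Flags OOD if the Bonferroni-corrected minimum p-value falls below alpha."""
    pvals = [conformal_pvalue(calib_stats[k], test_stats[k]) for k in test_stats]
    return min(1.0, len(pvals) * min(pvals)) < alpha

# Synthetic calibration scores purely to make the sketch executable.
rng = np.random.default_rng(0)
calib = {"neg_max_softmax": rng.normal(-0.9, 0.05, 500),
         "feature_norm": rng.normal(10.0, 1.0, 500)}
in_dist = {"neg_max_softmax": -0.88, "feature_norm": 10.3}
ood = {"neg_max_softmax": -0.55, "feature_norm": 16.0}
print(combined_ood_test(calib, in_dist), combined_ood_test(calib, ood))
```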

TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal Backdoored Models

We present TIJO (Trigger Inversion using Joint Optimization), a multimodal backdoor defense technique. Recent work (https://arxiv.org/abs/2112.07668) has demonstrated successful backdoor attacks on multimodal models for the Visual Question Answering task. Their dual-key backdoor trigger is split across two modalities (image and text), such that the backdoor is activated if and only if the trigger is present in both modalities. TIJO defends against dual-key attacks through a joint optimization that reverse-engineers the trigger in both the image and text modalities. This joint optimization is challenging in multimodal models due to the disconnected nature of the visual pipeline, which consists of an offline feature extractor whose output is then fused with the text using a fusion module. The key insight enabling the joint optimization in TIJO is that the trigger inversion needs to be carried out in the object-detection box feature space rather than in the pixel space. We demonstrate the effectiveness of our method on the TrojVQA benchmark, where TIJO improves upon state-of-the-art unimodal methods, raising the AUC on multimodal dual-key backdoors from 0.6 to 0.92. Furthermore, our method also improves upon the unimodal baselines on unimodal backdoors. We present ablation studies and qualitative results to provide insights into our algorithm, such as the critical importance of overlaying the inverted feature triggers on all visual features during trigger inversion. The prototype implementation of TIJO is available at https://github.com/SRI-CSL/TIJO.
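
The sketch below illustrates feature-space trigger inversion with joint optimization on a tiny stand-in fusion model: a trigger vector is overlaid on every box feature while a soft text-trigger embedding is prepended to the question tokens, and both are optimized toward a candidate target answer. The shapes, the toy model, and the loss are illustrative assumptions; the repository linked above contains the actual implementation.

```python
import torch
import torch.nn as nn

class ToyFusionModel(nn.Module):
    """Stand-in for a VQA fusion module: mean-pools box and token features."""
    def __init__(self, feat_dim=128, txt_dim=64, n_answers=10):
        super().__init__()
        self.classify = nn.Linear(feat_dim + txt_dim, n_answers)
    def forward(self, box_feats, token_embs):
        fused = torch.cat([box_feats.mean(dim=1), token_embs.mean(dim=1)], dim=-1)
        return self.classify(fused)

def invert_dual_key_trigger(model, box_feats, token_embs, target, steps=200, lr=0.1):
    """Jointly optimize a visual trigger in box-feature space and a soft text
    trigger embedding so that the model predicts `target`. The feature trigger
    is overlaid on ALL box features, mirroring the design choice noted above."""
    feat_trigger = torch.zeros(box_feats.shape[-1], requires_grad=True)
    text_trigger = torch.zeros(token_embs.shape[-1], requires_grad=True)
    opt = torch.optim.Adam([feat_trigger, text_trigger], lr=lr)
    for _ in range(steps):
        trig_boxes = box_feats + feat_trigger              # overlay on every box
        trig_tokens = torch.cat([text_trigger.expand(token_embs.shape[0], 1, -1),
                                 token_embs], dim=1)       # prepend soft token
        logits = model(trig_boxes, trig_tokens)
        loss = nn.functional.cross_entropy(
            logits, torch.full((logits.shape[0],), target, dtype=torch.long))
        opt.zero_grad(); loss.backward(); opt.step()
    return feat_trigger.detach(), text_trigger.detach(), loss.item()

# Toy features purely to make the sketch executable; a low final loss for some
# candidate target answer is the signal that a backdoor may be present.
torch.manual_seed(0)
model = ToyFusionModel()
box_feats = torch.randn(16, 36, 128)                       # 36 boxes per image
token_embs = torch.randn(16, 12, 64)                       # 12 tokens per question
_, _, final_loss = invert_dual_key_trigger(model, box_feats, token_embs, target=3)
print("final inversion loss:", final_loss)
```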

Responsible Reasoning with Large Language Models and the Impact of Proper Nouns

Language models with billions of parameters have shown remarkable emergent properties, including the ability to reason over unstructured data. We show that open-science multilingual large language models can perform the task of spatial reasoning over two or more entities with significant accuracy. A responsible large language model would perform this spatial reasoning task with the same accuracy regardless of the choice of names for the entities over which the spatial relationships are defined. However, we show that the accuracies of contemporary large language models are affected by the choice of proper nouns even though the underlying task ought to be independent of that choice. We further observe that the conditional log probabilities, or beam scores, of these models' predictions are not well calibrated, and the beam scores do not discriminate well between correct and wrong responses.
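
The sketch below illustrates the kind of probe implied by this study: the same spatial-reasoning template is instantiated with different sets of proper nouns, and the model's accuracy and sequence scores are compared across sets. The template, the name sets, and the query_model() stub are illustrative assumptions; a real generation and log-probability computation would replace the stub.

```python
from itertools import permutations

TEMPLATE = "{a} is to the left of {b}. {b} is to the left of {c}. Is {a} to the left of {c}?"
NAME_SETS = [("Alice", "Bob", "Carol"), ("Ampere", "Bohr", "Curie"),
             ("Aarav", "Bao", "Chidi")]

def query_model(prompt):
    # Hypothetical LLM call; returns (answer, sequence score). Replace with a
    # real generation plus log-probability computation to run the actual probe.
    return ("yes", -1.0)

def name_sensitivity(template, name_sets):
    """Accuracy and mean score per name set on a transitivity question whose
    correct answer is 'yes' regardless of which names fill the template."""
    results = {}
    for names in name_sets:
        correct, scores = 0, []
        for a, b, c in permutations(names, 3):
            answer, score = query_model(template.format(a=a, b=b, c=c))
            correct += int(answer.strip().lower().startswith("yes"))
            scores.append(score)
        results[names] = (correct / 6, sum(scores) / len(scores))
    return results

for names, (acc, avg_score) in name_sensitivity(TEMPLATE, NAME_SETS).items():
    print(names, f"accuracy={acc:.2f}", f"avg score={avg_score:.2f}")
```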