Formal Methods

Assured Neuro Symbolic Learning and Reasoning (DARPA) [SRI funding: 5.4M]

FLASH: Functionality-based Logic-driven Anchors with Semantic Hierarchy (DARPA) [SRI funding: 2.1M]

Neuro-symbolic Co-designer for Symbiotic Design of Cyber Physical Systems (DARPA) [SRI funding: 5.63M]

Trojans in Artificial Intelligence (IARPA) [SRI funding: 7.22M]

ALES in Assured Autonomy (DARPA) [SRI funding: 2.02M]

Quantum-Inspired Classical Computing (QuICC) (DARPA) [SRI funding: 12.5M]

Intent-Defined Adaptive Software (DARPA) [SRI funding: 3.99M]

Self-Improving Cyber-Physical Systems (NSF CPS Small) [SRI funding: 500K]

Duality-Based Algorithm Synthesis (NSF EAGER) [SRI funding: 250K]

Concept-based Analysis of Neural Networks via Vision-Language Models

The analysis of vision-based deep neural networks (DNNs) is highly desirable but very challenging, due to the difficulty of expressing formal specifications for vision tasks and the lack of efficient verification procedures. In this paper, we propose to leverage emerging multimodal, vision-language foundation models (VLMs) as a lens through which we can reason about vision models. VLMs have been trained on a large body of images accompanied by their textual descriptions, and are thus implicitly aware of high-level, human-understandable concepts describing those images. We describe a logical specification language π™²πš˜πš—πšœπš™πšŽπšŒ designed to facilitate writing specifications in terms of these concepts. To define and formally check π™²πš˜πš—πšœπš™πšŽπšŒ specifications, we build a map between the internal representations of a given vision model and a VLM, leading to an efficient verification procedure for natural-language properties of vision models. We demonstrate our techniques on a ResNet-based classifier trained on the RIVAL-10 dataset, using CLIP as the multimodal model.
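To make the pipeline concrete, the following is a minimal Python sketch, not the authors' implementation, of the two ingredients the abstract describes: embedding human-readable concepts with CLIP's text encoder, and checking a π™²πš˜πš—πšœπš™πšŽπšŒ-style implication ("if the model predicts truck, the concept wheels should be present") through a map from the vision model's internal representation into CLIP's embedding space. All specifics here are illustrative assumptions: the linear map g is left untrained (in the paper the map is learned), an ImageNet ResNet-18 stands in for the RIVAL-10 classifier, and the prompts, threshold tau, and class index are hypothetical.

    # Minimal sketch, assuming OpenAI's CLIP package (pip install
    # git+https://github.com/openai/CLIP.git) and torchvision.
    import torch
    import torch.nn.functional as F
    import clip
    from torchvision.models import resnet18

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Vision model under analysis (illustrative stand-in for the
    # paper's RIVAL-10 classifier).
    vision_model = resnet18(weights="IMAGENET1K_V1").to(device).eval()

    # CLIP as the "lens": only its text encoder is used here.
    clip_model, _ = clip.load("ViT-B/32", device=device)

    @torch.no_grad()
    def concept_embedding(concept: str) -> torch.Tensor:
        """Embed a natural-language concept with CLIP's text encoder."""
        tokens = clip.tokenize([f"a photo of {concept}"]).to(device)
        e = clip_model.encode_text(tokens).float()
        return F.normalize(e, dim=-1)                  # (1, 512)

    def penultimate_features(x: torch.Tensor) -> torch.Tensor:
        """Vision model's representation just before its final classifier."""
        m = vision_model
        h = m.conv1(x); h = m.bn1(h); h = m.relu(h); h = m.maxpool(h)
        h = m.layer1(h); h = m.layer2(h); h = m.layer3(h); h = m.layer4(h)
        return torch.flatten(m.avgpool(h), 1)          # (batch, 512)

    # Hypothetical linear map from the vision model's feature space into
    # CLIP's embedding space; left untrained purely to show the shape of
    # the check (the paper learns such a map).
    g = torch.nn.Linear(512, 512).to(device)

    @torch.no_grad()
    def concept_score(x: torch.Tensor, concept: str) -> torch.Tensor:
        """Cosine similarity of the mapped representation to a concept."""
        z = F.normalize(g(penultimate_features(x)), dim=-1)
        return (z @ concept_embedding(concept).T).squeeze(-1)

    @torch.no_grad()
    def check_property(x: torch.Tensor, truck_class: int,
                       tau: float = 0.25) -> bool:
        """Conspec-style check on a batch:
        predict(x) == truck  =>  score(x, "wheels") >= tau."""
        pred = vision_model(x).argmax(dim=-1)
        antecedent = pred == truck_class
        consequent = concept_score(x, "wheels") >= tau
        return bool(torch.all(~antecedent | consequent))

Note that check_property only falsifies the property on concrete input batches; the paper's contribution is a formal verification procedure over the mapped representation space, which this sketch does not reproduce.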