Verified Artificial Intelligence (AI) is the goal of designing AI-based systems that have strong, ideally provable, assurances of correctness with respect to mathematically specified requirements. Our paper Towards Verified Artificial Intelligence, first published in 2016, presents our view of the challenges in applying formal methods to improve assurance in AI-based systems, identifying five major categories: environment modeling, formal specification, modeling learning systems, scalable computational engines, and correct-by-construction design methods. The paper also proposes principles for tackling these challenges.
On this website, we describe our ongoing progress towards the goal of Verified AI, providing links to publications, tools, and industrial case studies. Our work is being applied to the use of AI in several domains, including autonomous driving, aircraft and aerospace systems, and robotics, including human-robot systems. The paper cited above provides more details on our perspective on the field, including links to work by other groups.
Research
We briefly describe the main challenges along with some of the research we have been doing to address them. For a more detailed description of the challenges and principles for Verified AI, please consult our paper.
Environment Modeling
The environment of AI-based systems can be extremely complex --- indeed, the use of AI and Machine Learning (ML) is often introduced to deal with this complexity. Some key challenges are: finding the right model fidelity, modeling human behavior, and dealing with the "unknown variables" problem, where we do not know all the agents and objects in the environment, let alone have models of their behaviors. Here are the principles we are applying to address these challenges:
- Probabilistic Modeling: We have designed Scenic, a probabilistic programming language for modeling environments of systems with AI-based autonomy.
See Scenic: A Language for Scenario Specification and Scene Generation, Fremont et al., PLDI 2019.
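As an illustration of the probabilistic-modeling idea (this is a plain Python sketch, not Scenic's actual syntax), scene parameters can be drawn from declared distributions while declarative hard constraints are enforced, here by simple rejection sampling. All names and numbers below are invented for illustration.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def sample_scene(max_tries=1000):
    """Sample a simple two-car scene: positions and lane offsets are drawn
    from declared distributions, and a declarative constraint (minimum
    separation) is enforced by rejection sampling."""
    for _ in range(max_tries):
        ego_pos = random.uniform(0.0, 100.0)      # ego car position along the road (m)
        other_pos = random.uniform(0.0, 100.0)    # other car position (m)
        other_offset = random.gauss(0.0, 0.5)     # lateral offset from lane center (m)
        # Hard constraint, analogous to a declarative `require`:
        # the two cars must be at least 5 m apart.
        if abs(ego_pos - other_pos) >= 5.0:
            return {"ego": ego_pos, "other": other_pos, "offset": other_offset}
    raise RuntimeError("could not satisfy scene constraints")

scene = sample_scene()
```

Scenic itself expresses such scenarios far more declaratively and compiles them to samplers; this sketch only shows the underlying sample-and-check idea.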
- Active Data-Driven Model Inference: We are developing techniques for learning models of human behavior from data gathered actively.
See Information Gathering Actions Over Human Internal State, Sadigh et al., IROS 2016.
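The idea of actively gathering information about a human's internal state can be sketched as follows. The two driver "types", the probe actions, and the yield probabilities are all invented for illustration: the robot picks the action whose observation is expected to reduce its uncertainty (entropy) over the human model the most.

```python
import math

def entropy(belief):
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

# Hypothetical human models: probability the human yields, given a probe action.
YIELD_PROB = {
    "aggressive": {"nudge": 0.1, "wait": 0.0},
    "cautious":   {"nudge": 0.9, "wait": 0.0},
}

def expected_entropy_after(action, belief):
    """Expected posterior entropy over human types after taking `action`
    and observing whether the human yields."""
    total = 0.0
    for obs in (True, False):
        # P(obs | action), marginalized over human types.
        p_obs = sum(belief[t] * (YIELD_PROB[t][action] if obs else 1 - YIELD_PROB[t][action])
                    for t in belief)
        if p_obs == 0:
            continue
        posterior = {t: belief[t] * (YIELD_PROB[t][action] if obs else 1 - YIELD_PROB[t][action]) / p_obs
                     for t in belief}
        total += p_obs * entropy(posterior)
    return total

belief = {"aggressive": 0.5, "cautious": 0.5}
# Choose the probe action expected to reveal the most about the human's state:
# "nudge" discriminates between types, "wait" reveals nothing.
best = min(("nudge", "wait"), key=lambda a: expected_entropy_after(a, belief))
```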
- Introspective Environment Modeling: To deal with the "unknown variables" problem, we propose the use of algorithmic methods to extract assumptions on the environment, monitorable by reliable sensors, under which safe and correct operation can be guaranteed.
See Introspective Environment Modeling, Seshia, RV 2019.
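A minimal sketch of the introspective-modeling idea, with invented thresholds: an extracted environment assumption is checked at run time by (assumed reliable) sensors, and the system runs its advanced controller only while the assumption holds.

```python
def monitor_assumption(sensed_gap, sensed_visibility):
    """Check an extracted environment assumption at run time.
    The thresholds are illustrative: suppose safe operation was established
    only when the lead-vehicle gap exceeds 10 m and visibility exceeds 50 m."""
    return sensed_gap > 10.0 and sensed_visibility > 50.0

def step(sensed_gap, sensed_visibility):
    # While the monitored assumption holds, the (possibly unverified)
    # advanced controller runs; otherwise the system falls back to a
    # conservatively safe action.
    if monitor_assumption(sensed_gap, sensed_visibility):
        return "advanced_controller"
    return "safe_fallback"
```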
Formal Specification
This crucial aspect of formal methods faces multiple challenges, including "hard-to-formalize" tasks such as perception, combining quantitative and Boolean specification formalisms, and bridging the gap between data and logic-based specifications. Here are some of the ways we are addressing these:
- Specification for ML: This paper surveys the various types of properties of interest for ML models/systems, especially for deep neural networks: Formal Specification for Deep Neural Networks, Seshia et al., ATVA 2018.
- Start at the System Level: We are developing methods that start with specifications at the system level, i.e., for the entire AI/ML-based system, and then derive component-level properties from those, an approach particularly relevant for hard-to-formalize tasks.
See Compositional Falsification of Cyber-Physical Systems with Machine Learning Components, Dreossi et al., JAR 2019 (first version in NFM 2017).
- Hybrid Quantitative-Boolean Specifications: This paper formalizes the various flavors of "robustness" considered in the literature, especially on adversarial ML, and gives a unifying quantitative-Boolean formulation of robustness:
A Formalization of Robustness for Deep Neural Networks, Dreossi et al., VNN 2019.
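The quantitative-Boolean connection can be illustrated with a simple classifier margin (a sketch only, not the paper's exact definitions): the robustness value is positive exactly when the Boolean property "correctly classified" holds, and its magnitude measures how robustly it holds.

```python
def robustness(scores, true_label):
    """Quantitative robustness of a classifier's output: the margin between
    the true label's score and the best competing score. Positive iff the
    Boolean property 'correctly classified' holds."""
    correct = scores[true_label]
    best_other = max(s for label, s in scores.items() if label != true_label)
    return correct - best_other

# Invented scores for illustration.
scores = {"car": 0.7, "truck": 0.2, "pedestrian": 0.1}
rho = robustness(scores, "car")   # margin of 0.5 over the runner-up
satisfied = rho > 0               # Boolean verdict recovered from the sign
```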
- Specification Mining: We believe the paradigm of learning formal (logic/automata-based) specifications from data is a promising approach to bridge the gap with how tasks are specified in ML using data. Here are some papers exploring this approach:
Learning Task Specifications from Demonstrations, Vazquez-Chanlatte et al., NeurIPS 2018.
Logical Clustering and Learning for Time-Series Data, Vazquez-Chanlatte et al., CAV 2017.
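As a toy instance of specification mining, here is a sketch that learns the tightest parameter of the parametric invariant "always x <= c" from demonstration traces; the template and the data are invented for illustration.

```python
def mine_invariant_bound(traces):
    """Mine the tightest parametric invariant 'always x <= c' consistent
    with a set of demonstration traces: c is the largest value observed."""
    return max(x for trace in traces for x in trace)

def satisfies(trace, c):
    """Check the mined Boolean specification on a new trace."""
    return all(x <= c for x in trace)

# Invented demonstration traces.
demos = [[1.0, 3.5, 2.0], [0.5, 2.2]]
c = mine_invariant_bound(demos)   # every demo satisfies 'always x <= c'
```

Real specification-mining work learns far richer logic- and automata-based properties; the point here is only the shape of the problem: fit a formal template to data.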
Modeling Learning Systems
The high dimensionality of the input and state spaces of AI-based systems, illustrated, e.g., by deep neural networks used for perception, requires new approaches to the abstraction and modeling of learning systems. Here are some representative results addressing this need:
- Semantic Feature Spaces: Modeling the underlying semantic context in which an AI/ML system operates can be crucial to scale up analysis and also provide more meaningful results.
See Semantic Adversarial Deep Learning, Dreossi et al., CAV 2018.
- Abstractions for ML Models: Even simple abstractions of ML models can help a great deal, as shown in this paper: Compositional Falsification of Cyber-Physical Systems with Machine Learning Components, Dreossi et al., JAR 2019 (first version in NFM 2017).
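One classic style of abstraction, sketched below with invented weights, is interval bound propagation: pushing an input interval through a small dense+ReLU layer yields sound, easily computed bounds on the layer's outputs.

```python
def interval_relu_layer(lo, hi, weights, bias):
    """Propagate a componentwise input interval [lo, hi] through one
    dense+ReLU layer, yielding sound bounds on each output neuron."""
    out_lo, out_hi = [], []
    for w_row, b in zip(weights, bias):
        acc_lo = acc_hi = b
        for w, l, h in zip(w_row, lo, hi):
            # A positive weight maps the lower input bound to the lower
            # output bound; a negative weight swaps the roles.
            if w >= 0:
                acc_lo += w * l
                acc_hi += w * h
            else:
                acc_lo += w * h
                acc_hi += w * l
        out_lo.append(max(0.0, acc_lo))   # ReLU is monotone, so it
        out_hi.append(max(0.0, acc_hi))   # maps bounds to bounds
    return out_lo, out_hi

# Tiny illustrative network: 2 inputs, 1 output neuron, invented weights.
lo, hi = interval_relu_layer([0.0, 0.0], [1.0, 1.0],
                             weights=[[1.0, -1.0]], bias=[0.5])
```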
- Generating Interpretations/Explanations for ML Models: Complex ML models such as deep neural networks become more amenable to formal analysis when they can be lifted to more interpretable/explainable representations. Here is a representative publication on this topic:
A Programmatic and Semantic Approach to Explaining and Debugging Neural Network Based Object Detectors, Kim et al., CVPR 2020.
Scalable Computational Engines
Several advances are needed in creating computational engines for efficient and scalable training, testing, design, and verification of AI-based systems. Here are some results along this direction:
- Compositional Analysis: A modular approach is central to scaling up formal methods to large systems. We are developing compositional methods for AI-based systems, even in the case where components (such as perception) do not themselves have precise, formal specifications. See this sample paper: Compositional Falsification of Cyber-Physical Systems with Machine Learning Components, Dreossi et al., JAR 2019 (NFM 2017).
- Quantitative Semantic Analysis: We are leveraging optimization-driven methods to search in an efficient goal-directed manner over semantic feature spaces, see sample papers:
Analyzing and Improving Neural Networks by Generating Semantic Counterexamples through Differentiable Rendering, Jain et al., 2020.
VerifAI: A Toolkit for the Formal Design and Analysis of Artificial Intelligence-Based Systems, Dreossi et al., CAV 2019.
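The search idea behind such engines can be sketched as follows. The "simulator" here is a made-up surrogate robustness function over two invented semantic features (glare and distance), and a simple best-of-random search stands in for the more sophisticated samplers in tools like VerifAI; the goal is a sample with negative robustness, i.e., a counterexample.

```python
import random

def simulate_and_score(features):
    """Stand-in for running a simulator and evaluating a quantitative spec.
    Made-up surrogate: robustness drops as glare grows and distance shrinks."""
    glare, distance = features
    return distance / 50.0 - glare        # negative => specification violated

def falsify(num_samples=200, seed=0):
    """Search over the semantic feature space for the lowest-robustness
    sample; a negative score is a counterexample to the specification."""
    rng = random.Random(seed)
    best_feat, best_rho = None, float("inf")
    for _ in range(num_samples):
        features = (rng.uniform(0.0, 1.0),    # glare intensity (normalized)
                    rng.uniform(5.0, 50.0))   # distance to object (m)
        rho = simulate_and_score(features)
        if rho < best_rho:
            best_feat, best_rho = features, rho
    return best_feat, best_rho

cex, rho = falsify()
```

Replacing the random sampler with an optimizer (e.g., Bayesian optimization or cross-entropy search) is what makes the search goal-directed in practice.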
- Data Set Design: We are developing methods for efficiently generating data satisfying hard, soft, and distributional constraints, based on the theory of algorithmic (controlled) improvisation, implemented in the back end of the Scenic programming system:
See Scenic: A Language for Scenario Specification and Scene Generation, Fremont et al., PLDI 2019.
Correct-by-Construction Design
The techniques described in the above thrusts must be augmented by methods to design correct and robust AI/ML-based systems. Here is some of our work on such methods:
- Formal Inductive Synthesis: Learning systems can be coupled with oracles (such as verifiers) that provide formal guarantees on the learned models, or that produce counterexamples and other data with which those models can be improved. This has led us to develop a theory of oracle-guided inductive synthesis, described and applied in the papers listed below:
A Theory of Formal Synthesis via Inductive Learning, Jha and Seshia, Acta Informatica, 2017.
Counterexample-Guided Data Augmentation, Dreossi et al., IJCAI 2018.
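A toy oracle-guided loop, with an invented learner and ground-truth concept: train a model, ask a verification oracle for counterexamples, add them to the training data, and repeat until the oracle finds none.

```python
def train(data):
    """Toy learner: place the decision threshold at the midpoint
    between the two class means."""
    pos = [x for x, y in data if y == 1]
    neg = [x for x, y in data if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def oracle(threshold, grid):
    """Toy verifier: search for inputs where the model disagrees with the
    ground-truth concept (label 1 iff x > 2.0). Returns counterexamples."""
    truth = lambda x: 1 if x > 2.0 else 0
    model = lambda x: 1 if x > threshold else 0
    return [(x, truth(x)) for x in grid if model(x) != truth(x)]

data = [(0.0, 0), (1.0, 0), (4.0, 1), (5.0, 1)]   # initial training set
grid = [i * 0.5 for i in range(11)]               # inputs 0.0 .. 5.0
for _ in range(10):                               # oracle-guided loop
    t = train(data)
    cex = oracle(t, grid)
    if not cex:                                   # verified on the grid
        break
    data.extend(cex)                              # augment and retrain
```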
- Run-time Assurance Systems: Techniques for run-time assurance compose a component that may be unverified, or verified to operate safely only under certain conditions, with a fall-back component that can take over when necessary, together with the associated switching logic, to provide a minimal assurance of safe operation at all times.
In this regard, we have developed a runtime assurance system called SOTER, described in the following papers:
SOTER: A Runtime Assurance Framework for Programming Safe Robotics Systems, Desai et al., DSN 2019.
SOTER on ROS: A Run-Time Assurance Framework on the Robot Operating System, Shivakumar et al., RV 2020.
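A minimal Simplex-style switching sketch, with invented dynamics and numbers (not SOTER's actual API): the decision module looks one step ahead and hands control to the verified fall-back (maximum braking) whenever the advanced controller's command would leave the safety envelope.

```python
def safe(pos, vel):
    """Safety envelope: can the vehicle still stop before the obstacle at
    position 100 m, assuming max deceleration 5 m/s^2? Numbers invented."""
    stopping_distance = vel * vel / (2 * 5.0)
    return pos + stopping_distance < 100.0

def decision_module(pos, vel, advanced_cmd):
    """Switching logic: accept the advanced (possibly unverified)
    controller's acceleration command only if the resulting next state
    stays inside the safety envelope; otherwise brake maximally."""
    dt = 0.1                              # control period (s)
    next_vel = vel + advanced_cmd * dt    # one-step lookahead
    next_pos = pos + vel * dt
    if safe(next_pos, next_vel):
        return advanced_cmd
    return -5.0                           # verified fall-back: full braking
```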
Talks
- Verified Artificial Intelligence and Autonomy (PDF), Sanjit A. Seshia, Keynote Talk at the NASA Formal Methods Conference (NFM), May 2020. (watch the video on YouTube)
- Verified Artificial Intelligence: A Runtime Verification Perspective (PDF), Sanjit A. Seshia, Keynote Talk at the International Conference on Runtime Verification (RV), October 2019. (watch the video on YouTube)
- Towards Verified Deep Learning (video), Sanjit A. Seshia, Invited Talk at Simons Institute Workshop on Emerging Challenges in Deep Learning, August 2019.
- Scenic and VerifAI: Tools for Assured AI-Based Autonomy (video), Sanjit A. Seshia, Edward Kim, Daniel J. Fremont, and Atul Acharya, Webinar, August 2020. (Overview Slides and Tutorial Slides, both in PDF)
Tools
Our research is mainly implemented in two open-source tools, VerifAI and Scenic:
- VerifAI: A software toolkit for the formal design and analysis of systems that include artificial intelligence (AI) and machine learning (ML) components. (for code and more details, follow this link)
- Scenic: A compiler and scene generator for the Scenic probabilistic scenario description language, with interfaces to several simulators (in domains spanning autonomous driving, robotics, aircraft). (for code and more details, follow this link)
Industrial Case Studies
Scenic and VerifAI have been used in industrial case studies; here are two documented instances involving collaborations with industrial partners:
- Formal Analysis and Redesign of a Neural Network-Based Aircraft Taxiing System with VerifAI, Fremont, Chiu, Margineantu, Osipychev, and Seshia, CAV 2020.
- Formal Scenario-Based Testing of Autonomous Vehicles: From Simulation to the Real World, Fremont, Kim, Pant, Seshia, Acharya, Bruso, Wells, Lemke, Lu, and Mehta, ITSC 2020. [See also accompanying white paper and associated blog post.]