Verified AI

About

Verified Artificial Intelligence (AI) is the goal of designing AI-based systems that have strong, ideally provable, assurances of correctness with respect to mathematically-specified requirements. Our paper Towards Verified Artificial Intelligence, first published in 2016, presents our view of the challenges for applying formal methods to improve assurance in AI-based systems, identifying five major categories: environment modeling, formal specification, modeling learning systems, scalable computational engines, and correct-by-construction design methods. The paper also proposes principles for tackling these challenges.

On this website, we describe our ongoing progress towards the goal of Verified AI, providing links to publications, tools, and industrial case studies. Our work is being applied to the use of AI in several domains, including autonomous driving, aircraft and aerospace systems, and robotics (including human-robot systems). The paper cited above provides more details on our perspective on the field, including links to work by other groups.

Research

We briefly describe the main challenges along with some of the research we have been doing to address them. For a more detailed description of the challenges and principles for Verified AI, please consult our paper.

Environment Modeling

The environments of AI-based systems can be extremely complex; indeed, AI and Machine Learning (ML) are often introduced precisely to deal with this complexity. Some key challenges are finding the right model fidelity, modeling human behavior, and dealing with the "unknown variables" problem, where we do not know all the agents and objects in the environment, let alone have models of their behaviors. Here are the principles we are applying to address these challenges:
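As a toy illustration of probabilistic environment modeling, the following self-contained sketch samples a scene with a variable number of agents and rejects samples that violate a declared constraint; handling an a priori unknown set of agents is one simple way to confront the "unknown variables" problem. The sketch does not use any of our tools, and its classes, distributions, and constraints are all hypothetical.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    kind: str   # e.g., "car" or "pedestrian"
    x: float    # longitudinal position (m) relative to the ego vehicle
    y: float    # lateral position (m)

def sample_scene(max_agents: int = 5, min_separation: float = 2.0) -> list[Agent]:
    """Sample one environment scene by rejection sampling.

    The number of agents, their types, and their positions are all random;
    scenes whose agents are too close together are rejected and re-sampled.
    """
    while True:
        n = random.randint(0, max_agents)  # "unknown variables": the agent count is not fixed
        agents = [
            Agent(kind=random.choice(["car", "pedestrian"]),
                  x=random.uniform(5.0, 100.0),
                  y=random.uniform(-3.5, 3.5))
            for _ in range(n)
        ]
        # Declarative constraint on valid scenes: no two agents may be too close.
        ok = all(abs(a.x - b.x) + abs(a.y - b.y) >= min_separation
                 for i, a in enumerate(agents) for b in agents[i + 1:])
        if ok:
            return agents

if __name__ == "__main__":
    random.seed(0)
    for agent in sample_scene():
        print(agent)
```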

Formal Specification

This crucial aspect of formal methods faces multiple challenges for AI-based systems, including "hard-to-formalize" tasks such as perception, the need to combine quantitative and Boolean specification formalisms, and the need to bridge the gap between data and logic-based specifications. Here are some of the ways we are addressing these challenges:
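To make the distinction between Boolean and quantitative specifications concrete, the sketch below evaluates the property "the distance always stays above a safe threshold" in both ways. The property, trace, and threshold are hypothetical, and the quantitative semantics is in the style of robustness for signal temporal logic: a Boolean semantics only says whether the trace passes, while the quantitative semantics reports by how much.

```python
from typing import Sequence

def always_ge_boolean(signal: Sequence[float], threshold: float) -> bool:
    """Boolean semantics of 'globally (signal >= threshold)': pass/fail only."""
    return all(v >= threshold for v in signal)

def always_ge_robustness(signal: Sequence[float], threshold: float) -> float:
    """Quantitative (robustness) semantics of the same property:
    positive = satisfied with that margin, negative = violated by that amount."""
    return min(v - threshold for v in signal)

if __name__ == "__main__":
    # Hypothetical trace of distance (in meters) to the nearest obstacle over time.
    distance_trace = [12.0, 9.5, 7.2, 6.5, 8.4]
    safe_distance = 5.0
    print(always_ge_boolean(distance_trace, safe_distance))     # True
    print(always_ge_robustness(distance_trace, safe_distance))  # 1.5 (worst-case margin)
```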

Modeling Learning Systems

The high dimensionality of the input and state spaces of AI-based systems, exemplified by the deep neural networks used for perception, requires new approaches to abstraction and modeling of learning systems. Here are some representative results addressing this need:
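As one illustration of abstracting a learned component, the sketch below propagates an interval (box) over the input space through a single ReLU layer to obtain sound bounds on its outputs. The weights and input box are hypothetical; the sketch shows the general idea of interval bound propagation, not any specific tool of ours.

```python
import numpy as np

def relu_layer_interval(W: np.ndarray, b: np.ndarray,
                        lo: np.ndarray, hi: np.ndarray):
    """Propagate an input box [lo, hi] through y = relu(W x + b).

    Returns sound element-wise bounds on the layer's outputs: every input in
    the box is guaranteed to produce an output inside the returned bounds.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    pre_lo = W_pos @ lo + W_neg @ hi + b   # lower bound of W x + b over the box
    pre_hi = W_pos @ hi + W_neg @ lo + b   # upper bound of W x + b over the box
    return np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)

if __name__ == "__main__":
    # Hypothetical 2-input, 2-output layer and an input box around a nominal point.
    W = np.array([[1.0, -2.0], [0.5, 1.0]])
    b = np.array([0.1, -0.2])
    lo, hi = np.array([0.0, 0.0]), np.array([0.1, 0.1])
    out_lo, out_hi = relu_layer_interval(W, b, lo, hi)
    print("output lower bounds:", out_lo)
    print("output upper bounds:", out_hi)
```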

Scalable Computational Engines

Several advances are needed in creating computational engines for efficient and scalable training, testing, design, and verification of AI-based systems. Here are some of our results in this direction:

Correct-by-Construction Design

The techniques described in the thrusts above must be complemented by methods for designing AI/ML-based systems that are correct and robust by construction. Here is some of our work on such methods:
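One widely used pattern for building robust systems around learned components is run-time assurance: a simple, analyzable safety monitor wraps a learned controller and overrides it with a certified fallback whenever the proposed action could leave a known safe set. The one-dimensional sketch below is only a hypothetical illustration of that pattern, not a description of any particular method of ours.

```python
from typing import Callable

def shielded_controller(learned: Callable[[float], float],
                        fallback: Callable[[float], float],
                        is_safe: Callable[[float], bool],
                        step: Callable[[float, float], float]):
    """Wrap a learned controller with a safety shield.

    The learned action is used only if the simple, analyzable model `step`
    predicts that the next state stays in the safe set; otherwise the
    certified fallback action is applied.
    """
    def controller(state: float) -> float:
        action = learned(state)
        return action if is_safe(step(state, action)) else fallback(state)
    return controller

if __name__ == "__main__":
    # Hypothetical 1-D example: keep the state below 10.0.
    step = lambda x, u: x + u       # trivial dynamics model
    is_safe = lambda x: x < 10.0    # safe set
    learned = lambda x: 3.0         # untrusted learned policy: always move forward
    fallback = lambda x: 0.0        # certified fallback: hold position
    ctrl = shielded_controller(learned, fallback, is_safe, step)
    x = 0.0
    for _ in range(6):
        x = step(x, ctrl(x))
    print("final state:", x)        # remains inside the safe set
```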

Selected Talks

Tools

Our research is implemented mainly in two open-source tools, VerifAI and Scenic, which are designed to be used in tandem: Scenic is a probabilistic programming language for modeling the environments and scenarios in which an AI-based system operates, while VerifAI provides the back-end algorithms that analyze and test the system against those models.
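To give a sense of how the two tools fit together, here is a minimal, self-contained sketch of the kind of simulation-based falsification loop that VerifAI automates over Scenic scenarios. It does not use the actual Scenic or VerifAI APIs; the scenario parameters, the toy "simulator", and the specification are hypothetical stand-ins.

```python
import random

def sample_scenario() -> dict:
    """Stand-in for a Scenic scenario: sample environment parameters."""
    return {
        "initial_gap": random.uniform(5.0, 50.0),   # meters to the lead car
        "lead_braking": random.uniform(0.0, 8.0),   # lead car deceleration (m/s^2)
    }

def simulate(params: dict) -> list[float]:
    """Stand-in for a closed-loop simulator: return the distance trace over time."""
    gap, decel = params["initial_gap"], params["lead_braking"]
    # Toy model: the gap shrinks faster when the lead car brakes harder.
    return [max(gap - 0.5 * decel * t * t, 0.0) for t in range(10)]

def robustness(trace: list[float], safe_distance: float = 2.0) -> float:
    """Quantitative spec 'always distance >= safe_distance' (negative = violated)."""
    return min(d - safe_distance for d in trace)

def falsify(num_trials: int = 1000):
    """Search for a counterexample: a sampled scenario with negative robustness."""
    worst = None
    for _ in range(num_trials):
        params = sample_scenario()
        rho = robustness(simulate(params))
        if worst is None or rho < worst[1]:
            worst = (params, rho)
        if rho < 0.0:
            return params, rho   # counterexample found
    return worst                 # otherwise report the closest call

if __name__ == "__main__":
    random.seed(0)
    params, rho = falsify()
    print("scenario:", params, "robustness:", rho)
```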

Industrial Case Studies

Scenic and VerifAI have been used in industrial case studies; here are two documented instances from collaborations with industrial partners:

Contact

For further information on the Verified AI effort, please contact Professor Sanjit Seshia and his research group.

Sponsors

We gratefully acknowledge our funding sponsors: the National Science Foundation (NSF) under projects such as VeHICaL, the Defense Advanced Research Projects Agency (DARPA), and industrial sponsors providing support to centers such as the Industrial Cyber-Physical Systems (iCyPhy) Center and Berkeley Deep Drive.