Just-in-time Self-Verification of Autonomous Systems (justITSELF)


Engineers and computer scientists are currently developing autonomous systems whose entire set of behaviors in future, untested situations is unknown: how can a designer foresee all situations that an autonomous road vehicle, a robot in a human environment, an agricultural robot, or an unmanned aerial vehicle will face? Since all of these examples are safety-critical, it would be irresponsible to deploy such systems without testing all possible situations; this, however, seems impossible, because even the set of the most important situations is unmanageably large.

We propose a paradigm shift that will make it possible to guarantee safety in unforeseeable situations: instead of verifying the correctness of a system before deployment, we propose just-in-time verification, a new, to-be-developed verification paradigm in which a system continuously checks the correctness of its next action by itself, in its current environment (and only in it), in a just-in-time manner. Just-in-time verification will substantially cut development costs, increase the autonomy of systems (e.g., the range of deployment of automated driving systems), and reduce or even eliminate certain liability claims.

Main Objective

Realizing a paradigm shift in formal verification: instead of formally verifying a system before deployment, we develop algorithms for just-in-time verification of autonomous systems, in which each action is executed only if it is formally verified during the operation of the system. The envisioned approach continuously repeats this process so that the current situation is always considered, making it possible to react to unexpected situations. These verification algorithms must be highly efficient, but they only have to be performed for the current situation, which drastically reduces the verification space.
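The verify-then-act cycle described above can be sketched as follows. This is a minimal toy illustration, not the project's actual method: all names (`intended_action`, `failsafe_action`, `is_verified_safe`, the obstacle at position 100) are hypothetical, and the safety check is a simple forward simulation standing in for a formal verification technique such as reachability analysis.

```python
# Toy sketch of a just-in-time verification cycle (all names hypothetical).
# A real system would replace is_verified_safe with formal verification
# (e.g., reachability analysis) of the current situation.

from dataclasses import dataclass


@dataclass
class State:
    position: float
    velocity: float


def intended_action(state: State) -> float:
    """Planner proposal: accelerate until a target speed is reached."""
    return 1.0 if state.velocity < 10.0 else 0.0


def failsafe_action(state: State) -> float:
    """Pre-verified fallback maneuver: brake toward a standstill."""
    return -min(state.velocity, 2.0)


def is_verified_safe(state: State, action: float, horizon: int = 5) -> bool:
    """Stand-in for formal verification: simulate the action over a short
    horizon and require that a hypothetical obstacle at position 100 is
    never reached."""
    s = State(state.position, state.velocity)
    for _ in range(horizon):
        s.velocity = max(0.0, s.velocity + action)
        s.position += s.velocity
        if s.position >= 100.0:
            return False
    return True


def control_cycle(state: State) -> float:
    """One just-in-time verification cycle: execute the intended action only
    if it is verified safe in the current situation; otherwise fall back to
    the failsafe maneuver. Repeated every control step during operation."""
    action = intended_action(state)
    if is_verified_safe(state, action):
        return action
    return failsafe_action(state)
```

For example, far from the obstacle the intended acceleration is verified and executed, while close to it the verification fails and the failsafe braking maneuver is chosen instead; repeating this cycle ensures only verified actions are ever applied.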

Developed Tools and Benchmarks