Verifiably Safe and Correct Deep Neural Networks
This project aims to develop scalable verification techniques for large deep neural networks to ensure their safety and correctness in critical systems, enhancing reliability and societal benefits.
Project details
Introduction
Deep machine learning is revolutionizing computer science. Instead of manually creating complex software, engineers now use automatically generated deep neural networks (DNNs) in critical financial, medical, and transportation systems, obtaining previously unimaginable results.
Challenges of DNNs
Despite their remarkable achievements, DNNs remain opaque. We do not understand their decision-making and cannot prove their correctness, thus risking potentially devastating outcomes.
Example of Risks
For example, it has been shown that DNNs that navigate autonomous aircraft with the goal of avoiding collisions could produce incorrect turning advisories. Thus, the lack of formal guarantees regarding DNN behavior is preventing their safe deployment in critical systems and could jeopardize human lives. Consequently, there is a crucial need to ensure that DNNs operate correctly.
Recent Developments
Recent and exciting developments in formal verification allow us to automatically reason about DNNs. However, this is a nascent technology, which currently only scales to medium-sized DNNs, whereas real-world systems are much larger. Additionally, it is unclear how to apply this technology in practice.
Proposed Solutions
I propose to bridge this crucial gap through the development of novel, scalable, and groundbreaking techniques for verifying the correctness of large DNNs, and by applying them to real systems of interest. I will do this by:
- Developing search-space pruning techniques, which will enable us to verify larger DNNs.
- Creating novel abstraction-refinement techniques, which will allow us to scale to even larger DNNs.
- Identifying new kinds of relevant specifications and key domains where DNNs are used, demonstrating the verification of real-world DNNs.
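To make the verification task concrete, here is a minimal, self-contained sketch of interval bound propagation, one of the simplest techniques for reasoning soundly about a DNN: an input region is pushed through each layer as an interval, yielding sound (if loose) bounds on the output. The tiny network, its weights, and the `y < 1` specification are hypothetical illustrations, not artifacts of this project; pruning and abstraction-refinement, as proposed above, are ways of making this style of analysis scale.

```python
def affine_bounds(lo, hi, W, b):
    """Sound interval bounds on y = W @ x + b for x in [lo, hi]."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            if w >= 0:
                l += w * xl
                h += w * xh
            else:  # a negative weight flips which endpoint is extreme
                l += w * xh
                h += w * xl
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Hypothetical weights: 2 inputs -> 2 hidden ReLU units -> 1 output.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]
W2, b2 = [[1.0, -2.0]], [0.1]

lo, hi = [-0.1, -0.1], [0.1, 0.1]          # input region to certify
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)

# If the whole output interval satisfies the specification (say,
# y < 1.0), the property is proved for EVERY input in the region.
assert hi[0] < 1.0
print(f"output bounds: [{lo[0]:.3f}, {hi[0]:.3f}]")  # [0.100, 0.300]
```

When the computed interval is too loose to decide the property, the analysis must refine, for example by splitting the input region or tightening the abstraction, which is exactly where the scalability techniques proposed above come in.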
Expected Outcomes
This project will result in a sound and expressive framework for automatically reasoning about DNNs orders of magnitude larger than those that can be handled today. This framework will ensure the safety and correctness of DNNs deployed in critical systems, greatly benefiting users and society.
Financial details & Timeline
Financial details
Grant amount | € 1,500,000 |
Total project budget | € 1,500,000 |
Timeline
Start date | 1 November 2023 |
End date | 31 October 2028 |
Grant year | 2023 |
Partners & Locations
Project partners
- THE HEBREW UNIVERSITY OF JERUSALEM (lead partner)
Country(ies)
Similar projects within the European Research Council
Project | Programme | Amount | Year
---|---|---|---
Dynamics-Aware Theory of Deep Learning: This project aims to create a robust theoretical framework for deep learning, enhancing understanding and practical tools to improve model performance and reduce complexity in various applications. | ERC Starting... | € 1,498,410 | 2022
Data-Driven Verification and Learning Under Uncertainty: The DEUCE project aims to enhance reinforcement learning by developing novel verification methods that ensure safety and correctness in complex, uncertain environments through data-driven abstractions. | ERC Starting... | € 1,500,000 | 2023
Holistic Rigorous Numerical Verification: The project aims to develop an automated verification and debugging framework for numerical programs that ensures accuracy in finite-precision computations while enhancing usability for developers. | ERC Starting... | € 1,498,976 | 2025
Reading Minds and Machines: The project aims to decode training data from Deep Neural Networks and brain activity, enhancing data privacy and communication for locked-in patients while improving insights in both fields. | ERC Advanced... | € 2,499,333 | 2024
From reconstructions of neuronal circuits to anatomically realistic artificial neural networks: This project aims to enhance artificial neural networks by extracting wiring principles from brain connectomics to improve efficiency and reduce training data needs for deep learning applications. | ERC Proof of... | € 150,000 | 2022