Incremental Neural Network Verification via Learned Conflicts

TL;DR

Incremental neural network verification via learned conflicts achieves speedups of up to 1.9x in the Marabou verifier.

cs.LO · 2026-03-13
Raya Elsaleh Liam Davis Haoze Wu Guy Katz
neural networks · verification · incremental learning · conflict reuse · Marabou

Key Findings

Methodology

This paper proposes an incremental verification technique that accelerates the neural network verification process by reusing learned conflicts. This technique can be integrated into any branch-and-bound-based neural network verifier. During verification, the verifier records conflicts corresponding to learned infeasible combinations of activation phases and retains them across runs. We formalize a refinement relation between verification queries and show that conflicts learned for a query remain valid under refinement, enabling sound conflict inheritance. Inherited conflicts are handled using a SAT solver to perform consistency checks and propagation, allowing infeasible subproblems to be detected and pruned early during search.
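The conflict-recording idea can be sketched in a few lines. The encoding below is an assumption for illustration only (the paper's exact representation may differ): a conflict is a set of signed ReLU indices, where +i means neuron i is fixed to its active phase and -i to its inactive phase.

```python
# Illustrative sketch (not the paper's implementation): a learned conflict is
# a set of signed ReLU indices; +i means "neuron i active", -i "inactive".
# A recorded conflict marks that phase combination as infeasible, so any
# branch whose partial assignment contains all of its literals can be pruned.

def conflict_blocks(conflict, assignment):
    """True if every literal of the conflict holds in the partial assignment."""
    return all(assignment.get(abs(lit)) == (lit > 0) for lit in conflict)

# Conflicts recorded during an earlier verification run...
learned = [frozenset({+1, -3}), frozenset({+2, +4, -5})]

# ...let a later, refined query prune a matching branch immediately.
partial = {1: True, 3: False, 7: True}  # neurons 1, 7 active; neuron 3 inactive
assert any(conflict_blocks(c, partial) for c in learned)
```

Soundness rests on the refinement relation: because the refined query's feasible set is contained in the original's, a phase combination infeasible for the original query stays infeasible for the refined one.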

Key Results

  • Experimental results show that incremental conflict reuse reduces verification effort and yields speedups of up to 1.9x over a non-incremental baseline across three verification tasks.
  • In the local robustness radius determination task, the incremental method solved queries 1.3x faster on average than the non-incremental method on the MNIST dataset.
  • In the verification with input splitting task, the incremental method effectively reduced verification time during the training of Lyapunov neural certificates.

Significance

Neural networks are increasingly deployed in safety-critical applications such as autonomous driving, medical diagnosis, and aerospace systems. However, the complexity and opacity of neural networks make it difficult to ensure their reliability and safety in these critical settings. By introducing an incremental verification technique, this study provides a more efficient method for neural network verification, capable of reusing information across multiple related queries and reducing redundant computation. This not only improves verification efficiency but also provides stronger guarantees for the application of neural networks in safety-critical fields.

Technical Contribution

The technical contributions of this paper include the proposal of a novel incremental verification framework that can reuse learned conflicts across multiple related verification queries. This method is compatible with existing branch-and-bound-based verifiers and significantly reduces verification time by using a SAT solver for conflict propagation and consistency checks. Additionally, the paper formalizes the refinement relation between verification queries, ensuring the soundness of conflict inheritance.

Novelty

This paper is the first to propose the concept of reusing learned conflicts in neural network verification. Unlike existing incremental solving techniques, this method focuses on reusing lemmas across multiple invocations of branch-and-bound-based complete verifiers. This approach not only improves verification efficiency but also provides new theoretical guarantees for neural network verification.

Limitations

  • The method may encounter memory limitations when dealing with very large-scale neural networks, as it requires storing a large amount of conflict information.
  • The effectiveness of conflict reuse may not meet expectations for certain types of neural network architectures.
  • In some cases, the inheritance of conflicts may lead to additional computational overhead.

Future Work

Future research directions include exploring how to implement incremental verification techniques on larger-scale neural networks and how to apply this technique to other types of neural network architectures. Additionally, further optimization of conflict storage and propagation mechanisms could enhance verification efficiency.

AI Executive Summary

Neural networks are increasingly deployed in safety-critical applications such as autonomous driving, medical diagnosis, and aerospace systems. However, due to their complexity and opacity, ensuring the reliability and safety of neural networks in these critical environments raises serious concerns. To address this issue, numerous neural network verification methods have been proposed, aiming to provide rigorous guarantees about network behavior. However, existing verification tools are often inefficient when handling multiple related verification queries, as each query is solved independently, and information learned during previous runs is discarded, leading to repeated exploration of the same infeasible regions of the search space.

This paper proposes an incremental verification technique that accelerates the neural network verification process by reusing learned conflicts. This technique can be integrated into any branch-and-bound-based neural network verifier. During verification, the verifier records conflicts corresponding to learned infeasible combinations of activation phases and retains them across runs. We formalize a refinement relation between verification queries and show that conflicts learned for a query remain valid under refinement, enabling sound conflict inheritance. Inherited conflicts are handled using a SAT solver to perform consistency checks and propagation, allowing infeasible subproblems to be detected and pruned early during search.

We implement the proposed technique in the Marabou verifier and evaluate it on three verification tasks: local robustness radius determination, verification with input splitting, and minimal sufficient feature set extraction. Experimental results show that incremental conflict reuse reduces verification effort and yields speedups of up to 1.9x over a non-incremental baseline.

In the local robustness radius determination task, we conducted experiments on the MNIST dataset; the results show that the incremental method solved queries 1.3x faster on average than the non-incremental method. In the verification with input splitting task, the incremental method effectively reduced verification time during the training of Lyapunov neural certificates.

This study provides a more efficient method for neural network verification, capable of reusing information across multiple related queries and reducing redundant computation. This not only improves verification efficiency but also provides stronger guarantees for the application of neural networks in safety-critical fields. Future research directions include exploring how to implement incremental verification techniques on larger-scale neural networks and how to apply this technique to other types of neural network architectures.

Deep Analysis

Background

As deep neural networks (DNNs) are increasingly applied in safety-critical applications such as autonomous driving, medical diagnosis, and aerospace systems, ensuring their reliability and safety becomes crucial. Despite their strong empirical performance in tasks such as control and image recognition, the complexity and opacity of neural networks make it difficult to ensure their reliability and safety in these critical settings. To address this issue, numerous neural network verification methods have been proposed, aiming to provide rigorous guarantees about network behavior. However, for networks with piecewise-linear activation functions, such as ReLUs, the verification problem is NP-complete, which hinders scalability. As neural networks are adopted across increasingly diverse and high-stakes domains, the demand for scalable verification techniques continues to grow. Therefore, improving the efficiency of neural network verification is imperative.

Core Problem

In practice, DNN verification is often not performed as a single, isolated query, but rather invoked repeatedly within larger analysis procedures. For example, in formal explainability, verification queries are issued repeatedly to reason about the contribution of different input features to a given prediction under progressively refined constraints; similarly, in robustness radius computation, verification queries are invoked iteratively to narrow down the maximum safe perturbation radius around a given input. Such analyses naturally give rise to sequences of closely related verification queries that differ in limited aspects of their specifications, such as refined input domains or strengthened output constraints. Despite this, current verification tools do not explicitly exploit this structural similarity: each query is restarted from scratch, and information derived during previous runs is discarded.
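As a concrete instance of such a query sequence, robustness-radius computation can be organized as a binary search. In the sketch below, `verify` is a hypothetical callback standing in for a complete verifier; every iteration issues a query whose input domain is contained in an earlier one, which is exactly the refinement structure the paper exploits.

```python
# Sketch of iterative robustness-radius narrowing via binary search. `verify`
# is a hypothetical stand-in for a complete verifier: it returns True when
# the network is robust within radius eps. Each successive query's input
# domain refines an earlier one, so learned conflicts remain reusable.

def max_robust_radius(verify, lo=0.0, hi=1.0, tol=1e-3):
    """Largest eps in [lo, hi] (within tol) for which verify(eps) holds."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if verify(mid):   # robust at radius mid: the safe radius is at least mid
            lo = mid
        else:             # counterexample at radius mid: shrink the upper bound
            hi = mid
    return lo

# Toy specification: robust exactly up to radius 0.37.
radius = max_robust_radius(lambda eps: eps <= 0.37)
assert abs(radius - 0.37) < 1e-3
```

Each of the roughly log2(1/tol) verifier calls here would, in a non-incremental tool, restart from scratch; with conflict inheritance, conflicts from the wider-radius queries prune the narrower ones.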

Innovation

This paper proposes an incremental verification technique that accelerates the neural network verification process by reusing learned conflicts. This technique can be integrated into any branch-and-bound-based neural network verifier. During verification, the verifier records conflicts corresponding to learned infeasible combinations of activation phases and retains them across runs. We formalize a refinement relation between verification queries and show that conflicts learned for a query remain valid under refinement, enabling sound conflict inheritance. Inherited conflicts are handled using a SAT solver to perform consistency checks and propagation, allowing infeasible subproblems to be detected and pruned early during search. We develop a framework for conflict recording and sound reuse that integrates directly with branch-and-bound-based verifiers and employ a SAT solver to efficiently manage and apply large collections of learned conflicts during solving.

Methodology

  • Incremental Verification Framework: Records conflicts that arise during branch-and-bound verification, where each conflict captures an infeasible combination of branching decisions.
  • Conflict Retention: These conflicts are preserved beyond individual verification runs and reused in subsequent queries with refined specifications.
  • Conflict Inheritance: A refinement condition is formally defined under which a conflict remains valid, and this condition is shown to hold in several important applications of neural network verification.
  • SAT Solver Usage: During branch-and-bound search, a SAT solver reasons about inherited conflict clauses as an additional pruning and propagation step.
  • Implementation: The proposed method is implemented in the Marabou verifier and evaluated on three representative verification tasks.
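The SAT-solver step in the list above amounts to consistency checking and unit propagation over the inherited conflicts. The minimal propagator below is an illustrative sketch, not the paper's CaDiCaL-based implementation: each conflict is stored as a blocking clause, i.e. the negation of the infeasible phase combination.

```python
# Minimal unit-propagation sketch (assumed encoding, for illustration): each
# inherited conflict becomes a blocking clause over signed ReLU indices.
# Given the current partial phase assignment, a clause with all literals
# false signals an infeasible subproblem; a clause with exactly one free
# literal forces that literal (a derived ReLU phase).

def propagate(clauses, assignment):
    """Return ('conflict', None) or ('ok', dict of forced phases)."""
    forced = {}
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                v = assignment.get(abs(lit), forced.get(abs(lit)))
                if v is None:
                    unassigned.append(lit)
                elif v == (lit > 0):
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return "conflict", None          # prune this subproblem
            if len(unassigned) == 1:
                lit = unassigned[0]
                if abs(lit) not in forced:
                    forced[abs(lit)] = lit > 0   # unit propagation
                    changed = True
    return "ok", forced

# Conflict "neuron 1 active AND neuron 2 inactive is infeasible"
# becomes the blocking clause (-1 OR +2).
status, derived = propagate([[-1, +2]], {1: True})
assert status == "ok" and derived == {2: True}
```

A production implementation would hand these clauses to an incremental SAT solver (the paper uses CaDiCaL) rather than re-scanning them per decision.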

Experiments

We implement the proposed technique in the Marabou verifier and evaluate it on three verification tasks: local robustness radius determination, verification with input splitting, and minimal sufficient feature set extraction. In the local robustness radius determination task, we conducted experiments on the MNIST dataset, and the results show that the incremental method achieved an average solving time 1.3x faster than the non-incremental method. In the verification with input splitting task, the incremental method effectively reduced verification time during the training of Lyapunov neural certificates. We used the CaDiCaL SAT solver for conflict reasoning, and all verification queries induced by the same task share a single ICA instance. Propagation from inherited conflicts may detect that the current partial assignment is infeasible, allowing the verifier to prune the subproblem immediately, or it may derive additional ReLU phase assignments via unit propagation.

Results

Experimental results show that incremental conflict reuse reduces verification effort and yields speedups of up to 1.9x over a non-incremental baseline across three verification tasks. In the local robustness radius determination task, the incremental method solved queries 1.3x faster on average than the non-incremental method on the MNIST dataset. In the verification with input splitting task, the incremental method effectively reduced verification time during the training of Lyapunov neural certificates. Propagation from inherited conflicts may detect that the current partial assignment is infeasible, allowing the verifier to prune the subproblem immediately, or it may derive additional ReLU phase assignments via unit propagation.

Applications

The proposed incremental verification technique can be directly applied to neural network models requiring efficient verification, especially in safety-critical applications such as autonomous driving and medical diagnosis. By reducing redundant computation, this technique can significantly improve verification efficiency and reduce computational costs. Additionally, this technique can be used in other scenarios requiring multiple verifications, such as model optimization and tuning.

Limitations & Outlook

Despite the impressive performance of the proposed incremental verification technique across multiple verification tasks, it may encounter memory limitations when dealing with very large-scale neural networks, as it requires storing a large amount of conflict information. Additionally, the effectiveness of conflict reuse may not meet expectations for certain types of neural network architectures. In some cases, the inheritance of conflicts may lead to additional computational overhead. Future research directions include exploring how to implement incremental verification techniques on larger-scale neural networks and how to apply this technique to other types of neural network architectures.

Plain Language (accessible to non-experts)

Imagine you're cooking in a kitchen. Every time you make a new dish, you have to start from scratch, preparing all the ingredients and tools. This is like traditional neural network verification methods, where each new query is verified from scratch. However, the method proposed in this paper is like having some commonly used spices and tools already prepared in your kitchen, which you can reuse when making different dishes. This way, you don't have to prepare everything from scratch each time, but can directly use the materials you have already prepared, greatly improving efficiency. This method reduces redundant computation by recording and reusing conflict information learned during previous verifications, making the verification process more efficient. Just like in the kitchen, you can make each dish faster because you already have some prepared materials and tools.

ELI14 (explained like you're 14)

Hey, imagine you're playing a game. Every time you pass a level, you have to start over, completely forgetting what you learned before. Isn't that annoying? That's what traditional neural network verification is like, where each new query is verified from scratch. But the method proposed in this paper is like having save points in a game. Every time you pass a level, it records what you've learned, so next time you play, you can start from the save point instead of starting over. This method reduces redundant computation by recording and reusing conflict information learned during previous verifications, making the verification process more efficient. Just like in the game, you can pass levels faster because you already have some save points.

Glossary

Neural Network

A computational model that simulates the connections of neurons in the human brain, widely used in pattern recognition and machine learning.

Used in models for autonomous driving, medical diagnosis, etc.

Verification

A process used to ensure that a system or model behaves as expected, often involving mathematical proofs.

Used to ensure the reliability of neural networks in critical applications.

Incremental Verification

A technique that accelerates verification by reusing information learned during previous verifications.

The method proposed in this paper aims to improve verification efficiency.

Conflict

An infeasible combination of decisions discovered during verification, often used to prune the search space.

Recorded during branch-and-bound to reduce redundant computation.

SAT Solver

An algorithmic tool used to solve Boolean satisfiability problems, widely used in verification and optimization.

Used to handle inherited conflict clauses for consistency checks.

Branch-and-Bound

An algorithm that solves optimization problems by recursively partitioning the problem space.

A core technique used in neural network verification.

ReLU

A commonly used activation function in neural networks, defined as f(x) = max(0, x).

Introduces branching behavior during verification.
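The definition above in code, with the two phases that branch-and-bound verification splits on annotated in the comments:

```python
def relu(x: float) -> float:
    """Rectified linear unit: f(x) = max(0, x)."""
    return max(0.0, x)

# In branch-and-bound verification, each ReLU is split into two linear phases:
assert relu(2.5) == 2.5   # active phase (x >= 0, output = x)
assert relu(-1.0) == 0.0  # inactive phase (x <= 0, output = 0)
```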

Local Robustness

A measure of the stability of a neural network's output under input perturbations.

One of the verification tasks, determining the maximum safe perturbation radius.

Feature Set

A set of input variables used for training and verifying models.

Used in the minimal sufficient feature set extraction task.

Marabou

A tool for neural network verification that supports various verification tasks.

The platform where the incremental verification technique is implemented in this paper.

Open Questions (unanswered questions from this research)

  1. How can incremental verification techniques be implemented on larger-scale neural networks? Current methods may encounter memory limitations when dealing with very large-scale neural networks, requiring further research on optimizing conflict storage and propagation mechanisms.
  2. How can incremental verification techniques be applied to other types of neural network architectures? The method proposed in this paper primarily targets networks with ReLU activation functions, and other types of networks may require different handling strategies.
  3. How can the efficiency of conflict reuse be further improved? Although the method in this paper has significantly reduced verification time, in some cases, the inheritance of conflicts may lead to additional computational overhead.
  4. How can the reliability of incremental verification be ensured in safety-critical applications? While incremental verification improves efficiency, the accuracy and reliability of verification remain the primary considerations in critical applications.
  5. How can incremental verification be implemented in distributed systems? As the scale of neural networks expands, distributed computing becomes a potential solution, requiring research on how to implement incremental verification in distributed environments.

Applications

Immediate Applications

Autonomous Driving Systems

Ensuring the reliability and safety of autonomous driving systems in various environments by improving neural network verification efficiency.

Medical Diagnosis

Applying incremental verification techniques in medical diagnosis to ensure the accuracy and stability of diagnostic models.

Aerospace Systems

Applying incremental verification techniques in aerospace systems to enhance system safety and reliability.

Long-term Vision

Smart Cities

Supporting the safe operation of complex systems in smart cities by improving neural network verification efficiency.

Industrial Automation

Applying incremental verification techniques in industrial automation to enhance production line efficiency and safety.

Abstract

Neural network verification is often used as a core component within larger analysis procedures, which generate sequences of closely related verification queries over the same network. In existing neural network verifiers, each query is typically solved independently, and information learned during previous runs is discarded, leading to repeated exploration of the same infeasible regions of the search space. In this work, we aim to expedite verification by reducing this redundancy. We propose an incremental verification technique that reuses learned conflicts across related verification queries. The technique can be added on top of any branch-and-bound-based neural network verifier. During verification, the verifier records conflicts corresponding to learned infeasible combinations of activation phases, and retains them across runs. We formalize a refinement relation between verification queries and show that conflicts learned for a query remain valid under refinement, enabling sound conflict inheritance. Inherited conflicts are handled using a SAT solver to perform consistency checks and propagation, allowing infeasible subproblems to be detected and pruned early during search. We implement the proposed technique in the Marabou verifier and evaluate it on three verification tasks: local robustness radius determination, verification with input splitting, and minimal sufficient feature set extraction. Our experiments show that incremental conflict reuse reduces verification effort and yields speedups of up to $1.9\times$ over a non-incremental baseline.

cs.LO cs.AI

References (20)

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. Guy Katz, Clark W. Barrett, D. Dill et al., 2017.

The Marabou Framework for Verification and Analysis of Deep Neural Networks. Guy Katz, Derek A. Huang, D. Ibeling et al., 2019.

CaDiCaL 2.0. Armin Biere, Tobias Faller, Katalin Fazekas et al., 2024.

Improved Geometric Path Enumeration for Verifying ReLU Neural Networks. Stanley Bak, Hoang-Dung Tran, Kerianne L. Hobbs et al., 2020.

General Cutting Planes for Bound-Propagation-Based Neural Network Verification. Huan Zhang, Shiqi Wang, Kaidi Xu et al., 2022.

The Second International Verification of Neural Networks Competition (VNN-COMP 2021): Summary and Results. Stanley Bak, Changliu Liu, Taylor T. Johnson, 2021.

Concrete Problems in AI Safety. Dario Amodei, Chris Olah, J. Steinhardt et al., 2016.

NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems. Hoang-Dung Tran, Xiaodong Yang, Diego Manzanas Lopez et al., 2020.

An abstract domain for certifying neural networks. Gagandeep Singh, T. Gehr, Markus Püschel et al., 2019.

Proof Minimization in Neural Network Verification. Omri Isac, Idan Refaeli, Haoze Wu et al., 2025.

Dermatologist-level classification of skin cancer with deep neural networks. A. Esteva, Brett Kuprel, R. Novoa et al., 2017.

Marabou 2.0: A Versatile Formal Analyzer of Neural Networks. Haoze Wu, Omri Isac, Aleksandar Zeljić et al., 2024.

Satisfiability modulo theories. Leonardo Mendonça de Moura, Nikolaj S. Bjørner, 2011.

Neural Network Verification with Proof Production. Omri Isac, Clark W. Barrett, M. Zhang et al., 2022.

Verification-Aided Deep Ensemble Selection. Guy Amir, Guy Katz, Michael Schapira, 2022.

Robustness Assessment of a Runway Object Classifier for Safe Aircraft Taxiing. Y. Elboher, Raya Elsaleh, Omri Isac et al., 2024.

Delivering Trustworthy AI through Formal XAI. Joao Marques-Silva, Alexey Ignatiev, 2022.

Towards Formal XAI: Formally Approximate Minimal Explanations of Neural Networks. Shahaf Bassan, Guy Katz, 2022.

Global optimization of objective functions represented by ReLU networks. C. Strong, Haoze Wu, Aleksandar Zeljić et al., 2020.

Compositional Verification for Autonomous Systems with Deep Learning Components. C. Păsăreanu, D. Gopinath, Huafeng Yu, 2018.