Note on Neural Network Verification / Reluplex Paper



Problem to Solve

Given bounds on the inputs and outputs of a neural network, decide whether the network together with those bounds is satisfiable, i.e., whether some input within the input bounds produces an output within the output bounds.

Directly applying an existing solver to a neural network requires encoding each activation function as a disjunction, which can lead to state explosion: with $n$ activation functions, the solver may have to explore up to $2^n$ combinations of cases.
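
For concreteness, a single ReLU constraint $y = \mathrm{ReLU}(x)$ can be encoded as a disjunction over its two linear phases (a standard encoding, sketched here rather than quoted from the paper):

```latex
% Case-split encoding of one ReLU constraint y = ReLU(x):
% either the unit is active (x >= 0) or inactive (x <= 0).
\[
  y = \mathrm{ReLU}(x)
  \;\Longleftrightarrow\;
  \bigl(x \ge 0 \,\wedge\, y = x\bigr) \;\vee\; \bigl(x \le 0 \,\wedge\, y = 0\bigr)
\]
% A naive solver may branch on each such disjunction independently,
% giving up to 2^n combinations of active/inactive phases for n ReLUs.
```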

Background

Theory in SMT: A theory is a pair $T = (\Sigma, I)$, where $\Sigma$ is a signature and $I$ is a class of $\Sigma$-interpretations closed under variable reassignment. A formula $\tau$ is said to be $T$-satisfiable if it is satisfied by some interpretation $i \in I$.

DPLL(T): An extension of the DPLL SAT-solving procedure that embeds a theory solver for $T$: the SAT solver searches over a Boolean abstraction of the formula, and the theory solver checks whether the resulting assignments are consistent in $T$ (details: https://www.cs.utexas.edu/~isil/cs389L/lecture16-6up.pdf).
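
A minimal sketch of the lazy DPLL(T) loop, assuming hypothetical helpers `sat_solve` (for the Boolean abstraction) and `theory_check` (the $T$-solver); the names are illustrative and not taken from the paper or any particular library:

```python
# Lazy DPLL(T) loop (illustrative sketch only).
# sat_solve(clauses) -> propositional model (list of signed literals) or None.
# theory_check(atoms) -> (True, None) if T-consistent, else (False, conflicting literals).

def dpll_t(clauses, atom_of_literal, sat_solve, theory_check):
    clauses = list(clauses)
    while True:
        model = sat_solve(clauses)
        if model is None:
            return "UNSAT"                 # Boolean abstraction already unsatisfiable
        atoms = [atom_of_literal(lit) for lit in model]
        consistent, conflict = theory_check(atoms)
        if consistent:
            return "SAT"                   # propositional model is also T-consistent
        clauses.append([-lit for lit in conflict])   # block this assignment and retry
```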

Linear Real Arithmetic (LRA) theory: $\Sigma = \{+, -, \cdot, \le, \ge\}$ together with the real constants, and every atom $\tau$ can be written as $\sum_i c_i x_i \,\square\, d$ with $\square \in \{=, \le, \ge\}$ and $c_i, d$ real constants.
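
A quick illustration (my own example, not from the paper) of what does and does not count as an LRA atom:

```latex
% An LRA atom: a linear combination of variables compared against a constant.
\[ 3x_1 - 2x_2 \le 5 \]
% Not an LRA atom: x_1 \cdot x_2 \le 5 multiplies variables, so it is nonlinear.
```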

ReLU: $\mathrm{ReLU}(x) = \max(0, x)$

Simplex for LRA: The simplex algorithm used here differs from the standard optimization version; it serves purely as a decision procedure for the LRA theory. A configuration is either one of the distinguished symbols $SAT$ / $UNSAT$ or a tuple $\langle \mathcal{B}, T, l, u, \alpha \rangle$. $\mathcal{B} \subseteq X$ is the set of basic variables. $T$ is the tableau, a set of equations $x_i = \sum_{x_j \notin \mathcal{B}} c_j x_j$ with $x_i \in \mathcal{B}$; the variables $x_j \notin \mathcal{B}$ are the non-basic variables. $l$, $u$, and $\alpha$ give the lower bound, upper bound, and current real-valued assignment of each variable (basic and non-basic).
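
A minimal sketch of such a configuration as a Python data structure (the field names are illustrative, not the paper's):

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class SimplexConfig:
    """One <B, T, l, u, alpha> configuration of the simplex decision procedure."""
    basic: Set[str] = field(default_factory=set)                 # B: basic variables
    tableau: Dict[str, Dict[str, float]] = field(default_factory=dict)
    # tableau[xi] maps each non-basic xj to its coefficient cj in  xi = sum_j cj * xj
    lower: Dict[str, float] = field(default_factory=dict)        # l: lower bounds
    upper: Dict[str, float] = field(default_factory=dict)        # u: upper bounds
    assignment: Dict[str, float] = field(default_factory=dict)   # alpha: current values
```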

Init: for each constraint $\sum_i c_i x_i \,\square\, d$, introduce a fresh basic (slack) variable $b$ and add the tableau equation $b = \sum_i c_i x_i$; the constant $d$ then becomes a bound on $b$ via $l$ or $u$ (a lower bound for $\ge$, an upper bound for $\le$, both for $=$). Initialize $\alpha$ to assign zero to every variable.
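
A sketch of this initialization on top of the `SimplexConfig` sketch above (the constraint representation is an assumption made here for illustration):

```python
def init_config(constraints):
    """constraints: list of (coeffs: Dict[str, float], op: str, d: float),
    with op in {'<=', '>=', '='}, each meaning  sum_j coeffs[j]*x_j  op  d."""
    cfg = SimplexConfig()
    for k, (coeffs, op, d) in enumerate(constraints):
        b = f"s{k}"                      # fresh slack variable becomes basic
        cfg.basic.add(b)
        cfg.tableau[b] = dict(coeffs)    # b = sum_j c_j * x_j
        if op in ("<=", "="):
            cfg.upper[b] = d             # d becomes an upper bound on b
        if op in (">=", "="):
            cfg.lower[b] = d             # d becomes a lower bound on b
        for x in coeffs:
            cfg.assignment.setdefault(x, 0.0)   # alpha starts at all zeros
        cfg.assignment[b] = 0.0
    return cfg
```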

Iteration: while some variable violates one of its bounds, repair it. An out-of-bounds non-basic variable is simply updated to the violated bound, with the basic variables adjusted through the tableau (the Update rule). An out-of-bounds basic variable $x_i$ is pivoted with a suitable non-basic variable $x_j$ (one with enough slack, in the right direction, to move $x_i$ toward its bound); after the pivot, $x_i$ is non-basic and can be updated.

Termination: if no variable violates its bounds, answer SAT (the current $\alpha$ is a satisfying assignment). If some basic variable is out of bounds and no suitable non-basic variable can be selected for the pivot, answer UNSAT.
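
The sketch below continues the `SimplexConfig` example with an illustrative version of this iteration/termination behavior. It is not the paper's implementation, and it omits the variable-selection strategy (e.g., Bland's rule) that a complete procedure needs to guarantee termination:

```python
NEG_INF, POS_INF = float("-inf"), float("inf")

def lo(cfg, x): return cfg.lower.get(x, NEG_INF)
def up(cfg, x): return cfg.upper.get(x, POS_INF)

def out_of_bounds(cfg, x):
    return cfg.assignment[x] < lo(cfg, x) or cfg.assignment[x] > up(cfg, x)

def update(cfg, xj, delta):
    """Update rule: shift non-basic xj by delta; basic variables follow the tableau."""
    cfg.assignment[xj] += delta
    for xi in cfg.basic:
        cfg.assignment[xi] += cfg.tableau[xi].get(xj, 0.0) * delta

def pivot(cfg, xi, xj):
    """Pivot rule: swap the roles of basic xi and non-basic xj in the tableau."""
    row = cfg.tableau.pop(xi)
    cij = row.pop(xj)
    new_row = {xi: 1.0 / cij}            # solve  xi = cij*xj + rest  for xj
    for xk, c in row.items():
        new_row[xk] = -c / cij
    cfg.basic.remove(xi)
    cfg.basic.add(xj)
    cfg.tableau[xj] = new_row
    for xb, r in cfg.tableau.items():    # substitute xj's definition elsewhere
        if xb != xj and xj in r:
            c = r.pop(xj)
            for xk, ck in new_row.items():
                r[xk] = r.get(xk, 0.0) + c * ck

def check(cfg):
    """Repair bound violations until SAT, or fail with UNSAT."""
    while True:
        violated = [x for x in cfg.assignment if out_of_bounds(cfg, x)]
        if not violated:
            return "SAT"
        x = violated[0]                  # naive selection; no termination guarantee
        target = lo(cfg, x) if cfg.assignment[x] < lo(cfg, x) else up(cfg, x)
        delta = target - cfg.assignment[x]
        if x not in cfg.basic:
            update(cfg, x, delta)        # non-basic: just move it onto the bound
            continue
        suitable = None                  # basic: look for a non-basic pivot partner
        for xj, c in cfg.tableau[x].items():
            slack_up = up(cfg, xj) - cfg.assignment[xj]
            slack_dn = cfg.assignment[xj] - lo(cfg, xj)
            if (delta > 0 and ((c > 0 and slack_up > 0) or (c < 0 and slack_dn > 0))) or \
               (delta < 0 and ((c > 0 and slack_dn > 0) or (c < 0 and slack_up > 0))):
                suitable = xj
                break
        if suitable is None:
            return "UNSAT"               # no pivot candidate: bounds are inconsistent
        pivot(cfg, x, suitable)
        update(cfg, x, delta)            # x is now non-basic; move it onto its bound
```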

Implementation

Simplex derivation rules:

(figure: the paper's derivation rules for the simplex calculus)

Reluplex's new derivation rules for ReLU: each ReLU is represented by a pair of variables, a backward-facing $x^b$ and a forward-facing $x^f$, with the intended constraint $x^f = \mathrm{ReLU}(x^b)$. Roughly, the new rules let the solver repair a violated pair by updating either $x^f$ or $x^b$, pivot variables involved in ReLU pairs, and case-split a pair into its active ($x^b \ge 0,\ x^f = x^b$) and inactive ($x^b \le 0,\ x^f = 0$) phases when repeated repairs do not resolve it.

(figure: the paper's Reluplex derivation rules)
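
Building on the simplex sketches above, the following is a rough, illustrative Reluplex-style loop: repair violated ReLU pairs by updating one side of the pair, and case-split a pair only after it has been repaired too many times. It is a simplification of the paper's calculus (bound derivation, conflict analysis, and the precise rule conditions are omitted), and the helper names are my own:

```python
import copy

def relu_holds(cfg, xb, xf, eps=1e-9):
    """Does the current assignment satisfy xf = ReLU(xb)?"""
    return abs(cfg.assignment[xf] - max(0.0, cfg.assignment[xb])) <= eps

def add_equation(cfg, name, coeffs, lo_val, up_val):
    """Add a fresh basic variable  name = sum_j coeffs[j]*x_j  with the given bounds,
    substituting current basic variables by their tableau rows."""
    row = {}
    for x, c in coeffs.items():
        if x in cfg.basic:
            for xk, ck in cfg.tableau[x].items():
                row[xk] = row.get(xk, 0.0) + c * ck
        else:
            row[x] = row.get(x, 0.0) + c
    cfg.basic.add(name)
    cfg.tableau[name] = row
    cfg.lower[name], cfg.upper[name] = lo_val, up_val
    cfg.assignment[name] = sum(c * cfg.assignment[x] for x, c in row.items())

def reluplex_check(cfg, relu_pairs, fix_limit=5):
    """relu_pairs: list of (xb, xf) pairs meant to satisfy xf = ReLU(xb)."""
    fixes = {p: 0 for p in relu_pairs}
    while True:
        if check(cfg) == "UNSAT":
            return "UNSAT"                    # the linear part alone is infeasible
        broken = [p for p in relu_pairs if not relu_holds(cfg, *p)]
        if not broken:
            return "SAT"                      # bounds and every ReLU pair hold
        xb, xf = broken[0]
        if fixes[(xb, xf)] < fix_limit:
            fixes[(xb, xf)] += 1
            # Update_f / Update_b style repair: move one (non-basic) side of the pair.
            if xf not in cfg.basic:
                update(cfg, xf, max(0.0, cfg.assignment[xb]) - cfg.assignment[xf])
            elif xb not in cfg.basic:
                update(cfg, xb, cfg.assignment[xf] - cfg.assignment[xb])
            else:
                fixes[(xb, xf)] = fix_limit   # both basic: force a split next round
            continue
        # ReluSplit-style case split on this pair: active phase, then inactive phase.
        active, inactive = copy.deepcopy(cfg), copy.deepcopy(cfg)
        active.lower[xb] = max(lo(active, xb), 0.0)
        add_equation(active, f"relu_{xb}_{xf}", {xf: 1.0, xb: -1.0}, 0.0, 0.0)  # xf = xb
        if reluplex_check(active, relu_pairs, fix_limit) == "SAT":
            return "SAT"
        inactive.upper[xb] = min(up(inactive, xb), 0.0)
        inactive.lower[xf], inactive.upper[xf] = 0.0, 0.0                        # xf = 0
        return reluplex_check(inactive, relu_pairs, fix_limit)
```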

Optimization

Application

ACAS Xu Verification: ACAS Xu is the unmanned-aircraft variant of the Airborne Collision Avoidance System X. The previous certification methodology exhaustively tested the system on 1.5 million simulated encounters, but testing finitely many encounters cannot prove that faulty behaviors do not exist over the DNNs' continuous input space.

Adversarial attack: Let $\delta$ bound the perturbation. The property to verify is $\|x - x'\| \le \delta \implies \forall i,\ y_i' \ge y_i$, where the primed symbols denote the input and output after the perturbation. Intuitively, this asserts that for any perturbation within the threshold $\delta$, the output is not affected negatively. If the statement is valid, perturbations of that size cannot change the result, meaning the adversarial attack fails. With this in mind, binary search can be used to find the maximal robust threshold $\delta$ efficiently.
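
A small sketch of that binary search, assuming a hypothetical oracle `is_robust(delta)` that invokes the verifier (e.g., Reluplex) on the robustness query and returns whether the property is proven; the function and parameter names are illustrative:

```python
def max_robust_delta(is_robust, lo_delta=0.0, hi_delta=1.0, tol=1e-4):
    """Binary-search the largest perturbation bound delta for which the
    robustness property still holds, up to tolerance tol.

    is_robust(delta) -> bool: True iff the verifier proves
    ||x - x'|| <= delta  implies the output is not affected negatively.
    """
    assert is_robust(lo_delta), "network should be robust at the smallest delta"
    if is_robust(hi_delta):
        return hi_delta                  # robust on the whole search interval
    while hi_delta - lo_delta > tol:
        mid = (lo_delta + hi_delta) / 2.0
        if is_robust(mid):
            lo_delta = mid               # property holds: try a larger delta
        else:
            hi_delta = mid               # counterexample exists: shrink delta
    return lo_delta
```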

Example

Simplex for LRA:

(photos of a hand-worked example)

Reluplex:

(photos of a hand-worked example)