Deep learning with neural networks is increasingly popular, but neural networks come with few built-in guarantees of correctness. This talk will discuss how I use JuMP to compute verified upper bounds on the error of a neural network trained as an inverse model. Combining JuMP with Distributed.jl lets me run a large number of verification queries in parallel with minimal time spent on non-research development.
This talk is based on the paper at https://arxiv.org/abs/2202.02429.
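To give a flavor of the workflow, here is a minimal sketch of running verification queries in parallel with JuMP and Distributed.jl. This is not the code from the talk or the paper: the tiny one-layer ReLU network, its weights, the input box, the big-M constant, the `verify_output` helper, and the choice of the HiGHS solver are all illustrative assumptions.

```julia
# Sketch: encode a tiny ReLU layer as a mixed-integer program in JuMP and bound
# how far each output can deviate, fanning the per-output queries out over
# worker processes with Distributed.jl. All numbers below are placeholders.
using Distributed
addprocs(4)  # spawn local worker processes; adjust to the available cores

@everywhere begin
    using JuMP, HiGHS

    # One verification query: maximize how far output j of y = relu(W*x + b)
    # can rise above its target value, over inputs x in [lo, hi]^n.
    # (The symmetric query bounds the deviation in the other direction.)
    function verify_output(j; W = [1.0 -0.5; 0.3 0.8], b = [0.1, -0.2],
                           target = [0.0, 0.0], lo = -1.0, hi = 1.0, M = 10.0)
        n_out, n_in = size(W)
        model = Model(HiGHS.Optimizer)
        set_silent(model)

        @variable(model, lo <= x[1:n_in] <= hi)   # network input
        @variable(model, z[1:n_out] >= 0)         # post-ReLU activations
        @variable(model, a[1:n_out], Bin)         # ReLU on/off indicators

        # Standard big-M encoding of z = max(0, W*x + b), valid when |W*x + b| <= M.
        @constraint(model, [i = 1:n_out], z[i] >= W[i, :]' * x + b[i])
        @constraint(model, [i = 1:n_out], z[i] <= W[i, :]' * x + b[i] + M * (1 - a[i]))
        @constraint(model, [i = 1:n_out], z[i] <= M * a[i])

        @objective(model, Max, z[j] - target[j])
        optimize!(model)
        return objective_value(model)
    end
end

# Run one query per output coordinate on the worker pool.
bounds = pmap(verify_output, 1:2)
println("Verified upper bounds per output: ", bounds)
```

Because each query is an independent optimization problem, `pmap` distributes them across workers with essentially no extra orchestration code, which is the point made in the abstract about minimizing non-research development time.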