Static analysis has long provided a foundation for scalable code analysis and verification by extracting and reasoning about abstract representations of programs. In this talk, we explore how its principles, such as those embodied in our SVF framework, extend naturally to new domains, including the verification of neural networks. We highlight a shared foundation between program analysis and neural network verification, grounded in abstraction, approximation, and sound reasoning over large input spaces. In particular, we present two recent works: an order-leading approach to neural network robustness verification and a counterexample-guided incremental verification method. Our initial efforts suggest that static reasoning techniques can offer useful insights into emerging challenges in the quality and reliability of AI systems.
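To make the shared foundation concrete, the sketch below illustrates one generic abstraction technique used in neural network robustness verification: interval bound propagation. This is an illustrative example only, not the talk's specific methods; the network weights and perturbation radius are hypothetical.

```python
# Interval bound propagation (IBP): soundly over-approximate the outputs
# of a small ReLU network over an entire box of inputs, analogous to how
# a static analyzer abstracts a program's behavior over many inputs.
# All weights and the perturbation radius below are illustrative.
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through an affine layer Wx + b soundly."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b  # worst-case lower bound
    new_hi = W_pos @ hi + W_neg @ lo + b  # worst-case upper bound
    return new_lo, new_hi

def ibp_relu(lo, hi):
    """ReLU is monotone, so applying it to the bounds is sound."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# A hypothetical 2-layer network.
W1 = np.array([[1.0, -1.0], [0.5, 0.5]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]]);              b2 = np.zeros(1)

# An L-infinity perturbation ball around an input point.
x = np.array([0.5, 0.5]); eps = 0.1
lo, hi = x - eps, x + eps

lo, hi = ibp_relu(*ibp_affine(lo, hi, W1, b1))
lo, hi = ibp_affine(lo, hi, W2, b2)
print(lo, hi)  # sound over-approximation of all reachable outputs
```

If the resulting output interval lies entirely on one side of a decision boundary, robustness over the whole input box is certified; otherwise the abstraction is refined, much as a static analyzer refines its abstract domain.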