On complexity of model based derivative free optimization
- Series
- School of Mathematics Colloquium
- Time
- Thursday, April 30, 2026 - 11:00 for 1 hour (actually 50 minutes)
- Location
- Skiles 005
- Speaker
- Katya Scheinberg – Georgia Tech ISYE
In many applications of mathematical optimization, one may wish to optimize an objective function without access to its derivatives. These situations call for derivative-free optimization (DFO) methods. Among the most successful approaches in practice are model-based trust-region methods, such as those pioneered by M. J. D. Powell. These methods rely on function approximations via low-degree polynomials and carefully adapt the local geometry of interpolation points to balance exploration and exploitation. While relatively complex to implement, these methods are now available in standard scientific computing platforms, including MATLAB and SciPy. However, theoretical analysis of their computational complexity lags behind practice. In particular, it is important to bound the number of function evaluations required to achieve a desired level of accuracy. Using concepts from Lagrange interpolation and linear algebra, we systematically derive complexity bounds for classical model-based trust-region methods and their modern variations. We establish, for the first time, that these methods can have the same worst-case complexity as any other known DFO method.
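To make the idea concrete, here is a minimal sketch of a model-based trust-region step of the kind the abstract describes: a polynomial (here, linear) model is fitted to function values alone by interpolation, minimized within a trust region, and the trust-region radius is updated based on how well the model predicted the actual decrease. This is an illustrative toy, not Powell's algorithm; it uses a fresh coordinate-aligned sample set each iteration (so the interpolation reduces to forward differences), whereas practical methods reuse scattered evaluation points and actively manage their geometry.

```python
import numpy as np

def interp_linear_model(Y, f):
    """Fit m(y) = c + g.(y - Y[0]) interpolating f at the n+1 points in Y.
    Y: (n+1, n) array of sample points; f: (n+1,) function values."""
    n = Y.shape[1]
    A = np.hstack([np.ones((n + 1, 1)), Y - Y[0]])
    coef = np.linalg.solve(A, f)   # solvable when the points are well poised
    return coef[0], coef[1:]       # model constant c, model gradient g

def dfo_tr(fun, x0, delta=1.0, tol=1e-8, max_iter=500):
    """Toy derivative-free trust-region loop with linear interpolation models.
    A sketch under simplifying assumptions: real model-based DFO methods
    maintain and repair the geometry of a reused interpolation set."""
    x, d = np.asarray(x0, float), delta
    n = len(x)
    for _ in range(max_iter):
        # Sample set: current point plus coordinate offsets of length d.
        Y = np.vstack([x, x + d * np.eye(n)])
        f = np.array([fun(y) for y in Y])
        c, g = interp_linear_model(Y, f)
        if np.linalg.norm(g) < tol:
            break
        s = -d * g / np.linalg.norm(g)      # minimize linear model on ||s|| <= d
        fnew = fun(x + s)
        pred = -g @ s                        # predicted decrease (always > 0 here)
        rho = (f[0] - fnew) / pred           # actual vs. predicted decrease
        if rho > 0.1:                        # good agreement: accept, expand
            x, d = x + s, min(2 * d, 10.0)
        else:                                # poor agreement: shrink radius
            d *= 0.5
    return x, fun(x)
```

For example, `dfo_tr(lambda z: (z[0] - 1.0)**2 + (z[1] + 2.0)**2, [0.0, 0.0])` drives the iterates toward the minimizer `(1, -2)` using only function evaluations, each iteration spending `n + 1` of them; bounding the total number of such evaluations to reach a given accuracy is exactly the complexity question the talk addresses.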