Function approximation with one-bit Bernstein polynomials and one-bit neural networks

Series
Applied and Computational Mathematics Seminar
Time
Monday, March 25, 2024 - 2:00pm for 1 hour (actually 50 minutes)
Location
Skiles 005 and https://gatech.zoom.us/j/98355006347
Speaker
Weilin Li – City College of New York – wli6@ccny.cuny.edu – https://weilinli.ccny.cuny.edu/
Organizer
Wenjing Liao
The celebrated universal approximation theorems for neural networks typically state that every sufficiently nice function can be arbitrarily well approximated by a neural network with carefully chosen real parameters. With the emergence of large neural networks and a desire to deploy them on low-power devices, there has been increased interest in neural network quantization, i.e., replacing a network's real parameters with ones drawn from a much smaller finite set. In this talk, we ask whether it is even possible to quantize neural networks without sacrificing their approximation power, especially in the extreme one-bit {+1,-1} case. We present several naive quantization strategies that yield universal approximation theorems for quantized neural networks, and discuss their advantages and disadvantages. From there, we offer an alternative approach based on Bernstein polynomials and show that {+1,-1} linear combinations of multivariate Bernstein polynomials can efficiently approximate smooth functions. This strategy can be implemented by means of a one-bit neural network and computed from point samples/queries. Joint work with Sinan Gunturk.
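
To make the one-bit idea concrete, below is a minimal univariate sketch, assuming a simple first-order noise-shaping (Sigma-Delta style) rule that turns the real Bernstein coefficients f(k/n) into signs; this is an illustrative strategy chosen for this announcement, not necessarily the construction presented in the talk, and the function names are hypothetical.

```python
import numpy as np
from math import comb

def bernstein_basis(n, x):
    """Evaluate all n+1 Bernstein basis polynomials of degree n at the points x."""
    x = np.asarray(x, dtype=float)
    k = np.arange(n + 1)
    binom = np.array([comb(n, j) for j in k], dtype=float)
    return binom * np.power.outer(x, k) * np.power.outer(1.0 - x, n - k)

def one_bit_bernstein(f, n, x):
    """Approximate f on [0,1] (assumed bounded by 1) by a {+1,-1} combination
    of Bernstein polynomials. The samples f(k/n) are replaced by signs using a
    first-order Sigma-Delta (noise-shaping) rule -- an illustrative choice,
    not the specific method from the talk."""
    c = np.array([f(k / n) for k in range(n + 1)])  # real Bernstein coefficients
    q = np.zeros(n + 1)                             # one-bit coefficients
    u = 0.0                                         # running quantization error
    for k in range(n + 1):
        q[k] = 1.0 if u + c[k] >= 0 else -1.0
        u += c[k] - q[k]                            # stays bounded by 1 when |f| <= 1
    return bernstein_basis(n, x) @ q                # one-bit approximant at x

# Example: a smooth function with values in [-1, 1]
x = np.linspace(0.0, 1.0, 200)
f = lambda t: 0.8 * np.sin(2 * np.pi * t)
approx = one_bit_bernstein(f, 400, x)
print(np.max(np.abs(approx - f(x))))                # sup-norm error on the grid
```

The reason such a sketch can work at all is that the Bernstein basis functions are nonnegative and sum to one, so the bounded, sign-alternating quantization error is averaged against a slowly varying weight and largely cancels; the talk's construction shows how to do this efficiently in the multivariate setting and realize it as a one-bit neural network.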