Sebastien Bubeck is a researcher at OpenAI. He recently shared an experiment where he posed an open problem from a convex optimization paper to GPT-5-Pro, asking it to improve a known bound on step sizes in gradient descent.
The question was for which step sizes the sequence of function values produced by gradient descent on a smooth convex function is itself convex. The original paper proved this holds for step sizes below 1/L (with L the smoothness constant) and fails above 1.75/L, leaving a gap in between. GPT-5-Pro produced a proof tightening the lower threshold to 1.5/L in just 17 minutes, which Bubeck verified as correct and novel.
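To make the question concrete, here is a small numerical sketch (not Bubeck's or the paper's setup): run gradient descent on an example convex, L-smooth function and check that the sequence of function values is convex, i.e. that its second differences are nonnegative. The function f(x) = log(1 + e^x), the smoothness constant L = 1/4, and the starting point are illustrative choices; the step size 1/L sits inside the regime the original paper already proved convex.

```python
import numpy as np

def f(x):
    """Convex, L-smooth example: f(x) = log(1 + e^x), with L = 1/4."""
    return np.log1p(np.exp(x))

def grad(x):
    """f'(x) is the sigmoid function."""
    return 1.0 / (1.0 + np.exp(-x))

L = 0.25
eta = 1.0 / L  # step size within the regime proved convex in the original paper

x = 3.0
vals = []
for _ in range(30):
    vals.append(f(x))
    x -= eta * grad(x)

# Convexity of the value curve = nonnegative second differences.
second_diffs = np.diff(vals, n=2)
print(second_diffs.min())
```

The open problem is exactly where this check stops being guaranteed: below 1/L it always passes, above 1.75/L counterexamples exist, and the contested region was the interval in between.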
Human researchers later closed the gap entirely, establishing the exact threshold of 1.75/L, but the model's intermediate result was an independent contribution rather than a recall of published work. This raises questions about AI's role in generating original mathematics.
His takeaway is clear: we're entering an era where machines can advance research frontiers, not just apply existing knowledge. The goal should be AI that collaborates on open problems and enhances human discovery, without mythologizing the machine as a lone creator.
Aug 22 at 2:54 PM