This was pretty funny to me, but it makes sense. A lot of companies use LLMs as their security or ethical-usage detectors. The problem is that each one you add, even a small model, adds cost and latency.
A good regex effectively adds neither. Classic error handling still has its place!
Anthropic has one of the most advanced LLMs, yet they still use regex to detect/block offensive words (per leaked source code of Claude Code). Not every problem needs to be solved with weapons-grade AI.
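To illustrate the point, here's a minimal sketch of a regex pre-filter. The blocklist terms and function name are placeholders I made up, not anything from Claude Code's actual source; the point is that a compiled regex check runs in microseconds with no extra API call.

```python
import re

# Placeholder blocklist -- the real product's terms are unknown;
# these are illustrative stand-ins only.
BLOCKED = re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE)

def is_offensive(text: str) -> bool:
    """Cheap pre-filter: a single compiled-regex scan, so it adds
    effectively zero cost and latency compared to an LLM call."""
    return bool(BLOCKED.search(text))

is_offensive("This contains BadWord1.")   # matches despite case
is_offensive("A perfectly fine sentence.")  # no match
```

You'd typically run a filter like this first and only escalate ambiguous cases to a model, if at all.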
Apr 7 at 1:28 PM