LLM-based AIs are inherently arbitrary computing machines.
They approximate rational thought but struggle to arrive at a “right” answer. As they get more helpful and conversant, we’re seeing their accuracy drop. As they get more enjoyable to use, they become worse at cold calculation.
Could it be we’re pitting AI against the wrong kinds of problems? I’m starting to think these AIs can be more useful assisting than solving.
Don’t write my code; remind me of that mistake I always make and that patt…
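To make that concrete, here’s a minimal sketch of “assist, don’t solve”: rather than asking a model to generate code, ask it to flag the recurring mistakes you already know you make in a diff. It assumes the `openai` Python SDK (v1+) with an `OPENAI_API_KEY` in the environment; the model name and the `KNOWN_MISTAKES` list are placeholders, not a real recommendation.

```python
"""Ask a model to remind, not to write: flag my known mistakes in a diff."""
import subprocess
from openai import OpenAI

# Hypothetical examples of habitual mistakes -- swap in your own.
KNOWN_MISTAKES = [
    "forgetting to close file handles outside a `with` block",
    "mutating a list while iterating over it",
    "off-by-one errors in range() bounds",
]

def remind_me(diff: str) -> str:
    client = OpenAI()
    prompt = (
        "Do NOT rewrite or generate code. Only point out places in this "
        "diff that look like one of my known mistakes, with a one-line "
        "reminder for each. My known mistakes:\n- "
        + "\n- ".join(KNOWN_MISTAKES)
        + "\n\nDiff:\n" + diff
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    # Review only what I'm about to commit.
    staged = subprocess.run(
        ["git", "diff", "--staged"], capture_output=True, text=True
    ).stdout
    if staged:
        print(remind_me(staged))
```

The point of the design is the constraint, not the plumbing: the model is told it may only remind, so its arbitrariness costs you a noisy hint at worst, never a wrong answer shipped as code.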
This has been a long time coming. Just this week a lot of pieces lined up and I couldn't hold this back anymore. The gaps are glaring once you start looking for them, and I'm still trying to understand just what they mean...