The way I see it, Apple is late to the AI arms race and is trying to rebrand itself with mostly marketing and little technical differentiation.
It’s stating AI can’t reason like a human? That’s an obvious statement; no reasonable person or developer is claiming it can. But LLMs, and I’m assuming LRMs (which probably aren’t much different), are capable of synthesized reasoning: chaining logical steps and making inferences.
But the architecture is the same. LRMs can do tree of thought, audit trails, and self-correction? So can LLMs, if you prompt them.
I’ve already asked LLMs to explain their reasoning, show how an answer was reached, lay out the logic behind a recommendation, and check an answer and revise it if needed.
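That prompting pattern is easy to sketch. Below is a minimal illustration of the "answer, then check and revise" loop described above; `call_model` is a hypothetical stand-in for whatever LLM API you use, not a real library call.

```python
# Sketch of the prompted self-correction loop described above.
# `call_model` is a placeholder: swap in a real chat-completion call.

def call_model(prompt: str) -> str:
    # Placeholder response so the sketch runs on its own; a real
    # implementation would send `prompt` to an LLM endpoint here.
    return f"[model response to: {prompt[:40]}...]"

def answer_with_self_check(question: str) -> str:
    # Step 1: ask for an answer plus the chain of reasoning behind it.
    draft = call_model(
        f"Question: {question}\n"
        "Answer, then explain step by step how you reached the answer."
    )
    # Step 2: ask the model to audit its own reasoning and revise if needed.
    revised = call_model(
        f"Here is a previous answer with its reasoning:\n{draft}\n"
        "Check each step for errors. If any step is wrong, give a "
        "corrected answer; otherwise restate the original answer."
    )
    return revised

print(answer_with_self_check("Is 1009 prime?"))
```

Nothing in that loop requires a special "reasoning model"; it's two ordinary completions chained together.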
So it seems like an LRM is just a prompted LLM with a different marketing label, meant to distance it from the likes of ChatGPT and Altman.
Jun 9 at 8:21 PM