Thinking models aren't magic. They're mostly just the model prompting itself.
Extended reasoning is a chain of internal prompts.
The "thinking" is prompt logic, just hidden from you.
Which is exactly why I've always preferred chain-of-prompt workflows.
You write the steps, control the sequence, and can inspect each stage when something goes wrong.
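A chain-of-prompt workflow can be sketched in a few lines. This is a minimal illustration, not any particular library's API: `call_model` is a hypothetical stand-in for whatever LLM call you use, stubbed out here so the structure is visible.

```python
def call_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return f"<response to: {prompt[:40]}>"

def run_chain(task: str, steps: list[str]) -> list[dict]:
    """Run each step as its own prompt, feeding output forward."""
    trace = []
    context = task
    for step in steps:
        prompt = f"{step}\n\nInput:\n{context}"
        output = call_model(prompt)
        # Keep every stage so you can inspect exactly where it went wrong.
        trace.append({"step": step, "output": output})
        context = output
    return trace

trace = run_chain(
    "Plan next week's classes",
    ["Outline the week's topics.",
     "Expand each topic into activities.",
     "Format the result as a weekly plan."],
)
for stage in trace:
    print(stage["step"], "->", stage["output"][:60])
```

The point is the `trace`: unlike a thinking model's hidden chain, every intermediate prompt and output is yours to read.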
But I'll admit — custom skills in Claude shift things a bit.
Take creating weekly class plans. It's a perfect AI task, but the formatting is specific enough that it needs a well-structured prompt every time.
Or it used to.
Now I have a "Weekly Plan" skill. Whenever I ask Claude to build one, it pulls that structured prompt automatically, no matter what project or conversation I'm in.
Standardized output. No copying prompts. No re-explaining the format.
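Conceptually, a skill is just a reusable instruction file with a trigger description. As a rough sketch (the exact layout here is my assumption, not copied from Anthropic's docs), a "Weekly Plan" skill might look like:

```markdown
---
name: weekly-plan
description: Build a weekly class plan in my standard format.
---

When asked for a weekly plan, produce a table with one row per day,
columns for Topic, Activities, and Materials, and a one-line summary.
```

Claude reads the description to decide when the skill applies, then follows the body, which is the same structured prompt I used to paste in by hand.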
The reasoning is still a black box, but at least the instructions shaping it are mine. That's a meaningful difference.