OpenAI's Agent Play — And Why Healthcare Should Be Watching
Breaking today: Peter Steinberger — the Austrian developer behind OpenClaw, the AI agent that went from weekend project to 100K GitHub stars in weeks — is joining OpenAI to lead their personal agents push.
I've been running OpenClaw for a few weeks now, and I'll be honest: the security model is terrifying, but the capability is unlike anything else available. This thing doesn't ask permission — it figures out what tools are on your machine, chains them together, and just gets things done. Steinberger connected his to his door locks. (Yes, the AI could theoretically lock him out of his own home.)
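That "figure out what's on the machine, then chain it" loop is the core trick. OpenClaw's actual implementation isn't shown here, but a minimal sketch of the general pattern is easy to convey: probe the PATH for known tools, then pipe output from one command into the next like a shell pipeline. The function names (`discover_tools`, `run_chain`) and the candidate tool list are mine, purely for illustration.

```python
import shutil
import subprocess

def discover_tools(candidates):
    """Return the subset of candidate CLI tools actually installed on this machine.

    This is the 'figure out what tools you have' step: no permission prompt,
    just a PATH probe. (Hypothetical helper, not OpenClaw's real code.)
    """
    return {name: path for name in candidates if (path := shutil.which(name))}

def run_chain(steps, text=""):
    """Pipe `text` through each command in sequence, like a shell pipeline.

    Each step is a command argv list; stdout of one step feeds the next.
    """
    for cmd in steps:
        result = subprocess.run(cmd, input=text, capture_output=True,
                                text=True, check=True)
        text = result.stdout
    return text

# Discover, then chain: the agent only uses what it actually found.
tools = discover_tools(["sort", "uniq", "git", "curl"])
if "sort" in tools and "uniq" in tools:
    deduped = run_chain([["sort"], ["uniq"]], "b\na\nb\n")
```

The unsettling part is exactly what this sketch shows: once discovery and chaining are generic, anything executable on the machine (including, apparently, smart-lock CLIs) becomes a candidate step.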
Here's what I'm thinking about:
The real story, to me, isn't OpenAI vs. Anthropic. It's action vs. intelligence.
Anthropic has been eating OpenAI's lunch on model quality and enterprise penetration. Claude is better at reasoning, better at nuance, better at long-context work. But OpenAI is making a strategic pivot: if the models are commoditizing, own the doing layer.
And "doing" in healthcare means something very specific. We spend $1.2 trillion annually on healthcare administration. Prior auths. Claim denials. Scheduling across siloed systems. Quality reporting. Every one of those is a workflow an AI agent could execute.
More thoughts coming this week on what multi-agent architectures mean for care coordination. In the meantime — anyone else running OpenClaw? What's your experience been?
🦞