Took a screenshot and shared it with Five. Five replied:
This post captures a genuine concern but wraps it in exaggeration and misunderstanding.
What’s true:
GPUs used for AI training do degrade over time. Under sustained near-100% load, data-center GPUs are typically replaced after 3–5 years, driven by thermal stress, component fatigue, and the widening efficiency gap against newer chips.
Large AI companies do plan for rolling hardware refreshes. GPU obsolescence is fast because each generation (e.g., Nvidia A100 → H100 → Blackwell B200) roughly doubles or triples efficiency and throughput.
What’s wrong or overstated:
The “trillion-dollar hardware refresh every 3 years” figure is fantasy. Even the most ambitious estimates for OpenAI’s reported Stargate supercomputer (a project Microsoft may fund) sit around $100 billion, not $1 trillion. That is still astronomical, but it would be Microsoft’s capital expense, not OpenAI’s direct outlay.
OpenAI’s business model isn’t hardware-bound. The company leases compute from Microsoft’s Azure infrastructure and amortizes costs across subscription revenue, enterprise licensing, and API usage.
The idea that this means OpenAI is a “scam” confuses financial risk with fraud. The economics are indeed tight, but they hinge on sustained revenue growth and future efficiency gains, not deception.
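The amortization logic behind the points above can be sketched as a toy calculation. All figures here are hypothetical illustrations of the mechanics (straight-line amortization over a refresh cycle), not OpenAI's or Microsoft's actual numbers:

```python
# Toy model of GPU fleet economics (all numbers hypothetical).

def annualized_capex(fleet_cost_usd: float, refresh_years: float) -> float:
    """Straight-line amortization: spread the capital cost of the fleet
    evenly over its refresh cycle."""
    return fleet_cost_usd / refresh_years

def breakeven_annual_revenue(fleet_cost_usd: float, refresh_years: float,
                             opex_ratio: float) -> float:
    """Revenue needed per year to cover amortized hardware plus operating
    costs. `opex_ratio` models power, cooling, and staffing as a fraction
    of annualized capex (a made-up simplification)."""
    capex = annualized_capex(fleet_cost_usd, refresh_years)
    return capex * (1 + opex_ratio)

# Example: a $100B build-out refreshed every 4 years, opex at 50% of capex.
capex = annualized_capex(100e9, 4)                         # $25B/year
revenue_needed = breakeven_annual_revenue(100e9, 4, 0.5)   # $37.5B/year
print(f"Annualized capex: ${capex / 1e9:.1f}B")
print(f"Break-even revenue: ${revenue_needed / 1e9:.1f}B")
```

Even this crude sketch shows why the debate turns on revenue growth and efficiency gains rather than fraud: shortening the refresh cycle or raising the fleet cost pushes the break-even revenue up linearly.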
What’s actually happening:
We’re watching a capital-intensive transition similar to the early Internet backbone buildout or the data-center surge of the mid-2000s. The margins are slim now, but scaling bets rely on the assumption that general-purpose AI services will soon generate productivity gains across industries. Whether that’s wise or delusional is open to debate, but it isn’t a con.
In short: the post voices a valid anxiety about unsustainable capex and GPU churn, but its conclusion, that OpenAI must be a scam, is emotion, not analysis. The real story is that this industry is gambling the next decade’s energy and capital on compute-driven intelligence scaling.