
Meta employees have an internal leaderboard called Claudeonomics where engineers compete on token usage. The top tier: Token Legend (cringe).

Over 30 days, total usage topped 60 trillion tokens. All books ever published amount to roughly 20 trillion words. In a single month, the models generated roughly three times as much text as humanity has ever published in books.

Wow.

Not just Meta. Jensen Huang says if a $500k engineer spends less than $250k/year on tokens, he'd be “deeply alarmed.”

Token budgets are being pitched as a fourth component of compensation alongside salary, equity, and bonuses. Candidates ask in interviews: how many tokens come with the job?

The token is the new unit of cognitive labor.

The obvious take is Goodhart's law: make token usage the metric and engineers will game it. But there's a deeper problem: why are we stuck with an architecture that forces AI models to think in tokens at all?

Einstein said words played no role in his mechanism of thought. Fedorenko's lab at MIT confirmed with neuroimaging that the brain's language network doesn't activate during reasoning. Language and thought use different hardware.

Why are we forcing AI models to think in tokens?

AI models have the infrastructure to process information pre-linguistically, in latent space. LeCun's group at Meta published Coconut (reasoning in continuous space), the Large Concept Model (next-concept instead of next-token prediction), and JEPA (predicting meaning rather than symbols).
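The core trick behind Coconut-style continuous reasoning can be sketched in a few lines: instead of decoding the hidden state into a discrete token and re-embedding it at every step, the model feeds the full hidden state straight back in as the next input. This toy numpy sketch uses made-up weights and a trivial update rule; none of it comes from the actual papers, it only illustrates where information gets thrown away in the token loop:

```python
import numpy as np

rng = np.random.default_rng(0)
D, V = 8, 5                            # hidden size, toy vocab size
W_h = rng.normal(size=(D, D)) * 0.1    # stand-in "reasoning" weights
W_out = rng.normal(size=(D, V))        # hidden state -> token logits
E = rng.normal(size=(V, D))            # token embeddings

def step(h):
    """One latent reasoning step (a stand-in for a transformer pass)."""
    return np.tanh(h @ W_h)

def token_step(h):
    """Chain-of-thought loop: collapse h to one token, then re-embed it.
    The argmax discards everything in h except a single discrete choice."""
    tok = int(np.argmax(h @ W_out))
    return step(E[tok]), tok

def latent_step(h):
    """Coconut-style loop: skip decoding, feed the whole vector back in.
    No information is lost to the token bottleneck."""
    return step(h)

h = rng.normal(size=D)
h_tok, _ = token_step(h)
h_lat = latent_step(h)
# The two paths diverge because token_step quantized the state first.
print(np.allclose(h_tok, h_lat))
```

The point of the contrast: a vocabulary of V tokens can carry at most log2(V) bits per step, while the continuous state carries the full D-dimensional vector forward.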

He left Meta recently.

Essentially, no major AI lab is seriously working on this, having surrendered to the god of scaling. The entire AI industry stands on top of a misinterpretation of how humans think.

Just imagine what the world would look like if humans needed to put everything into words or tokens, one after another, before being able to complete a single thought.

Bonkers.

Inside the AI Industry's Most Expensive Mistake
Apr 9 at 12:39 PM