I am releasing Cerebranalysis, the second installment of my two-part series on Cerebras, the day before the IPO!
The financials don't matter. The multiple doesn't matter. When you are addressing the largest TAM in the world, the only thing that matters is your market share. The point of this article, therefore, is simply to assess how likely it is that the fast-inference market ends up being incredibly large.
To do this, I test GPT-5.3-Codex-Spark, the only model custom-trained for Cerebras hardware, on real-world tasks that I perform every day. I evaluate GPT-5.3-Codex-Spark on two dimensions: its real versus claimed speedup, and its performance relative to the frontier.
I then synthesize the results of these tests into a conclusion about Cerebras hardware.
Then I outline three potential bull cases:
The undiscovered TAM: nobody yet knows what fast inference could be used for, so we are underpricing it today. I describe three potential market fits:
Fast fundamental investing
Embodied and human-like AI
Real-time human augmentation
The low-hanging fruit: architectural improvements that Cerebras can easily make, each of which would be a step-function change in their unit economics. These include FP8/FP4 support and hybrid bonding.
Why Cerebras is the only real choice for the non-NVIDIA ecosystem.
Most importantly, I bring together both yesterday's and today's research into a final conclusion, and I share what I personally am doing at the IPO.