Spending countless hours (and tokens) over the last week playing with both OpenClaw and Claude Code projects has really clarified one of the big challenges in AI:
How to get enough work done within an acceptable context window before costs explode and results degrade. I feel like I'm dealing with Guy Pearce's character in "Memento," constantly leaving breadcrumbs for my AI agents to overcome a debilitating loss of daily memory and context.
Playing with OpenClaw truly quantified this problem for me because I see the explicit token use and cost, whereas this phenomenon is hidden from most LLM users, surfacing only as hallucinations and degraded results.
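To see why agent costs balloon, note that each API turn typically re-sends the entire conversation history as input tokens, so input spend grows roughly quadratically with turn count. Here's a minimal back-of-envelope sketch; the per-token price is a hypothetical placeholder, not any provider's real rate:

```python
# Hypothetical rate, purely illustrative -- not a real provider's pricing.
PRICE_PER_1M_INPUT_TOKENS = 3.00  # USD

def conversation_cost(turn_tokens, system_tokens=1_000):
    """Estimate input-token cost when every turn re-sends the full history.

    turn_tokens: list of token counts added per turn (prompt + reply).
    """
    context = system_tokens
    total_input_tokens = 0
    for tokens in turn_tokens:
        context += tokens              # history grows each turn
        total_input_tokens += context  # entire history billed again
    return total_input_tokens * PRICE_PER_1M_INPUT_TOKENS / 1_000_000

# Ten turns of ~2,000 tokens each: 120k input tokens billed in total,
# even though only ~21k distinct tokens were ever written.
print(round(conversation_cost([2_000] * 10), 2))
```

Ten modest turns already cost as much as sixty times a single one-shot prompt of the same size, which is exactly the "hidden" spend that explicit token dashboards make visible.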
Opinion:
A machine can’t truly have a “soul” until it has a human-like context window, and that’s going to take some doing.
Feb 14 at 3:21 PM