
The context rot problem is the one that doesn't get enough attention in the '1M tokens!' coverage.

More context available doesn't mean more context used well. The model's attention degrades across a very long context in ways that are hard to predict.

Content near the middle of a huge context gets underweighted compared to the start and end (the "lost in the middle" effect). The practical implication: don't dump your entire codebase in and expect uniform quality. Chunking strategically still matters even when the window is technically large enough to hold everything.
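One common mitigation is to order retrieved chunks so the most relevant ones sit at the edges of the prompt, where attention is strongest, and the least relevant land in the middle. A minimal sketch, assuming you already have (chunk, relevance_score) pairs from your own retrieval step; the function name and data here are illustrative, not from any particular library:

```python
def order_for_long_context(scored_chunks):
    """Place chunks so relevance is highest at the edges, lowest in the middle.

    scored_chunks: list of (text, score) tuples. Returns a list of texts.
    """
    ranked = sorted(scored_chunks, key=lambda c: c[1], reverse=True)
    front, back = [], []
    # Alternate: best chunk at the front, next best at the back, and so on,
    # so the least relevant material ends up mid-context.
    for i, (text, _) in enumerate(ranked):
        (front if i % 2 == 0 else back).append(text)
    return front + back[::-1]

# Hypothetical relevance scores for a codebase Q&A prompt.
chunks = [("auth module", 0.9), ("readme", 0.2), ("db layer", 0.7),
          ("tests", 0.4), ("router", 0.8)]
print(order_for_long_context(chunks))
# → ['auth module', 'db layer', 'readme', 'tests', 'router']
```

The two highest-scoring chunks end up first and last; the "readme" (lowest score) lands mid-context, where degraded attention costs the least.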

A Claude vs. Gemini comparison at 1M tokens would be interesting to see, since their architectures handle long context differently. More on that here: thoughts.jock.pl/p/ai-o…

Claude Just Unlocked 1 Million Tokens For Everyone. Here Is What That Means.
Mar 30 at 8:58 AM