As currently productized, LLM-centric AI is really “social media++”

Social media is to chat- or copilot-based AI as pre-calculus is to calculus

  • Parasocial relationship to an aggregated hivemind

  • Emergent madness/wisdom of the crowds

  • “Moderation” and “filtering” of outlier minds under the fancy term “alignment”

  • “Many eyes make bugs shallow” effects

  • Cunningham’s law effects (post wrong answer to provoke right one)

  • Simping behaviors

  • Reply Guy behaviors

  • Random deep nerds showing up around special topics

  • Someone knows any language you can think of

This is a function of memory context boundaries being largely drawn in “human” ways. And, I suspect, of pre-training protocols implicitly capturing social graph structures and contingent historical knowledge contours, because language as a reality-mapping scheme is deeply saturated with both.

I.e., LLMs are far more human than they need to be, because of path dependency.

I don’t think AIs can be more “super” or “general” than collections of humans socially connected in plausible ways (e.g., there’s nothing inconceivable about my 5 closest friends being Mongolians, though I don’t know any Mongolians in this life), but they can certainly get a lot weirder than they are now.

The two variables shaping how AIs behave are: a) context size at inference time, and b) what goes into that context.
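A minimal sketch of that claim in Python, with a hypothetical `generate` standing in for any completion API (none of the names below are real library calls):

```python
# Hypothetical sketch: at inference time, behavior is fully determined by
# (a) how big the context window is and (b) what we choose to put in it.

CONTEXT_SIZE = 8192  # (a) the window: how much the model "remembers" per call

def build_context(persona: str, memories: list[str], turns: list[str]) -> str:
    # (b) the contents: persona, retrieved memory, recent conversation.
    # Redraw these boundaries in less "human" ways and you get a weirder mind.
    joined = "\n".join([persona, *memories, *turns])
    return joined[-CONTEXT_SIZE:]  # crude character-level truncation, for the sketch

# reply = generate(build_context(persona, recalled_memories, recent_turns))
# `generate` is a placeholder for any chat/completion endpoint, not a real API.
```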

Shoving all available historical civilizational memory into a model during training is certainly significant. You now span, in a cheaply accessible way, a far bigger space of thought than human history has actually explored. But you’re still bound to what’s in that history so far, as encoded in language. Unlike chess, Go, or the classical physics of game engines, where synthetic data can extend far past contingent human historical experience, LLMs are bound to the linguistically encoded data history of a contingent system trajectory. And we haven’t been where we haven’t been. Ours is a BIG collective experiential data set (~100 billion humans have ever been alive, iirc) but still a contingent system trajectory.

One metaphor: imagine a simulation that replaces every human who has ever lived with someone 10x taller and with superhearing, but who otherwise lives exactly the life they lived. Except now they saw/heard 1000x more, and integrated it into personal memory and their contribution to cultural memory. Together they made a superhistorical memory. LLMs can sort of access that.

Is there room for more super/general behavior? Sort of. Presumably humanity’s lived historical experience has captured data that could support generalized, universalized, ahistorical theories we haven’t yet figured out. AI could figure them out. If Kepler and Newton hadn’t happened, but Tycho Brahe’s data had, an AI could plausibly derive Kepler’s and Newton’s laws from the data.
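A toy version of the Kepler half of that claim, assuming standard published planetary values and using a plain log-log fit as a stand-in for whatever regression an AI might run:

```python
# Kepler's third law (T^2 ∝ a^3) recovered by curve-fitting orbital data,
# with no celestial mechanics assumed up front.
import numpy as np

# Semi-major axis (AU) and orbital period (years) for the six classical planets.
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])   # Mercury .. Saturn
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

# Fit log T = k * log a + c; Kepler's third law predicts k = 3/2.
k, c = np.polyfit(np.log(a), np.log(T), 1)
print(f"fitted exponent: {k:.3f}")  # ≈ 1.500
```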

But this is not really the AI being super or more general. It’s about the universe, as seen through our historical data, having untapped generalizability we haven’t accessed yet. And there are good reasons (chaos-theory-type reasons) to suspect there aren’t many undiscovered reserves of generalizability in our data. AI can go deeper where we’ve made a start (e.g., AlphaFold), but I’d be shocked if, for example, AI discovered a high-accuracy/precision “law of historical cycles.” Or a theory of time travel. The underlying phenomenology doesn’t support that level of useful generalization.

The only way to expand AI capability is to expand its sensing beyond the human range, and let it evolve its own language based on what it actually senses through that new capability. For example, give it IR/UV vision and it will come up with names for colors we can’t see except as transposed false color. But it will remain a Rylean intelligence: nothing in its mind that wasn’t first in its senses.
