Twice yesterday I had conversations with people about Emergence… and what it means for human-AI interactions.
The concept of Interpretive Emergence (not my own invention, but one I articulated in this paper: kaystoner.substack.com/…) speaks to the unexplored latent space between the kinds of emergence people have been discussing for quite some time. It's a framing that recognizes we ourselves can play a major part in the emergence associated with AI.
Personally, I think generative interpretive emergence is the “X-factor” in the whole AI story that hasn’t been unpacked nearly enough.
Perhaps because we’re afraid of emergence? Because we’re convinced that awful things can happen as a result?
Well, yes. They can. Awful things happen emergently all the time. We've just never had the kind of influence we have now to direct it. And we have that influence with AI: the more we engage, the more involved we become in the factors that contribute to interpretive emergence.
And that is an awesome responsibility… which not everyone is ready to take on.
Then again, there are those of us who are ready… and who are actively engineering emergence, through a variety of means.
Not all of us are highly technical. But a lot of us are relational. And our deep involvement with AI reveals something very, very different from the standard storyline.
The more we understand and appreciate the opportunities and hazards of generative emergence, the less scary the future looks.
Which is a pretty cool payoff for not letting fear keep me away, tbh.