We are all learning here (some are a little ahead of others), even the so-called AI experts. The truth is, we do not know precisely where we are heading, what will work, or when the dust will settle. I am simply observing and analyzing, grounding my perspective in what I have seen work in the past.
While history may not repeat itself, it will indeed rhyme.
I was reminded of this by a Morgan Housel post. He argues that while technology changes, human nature remains the constant:
“Everything’s been done before. The scenes change but the behaviors and outcomes don’t... The biggest lesson from the 100 billion people who are no longer alive is that they tried everything we’re trying today. The details were different, but they tried to outwit entrenched competition. They swung from optimism to pessimism at the worst times... Same stuff that guides today, and will guide tomorrow.”
If we want to understand the future of AI, perhaps we shouldn't just look at the code. We should look at how humans have always reacted to risk, greed, and disruption.
As Housel suggests, looking at the Industrial Revolution won't tell us precisely what AI will build. But it tells us exactly how people will react to their livelihoods changing:
* Fear: The Luddites smashing looms rhymes with artists and authors suing AI companies. (Note: The Luddites weren't anti-technology; they were anti-starvation. Nor am I defending the copyright violations of AI companies. The Luddites rebelled because the impact on their livelihoods was ignored, and I do not expect today to be any different.)
* Greed: Robber barons monopolizing rail lines rhymes with the current race for GPU and model dominance.
* Adaptation: Workers moving from farms to factories rhymes with knowledge workers learning prompt engineering and agentic workflows.
The technology is novel; the human reaction to it is ancient.