Why Do We Still Need Humans, Anyway? | Opinion

The past four decades or so have seen spectacular technological advances that have vastly disrupted industries, brought unimaginable convenience and efficiencies, and scrambled our brains in ways we may come to regret.

So tremendous are the changes that it is remarkable that the journey felt mostly incremental. Rare were the moments when it was clear something spectacular had been unleashed. But we are certainly experiencing such a moment with the arrival of ChatGPT, the hyper-bot cooked up by an outfit called OpenAI.

There have been other seminal moments over the years of the digital revolution. One was the arrival of the personal computer, available by the late 1970s and early 1980s in the form of machines like the Commodore VIC-20 and the TI-99/4.

Then came the move from clunky DOS ("disk operating system") command lines to user-friendly graphical user interfaces (GUIs), pioneered in the 1970s at the legendary Xerox Palo Alto Research Center (PARC) in California. The desktop metaphor of icons, folders, windows, and drop-down menus took a while to reach the public, finally introduced to the masses by Apple in 1984 (via the Macintosh) and then copied and popularized by Microsoft (with Windows, most successfully in the 3.1 release).

Mobile phones, even the arrival of the World Wide Web in the 1990s, social media—all of these had a huge impact, but none arrived with a big-bang moment to announce it. An exception was the smartphone—again developed elsewhere, but honed by Apple—unveiled with much fanfare in a Steve Jobs-driven spectacle in 2007.

The advances that drove artificial intelligence also happened without fanfare, behind closed doors in university labs and corporate R&D centers. I remember studying robotics and computer vision for my master's project in the mid-1980s—it was a challenge to get the program to identify a circle. These fields, along with machine learning (algorithms that improve as they are exposed to more data) and natural language processing (getting machines to parse and produce human language), all saw huge advances that went oddly unremarked by the ordinary person.

Something critical was churning in the background the whole time: Moore's Law, which posits that the density of transistors in integrated circuits doubles approximately every two years. This observation, made almost 60 years ago by Intel co-founder Gordon Moore, meant that computing power (and access to stored data) would keep increasing at an ever greater pace. To infinity, as far as the human eye can see—depending on who you ask, of course. And quantum computing holds the promise of even faster development.
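
To make the compounding concrete, here is a minimal illustrative sketch, not from the original column, of what "doubling every two years" implies over a few decades. The start year, baseline transistor count, and strict two-year period are assumptions chosen only for illustration (roughly the Intel 4004 era).

```python
# Illustrative arithmetic only: how transistor counts compound if they
# double every two years (the popular statement of Moore's Law).
START_YEAR = 1971              # assumption: Intel 4004 era
BASELINE_TRANSISTORS = 2_300   # assumption: rough count for that chip
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Project the transistor count for a given year under a strict
    two-year doubling assumption."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return BASELINE_TRANSISTORS * 2 ** doublings

for year in (1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
# By 2021 the strict-doubling projection is in the tens of billions,
# roughly the scale of the largest real chips of that era.
```

Fifty years of doubling every two years is 25 doublings, or a factor of about 33 million—which is why the curve feels less like progress and more like a force of nature.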

That means that if an algorithm can figure out a certain type of thing, it will eventually be able to figure out every instance of that type of thing, instantly. If it can remember one thing, it will eventually be able to instantly remember everything.

This puts humans at a clear and growing disadvantage versus machines in every area of activity that is based on calculations and recall.

Chess, for example, is in fact nothing more than a vast but ultimately finite tree of possible moves and responses by both players, so a computer that can search those possibilities and instantly recall previous outcomes will defeat any human player. There's no way we can compete on calculation.
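
For readers who want to see what a "finite tree of possible moves and responses" looks like in code, here is a minimal game-tree search (negamax) over a deliberately tiny game. It is a sketch of the general technique, not of any real chess engine; the toy game and the function names are invented for illustration.

```python
# A toy illustration of exhaustive game-tree search (negamax).
# The game: players alternately remove 1-3 stones from a pile;
# whoever takes the last stone wins.
from functools import lru_cache

MOVES = (1, 2, 3)

@lru_cache(maxsize=None)
def negamax(stones_left: int) -> int:
    """Return +1 if the player to move can force a win, -1 otherwise."""
    if stones_left == 0:
        return -1  # the previous player took the last stone; we have lost
    return max(-negamax(stones_left - take)
               for take in MOVES if take <= stones_left)

def best_move(stones_left: int) -> int:
    """Pick the move with the best guaranteed outcome."""
    return max((take for take in MOVES if take <= stones_left),
               key=lambda take: -negamax(stones_left - take))

if __name__ == "__main__":
    for pile in (4, 5, 10, 13):
        outcome = "win" if negamax(pile) == 1 else "loss"
        print(f"pile={pile}: forced {outcome}, best move takes {best_move(pile)}")
```

Real chess programs cannot enumerate the full tree, so they add pruning, heuristics and, lately, learned evaluations—but the contest is still one of calculation and recall, which is exactly the ground on which humans lose.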

Where can we compete?

Inspiration, creativity, passion, emotion, the poesy of the spirit. An algorithm cannot create the magic of the Beatles or Bach. It cannot write the poetry of Homer or Robert Frost. It cannot do Emile Zola or Somerset Maugham. It cannot live romance. It cannot love.

Anyone who has used ChatGPT will begin to see the problem.

The program is not in itself a breakthrough—not exactly, since the technology behind it has been developing for years. We have all encountered early versions of what artificial intelligence can do in Apple's Siri, or even in a Google search. Also, perhaps less impressively, in the aggravating "help" chat services run by various banks.

But ChatGPT is uncanny, and it has seized center stage in the global conversation since being suddenly made available to the general public two months ago. That happened concurrently with news that Microsoft was investing another $10 billion in OpenAI, in a deal that would reportedly leave it with a 49 percent stake.

People all around the world began to realize that the bot can answer almost every question reasonably, and sometimes intelligently. It can pass university-level law and business exams. It can write lyrics in certain styles. It can advise on politics. It certainly can recite facts in essays that are better written than what the average person would produce.

There are some downsides: the algorithm seeks to offend no one, and its inclination to hedge and balance can reek of a bothsidesism that in a human would be cowardly. Lacking the recklessness of some humans, it won't take a stand.

Once this wrinkle is addressed, the implications for professions like journalism and education are staggering. This level of AI can write the first draft of the first draft of history. It can answer questions from students. It can enable students to cheat by never memorizing anything themselves. Will a generation allowed to use such tools not find its brain atrophying?

At this very moment almost every major business, and certainly every consultancy, is holding emergency meetings to figure out how to integrate ChatGPT into its operations. It oversimplifies things to make this all about ChatGPT, but the underlying notion, that AI is ready or nearly ready for prime time, is correct. This is not a fad.

There are great benefits to be savored here. Medical diagnoses may become speedier and more accurate. Companies may make wiser decisions. The efficiencies will multiply.

Naysayers have focused on fears that the job market may not adjust if too many existing jobs are rendered moot. Efficiency is not a net positive if 90 percent of the workforce is put out of work, and if only computer programmers (and perhaps sex workers) find any demand for their services.

They may be worrying about the wrong thing. Luddites have always expressed concerns such as these, and humanity has adjusted. There is brilliance in our species (along with a vexing dumbness, to be sure).

The real concern is that we start doubting that brilliance. If AI starts producing art, music, and even novels that we like well enough, people will start to wonder whether there ever existed such a thing as inspiration and romance. That's because no matter how smart AI becomes, it will never be anything more than algorithms and recall. When it becomes good enough, people may start wondering whether that is true of them as well.

Science cannot yet explain the spark of life or the very existence of consciousness, but one day it might. There have long been those who argue that we are, in the end, nothing more than neurons. As AI spreads, the mechanists may gain the upper hand, and most of us might conclude that any talk of spirit and magic is nothing but self-delusion.

Computers cannot become human, but perhaps the convergence suggests the reverse: that humans are actually nothing more than biological computers.

What will that do to our state of mind? To art and culture? To our desire to procreate? Will rates of depression, already depressing, not soar to untenable heights?

In the spirit of the times, I asked ChatGPT.

"The development and use of AI technology does not necessarily lead people to conclude that everything is calculation and there is no such thing as human inspiration," the algorithm said, ignoring my warnings about lame transparent hedging. "It is ultimately up to individuals to form their own opinions and beliefs about the role of technology and human creativity. Some may see AI as a tool to enhance human capabilities, while others may view it as a threat to human uniqueness."

So, we know one thing: humans, or at least some of them, are still less boring than the cream of the AI crop.

Dan Perry is managing partner of the New York-based communications firm Thunder11. He is the former Cairo-based Middle East editor and London-based Europe/Africa editor of the Associated Press. Follow him at twitter.com/perry_dan

The views expressed in this article are the writer's own.
