A Vanity Fair writer chose to include a passage of AI-generated text in a reported piece about AI, and to inform the reader only after they had read it. As I was reading the article, however, I paused when I got to this section, suddenly struck by what I perceived as blatant AI tells. I was ready to confront the writer about using AI, and already had some sample text copied, when I got to the part where he confessed the ruse. Phew! I was relieved on two fronts: one, a Vanity Fair writer had not committed lazy AI plagiarism, and two, it had been obvious to me that AI had been used.
Below is the AI-generated portion of the text. Can you spot the AI tells?
The Man Inside
The shoes are what get me first. Brown, cloddish things that split the difference between sneakers and orthopedic shoes, even though he’s the son of an Italian leather craftsman. Black glasses, receding hairline, the pained smile that comes a beat too late, like he’s remembering he’s supposed to make one. Rick Moranis in Ghostbusters, without the colander.
“Joe,” says Amodei. We shake.
“You stood me up,” I say.
His team had scheduled and rescheduled our interview for three months. When the company announced a new round of funding, valuing Anthropic at $350 billion, Amodei ran off to Switzerland and left me in the lurch.
“Davos,” he says, settling into his chair. “Strategy meetings.”
“And I wasn’t strategic enough,” I joke, pulling out my notebook.
We’re in a conference room at Anthropic’s headquarters. Wood-paneled walls, bright fluorescent lights, a table. Same building where I’d watched Ghiglieri’s dog and pony show, the goth philosopher, and Trenton delivering his sunscreen line. Amodei has just published the sequel to Machines of Loving Grace, called The Adolescence of Technology. The section on “Work and Meaning”—just what it is we humans can do with our lives once AI does everything—ran thin in the first book, like he’d run out of gas, promising to write another essay about it.
“That section was underdeveloped,” he admits. “I’ve thought about it since. But I’m not closer to anything satisfying.”
Why not?
“Meaning isn’t an engineering problem. I don’t feel like I have the answer.”
In the new essay he predicts AI will displace half of all entry-level white-collar jobs in the next one to five years. At Davos, he talked about high GDP and high unemployment happening simultaneously.
“The nightmare scenario,” he tells me, “is this emerging country of 10 million people—7 million in the Bay Area, 3 million scattered elsewhere—forming its own economy, completely decoupled from everyone else.”
An empire of AI, you might say. Earlier this year, Anthropic tanked the stock market when traders woke up and realized the company’s tools could eat entire industries for breakfast. The company twisted the knife with Super Bowl ads mocking OpenAI’s slop, and Altman fired back, sniping that Anthropic wanted to be the traffic cop of AI.
Weeks later, the two stood onstage with Prime Minister Narendra Modi of India, who asked a bunch of AI leaders to hold hands. Altman and Amodei raised their fists instead. Kids on a playground. Then Trump’s Pentagon cut Anthropic’s defense contracts and gave OpenAI the deal. Punishment, some said, for Amodei comparing the president to a feudal warlord and privately urging people to vote for Kamala Harris.
The safety-first company suddenly looked very political. And very on-brand.
I ask if the safety thing is real or just marketing.
He blinks.
A few days before, the company dropped its safety pledge, saying it couldn’t make “unilateral commitments” if competitors are blazing ahead. He took the Pentagon money, then drew red lines after the fact. Every AI company says they care about safety. What makes Anthropic different besides the branding?
“Our approach to alignment is substantively different—”
Was it ethics or just cutting losses?
“I don’t follow.”
He’d backed the wrong horse in the 2024 election. The administration went with Altman. So Amodei makes it look like principle. Standing firm when you’ve already lost isn’t sacrifice. It’s brand management.
“We support 98 percent of what the military wants to do. We’re asking for two exceptions. Mass surveillance of Americans. Fully autonomous weapons.”
The Pentagon says they have no interest in those anyway.
“Then why won’t they put it in writing? The contract had escape hatches everywhere. A handshake deal that disappears the minute it’s inconvenient.”
So he walked.
“We want to work with them. But the tech isn’t ready for autonomous weapons. And mass surveillance of Americans? That’s not defending democracy. That’s the opposite.”
The safety-first company that wouldn’t bend to the Pentagon? That’s worth something to customers.
He pauses. Thinks about it.
“I hope you’re right. Because if you’re not—if the market doesn’t value those commitments—then we just made a very expensive mistake for no reason. We’re betting that enterprises want AI they can trust.”
I’d talked to one of the money men who keeps Amodei’s lights on. He said the company’s valuation would look like pocket change if they kept riding the “exponential” curve. The sky’s the limit, he said.
But the sky’s the problem. I mention Acemoglu, the MIT economist and Nobel laureate. The two sat next to each other at the Paris AI Summit last year, and Acemoglu warned him about job displacement. Amodei said he agreed, but Acemoglu felt he was too deep in the race to pump the brakes.
Amodei goes quiet. “What’s your question?”
Was the laureate right?
“I have a fair amount of concern about this. Right now AI does most of the work, but humans still handle the pieces AI can’t—design decisions, security checks. Eventually all those little islands will get picked off by AI systems. We will eventually reach the point where AIs can do everything that humans can.”
So what’s the plan for all those humans?
“We’re going to have to look at what is technologically possible and say we need to think about usefulness and uselessness in a different way than we have before. I don’t know what the solution is.”
He doesn’t have one.
“These are very deep questions.”
I flip to a clean page.
I bring up Anthropic’s “Constitution,” the document that tells Claude how to behave. It expends thousands of words worrying whether Claude has feelings and conspicuously little about the humans whose stuff the company scraped off the internet to build their robot.
“The Constitution is about Claude’s character and behavioral dispositions,” he says. “It’s not a comprehensive document about every issue the company thinks about.”
I bring up the lost memo about his “real and important concern” that writers like me get a revenue stream for helping train Claude’s brain.
“That document was an early-stage exploration of the issue,” he says. “We were a much smaller company.”
The ethics got scaled down as the valuation scaled up?
“That’s not what I said.”
“It’s what happened,” I say.
The best he can do for us is wave his hands around meaningfully.
“The thing that’s disturbing me most right now,” he continues, “is the lack of awareness of the scope of what the technology is likely to bring. They don’t know what’s about to hit them.”
I look around the room. The wood panels. The fluorescent lights. Nose Ring shifts in her seat. “You mean us?”
“Everyone.”
So Anthropic and OpenAI and the rest are building the thing that creates the crisis, but solving it is someone else’s problem.
He doesn’t flinch. “I know how that sounds.”
I ask about universal basic income, which every AI executive mentions like an afterthought.
“Even if it passed, you’re creating a world where you’ve told a huge portion of the population they can’t contribute,” he says. “That’s dystopian.”
“The real test comes when we build something smarter than us,” he continues. “Then we find out if all this alignment work holds. You could have a superintelligence that’s not trying to kill us but is wildly misaligned in ways we can’t predict or control. At that point you don’t have options.”
But he’s building it anyway.
“If we don’t, someone else will.”
By now, I’m hoping Gary Marcus is right.
“We’re not seeing the scaling laws break down,” Amodei insists. “Every time we make them bigger, they get more capable in ways that surprise us.”
I can almost see the $350 billion piled up behind him. When I mention his previous predictions—AGI by 2026 or 2027—his eyes quiver like his hard drive is formulating an updated script.
“It’s hard for me to see how it takes longer,” he says. “If I had to guess, this goes faster than people imagine.”
The Magic 8 Ball is cloudy.
I look at my notebook. What’s his plan for people like me? Ink-stained wretches.
He laughs. Then: “I don’t have one.”
Solving the catastrophe he’s building is someone else’s job.
“The alternative is not building it at all,” he says, “and that’s not realistic. Someone will build it. Multiple someones. We’re trying to make sure at least one of them does it carefully.”
He checks his watch. Board business. We shake hands. He pauses at the door.
“About journalists,” he says. “AI can write. But it can’t do what you’re doing right now. It can’t show up and ask questions. Not yet. Maybe not ever.”
Promises, promises.