9 Comments
May 9, 2023 · Liked by Casey Newton

Answering your last question...I like your current approach. Tell us what you think now. And if you need to change your position later...tell us that...and (i) link back to where you were, (ii) be compassionate with yourself about where you were, and (iii) share with us why you changed...and if your change is merely because you grew up more, that's cool too. I rely on you, Kevin Roose, Ezra Klein and Derek Thompson to infect me with what you and your guests actually think right now. Hell, I was one of the most vocal fanboys of Facebook -- touting the democratization of speech. Boy did I get that wrong -- and yet I am not going to beat myself up about it. Keep on keeping on. I appreciate you and I know (and can see) that it's hard work.

May 9, 2023 · Liked by Casey Newton

The intersection of tech and democracy is not often a happy place. I appreciate your balanced and considerate thoughts; your humanity in covering how technology impacts humanity. That is perhaps more important on this topic than any other.

May 10, 2023 · Liked by Casey Newton

I think going beyond the hype, as Max Read suggests, is the important thing. Both Hinton's and Schmidhuber's conclusions seem unrealistic to me, as they are both somewhat fantastical.

May 9, 2023 · Liked by Casey Newton

The simple fact is that both the big upside and big downside scenarios with "AI" remain as distant from us now as they were 20 or 40 or 60 years ago - and there's little true indication that any current research is on the path to the kind of "AI" necessary to overthrow humans in a meaningful way.

It's to the point where, if anything, it's a way for people to obfuscate the necessary discussion about plain Jane programming and technology and its dangers and uses. And people like it that way, because it's boring to talk about the harms caused by everyday malfunctions in technology that doesn't have an aura of agency or personhood.

In another scientific field, consider the way some people bloviate about the hypothetical moral harms of colonizing distant galaxies, when we know for a fact that under current physical knowledge it is not something that can happen for millennia in real time. There is certainly marginal benefit in considering such far future actions, but it's irrelevant to daily life.

This also reminds me of talk of "halting ai research" and how it's not merely unenforceable but also incoherent as a concept. And that's because AI is just random branding on top of wildly unrelated products, programming techniques, and so on. There is zero rigorous definition of what AI research is, and copious branding of diverse programs as AI when it looks good for funding.

May 9, 2023 · Liked by Casey Newton

One of the problems with conversations like this is that AI is far too broad of a term. Do you mean classifiers? Image recognition? Autonomous vehicles? Generative algorithms? Chess solvers? Each one of those has different properties, risks, and benefits associated with them.

And even then, treating classes of AI & ML tech super broadly hurts the conversation around specific usage. The tech that does things like background noise suppression on a video call is the same exact tech that enables deep fake voices. One is so benign as to be almost invisible (it has been present in calling applications for years!), the other is a major societal issue.

The current generative hype train is not as obviously useless as the blockchain or metaverse fads were, but the concerns about it killing all humans or leading to an AI singularity are equally laughable - generative algorithms are just long-form auto-complete with no semantic understanding. To me, the most interesting question in that specific space has to do with plain old copyright, on both input training data and generated outputs.

May 9, 2023 · Liked by Casey Newton

You mentioned Marissa Mayer, former Yahoo CEO, and that reminded me that once, around 10 or 11 years ago, she was very outspoken about hating remote work. Meanwhile, she had a private lounge off her office and worked remotely any time she wanted. Working from anywhere is a trend for knowledge/tech workers that isn't going to go anywhere, no matter what Sam Altman says.

May 9, 2023 · Liked by Casey Newton

Before I start my own comment, I just want to say how much I enjoyed reading the others here already. I have to say that one of the reasons I enjoy Platformer so much is because the community it brings together is so rational, fun, and informed.

I really appreciate how "quicksand"-like this moment feels for journalists who follow tech, especially the smart, responsible ones. You are sort of stuck in between varying viewpoints that range from "now what?" to "OH SHIT!" Were I you, I would (for now anyway) stick not with the "what" (as in the actual technology) but the "who" (as in the companies that own the technologies and the massive stacks needed for AI engines to function).

I’m with some other folks here who think that generative AI is interesting, that it will lead to somewhat predictable innovations, but generally, most of us will emerge unscathed from this version of it. I also think that OpenAI is going to run headlong into some massive copyright infringement cases. (I’m with you on that, Casey. Pay people for the work they made that built your machine, or use work you own. Duh.) I think this gen of AI will do for work what Excel did for accountants: it will make the work easier but it won’t turn all of us into CPAs.

My worry remains pretty much unchanged from where it was when Google, Facebook, Amazon, and Microsoft started gobbling up competitors starting in 2005-ish. If the same 4-5 companies hold the future of AI in their hands, especially the companies that authored the era of surveillance capitalism (especially Google), that bit needs to be discussed thoroughly and without mercy. (Was that dramatic? I hope so.)

If AI runs on Microsoft’s considerable cloud, what does that mean for competitors? What does it mean for researchers? If the AI cloud stack becomes a standoff between Microsoft and Amazon, do we need the FTC to step in and do something meaningful?

If we continue to live in a world where Google et al hold the keys to our future, then our doom, as it is now, will be authored by them. Since they haven’t faced any meaningful consequences from what they’ve done to date, without powerful privacy legislation, none of those companies will change their behavior. I also think we need to start talking about “digital treaties” in the same way that we talk about weapons. If China uses AI on TikTok, that could be a consequential reason to bring them to the negotiating table. (We’re still China’s biggest trading partner. But we have to act now.)

Hope that helps!


You can't know what's going to happen with AI. But I have been around Silicon Valley long enough to know that it's one hype cycle after another. This may be a cynic speaking, but I can see this fascination with AI and end-of-the-world risk going on until people get tired of playing with it and move on. Some people will put it to good use, as Sal Khan seems to be doing with his idea of Khanmigo, the individual tutor for every student. Perhaps it will also be put to good use in health care, as a way of rendering all the confusing diagnostic information out there into a readable format for consumers without much access to care. Watson was supposed to do that. But having seen the spectacular failure of Watson, I'm not even sure about that. You guys who grew up on science fiction don't realize how hard it is to fabricate a human being. Only mothers know. (jk) Anyway, I have just one word for what you will have to do for the foreseeable future, and that's "pivot." So you will pivot a bit, and eventually we will all figure it out together.

May 11, 2023 · Liked by Casey Newton

After being a free subscriber for a bit, I so appreciated this post, Casey, that I became a paid subscriber. We really need deep thinking and good communication about the risks and rewards here as fast as we can - in ways that can directly inform policy and governance faster. The last 20 years of tech evolution have shown us the costs of focusing mostly on the cool stuff, absent deeper understanding and awareness of societal impacts. Thanks for your work!
