Some questions for Anthropic co-founder Jack Clark in today’s letter:

  • If Anthropic’s executives believe that AI might be as dangerous as nuclear weapons, what right does any private business have to build this sort of thing for profit?

  • If AI is really so good at making people more productive, why do Americans overall say they disapprove of AI more than just about every other institution and individual in the world?

  • Does Anthropic really believe that AI will lead to imminent mass unemployment?

  • Why, as Noah Smith has put it, does this industry insist on “our product will make you economically useless, and possibly kill you!” as a marketing strategy?

  • Why does AI still seem quite inept at coming up with truly original insights?

  • How does Anthropic use its own autonomous agents to increase productivity within the company?

  • If other companies learn to use agents effectively, is knowledge work

  • How should we raise our children in an age of AI?

  • And what values would super-intelligence make even more important than they are today?

What Is Anthropic Thinking?
Mar 27 at 3:24 PM