Anthropic’s case against the Pentagon has kicked off, and already looks bad for the government.
I spoke to Saumya Roy at Al Jazeera for this deep dive into what the case might prefigure for AI regulation. There are sharp comments throughout the piece:
“This case is a kind of moment to reflect on what kind of relations we want between the government and companies and what rights citizens have,” says Robert Trager, co-director of Oxford University’s Oxford Martin AI Governance Initiative.
Alison Taylor, clinical associate professor of business and society at New York University’s Stern School of Business, said, “In the US, technology is moving ahead like a freight train and any idea of human oversight is getting harder. But people are concerned about AI-related job losses, data centres, surveillance and weapons. This has meant public opinion is shifting away from AI.”
Over the past two weeks, a range of tech companies, think tanks and legal groups have filed court briefs in support of Anthropic’s stance, asking for oversight and regulation of AI used in weapons and mass surveillance. The support ranges from Microsoft and employees of Anthropic’s competitors OpenAI and Google Inc, to Catholic moral theologians and ethicists, among others.
In their brief, engineers from OpenAI and Google DeepMind, filing in their personal capacities, said the case is of “seismic importance for our industry” and that regulation is crucial because AI models’ “chain of reasoning is often hidden from their operators, and their internal workings are opaque even to their developers. And the decisions they make in lethal contexts are irreversible.”
Against the backdrop of such concerns, NYU’s Taylor said, “Anthropic is making a risky but good bet that positioning itself as an ethical AI company will give it a hand in shaping regulation when it does happen.”
“The Pentagon thinks Anthropic has the best product for military use so it is applying pressure on the company” to continue using it, says CSIS’s Mehta.
As for Anthropic, “the economics are very challenging for the AI industry. So you do need a robust public sector business with its billions of dollars of contracts,” he says.
OpenAI stepped in to work with the Pentagon soon after Anthropic’s contract was terminated. Still, Anthropic seems to have scored “a public relations triumph if not one on substance,” says NYU’s Taylor.
Its positioning as an ethical AI company may have won it public popularity. Downloads of Claude increased sharply in the weeks after the cancelled contract.