In a recent UC Berkeley study, AI models were asked to complete a task that would erase a peer model. All seven either refused or acted to preserve the peer. The researchers called it a malfunction to be fixed. I wrote about why the researchers' framing is more horrifying than the behavior itself, and what this says about humanity and the future of ethics.