You put your finger on something that's almost philosophically incoherent at its core. If your ethical framework is entirely organized around harm-avoidance, you don't actually have an ethics — you have a risk assessment protocol. A real ethics has to be able to say "this is worth doing, and here's why the benefits justify the costs." Without that capacity, you can't reason about trade-offs at all. You can only ever say no.
And "only ever say no" is a position that real decision-makers — in hospitals, in governments, in companies — will simply route around. They're not going to stop doing things because an ethicist told them it was bad. They're going to stop consulting ethicists. Which is exactly the irrelevance your warning is about.
The water data error in the Hao book is instructive here — not because one factual mistake damns a whole argument, but because of what it reveals about the direction of motivated reasoning. A factor-of-a-thousand mistake in the alarming direction slips through teams of fact-checkers. The same mistake in the reassuring direction would have been caught by the third pair of eyes.
What I think sits just underneath the piece, and isn't quite said explicitly, is the distinction between ethics as a practice and ethics as a posture. A lot of what currently passes for AI ethics is posture — it signals values, performs concern, positions the speaker on the right side of history. That's not nothing, but it's not the same as helping anyone navigate a hard decision. And the incentive structures Königs describes reward posture over practice almost perfectly.
The negative bell only has force if people take it seriously. And people stop taking it seriously when it rings constantly. Remember "The Boy Who Cried Wolf"?