Excellent analysis of p-hacking by AI models assisting scientific research. And Exhibit #3084 in support of my and Sayash Kapoor’s position that alignment/safety is not a model property, which we first wrote about two years ago.

Whether a particular analysis constitutes p-hacking or a responsible investigation is not a property of the analysis itself, but of how the user plans to use it. And the user can trivially lie about this to the model. andrewbenjaminhall.com/…

It’s the exact same pattern as with other malicious or unethical uses, such as using AI for hacking, phishing, or disinformation.

The fact that models didn't p-hack by default is important and good. This helps keep well-intentioned but statistically naive researchers from p-hacking without knowing it (surprisingly common). The fact that ill-intentioned users can trick models into p-hacking should not be considered a problem, and it is not fixable.
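To make concrete why accidental p-hacking is so common, here is a minimal, self-contained sketch (all parameters here are illustrative, not from the paper): every effect in the simulation is truly zero, yet a researcher who runs 20 tests and reports whichever comes out "significant" will find something most of the time.

```python
import random
import math

def z_test_p(a, b):
    # Two-sample z-test assuming known unit variance; exact under this null,
    # so p-values are uniformly distributed when there is no real effect.
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

random.seed(42)
studies, tests_per_study, n = 500, 20, 30
fished = 0
for _ in range(studies):
    # Null world: "treatment" and "control" are drawn from the same distribution.
    pvals = []
    for _ in range(tests_per_study):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        pvals.append(z_test_p(a, b))
    # The p-hacker (or the oblivious researcher) reports only the best slice.
    if min(pvals) < 0.05:
        fished += 1

share = fished / studies
print(f"share of null studies with a 'significant' finding: {share:.2f}")
# Analytically, 1 - 0.95**20 ≈ 0.64 of studies will fish up a false positive.
```

Nothing in any single test is wrong; the problem is entirely in the selective reporting, which is exactly why intent, not the analysis itself, determines whether this is p-hacking.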

The other piece of good news is that — quoting Andy Hall, one of the authors of the paper — "the same tools that may lower the cost of p-hacking also lower the cost of catching it."

Feb 20 at 12:52 PM