Thanks for an excellent article, Roger, and for bringing attention to this issue.
As you point out, Nature has known about the data error and methodological problems for a year, and yet the paper is still up on the website and policy still relies on it. How can that happen when we are told that the roving eye of the global scientific community is always scanning research for errors, even after the vetting of the peer reviewers, so that we can be confident about the results? I think the answer is that the academic review process is actually highly insular, non-transparent, and restrictive, with unnamed and unseen editors having disproportionate power over which papers are published and which aren't.
When you look at the referee reports for the Nature article, you can see the influence of the editors in getting the article approved for publication. Referee 2 had dug in, making the very legitimate point that Kotz et al had not justified the functional form of their model. Kotz et al continued to refuse to do so. Referee 2 finally said: "Further the discussion of the robustness of the results to alternative specifications in the main text seems to be - “other literature suggests the work is robust.” But we also know from previous work that robustness in one specification will not carry to another. So the reluctance to demonstrate that the results are robust to alternative specifications could be a concern." Referee 2 seemed to be moving toward recommending rejection of the paper.
Suddenly, in Revision 3, Referee 2 was gone, replaced by Referee 4, who dismissed Referee 2's concerns with:
"This is what R2 is getting at, and yet in my view R2 hasn’t found a ‘smoking gun’. Rather, they are just raising a generic concern about model specification. It doesn’t help that the paper relies on previously published work – it is unreasonable for referees to evaluate a whole history of published work and difficult to take it on faith that previous publications have always been carefully and properly refereed. But still, given how generic R2’s complaint is here, it is difficult to know concretely what the authors could reasonably do to respond. Therefore, I side with the authors here." Referee 4, after a few minor asks, went on to quickly approve the paper's publication.
In my view, Referee 4's dismissal of Referee 2 is absurd. It is very clear what Referee 2 was asking the authors to do: in the absence of an argument for the specific model they chose, check reasonable alternative specifications to make sure the results were robust to them. If the authors had done what Referee 2 asked, the paper would have been in serious trouble. For example, if they had checked the robustness of the results with respect to a quadratic time trend (which they later put in the revised paper to deal with the data error), it would have been clear that the model is too sensitive to reasonable specification changes, and it would have been rejected.
The editors decide which referees they will bring in. If they wanted to spike the paper, they could have selected a referee who agreed with Referee 2.
Editors also decide who is allowed to comment and who isn't. We don't know whether other academic or non-academic critiques were submitted that the editors rejected. We do know that Nature knows about my critique (because I sent it to them), but they ignored it, as did the authors. In fact, the revised paper says it is responding to the two critiques they received, ignoring mine even though I sent it to the corresponding author directly. Rather than being an open process, the academic review process is closed, controlled, and non-transparent. The only reason we know about the data error and methodological problems at all is that Nature chose to tell us that much, and only long after the fact.