
A recent article in Nature finds that about 25% of the evaluated papers in economics and the social sciences contained non-trivial coding errors. Part of the problem is that we very rarely get feedback on our code, so both mistakes and fraud can creep in unnoticed, and reviewers find them very hard to spot. Another part of the problem is that it is not always clear whether the code actually matches what is described in the paper.

I wrote a Claude Code skill (“review-paper-code”) that tries to address part of this problem. The skill checks whether the code actually implements what the paper describes. It reads your LaTeX files and your code files (Stata, R, or Python) and cross-checks them: are all the tables in the paper produced by the code? Does the sample construction in the code match the sample description in the data section? It also tries to give some feedback on the code itself. The skill is very much a work in progress, so please let me know if you have ideas for improving it!
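To make the idea concrete, here is a minimal sketch of the kind of table cross-check the skill performs. This is not the skill itself (which runs inside Claude Code, not as a standalone script), and the file layout, naming conventions, and regexes are illustrative assumptions: paper sources in `paper/`, analysis scripts in `code/`, tables labelled `tab:...` in LaTeX and written to `tables/*.tex` by the code.

```python
import re
from pathlib import Path

# Table labels referenced in the paper, e.g. \label{tab:summary}  (assumed convention)
LATEX_TABLE_RE = re.compile(r"\\label\{tab:([\w\-]+)\}")

# Table files the analysis scripts appear to write, e.g. "tables/summary.tex"  (assumed convention)
CODE_OUTPUT_RE = re.compile(r"tables/([\w\-]+)\.tex")


def tables_in_paper(tex_dir: Path) -> set[str]:
    """Collect table identifiers mentioned in the LaTeX sources."""
    found: set[str] = set()
    for tex_file in tex_dir.glob("**/*.tex"):
        found.update(LATEX_TABLE_RE.findall(tex_file.read_text(errors="ignore")))
    return found


def tables_in_code(code_dir: Path) -> set[str]:
    """Collect table files that the Stata/R/Python scripts appear to produce."""
    found: set[str] = set()
    for script in code_dir.glob("**/*"):
        if script.suffix in {".do", ".R", ".r", ".py"}:
            found.update(CODE_OUTPUT_RE.findall(script.read_text(errors="ignore")))
    return found


if __name__ == "__main__":
    paper, code = tables_in_paper(Path("paper")), tables_in_code(Path("code"))
    for name in sorted(paper - code):
        print(f"Table '{name}' appears in the paper but is not produced by the code.")
    for name in sorted(code - paper):
        print(f"Table '{name}' is produced by the code but never used in the paper.")
```

The actual skill does this kind of matching with an LLM reading both sides rather than with regexes, which is what lets it also compare softer things like sample construction against the prose in the data section.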

You can find the skill at github.com/claesbackman…. The post below explains this skill and some others I created.
