What do we make of the fact that science is now going to rely on a lot of AI-generated “knowledge”, which will effectively be treated as ‘quasi-empirical’ evidence?

What does this mean for the growth of scientific knowledge?

My intuition for a while has been that the epistemics of surrogate markers for decisions is going to be relevant.

I’m still surprised that AI safety people seem more worried about this than the scientists actually relying on AI.

May 16 at 5:27 PM
