If you honestly think I believe that "rationalism is a substitute for actually understanding things" or that "probabilities are a substitute for coherent reasoning about uncertainty", let me know and I can confirm to you that I don’t believe either of those things.

If you are honestly unsure what the difference between my position and "probabilities are a substitute for coherent reasoning about uncertainty" is, I'm happy to explain it to you, just as I've tried explaining it to Ben, including in the comment you didn't read - although I may need more hints on why you think my position resembles that.

I appreciate that you've written a couple of essays on these topics. But I've written hundreds of thousands of words over the course of a decade, much of which by your own admission you haven't read, and you dismiss it as "inarticulate, groundless hopes and fears" and say I am trying to avoid "coherent reasoning about uncertainty".

I feel like I'm spending the prime years of my life fruitlessly writing hundreds of thousands of words on a topic, and you come on Substack Notes, read one guy I've replied to a bunch of times, and post "See, rationalists have no arguments, they just believe that you should throw out a random number for no reason."

The rationalists have spent hundreds of thousands of dollars on projects meant to put forecasting on a firmer footing, run 5,000-person experiments to prove that certain forms of probabilistic forecasting really do work, and helped spark a field that IMHO has been one of the biggest scientific triumphs of the past decade, and you're still saying "haha, some rationalists use probabilities, must be because they've never realized that you need arguments instead of just reacting emotionally to stuff".

I've read your BetterWithoutAI articles, but my opinion is slightly colored by the fact that I remember you spending the entire 2010s calling us benighted morons who didn't understand how to think because we "fell for AI hype" and believed it was possible to build effective AI without Heidegger and embodiment and so on - which hurt, by the way; it's really not fun to have everyone spend a decade telling you that you're a nerd who read too much sci-fi and doesn't understand basic science. Now that people have built effective AI on exactly the timeline we were worried about, that argument disappears without so much as an "oops", and instead it's a weird attempt to simultaneously launder all of our ideas while still somehow managing to make fun of us and hold us out as the villain at every turn.

You don't have to do this! Your beliefs are probably closer to the center of the AI alignment cluster than mine at this point. You could cooperate with any of a dozen organizations working on the same problems you're concerned about! And if you could just stop taking potshots at the field, you could bask in probabilistic forecasting being a sort of triumph of your ideas - it's exactly about how to integrate Kegan 5 meta-reasoning with Kegan 4 modelability. But somehow the only thing you can manage to be consistent on is how you're superior to rationalists and we don't understand anything and are just dinosaurs who recoil from thinking like vampires from garlic.

So here we are on Substack Notes, with you saying you're not sure why I consider it hostile for you to say things like "pretending your inarticulate, groundless hopes and fears about future AI are principled arguments is not a substitute for admitting that no one has a goddamn clue." I look forward to many future interactions of this type.

Mar 8, 2024 at 2:26 PM
