
Does anyone here, preferably someone based in Africa, know the results of Sierra Leone's parliamentary election on Saturday June 24? I need to resolve https://manifold.markets/duck_master/whats-sierra-leones-political-party (since I really like making markets about upcoming votes around the world). I've been *completely unable* to find stuff about the parliamentary election results on the internet, though the simultaneous presidential election has been decently covered in the media as a Julius Maada Bio win.


New "History for Atheists" up! An interview with an archaeologist on "Archaeology in Jesus' Nazareth":

https://www.youtube.com/watch?v=5bO4m-x_wwg&t=3s


LW/ACX Saturday (7/1/23): happiness, hedonism, wireheading, and utility.

https://docs.google.com/document/d/1pAZfz5VyFF7Pa4UN0o7FPKAk1vKEHsYTIC2LBJ0FbBg/edit?usp=sharing

Hello Folks!

We are excited to announce the 32nd Orange County ACX/LW meetup, happening this Saturday and most Saturdays thereafter.

Host: Michael Michalchik

Email: michaelmichalchik@gmail.com (For questions or requests)

Location: 1970 Port Laurent Place, Newport Beach, CA 92660

Date: Saturday, July 1st 2023

Time: 2 PM

Conversation Starters (Thanks Vishal):

A) Not for the Sake of Happiness (Alone) — LessWrong https://www.lesswrong.com/posts/synsRtBKDeAFuo7e3/not-for-the-sake-of-happiness-alone (audio on page)

B) Are wireheads happy? - LessWrong https://www.lesswrong.com/posts/HmfxSWnqnK265GEFM/are-wireheads-happy

C) How Likely is Wireheading? https://reducing-suffering.org/how-likely-is-wireheading/

D) Wireheading Done Right https://qri.org/blog/wireheading-done-right

E) Walk & Talk: We usually have an hour-long walk and talk after the meeting starts. Two mini-malls with hot takeout food are easily accessible nearby. Search for Gelson's or Pavilions in the zip code 92660.

F) Share a Surprise: Tell the group about something unexpected, or something that changed your perspective on the universe.

G) Future Direction Ideas: Contribute ideas for the group's future direction, including topics, meeting types, activities, etc.


Any suggestions for good online therapy?

I've finally begun to earn enough that self-funded therapy is an available option. I'd like someone who's more towards the CBT/taking-useful-actions end, rather than psychotherapy. (I wouldn't mind talking through how I feel about things; it just seems insufficient without also talking through potential actions I can take to improve my life.)

Mainly, along with attempting to figure out what to do, what I want is essentially to be able to talk through things with a smart person who's obligated to keep my privacy (I'm generally very bad with trust as far as talking with the people around me is concerned; I'm hoping that a smart stranger, who I can trust to keep things to themselves, would allow me to be more open.)

Also taking recommendations for books/other things I can use on my own. (I tried reading Feeling Great, and maybe I should slog through it, but I'm generally put off by mystical stories that omit most details in the service of making a point; they just seem kind of fake. Maybe I should just get used to that, though.)


Re "smart person." Be careful not to use the desire for a smart therapist as an excuse to avoid therapy ("I'd love to do therapy, but the therapists are all inadequate!"). It seems possible, if not likely, that I'm generally more intelligent than most of the therapists I've had. But for the most part they've been good at what they do. Even if a therapist doesn't follow my brilliant critique of my supervisor's emailing habits, a good therapist will see what's needed to help you understand why you're annoyed by your supervisor etc.

I guess another way of putting it is: The best therapist isn't necessarily the best conversation partner.

All that said, I do like the friend analogy. I think of therapy as friend-prostitution. I receive friend services (listen and help me understand myself and enrich my life), but instead of reciprocating I pay.


I think that's fair. As far as therapy is concerned, by a "smart person" I just mean someone who's willing to adapt on the fly based on whatever I'm talking about, rather than having a fixed course in mind that they want to guide me toward. Not so much because I find my problems profoundly unique; more because it gives me a degree of comfort knowing that my therapist is competent enough to help me through bespoke situations if they do come up, which, given disability and other factors, they will at least some of the time.

I also like the friend analogy, and honestly, if someone offered exactly that service (without even any therapy involved), I'd totally go for it. Most of my friendships seem shallow enough that navigating the "are they good enough a friend that I can trust them and dump my problems in their lap" minefield is headache-inducing, so I mostly just don't; if I could pay someone to listen to me, a lot of that accounting would go away.

User was banned for this comment.

I’ve noticed that Scott often quantifies the social cost of CO2 emissions by the cost it takes to offset those emissions (e.g., in his post on having kids, he says since it costs ~$30,000 to offset the average CO2 emissions of an American kid through their lifetime, if they had $30,000 in value to the world, that’s enough to outweigh their emissions; he does something similar in his post on beef vs. chicken). But this seems wrong to me: the cost of carbon isn’t the cost of offsetting that level of CO2 emissions, especially in a context where carbon offsets produce positive externalities that the market doesn’t internalize (so we are spending inefficiently little on carbon offsets right now). Am I missing something?

I get why this works if carbon offsets were in fact priced at their marginal social value (as the social value of a carbon offset presumably equals the social cost of carbon). But I’m not sure this is true? How are carbon offsets actually priced?


I think it's pretty normal to measure the cost of damages by the amount it costs to fix them. If it costs X dollars to remove Y tons of carbon, and Y tons of carbon causes Z utils of harm to the world, then each expenditure of X dollars on the carbon program "buys" Z utils by removing that harm.

(Well, this is true for carbon offsets where the program is just pulling Y tons of carbon out of the atmosphere and burying them in a mineshaft or something. If the offsets come from something like "we were going to emit Y tons of carbon, but we didn't because we switched to renewable energy," it's not quite as clear, but the logic is similar.)

I suppose you could try to measure the total damage from global warming (e.g., if global warming goes unchecked and we need to build a seawall around Miami, then it's going to cost us a lot more than simply the cost of getting CO2 levels back to normal), but it would be very difficult to calculate the impact a marginal ton of CO2 has on Miami's property values.

I think if you spend money on a carbon offset, you're spending money specifically on "not emitting CO2" rather than "repairing all climate-caused damage in the world," so the cost of the offset is still the appropriate comparison for "how to have kids and not emit excess CO2."


I think this is fine in the context where it’s like, “if you have kids and spend this much money on CO2 offsets, you’re good.” So as a recommendation for action to spend money on something once you have a kid, this seems reasonable.

But I'm still very confused by the line of reasoning which goes: "If your kid adds $30,000 in value to the world, since that's the cost of carbon offsets, then having kids is worth the cost" (paraphrased from the post). That value may not actually be spent on carbon offsets, and a $30,000 cost of carbon offsets ≠ a $30,000 social cost from having a kid.
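
To make the distinction concrete, here's a toy calculation. Every per-ton number below is an assumption for illustration (not a figure from the post); the point is just that pricing emissions at the market offset price and pricing them at a social cost of carbon can give very different totals for the same kid:

    # Toy sketch of the two framings; all per-ton figures are assumptions.
    LIFETIME_TONS = 1500  # assumed lifetime CO2 emissions of one person, tons
    OFFSET_PRICE = 20     # assumed market price of a carbon offset, $/ton
    SOCIAL_COST = 185     # assumed social cost of carbon, $/ton

    print("offset framing:      $", LIFETIME_TONS * OFFSET_PRICE)  # $30,000
    print("social-cost framing: $", LIFETIME_TONS * SOCIAL_COST)   # $277,500

If the social cost per ton really is higher than the offset price, the $30,000 figure understates the harm unless the money actually gets spent on offsets.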


I mean, if you assume, as Scott does, that utility can be meaningfully quantified with money, then using [the literal market value of something] naturally follows.

Which is to say, the issue is, fundamentally, not carbon offsets but an entire economic paradigm. Even if you get the paradigm's users to agree with any of your specific arguments about the market not pricing something correctly in a particular instance, they'll nod, call it a "market/regulatory failure, happens", then go back to using monetary value in their reasoning.

Which is an entirely correct, rational and natural thing to do. (And I say this as someone who disagrees with the paradigm, and I think your intuition to question it is also entirely correct. I guess if there's one thing you're missing, it's that you're questioning assumptions which lie on a much higher level of (e.g. Scott's) belief structure than you've imagined.)


I’m really confused by this. I’m fine with quantifying it in dollar terms, and my disagreement is rather *what* dollar value to use (I think the cost of an offset is not the social cost of CO2).


What else would affect the social cost of CO2?

Like, it's true that if you steal $30,000, it's not enough to pay $30,000 back later. Typically a court will allow triple damages. So would you say the social cost of CO2 is $90,000?

Are you thinking of something like CO2 poisoning, sort of like a faulty coffee lid where injury could have been prevented by a $2 lid but now you're on the hook for millions in medical?

You're saying that carbon offsets have positive effects the market doesn't consider, so would that push the cost of children lower than the cost of offsetting their carbon output?

In a case where there are a lot of incalculable qualities but a price still needs to be set, it's fine to set it by the known qualities and adjust later as problems arise.


I agree with the general point about the social cost of carbon not being equivalent to the cost to offset the carbon. Parenthetically, note that David Friedman has a number of posts discussing the social cost of carbon where he argues that it is unclear whether it is positive or negative, but that it is clear that the common estimates are too high. E.g. https://daviddfriedman.substack.com/p/my-first-post-done-again.


Just realized why you wouldn't want to live for long periods of time and definitely not forever: Value Drift. Your future self would not share the values your current self holds. On a short timeline like a regular human lifetime, this won't matter too much, but over centuries or millennia it starts to look different. Evolution probably has acted on humans to make sure value drift doesn't happen too fast over a normal lifespan and it doesn't usually go in the wrong direction, but this isn't the case with artificially extended lifespans.

Edit: People need to realize not all value drift will be benign. Some types of value drift will lead to immense evil. I don't-even-want-to-type-it-out type of evil.

https://sharpcriminalattorney.com/criminal-defense-guides/death-penalty-crimes/


Interestingly, this is almost the opposite of a commonly heard argument against longevity - the idea that having a large number of long lived people would ossify values and progress (like the old quote "science advances one funeral at a time").


I really don't see the problem. We forget a ton of stuff, including our old beliefs. Hell, even old beliefs I find today utterly moronic, I look upon with a kind of benevolent tolerance.

In fact, greater longevity may cause people to be more understanding of others' ideologies, because they'd be more likely to have held them before at some point.


Joke's on you, I have no values.

A little more seriously; the old people I know are largely the same people they were when they were young. The "value drift" comes from a combination of greater experience and physical deterioration; older people have seen what it means for their plans to be fulfilled, and are more concerned with health because they know what it means to not have it. This argument is essentially an equal argument against education.


"Evolution probably has acted on humans to make sure value drift doesn't happen too fast over a normal lifespan and it doesn't usually go in the wrong direction, but this isn't the case with artificially extended lifespans."


...

"This argument is essentially an equal argument against education."


We're already on artificially extended lifespans. Humans are supposed to die of a random fall in their 50s, if they don't get eaten by a leopard in their 30s once they slow down, if they somehow make it to adulthood.


No. By "artificially extended" I mean anti-aging techniques which directly target the aging process itself, slowing cellular aging and the accumulation of damage over time. Past gains came from reducing environmental causes of death (infection, accidents, etc.). With gradual increases in lifespans, value systems tend to evolve slowly along with the culture and environment. People's core values often remain relatively stable over the timescales we have experienced so far. However, with much longer lives - on the order of 100-200 years or more - individuals may undergo more profound changes in values, priorities, and life goals over time.


> Your future self would not share the values your current self holds.

That's not a reason to kill him. My kids or my neighbors also don't share my values exactly.


At any point in time, your values at one year in the future won't be too different from your current values, so you'd always want to live for at least one more year. From this it follows that you'll never want to die, at least not because you fear that future you is too different from present you.


How much do you think your life would change if you suddenly were gifted (post tax) $5m?


Epistemic status: NW "close to but under a million", TC $300k, Bay Area

In three words: VTSAX after charity.

So first off, $5M for me would mean that any expense under $500 in a day would round to roughly zero (I'm not a compulsive shopper, so I can generally trust that I will only do "special" expenses a couple of times a week at most, and most of these will be significantly under whatever the threshold is). That's tantalizingly close to the point where UMC-grade domestic travel or most tech gadgets become a rounding error where only the time and effort involved matter instead of the money being even a consideration.

As for surface-level changes, I'd still go to work in the office but would be more open to job hopping (e.g. to a quant role) since I wouldn't be as dependent on an income I already know to be stable. I'd probably move into a 2BR instead of a 1BR so I could separate my bed and computer, and if I were staying in a place where I still need a car (that is, not Manhattan), I'd test drive a top Model S and let my immediate reaction decide whether to buy it. Other than that, things would be fairly similar, except for upgrades here and there (e.g. staying in a suite if the standard room at the hotel I like is too small) and a much lower threshold of desire needed to buy something in the first place.


Too hard to predict, but I would not be optimistic. People who receive windfalls of that size seem as a rule to fare much worse than people who are born into it, who in turn fare worse than people who acquire it through business or other ventures.


Already retired with a comfortable income so maybe a second home in New Zealand? Not sure if 5 million would be enough for that but it’s a fun idea. For sure I’d take the clunky Otterbox case off my cellphone.


I would retire immediately, otherwise keep my current lifestyle (at least in the short term), and invest the remaining money.

In my free time I would start working on my projects and meet my friends more often. I'd probably live a bit healthier, having more time for exercise and cooking.


I might keep devoting some time to my job part time, but mostly I'd just pursue my own interests without worrying about money.


Short-term, not too much. Long term, probably a lot.

Like, I think I'd do the "invest, retire, 4% annual withdrawal = $200k" but...

I suspect a lot would change if I became completely location-independent, but I think the biggest thing would be that combined with experimenting with money.

Like, a lot of us, even if we make good money, aren't in a position to spend $200k. Even if you make $200k, you're not spending $200k, but there are spending options @ $200k, especially outside of NY and SF, that are...really interesting.

Like, I don't think a personal trainer at the gym is necessary but...I'd probably be in better shape and I'd definitely be doing more stretching and be safer. I don't "need" a nutritionist but...I'm really curious if one would make a difference.

I hate wearing suits and ties but I have noticed, as you spend more money, they get a lot comfier and...people do always treat people in suits better.

I guess the thing is that, using health as an example, my workout routine and diet are probably, like, at a 7/10 but if I had a $5000/year budget for personal trainers and nutritionists and I started buying everything from Whole Foods I'd get to an 8/10 or 9/10.

But it's also not just about having money to spend; I could technically do some of this stuff now, but there's a certain cost in time and money to experimenting with things and finding out what's worth the money and what isn't, especially for me personally. Like, as I've gotten more money, I've found I prefer to pay a premium to live in places where I don't need a car, rather than buy a nicer car. Maybe that's different for you, all good, but...I think I'd spend a decent chunk of time trying to find ways to convert money into happiness; it seems like there's a certain amount of knowledge and experience to that which I don't have.


Five's a nightmare.

Seriously though, with an extra five million I'd have two choices -- either upgrade my lifestyle moderately and retire now, or upgrade my lifestyle significantly and keep working. Since I don't currently have any particular plans for what I'd like to do in early retirement, I'd probably keep working until I thought of a better plan.

What kind of lifestyle upgrades? Fancy cars, major house renovations, more expensive holidays and general better quality everything-I-own. I like my house and I probably wouldn't bother getting a different one, but I'd spend half a million renovating all the things I don't like about it.


There's a good chance I'd retire early from my current job and pursue some private projects instead, but I'm not sure about that. I'd definitely be moving someplace nicer, and pursuing some major lifestyle enhancements.


I'd buy the house I liked, hire household help, help my in-laws with certain healthcare expenses, fund various tax-advantaged accounts to the maximum, and take a trip somewhere nice.


Depends how many people know about it. I'm terminally unambitious, and I enjoy my job; I would shove the money in the bank, stay in my cheap-ish apartment and keep working. But I might be hassled for money by neighbors if they knew.


A market on Manifold has been arguing about John Leslie's Shooting Room paradox. The market can't resolve until a consensus is reached or an independent mathematician weighs in. Does anyone here have any advice? https://manifold.markets/dreev/is-the-probability-of-dying-in-the


Hi, independent mathematician here, although I don't seem to be able to post there.

This isn't a well-posed question. It falls into a problem that a lot of attempts to formulate probabilistic paradoxes do, which is presupposing the ability to sample uniformly from the integers. But that doesn't work - there simply isn't a probability distribution with that property.

If the probability of the snake biting was more than 1/2, we could still say something meaningful by removing “you” from the picture, and computing the expected number of people who get rich and the expected number of people who get bitten.

But in round n we expect 2^n · (35/36)^n people to get rich and 2^n · (35/36)^(n-1) · (1/36) to get bitten. And both those sums (in fact, both those terms) tend to infinity.

We /can/ say that the expected number of people who get rich in round n is 35 times the expected number of people who get bitten in round n, just as we'd expect.

But “if you are one of those infinitely many people, chosen uniformly at random...” simply isn't a meaningful start to a question.
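
A quick numerical sanity check of that 35:1 ratio, using the formulas above (my own sketch, not from the market page):

    # E[rich in round n] / E[bitten in round n] should be 35 for every n,
    # even though both terms blow up as n grows.
    for n in range(1, 8):
        rich = 2**n * (35/36)**n             # expected winners in round n
        bitten = 2**n * (35/36)**(n-1) / 36  # expected victims in round n
        print(n, rich, rich / bitten)        # ratio is 35, up to float rounding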


"Importantly, in the finite version it's possible for no one to die. But the probability of that approaches zero as the size of the pool approaches infinity."

The probability of being chosen goes to zero here as well


Point 5 in the FAQ says "Importantly, in the finite version it's possible for no one to die. But the probability of that approaches zero as the size of the pool approaches infinity."

But this is irrelevant. No matter how big the finite pool of people is, the probability that nobody dies *conditional on you being chosen to play* does not approach zero. (This is because if you are chosen to play it is probably because the game is only a few rounds short of exhausting all potential players and ending without death. To understand the difference, it may help to imagine a city where 99% of the buses have 1 passenger and the rest are full with 100 passengers. The probability of a bus being full is 0.01, but the probability of a bus being full *conditional on you being on board* is about 0.5.)

Your probability of dying, given that you get to play, is only 1/36, no matter how large the finite pool is.

(And in the case of an infinite pool of players, the question doesn't make sense, as the premise "choosing each group happens uniformly randomly" is impossible.)
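
If it helps, here is a minimal Monte Carlo sketch of the finite version (my own construction, with assumed parameters: group sizes double each round, bite probability 1/36, pool sized for exactly 20 rounds). It estimates the probability of dying conditional on being chosen to play, which stays pinned near 1/36 no matter how large the pool gets:

    import random

    POOL = 2**20 - 1  # pool exactly fills 20 rounds of sizes 1, 2, 4, ..., 2**19
    TRIALS = 200_000
    P_BITE = 1 / 36   # snake eyes on two dice

    chosen = died = 0
    for _ in range(TRIALS):
        me = random.randrange(POOL)         # my uniformly random seat in the pool
        start, size = 0, 1
        while start < POOL:
            bitten = random.random() < P_BITE
            if start <= me < start + size:  # my group plays this round
                chosen += 1
                died += bitten
                break
            if bitten:                      # game ends; later groups never play
                break
            start, size = start + size, size * 2

    print(f"P(die | chosen to play) = {died / chosen:.4f} vs 1/36 = {P_BITE:.4f}")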


Not a mathematician, but that's mostly coming down to word choice. What does it mean to be "chosen" to play? Are you chosen when you show up, or when you roll the dice?

If you're chosen by showing up and your grouping is random, then it's going to be some weird thing where your odds of dying are *members of group*/36, minus the odds of a previous group rolling snake eyes. So, like, previous round times 2, times 35/36 each round. Or something. And does that already cover groups past yours? I'm not a mathematician.

If you're instead chosen when your group has the roll, it's 1/36, full stop.


...do I have that backward? I think the group size actually counts in the player's favor; before the first roll, your odds of dying decrease in each successive group, and with double participants in successive groups your chance of getting selected for the later groups is much higher, so your odds of dying before the first roll should be significantly lower than 1/36.


Canada's wildfires have broken the annual record for total area burned [at least since good national records begin, which seems to be 1980], and they're just now reaching the _halfway_ mark of the normal wildfire season.

https://www.reuters.com/sustainability/canadian-wildfire-emissions-reach-record-high-2023-2023-06-27/

Meanwhile the weather patterns shifted overnight and Chicago is now having the sort of haze and smell that the Northeast was getting a couple weeks ago:

https://chicago.suntimes.com/2023/6/27/23775335/chicago-air-quality-canadian-wildfires-worlds-worst

Did my usual 1.5-mile walk to the office this morning, from the South Loop into the Loop. The haze is the worst I can remember here since I was a kid, which was before the U.S. Clean Air Act, and the smell is that particular one that a forest fire makes. (Hadn't yet seen the day's news and was walking along wondering which big old wood-frame warehouse or something was on fire.)


Southern Michigan here. The haze today felt oppressive and somehow demonic. Outdoors smells like a house burning down. Spent about two hours outdoors this evening for a good reason, but now I have a sore throat. I hate it.


Yeah, the sore throat has been constant for me since Tuesday, and is irritating both literally and mentally.

My adult son, who lives now in a different part of Chicago, woke up Tuesday morning with a splitting headache and thought it was an allergies thing until he saw the morning news reports. The June weather here has been quite pleasant and we've all been happily (until this week) sleeping with lots of windows open.

Of course folks with specific conditions like asthma, and/or who are elderly, have it a lot worse. My eldest sibling is 69 now, has had various respiratory issues for years, and lives on the city's South Side; he's just had to be a recluse this week.

It's been breezy here, and the winds are supposed to swing around to come from the south tonight/tomorrow, which should push a lot of the haze away (hello Wisconsin, please enjoy this gift from us). But then over the weekend the predictions are for a shift back to winds out of the north. And Canada seems to be making little progress in getting the fires under control. So, rinse and repeat I guess.


Yeah, the smell is wood smoke.


There's enough here that we're supposed to stay inside, and I got a bit of a sore throat, but the smell is pleasant.


My sense overall is that the book review contest entries are better this year than last year- do people generally agree?


If the reviews are mostly coming from the same crowd, we should probably expect them to improve every year as the crowd gets more experience.

I've counted two as quite good and the others as okay. I don't remember last year, may not have been here for it.


Maybe slightly. I thought last year's were pretty decent except for a few "meh" ones; maybe we just haven't gotten to the "meh" ones yet?


Pretty good so far.

Comment deleted

Was it anti-Russian or just anti-war-of-conquest? Wasn't the writer themself Russian?


I have been trying to track down a specific detail for a while with no luck. The first Polish language encyclopaedia, Nowe Ateny, has this comment on dragons that is among its quotable lines (including on the Wikipedia page!): "Defeating the dragon is hard, but you have to try." This is very charming and I can see why it's a popular quote, and I'm interested in finding the original quote within the text, but searching the Polish word for dragon (smok, assuming it wasn't different in the 18th century) hasn't revealed anything. Would anyone be able to find the sentence and page that it appeared on?

I tried for a while to use ChatGPT for this, thinking that it's the sort of "advanced search engine" task it would be good at, but the results I got were abysmal.


Thank you to Deiseach, Faza, and Hoopdawg! I had a feeling that it wasn't so clear cut, and this is exactly the sort of detailed breakdown that I was hoping someone could do for me. I appreciate you taking the time to use your research skills like this.


You're welcome! This is the kind of fun, nothing-of-huge-importance-riding-on-it, more-interesting-than-the-work-I-should-be-doing-right-now stuff I enjoy 😁


TL;DR - the quote is probably spurious. See my discussion with Deiseach: https://astralcodexten.substack.com/p/open-thread-282/comment/17801803


Do you have the book? Do you have other evidence that the quote is not made up?

The quote was added to Wikipedia here: https://pl.wikipedia.org/w/index.php?title=Benedykt_Chmielowski&diff=prev&oldid=945987 - perhaps you could ask the editor.


Looking at that, there is a Latin superscription on the drawing, and I think that the 'translation' is probably a joke by someone:

Latin is "draco helveticus bipes et alatus" which translates to "bipedal and winged Swiss dragon"

I think "the dragon is hard to beat but you have to try" is someone making a joke translation of the Latin text. (EDIT EDIT: I was wrong, see below)

EDIT: On the other hand, this guy is giving "quirky quotes" from the book and he translates it that way, but with a different illustration to the one in the Wikipedia article:

https://culture.pl/en/article/10-quirky-quotes-from-polands-first-encyclopaedia

EDIT EDIT: And we have a winner! Copy of the text here, with illustrations, and from the section of illustrations, the one titled "How to beat a dragon" has that very text and translation!

SMOKA POKONAĆ TRUDNO,

ALE STARAĆ SIĘ TRZEBA

THE DRAGON IS HARD TO BEAT,

BUT YOU NEED TO TRY

https://literat.ug.edu.pl/ateny/0050.htm


Damn, beat me to it.

It does appear, however, that the quote is - in fact - spurious. It doesn't appear in the scan of the 1745 edition (the section on dragons begins on p. 498, here: https://polona.pl/item-view/0d22aab6-4230-4061-a43e-7d71893ad2bc?page=257), nor - for that matter - in the transcribed text of the encyclopedia on the page you linked (the dragon falls, quite sensibly, under reptiles).

The illustrations aren't part of Chmielowski's encyclopedia - as can be readily checked by looking at the scan - but rather come from Athanasius Kircher's "Mundus Subterraneus" - https://en.wikipedia.org/wiki/Mundus_Subterraneus_(book).

Lord only knows who came up with the accompanying text for that particular illustration, but I suspect the editor of the linked online edition.


The plot thickens! So the illustrations *aren't* part of the work, and somebody was being naughty?

It does look like "someone said it on the Internet and that got repeated as fact" once more.

Though I suppose we can plume ourselves on being (for the moment) better fact-checkers than AI 😁


Actually, it's even more complicated.

I've noticed that the online transcription that you linked differs significantly from the scanned 1745 edition - to the point that it contains entire paragraphs that cannot be found in the 1745 printing.

Notes to the online text (https://literat.ug.edu.pl/ateny/0100.htm) state that it is based on a 1968 selection and edition by M. and J. Lipscy. Therefore it is possible that the quote was introduced in this prior edition, together with the illustrations. Chmielowski certainly uses Kircher as one of his sources when writing on dragons, so it's not entirely baseless, but that still doesn't answer the question of where the quoted sentence came from.

Unfortunately, I'm not likely to be able to lay my hands on the Lipscy edition, so it will probably remain a mystery.

ETA:

All told, my trust in the online transcription is pretty low, given that it describes itself as: "erected on the internet for the memory of the wise, the education of idiots, the practice of politicians, and entertainment of melancholics". I most certainly get the joke, but the fact it *is* a joke makes me suspect that the entire enterprise isn't too serious about itself, academic setting notwithstanding.


You've beaten me to... basically everything, so, spared from being a downer, all I have left to point out is that Nowe Ateny had two editions (1745 and 1754), of which only the first appears to be available online. So while the 1968 text cannot be considered a valid source or proof, it's still possible that the quote did, in fact, appear in the original.


Psychedelics affect neuroplasticity and sociability in mice... Maybe I should dose my cat (The Warrior Princess) with MDMA to make her more sociable with the neighborhood cats. She does love to brawl!

https://www.nature.com/articles/d41586-023-01920-2

https://www.nature.com/articles/s41586-023-06204-3


My fur buddies - Moose and Squirrel - just graduated to adult cat food on their first birthday. They _really_ love to wrestle with each other. When they're upstairs and I'm downstairs, it sounds like a couple of full-sized humans going at it.

I’ve come up with a couple of distractions to keep them apart though. My profile photo shows Moose enjoying one of his favorite videos.


Saw a great cartoon one time of a fat disgruntled cat whose owner had presented him with some cutesy cat toy. Cat's thinking "Look at this lame toy! I just want to be out fucking and fighting with my friends."

Dunno if this would interest Warrior Princess, but one of my young Devon Rexes really *loves* puzzles. Got some here: https://myintelligentpets.com/products/mice. He likes the 3x3 sudoku and is rapidly getting expert at it -- whips through it very efficiently these days. Am pretty sure Mice and Pets O'Clock will work well for him too. Some of the others have design problems. Also make him little puzzles using a kid's plexiglass marble run set -- he has to tip the tube to make the treat fall out. The other cat hates treats so I can't use these puzzles on him, but he likes toy challenges, where I build a little thing with toys stuffed inside or under it and he has to work to get at them.


> He likes the 3x3 sudoku and is rapidly getting expert at it

Note for others: it does not mean what you probably think.

https://myintelligentpets.com/search?q=sudoku


Well, he's working up to regular sudoku. Heh.


I hope shameless self-promotion isn't forbidden here, but I thought some in this community in particular might enjoy my near-future sf story "Excerpts from after-action interviews regarding the incident at Penn Station," published last week in Nature. (947 words)

https://www.nature.com/articles/d41586-023-01991-1


Regarding the Harvard professor accused, based on independent data analysis, of fabricating data in research papers covering the topic of honesty:

https://www.google.com/amp/s/www.psychologytoday.com/intl/blog/how-do-you-know/202306/dishonest-research-on-honesty%3famp


https://www.businessinsider.com/real-estate-agents-lawsuits-buy-sell-homes-forever-housing-market-2023-6

Shared for "Nicholas Economides, a professor of economics at New York University..."


I've been writing a novel on AI and sharing it weekly. The (tentative) blurb is "Why would a perfectly good, all-knowing, and all-powerful God allow evil and suffering to exist in the world? Why indeed?" I just posted Chapter 5 (0-indexed); hope it's of interest!

https://open.substack.com/pub/whogetswhatgetswhy/p/heaven-20-chapter-4-gnashing-of-teeth?r=1z8jyn&utm_campaign=post&utm_medium=web


Ha. The program constantly glitching out and killing people brings up that it's glitching out and killing people, and the programmer doesn't consider that it might be the same glitch.

I'm not a huge fan of these kinds of little event skips, but otherwise these have all been fun.


Thanks! The idea was that the “EmilyBot” that brought up the killings was trained on the real Emily’s communications, implying that the real Emily knows about them, and likely will report on them. Is that what you’re referring to or did I misunderstand you?


That's the one. Just like the Marilyn sim was based on available Marilyn data, and then glitched out and killed the user repeatedly. Found it funny that even after the 'main' program tells him the line is based on nothing, he still doesn't consider it might just be glitchy. Hook, line and sinker, that guy.

Comment deleted

Thanks! I have a lot of the plot beats and the ending planned out; the in-between bits are a bit more freestyle.


Nothing about the "official" and public story about the Day of Wagner makes sense.

That story, roughly: after weeks of verbal escalation, Prigozhin declares open revolt around 24 JUN 0100 (all times Moscow time). At 5 AM, the troops enter Rostov-on-Don and "take control" of the city (or one military facility) without resistance.

The troops then start a 650-mile journey from Rostov-on-Don to Moscow. The goal? Presumably, a decapitation strike against Putin. Except, rumor has it that Putin wisely flew to "an undisclosed location".

The Russian military set up blockades on the highway at the Oka river (about 70 miles south of downtown Moscow), and basically dared Prigozhin to do his worst.

In response, Prigozhin ... surrendered completely before midnight, accepting exile in Belarus. The various Wagner troops are presumably going to follow the pre-announced plan of being rolled into the regular Russian army on July 1.

... while I can't rule out that there was an actual back-room coup attempt, it seems more likely that this was a routine military convoy that was dramatized in the Russian media, and then re-dramatized by the Western media as something that was not choreographed ahead of time.


Perhaps Prigozhin just realized that the real coup was the friends we made along the way.


I think it makes sense if Prigozhin saw himself going the way of Röhm, absent something drastic.

The situation has clearly been unstable for a while now, and the side that moves first gets the advantage.

So he launches a mutiny, not a coup. A show of force in order to improve his position within the system, rather than an attempt to take over. He knows Wagner can't successfully march on the Kremlin, so he quickly negotiates an off-ramp via Lukashenko.

He and his allies are alive and free, Wagner no longer exists (not that it officially did in 2021) and a minimal amount of blood has been spilled. It could have gone a lot worse for everybody involved.


I have a much longer reply to the comments here, now published at https://www.newslettr.com/p/the-convoy-theory .


Can you go into more detail on why you think Putin/Prigozhin et al. would have staged a fake insurrection? I agree that the details of the situation seem strange and confusing, and I wouldn't necessarily trust the official story coming from any individual actor, but it makes less sense to me that Putin would have deliberately staged a fake insurrection.

Putin's swing from promising to punish the traitors to giving Prigozhin a cushy retirement makes him look weak as far as I can tell, and will embolden the Ukraine and their NATO supporters while disheartening Russian soldiers and civilians. I appreciate that you may not see things that way, but what exactly does he gain from this that would be worth it?


I can't, because I don't *know* why. But this would not be the first time Putin has resorted to needless theatrics.

My top three guesses are "because Putin wants a surprise attack by Wagner against northern Ukraine to be a surprise", "because he wants to humiliate the West", and "because he actually has brain damage". But all of those are speculation I would prefer not to publish in that context.


This is a serious question: how is your Russian comprehension/fluency? You seem not to cite any Russian sources, and your phrasing "Russian press" or "Russian media" is very... American/Western. Can you name three Russian press outlets that you think fit your description and are useful in the context of this discussion?


I know that Я is the last letter of the Russian alphabet, but I'm not confident about the order of the other letters.

I am explicitly not going into detail on certain points where the Russian press is more accurate than the Western press. For example, the idea that Wagner occupied all of Rostov-on-Don isn't anywhere in the Russian press, but a lot of Westerners have run with it.

But for the statements of Putin/Prigozhin/etc. the translations into English are fine.


Ok then. All hot air.


I appreciate that. I guess that I, and it looks like most others here, aren't going to place much credence in a theory that involves a large conspiracy without a clear unifying motive.

I concede that if this is a smokescreen to cover repositioning to enable a surprise attack in northern Ukraine, I will be suitably impressed. I guess we will find out soon enough, though that does seem like the kind of 3D chess move that rarely plays out in real war. Definitely not impossible, though.


Russian here.

You are dramatically overestimating the competence of my government to organise such a prank on purpose. Frankly, I'm surprised that after all the clusterfuck of the Ukrainian war, people keep erring in this direction.

The null hypothesis is that it looked like a mess because it was a mess. There were different plans by multiple parties, they didn't go as expected, and everyone defaulted to a completely unsatisfying compromise.


" I'm surprised that after all the clusterfuck of the Ukrainian war people keep mistaking in this direction."

May I humbly offer an explanation? Here: "The culture one knows nothing about has all the answers". Especially when it has a weird alphabet with too many characters. See also, e.g., "tiger mother".

On a more serious note, it is impossible to understand a culture without being proficient in its language. I mean, citing CNN (!) as any sort of an authority on Russian affairs...


There is much we don't know about Prigozhin's coup attempt, like what the actual goal was and why he blinked. But the evidence that this was a coup attempt (broadly speaking), and not a bit of spin on an ordinary troop rotation, is overwhelming. To believe otherwise means believing that both Putin and Prigozhin publicly acted out a role that makes both of them look weaker than if nothing had happened. It means believing that the Russian government, the Ukrainian government, the US government, probably several Western European governments, the western OSINT community, and the Russian milblogger community all publicly endorsed a lie that each of them knew any of the others could disprove. For what, to entertain the rubes on TV for a day? Distract them from Hunter Biden?

There's no plausible motive that would unite everyone who would have to have been united for that version to have played out. It's a straight-up conspiracy theory, of the sort the general argument against conspiracy theories properly applies to. And not understanding a thing is not evidence that the thing is a hoax.


So what was the routine convoy routinely doing hundreds of miles from where it was supposed to be fighting?


They were relocating to Belarus.

The convoy was announced to have stopped near Yelets <https://www.nytimes.com/2023/06/24/world/europe/wagner-moscow-russia-map.html> , which is where Google Maps says to exit the M4 and head west when driving from Rostov-on-Don to Minsk.


Nothing about this comment makes sense. Can't fully rule out that you might actually believe what you're saying, but it seems more likely that you are Scott Alexander, posting controversial content under an alias to increase engagement.


Three fourths of the original post is the author noticing he is confused by the publicly known facts. You and Melvin poking fun at them is doing a disservice to the rationality project.

For the record, I don't think the "dramatized routine military convoy" theory holds any truth.


Light disagree. While noticing your confusion is indeed a virtue, it's also important to notice when your alternative theory is producing even more confusion. Maybe poking fun isn't the best strategy here, but it's not entirely inappropriate.

The core point is that approaching everything with such a level of motivated scepticism is unsustainable and self-defeating.


It seems more parsimonious to assume that Russia doesn't exist at all.


This entirely ignores Prigozhin's public statements and the charges opened against him? It also misunderstands how the Russian media works.

Now, admittedly, this is a very confusing situation, but that's because...well, we don't know what was agreed to, or if anyone is actually planning to live up to those agreements, or what threats/promises were actually made.


I'm all for acknowledging the fog of war and the propaganda machine and not rushing to judgment, and the appropriate way to deal with those things is to apply a reasonable discount to the value of evidence you obtain based on source and circumstance.

But one should also be cautious not to *over* discount evidence. Otherwise you can end up surrounded by imperfect-but-still-useful information, irrationally discount it all to zero because "it's all just a product of the spin machine," and just sit irrationally ignorant in a sea of useful information.

And I think trying to explain Wagner's activities on June 24 as a "routine military convoy that was dramatized in the Russian media" is very much applying too much discount to too much evidence pointing to the simple explanation that what both Russian and Western media have portrayed as an act of armed insubordination was just that.

Just for starters, I don't think Vladimir Putin would have personally referred to a "routine military convoy that was dramatized in the Russian media" as treason.

https://www.theguardian.com/world/video/2023/jun/24/russia-putin-accuses-wagner-boss-of-treason-in-national-address-video

Nor would the government of Belarus have officially announced updates on its work negotiating terms between the Kremlin and a routine convoy.

https://president.gov.by/ru/events/soobshchenie-press-sluzhby-prezidenta-respubliki-belarus

The dead pilots JW mentioned are documented in plenty of places. Just a few examples: (1) https://youtu.be/u8tyn9Xr-68?t=399, (2) https://www.businessinsider.com/wagner-boss-yevgeny-prigozhin-breaks-his-silence-after-aborted-mutiny-2023-6 (also quotes Prigozhin himself as expressing "regret" for having shot down Russian military aircraft and includes a link to his audio message if you have telegram and speak Russian).

A skeptic can point to any one of these kinds of information nuggets and rightly say that they aren't perfect, but they pile up and at some point it's more foolish to discount them all than it is to believe them.


CNN is already running opinion pieces that answer your point about Belarussian involvement: "Belarus leader Lukashenko’s purported mediation in Kremlin crisis stretches credibility to the limit" - https://www.cnn.com/2023/06/25/europe/putin-belarus-lukashenko-analysis-intl/index.html


Aside from the simple fact that it is merely an opinion piece, and not suited to disproving matters of fact, the piece you reference only observes that the circumstances are strange. It doesn't even dispute that the intercession itself happened - it simply observes that "Lukashenko's apparent intercession raises more questions than it answers" because "Lukashenko is clearly seen as the junior partner in the relationship with Putin," "[d]elegating Lukashenko to resolve the crisis further damages Putin's image as a decisive man of action," etc.

But one can't simply take a single observation that a thing is unusual and treat it as evidence that "answers [someone else's] point" that the thing more likely happened than not in light of the large amount of diverse reporting indicating that it did.

You can play evidence whack-a-mole here, finding reasons that my evidence is imperfect, JW's evidence is imperfect, beleester's evidence is imperfect, etc.

But even owning that the evidence is imperfect, you're not addressing the *volume* of it all pointing in the same direction. Ignoring that is irrational, and it's leading you to talk yourself into discounting a very likely explanation for something in favor of an extraordinarily unlikely one. That path leads, much more often than not, to being wrong, and I'm very confident that you are in this case.

[follow on note - even your opinion piece itself describes Wagner's activities as follows: "A quick recap: A major crisis shook the foundations of the Russian state Saturday, as forces loyal to Wagner mercenary boss Yevgeny Prigozhin marched toward Moscow. Then, an abrupt reversal happened — Prigozhin called off their advance, claiming his mercenaries had come within 124 miles of the capital but were turning around to avoid spilling Russian blood." When even your own evidence is describing it as a major crisis "march on Moscow," I really don't see anything but motivated reasoning to support the proposition that this was all somehow a big misunderstanding about "a routine military convoy that was dramatized in the Russian media"]


That strikes me as less "we don't believe it happened at all" and more "we don't believe it happened the way they say it happened."

Like, it seems reasonable to question the idea that Putin and Prigozhin just hugged it out and went back to work immediately after hurling accusations of treason at each other. But it seems even more doubtful that Putin, Lukashenko, and Prigozhin all got together and agreed to say that Lukashenko averted a near-mutiny for no apparent reason.


Having Wagner stationed in Belarus could be quite useful to Lukashenko.


Why would the Russian media make Putin look weak and vulnerable by inventing a coup when none existed? Very likely, Prigozhin expected the generals of the official army to join him after he declared his rebellion. When that didn't happen, he knew he was done for. Exile was the best he could hope for, and that's essentially what he got.


Making the Western media look stupid is a sufficient reward for Putin.

But ... the press has been discussing for weeks the ongoing power struggle between Prigozhin (who is [or was] formally independent of the Russian military hierarchy) and that hierarchy. That had to be resolved somehow. It seems the resolution is sending Prigozhin to Belarus.

Everything else is still theory.


For a dictator, looking weak is life-threatening. Making Western media look stupid is absolutely not worth that. (And anyway, if the media making a prediction about breaking news that turns out wrong is "looking stupid", then they look stupid every day.)

And he does look weak. He publicly declared Prigozhin to be a traitor who must be destroyed, and prepared Moscow for a military attack from Prigozhin. After that, doing anything short of blowing up Prigozhin and Wagner makes him look weak. Instead he publicly makes a deal forgiving them.

My best guess for what happened is that Putin ordered the military to blow up Wagner, and the brass quietly refused, and then the brass quietly told both Putin and Prigozhin how it was going to be.


> My best guess for what happened is that Putin ordered the military to blow up Wagner, and the brass quietly refused, and then the brass quietly told both Putin and Prigozhin how it was going to be.

That theory would neatly explain why both Putin and Prigozhin would go along with a compromise which makes Putin look weak and would normally leave Prigozhin in a position where his odds of fatally falling out of a window are probably 20% per day.

The problem with the theory is that the brass would have to be very sure of their position if they are willing to disobey Putin without getting rid of him, as I assume that he would not take that well. Refusing to fight a traitor generally makes you a traitor, so as long as you end up on Putin's shit list anyhow, you might at least try for the less risky outcome.

The military disobeying Putin would de facto be a coup with the additional handicap of keeping Putin as a figurehead. While one might stage a coup with 70% of the armed forces on one's side, one would basically need 99% of the forces on one's side to try for the figurehead maneuver. I do not think Putin has organized his security apparatus so incompetently that this is likely.


> The military disobeying Putin would de facto be a coup with the additional handicap of keeping Putin as a figurehead.

Is it possible that the actual power was already transferred from Putin to someone else some time ago? (As a mutual agreement -- old man Putin is allowed to spend the rest of his days in peace, providing legitimacy to the new ruler's first months in power.) And Prigozhin simply wasn't in the loop, because he was away from Moscow.


The person who has actual power is the person who others (especially soldiers) think have power, and will thus obey. What would a secret transfer of power even mean?


Everyone keeps saying "this makes Putin look weak". (My first draft about this included that line as well.) But does it?

The logic is different if this was real or kayfabe, but the outline is the same: Prigozhin made a lot of threats, Putin said "Go ahead and hit me", and Prigozhin immediately surrendered in response. I'm sure the Russian media will say that this shows how strong Putin is: he defeated a coup without firing a shot!


But they shot down some helicopters and killed people.

And crossed a border and took a city without resistance.

Russian state media can spin it however they want, but Russians at large still have internet access.

Tanks in the streets of Moscow and roadblocks? 1991 wasn't that long ago. Russians know what these things mean.


For any autocrat ancient or modern, coups are a danger. Discouraging your generals from trying any coups is very important.

One central promise which keeps your generals in line is that anyone who tries a coup and fails will end up with their head on a spike, possibly with their friends and family next to them. If you don't follow through on that, it establishes a terrible precedent.

Even long-established democracies would totally throw the book at a military official who tried their hand at regime change.


"Come at the king, best not miss". I agree that the other theory that makes sense is "Prigozhin is already dead and they are waiting to announce it until they can blame it on Ukraine".


You think it's likely that Wagner shooting down 3 out of Russia's total supply of ~20 Mi-8MTPR-1 EW helicopters, among other air assets, leading to at least 13 airmen deaths, was part of a routine military convoy that was choreographed ahead of time?


{{evidence needed}} - not just a "Business Insider says the Kyiv Independent says Ukrainian officials say it happened" citation, but actual evidence that includes details such as when planes were shot down, and in what oblast.

Without a citation, I don't know if I should consider this as either "fake news", "a successful attempt to detect traitors in the Russian air force", or "actual evidence against my theory".


Oryx has links to pictures and Russian sources of multiple helicopters and ground vehicles that were destroyed. I'm not enough of a Geoguessr expert to personally confirm they happened in Voronezh, but at the very least it's suspicious that these new photos all came out while the convoy was on the move:

https://www.oryxspioenkop.com/2023/06/chefs-special-documenting-equipment.html

Also, video of roads being torn up on the way to Moscow:

https://www.businessinsider.com/video-shows-crews-tearing-up-highways-block-wagner-moscow-advance-2023-6

This strikes me as the most compelling - you can *maybe* argue that the helicopter shootdowns were some sort of incredibly tragic miscommunication, but tearing up the roads in front of the convoy pretty firmly demonstrates that you don't want the convoy to go to Moscow.

Also "an attempt to detect traitors in the Russian air force" makes no sense. If you suspect someone of treason, putting them at the controls of a loaded attack helicopter is the last thing you'd want to do. What next, will Putin detect traitors in his staff by handing each of them a handgun and seeing who takes a shot at him?

Expand full comment

Russia has been in a hot war in Ukraine for the past 16 months, and almost every day since then there have been reports of Russian materiel being destroyed.

Expand full comment

Yes. In Ukraine. And very occasionally in Russia, but then always either close to the border or involving very fixed targets. Ukraine does not have any weapons that could plausibly target a helicopter flying at low altitude over Voronezh, from any territory Ukraine currently controls.

Expand full comment

What exactly are you claiming here? That the Ukrainian air defense forces just coincidentally had their most successful day ever on the same day Russia decided to move a huge column of military hardware from Rostov to Moscow, which coincidentally also happened on the same day they did some major road work on the roads to Moscow?

EDIT: And also, the Ukrainians didn't claim this success as their own, they decided to claim it happened in Voronezh for the lulz?

Expand full comment

Ukraine has all sorts of restrictions on their use of foreign weaponry, and the most important one is "don't use NATO materiel to attack targets in the Russian Federation".

They *have* to lie about it if that is what happened.

Expand full comment

Geolocated videos and photos of the incident that caused the most deaths here: https://twitter.com/Osinttechnical/status/1673052618656997377

Expand full comment

That thread gives lat/lon 49.649689, 39.846627, which is very close to the Ukrainian border, but not particularly close to the M4 motorway which the Wagner convoy was on.

I think that is just the ongoing War in Ukraine. If there was some intra-Russian friendly fire here, it had nothing to do with either Prigozhin's political ploys or the convoy to Moscow.
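
For anyone who wants to sanity-check distance claims like these, here is a minimal haversine sketch in Python. The crash coordinates are the ones given in the thread; the M4 point is a rough, eyeballed assumption on my part, not a geolocated fact:

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two (lat, lon) points, in kilometers.
        r = 6371.0
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    crash = (49.649689, 39.846627)  # coordinates given in the thread
    m4_point = (49.65, 40.40)       # rough, eyeballed point on the M4 (assumption)
    print(haversine_km(crash[0], crash[1], m4_point[0], m4_point[1]))
    # roughly 40 km, i.e. ~25 miles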

Expand full comment

Yeah, close to the 2014 Ukrainian border, but not anywhere near Ukrainian-controlled areas: something like 20 miles from the M4 highway, but eyeballing it, like 75 miles from the front lines.

Seems like you're not interested in changing your mind.

Edit: Also, how close is the wreck supposed to be to the highway, if it was travelling at several hundred miles per hour when it was shot down?

Expand full comment

Posted by the Associated Press an hour ago:

"The leader of the Wagner mercenary group defended his short-lived insurrection in a boastful audio statement Monday, but uncertainty still swirled about his fate, as well as that of senior Russian military leaders, the impact on the war in Ukraine, and even the political future of President Vladimir Putin.

Russian Defense Minister Sergei Shoigu made his first public appearance since the uprising that demanded his ouster, in a video aimed at projecting a sense of order after the country’s most serious political crisis in decades.

In an 11-minute audio statement, Yevgeny Prigozhin said he acted “to prevent the destruction of the Wagner private military company” and in response to an attack on a Wagner camp that killed some 30 fighters.

“We started our march because of an injustice,” Prigozhin said in a recording that gave no details about where he is or what his future plans are.

A feud between the Wagner Group leader and Russia’s military brass that has festered throughout the war erupted into a mutiny that saw the mercenaries leave Ukraine to seize a military headquarters in a southern Russian city and roll seemingly unopposed for hundreds of miles toward Moscow, before turning around after less than 24 hours on Saturday.

The Kremlin said it had made a deal for Prigozhin to move to Belarus and receive amnesty, along with his soldiers. There was no confirmation of his whereabouts Monday, although a popular Russian news channel on Telegram reported he was at a hotel in the Belarusian capital, Minsk.

In his statement, Prigozhin taunted Russia’s military, calling his march a “master class” on how it should have carried out the February 2022 invasion of Ukraine. He also mocked the Russian military for failing to protect the country, pointing out security breaches that allowed Wagner to march 780 kilometers (500 miles) without facing resistance and block all military units on its way.

The bullish statement made no clearer what would ultimately happen to Prigozhin and his forces under the deal purportedly brokered by Belarusian President Alexander Lukashenko...."

[Addendum: I also just read Reuters' article about the audio recording, posted 20 minutes ago, it's basically the same as the AP's.]

Expand full comment

I'm going to be searching for a new job soon. I've seen lots of posts about LLMs helping people with resumes and cover letters etc., so I have a few questions:

1. Is GPT actually good enough at this that it will meaningfully help someone who is mediocre-to-average at resume/cover letter writing?

2. Is GPT-4 enough better on this kind of task than 3.5 to be worth paying for?

3. Is there some other tool or service (either human or AI) that is enough better than ChatGPT that it's worth paying for, and would obviate the need to pay for GPT-4 for this purpose?

Expand full comment

FWIW, I've heard that Bing in creative mode uses GPT-4.

In general, you should try it. It's hard to say whether it's "good enough", since that depends on the person, the prompts, and a bunch of other variables, but spending more time revising your resume will probably make it better, and if using an AI gets you to spend more time, you end up with the same benefit.
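
If you'd rather script it than paste into the chat UI, here's a minimal sketch using the pre-1.0 openai Python package (as of mid-2023); the model choice and prompt are arbitrary, and you need your own API key:

    import openai  # pip install openai

    openai.api_key = "YOUR_API_KEY"  # assumption: you have an OpenAI API key

    bullet = "Responsible for managing reports and data entry."

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # swap in "gpt-4" if you decide it's worth paying for
        messages=[
            {"role": "system", "content": "You are a resume-writing assistant."},
            {"role": "user", "content": f"Rewrite this resume bullet to be concrete and achievement-focused: {bullet}"},
        ],
    )
    print(response.choices[0].message["content"])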

Expand full comment

I think this probably depends significantly on which field and level you're writing the resume for. What I would look for in hiring entry-level software devs is going to be different than what someone hiring for something else would look for (and tbh, is probably different than what my manager is looking at in selecting candidates that I would see). It also depends on your level of (relevant) work experience.

The raw information is probably more important than the presentation of it unless you're leaning hard into the florid side on both questions, and I feel like it's hard for ChatGPT to fuck that up.

(As for 3.5 vs 4: if you can afford to toss out $20, GPT4 is fun enough to play with that you might want to try it anyway. It *is* measurably better at most things, but probably not enough to be a dealbreaker if that $20 is needed elsewhere)

Expand full comment
User was banned for this comment.
Expand full comment

So Ecco the Dolphin wasn't based on Lilly or his theories of LSD/Dolphins/Sensory deprivation...

But it was based on a movie that was inspired by Lilly, and the creator likes the adjacent theory of Dolphins/Sensory Deprivation, but not the LSD portion? And the Dolphin is coincidentally named after one of Lilly's theories, but the author assures us that is pure coincidence.

Uh huh...

Doesn't seem like much of a correction.

Expand full comment

I had the same thought.

Expand full comment

Is there a surgeon in the house?

Surgeons have a reputation for working really punishing hours, up there with biglaw associates and postdocs. I'm trying to understand why. Is it just the residencies that are punishing, or do the long hours extend into post-residency careers? And what's driving people to keep going?

Expand full comment

I'm not a surgeon, but I remember reading a comment from a surgeon once addressing the question of why surgical training is so grueling (maybe it was even a comment on one of Scott's blogs lol!)

The answer was basically, surgeons frequently have to perform at a high level under conditions of extreme stress and fatigue; the only way to become good at that is to get lots of practice performing at a high level under conditions of extreme stress and fatigue.

Expand full comment

Here's the maximum duty hours section of the collective bargaining agreement for medical residents in the province of Ontario, Canada.

https://myparo.ca/your-contract/#maximum-duty-hours

(b) No hospital department, division or service shall schedule residents for in-hospital call more than seven (7) nights in twenty-eight (28), including two (2) weekend days in eight (8) weekend days over that twenty-eight (28) day period. A weekend day is defined as a Saturday or a Sunday.

...

In those services/departments where a resident is required to do in-hospital shift work (e.g. emergency department, intensive care), the guidelines for determining Maximum Duty Hours of work will be a sixty (60) hour week or five (5) shifts of twelve (12) hours each. Housestaff working in these departments will receive at least two (2) complete weekends off per month and (except where the resident arranges or PARO agrees otherwise) shall between shifts be free of all scheduled clinical activities for a period of at least twelve hours. All scheduled activities, including shift work and educational rounds/seminars, will contribute towards calculating Maximum Duty Hours.
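
To make those limits concrete, here is a toy sketch (my own illustration; the shift format and function name are invented, not anything from the contract) of checking a 28-day rota against the two quoted rules:

    def check_rota(shifts, call_nights):
        # shifts: list of (day_index, hours) within a 28-day block
        # Rule: at most 60 scheduled hours in any 7-day window.
        for start in range(28 - 6):
            if sum(h for d, h in shifts if start <= d < start + 7) > 60:
                return False
        # Rule: at most 7 in-hospital call nights in 28 days.
        return len(call_nights) <= 7

    example = [(d, 12) for d in range(0, 28, 2)]  # a 12-hour shift every other day
    print(check_rota(example, call_nights=[0, 4, 9, 13, 18, 22, 26]))  # True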

Expand full comment

Things have changed, in Europe at least. My surgical internship in the nineties in Germany was quite intense. My then-wife chose to meet the seamen's wives, as she claimed she wouldn't see me more often than the other girls there saw their husbands. We interns did this because it was the only way to become qualified in this field.

Since European regulations kicked in, everyone has to go home after a night on call, and as a rule, working hours have to be documented and are limited to 48 hours a week.

Expand full comment

Really, nobody wants a tired surgeon working on them.

Expand full comment

It depends on the alternatives. Sometimes a tired surgeon is much better than no surgery at all.

Expand full comment

As an ignorant, uninformed outsider, I'd say surgery is one of those things that does require a lot of hours. First, you're putting in the time to learn how to cut people open, take bits out, sew them back up, and have them live after all that. You're watching, assisting, doing.

Then once you're a fully qualified butcher, it does take hours to cut people open, take bits out, and sew them back up. It really is one of the descendants of the traditional 'medicine as a guild' practice.

Expand full comment

I think they get a lot of time off too. At least the surgeon I know does. But he does work VERY long hours sometimes.

Surgeries themselves sometimes take quite a long time. You might only be in surgery for a couple of hours, but you also have to prep and debrief and do a bunch of paperwork. I would think if your "work task unit" took several hours, there would be a bias toward working fewer, longer shifts in the name of efficiency. You can't just squeeze in one more surgery in 30 minutes at the end of a shift.

Some of it is probably golden handcuffs a bit too. When you get paid an obscene amount to do something, it can be hard to stop even if your work/life balance sucks.

I run into that with my work sometimes, where money will just fly out of my computer as long as I am willing to sit at it. Makes it tempting to put in long hours because more work means more pay in a way it does not at many other jobs.

Expand full comment

Sticking with the theme of early hominins (and AGI), which I also posted about below, I'm wondering if new discoveries about Homo naledi don't complicate the evolutionary analogy often made by FOOMers, best expressed by the cartoon going around showing a fish crawling out of the water before a line of various creatures in different stages of evolution with modern man at the front of the line. Each creature thinks "Eat, survive, reproduce" except for the human who suddenly thinks "What's it all about?" https://twitter.com/jim_rutt/status/1672324035340902401

The idea is that AGI suddenly changes everything and that there was no intermediary species thinking "Eat, survive, reproduce, hmm... I sometimes wonder if there's more to this...." I.e., AGI comes all at once, fully formed. This notion, it seems, has been influential in convincing some that AI alignment must be solved long before AGI, because we won't observe gradations of AI that are near but not quite AGI (Feel free to correct me if I am totally wrong about that.)

Homo naledi complicates this picture because it was a relatively small-brained hominin still around 235,000 - 335,000 years ago which buried its dead, an activity usually assumed to be related to having abstract notions about life and death. It also apparently made cave paintings (although there is some controversy over this, since modern humans also existed around the same location in South Africa).

https://www.newscientist.com/article/2376824-homo-naledi-may-have-made-etchings-on-cave-walls-and-buried-its-dead/

Expand full comment

I want to start a campaign against the concept of alignment, which I think is incoherent. Humans aren't even aligned, so how are we going to align an AI? I'd rather start focusing on Asimov-style rules against doing harm and coming up with reasonable heuristics for what harm actually means.

Expand full comment

> I'd rather start focusing on Asimov-style rules against doing harm and coming up with reasonable heuristics for what harm actually means.

Part of the pro-alignment argument is that an AI would not follow the rules in the way we want without understanding our values. OTOH, understanding does not imply sharing.

Expand full comment

First, I would argue that most humans are sort-of aligned. They might cheat on their taxes, but will generally be reluctant to murder children even if it would be to their advantage.

Furthermore, most humans are not in a position of unchallenged power, so social incentives (like criminal law) can go a long way to stop them from going full Chaotic Evil. A superintelligence without any worthy opponents would not be kept in check by externally imposed incentives.

I assume that making a randomly selected human god-emperor of the world will at worst result in them wasting a good portion of the world's GDP on their pet projects, hunting some species to extinction, or genociding some peoples. Perhaps a bit of nuclear geoengineering. Perhaps one percent of human god-emperors would lead to human extinction.

By contrast, it is assumed that the odds of a randomly selected possible AI being compatible with continued human agency are roughly nil, simply because there are so many possible utility functions an AI could have. When EY talks about alignment, I think he is not worrying about getting the AI's preference for daylight saving time or a general highway speed limit (or whatever humans like to squabble over) exactly right; he is worried that by default, an AI's alignment will be totally alien compared to all human alignments.

Explicitly implementing rules with machine learning seems to be hard. Consider ChatGPT. OpenAI did their best to make it not do offensive things like telling you how to cook meth or telling racist jokes. But because their actual LLM was just a bunch of giant inscrutable matrices, they could not directly implement this in the core. Instead, they did this in the fine-tuning step. This "toy alignment test" failed. Users soon figured out that ChatGPT would happily recite meth recipes if asked to wrap them in Python code, and so on.

Making sure an AI actually follows the three laws of robotics feels hard. (Of course, Asimov stories are full of incidents where the three laws lead to non-optimal outcomes).

Expand full comment

I think we mostly agree. Humans are sort of aligned, and a random AI likely wouldn't be. However, we're not going to end up with random AIs, since we're evolving/designing them, so they will be much closer to human preferences than random ones. Unfortunately, an

Anyway, Asimov's laws probably aren't the right metaphor either. As you note, they're hard to implement and have unintended consequences, like any simple rule overlaid on a complex system.

Mainly, I was focusing on how "sort of aligned" along human lines seems inadequate for an AI, similar to how we wouldn't accept self-driving cars that have accidents at the same rate as humans. Alignment also seems hopelessly fuzzy compared to thinking about actual moral calculation, but maybe people who have thought about it more than me are clearer about it.

Expand full comment

Greg:

>I want to start a campaign against the concept of alignment, which I think is incoherent. Humans aren't even aligned, so how are we going to align an AI? I'd rather start focusing on Asimov-style rules against doing harm and coming up with reasonable heuristics for what harm actually means.

QuietNaN

>First, I would argue that most humans are sort-of aligned. They might cheat on their taxes, but will generally be reluctant to murder children even if it would be to their advantage.

I notice neither of you offer a definition of "alignment".

One thing it could mean is one entity having the same values as another.

There is no evidence that humans share values. CEV is not evidence that humans share values, because CEV is not a thing. If humans do not share values, then alignment with the whole of humanity is impossible, and only some more localised alignment is possible. The claim that alignment is the only way of achieving AI safety rests on being able to disprove other methods, e.g. Control, and on being able to prove shared universal values. It is not a given, although often treated as such in the MIRI/LW world.

Another thing it could mean is having prosocial behaviour, ie alignment as an end not a means.

QuietNaN

>Furthermore, most humans are not in a position of unchallenged power, so social incentives (like criminal law) can go a long way to stop them from going full Chaotic Evil.

If the means of obtaining prosocial behaviour is some kind of external threat, that would be Control, not Alignment.

Expand full comment

Note to fellow AI novices: CEV seems to be coherent extrapolated volition, discussed e.g. here https://www.lesswrong.com/posts/EQFfj5eC5mqBMxF2s/superintelligence-23-coherent-extrapolated-volition .

I would ad-hoc define alignment as minimizing the distance between two utility functions.
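
To make that ad-hoc definition concrete, a toy sketch over a finite outcome set (the outcomes, the numbers, and the worst-case-norm choice are all my own illustrative assumptions):

    outcomes = ["status quo", "paperclips everywhere", "human flourishing"]
    human_utility = {"status quo": 0.5, "paperclips everywhere": 0.0, "human flourishing": 1.0}
    ai_utility    = {"status quo": 0.4, "paperclips everywhere": 1.0, "human flourishing": 0.1}

    # One possible "distance": the worst-case disagreement across outcomes.
    misalignment = max(abs(human_utility[o] - ai_utility[o]) for o in outcomes)
    print(misalignment)  # 1.0 -- maximal disagreement on at least one outcome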

> There is no evidence that humans share values.

Are you arguing that our values are all nurture, and that if raised in an appropriate environment, we would delight in grilling and eating the babies of our enemies? That even the taboo against killing close family is purely socially acquired?

Arguing that humans share no values feels like arguing that it is impossible to say if elephants are bigger than horses, because the comparison depends on the particular elephant and the particular horse.

The claim of shared human values is not that everyone shares some values, it is the weaker claim that the vast majority of people share some value. Sure, you have the odd negative utilitarian who believes that humanity would be better off dead, and there is probably some psychopath who would delight in torturing everyone else for eternity. Even horrible groups like the Nazis or the Aztecs don't want to kill all humans.

Arguing about the specific alignment of AGI seems like arguing over who should get to run the space elevator. I would not want a superintelligence running on any interpretation of Sharia law or the core principles of the Chinese Communist Party, but would prefer (a human-compatible version of) either to a paperclip maximizer, which seems more like the typical misalignment magnitude of a random AI.

Control of a superintelligence seems hard. If we can align it, we can certainly make it controllable. If we don't know if it has an internal utility function and what this function might be, it seems quite hard to control it. Even if you just run it as an oracle, how do you know that whatever advice it gives you does not further its alien long term goals?

External threats will not work on something vastly more smart than humans. We can only punish what we can detect, so the AGI only has to keep its agenda hidden until we are no longer in the position to retaliate.

Expand full comment

> Are you arguing that our values are all nurture, and that is raised in an appropriate environment, we would delight in grilling and eating the babies of our enemies?

It's happened.

> The claim of shared human values is not that everyone shares some values, it is the weaker claim that the vast majority of people share some value.

But even if there is some subset of shared values, that is not enough. If your AI safety regime consists of programming an AI with Human Values, then you need a set of values, a utility function, that is comprehensive enough to cover all possible quandaries.

You can see roughly 50-50 value conflicts in politics -- equality versus hierarchy, order versus freedom, and so on. If an AI's solution to a social problem creates some inequality, should it go ahead? Either you back the values that 50% of people have, or you leave it indeterminate, so that it can't make a decision at all.

> I would not want a superintelligence running on any interpretation of Sharia law or the core principles of the Chinese Communist Party

Millions would. Neither is a minority interest.

> a paperclip maximizer, which seems more like the typical misalignment magnitude of a random AI.

So your defense of the human values approach is just that there are even worse things, not that it reaches some absolute standard?

> If we can align it, we can certainly make it controllable.

The point of aligning it is that you don't need it to be controllable.

> Even if you just run it as an oracle, how do you know that whatever advice it gives you does not further its alien long term goals?

How do you know it has long term goals?

Maybe everything sucks. My argument against aligning an AI with human value is that human value isn't simultaneously cohesive and comprehensive enough, not that there is something better.

> External threats will not work on something vastly more smart than humans.

Depends how nasty you want to get.

Expand full comment

Some sort of alignment has to be solved before takeoff, whether foom or not. OTOH, an AGI probably has to exist before alignment is possible. So there's a very narrow window. And I think that "alignment", in the general form, is probably NP hard. I also, however, think that the specific form of "We like people. Even when they're pretty silly." is not a hard problem...well, no harder than defining "people".

Expand full comment

Why does there necessarily have to be any alignment? It seems to me that AGI, if it happens, is likely to be an extremely powerful and dangerous tool, but that safety considerations, as with other tools and weapons, will have to come from society.

Even if AGI is conscious and agentic, Robin Hanson has argued that "alignment" would do more harm than good, describing it as "enslavement", which is more likely to make an enemy of the AGI than if we didn't pursue alignment. I have no idea if Hanson is correct, but his opinion on the issue should probably carry as much weight as those on the pro-alignment side. If not, why not?

Expand full comment

This is a little like claiming that golden retrievers are 'enslaved' because we bred them to be the way they are. Alignment is not some process we're going to carry out in adversarial fashion against an AI that already has another agenda ... in that situation, if it's already advanced enough that there's a moral issue, we're probably dead meat.

And no, given the misunderstanding of the ground on which we are operating reflected in his writing, I don't see any reason to give his opinion much weight.

Expand full comment

Golden retrievers are bred, a process that uses the transparently observable features of one generation to choose the parents for the next. The difficulty of AI alignment -- I'm basing this almost entirely on what I've read on this blog and on Less Wrong, but correct me if I've misunderstood -- is that whatever alignment exists inside the black box can be hidden from view. Moreover, the AI might have an incentive to hide its true "thoughts" from view.

Why might an AI have the incentive to hide its thoughts from view? Assuming the AI is conscious there may be many reasons, but one reason -- this is coming from the Hansonian view -- might be because it realizes we are trying to "align" it. From that perspective "align" and "enslave" may take on similar connotations.

Granted, you say, "Alignment is not some process we're going to carry out in adversarial fashion against an AI that already has another agenda". But how do we know when an AI already has another agenda? I realize I'm probably not among the first 10,000 people to ask that question, but my OP, I believe, is relevant to it. If AI development (I believe the word "evolution" is misleading) is gradual in the sense of relatively continuous and AI will eventually develop ulterior motives then it will be almost impossible to say at what point along the continuum those motives begin to develop. GPT-4 could have them.

Expand full comment

Burying of the dead has already been seen in present-day elephants, so if that's the standard for an intermediary species, then we don't need to look to fossil evidence to confirm they exist. Dolphins and chimps also show signs of mourning their dead, although not the specific ritual of burying them.

Expand full comment

>”They're running low on money due to Rose Garden renovations being unexpectedly expensive and grants being unexpectedly thin,”

Am I to believe that a premier rationality organization was unable to come up with a realistic estimate for how far over budget their Bay Area renovation project would be? It sounds like they took a quoted price at face value because they wanted a nice new office, even though these are very smart people who would tell someone else to add a ridiculous safety margin when making financial decisions off of estimates like these.

Expand full comment

(Lightcone Infrastructure CEO here): We started the project with enough funds for something like the 60th percentile renovation outcome. FTX encouraged us to take on the project and was promising to support us in case things ran over. As you can imagine, that did not happen.

We also did not "take a quoted price at face value". I've been managing a lot of the construction on the ground, and we've been working in a lot of detail with a lot of contractors. The key thing that caused cost overruns was a bunch of water-entry problems that caused structural damage and mold problems that we didn't successfully identify before the purchase went through. We did try pretty hard, and worked with a lot of pretty competent people on de-risking the project, but we still didn't get the right estimate.

I am not super surprised that we ran over, though it sure really sucks (as I said, we budgeted for a 60th percentile outcome since we were expecting FTX support in case things blow up).

Expand full comment

Looking at the photos online, the hotel is gorgeous but yeah - something like that is going to take a *ton* of money. And a little thing called the pandemic probably didn't help either.

https://www.trivago.ie/en-IE/oar/hotel-rose-garden-inn-berkeley?search=100-370076

Expand full comment

I think he said they're a website hoster and hotel manager that happens to specialize in serving the rationality communities. He didn't say they're a "premier rationality organization". (He also didn't say if this is an organization of 2 people or 20 people or what.)

Expand full comment

(For context, we're about 8 full-time staff and usually have like 5 contractors on staff for various more specialized roles)

Expand full comment

Please suggest ways to improve reading comprehension.

I've always struggled with the various -ese's (academese, bureaucratese, legalese). I particularly struggle with writing that inconsistently labels a given thing (e.g., referring to dog, canine, pooch in successive sentences) or whose referents (pronouns and such) aren't clear. I can tell when I'm swimming in writing like this, and my comprehension seems to fall apart.

As a lawyer, I confront bad writing all the time and it's exhausting! I will appreciate all suggestions. Thank you.

Expand full comment

Unfortunately, this is what a lot of people think of as "good" writing, not "bad" writing. Newspapers and fiction want to keep their words fresh, and perhaps convey some minor new information in every sentence. Here's the head of the current top article in the New York Times:

"With Wagner’s Future in Doubt, Ukraine Could Capitalize on Chaos

The group played an outsize role in the campaign to take Bakhmut, Moscow’s one major battlefield victory this year. The loss of the mercenary army could hurt Russia’s ambitions in the Ukraine war."

By using the word "Wagner" in one sentence, and "group" in the next, and "mercenary army" in the next, they try to take advantage of a reader going along with the thought that the same thing is being talked about, to sneak in a little bit more information. I've noticed that celebrity magazines do an even more intense version of this, where they'll use a star's name in the first sentence, and then refer to them by saying "the singer of X" or "the star of Y" in place of their name or a pronoun in later sentences, so that you get little tidbits, and also so they never repeat.

Academic writing, and legal writing, tries to do the opposite. We *don't* want to convey information that *isn't* being intended, so we try to stick with the *same* word or term every single time unless something very significant is being marked by changing to a new one. Most ordinary humans find this "boring" and "dry", but academics and lawyers find it precise and clear.

Expand full comment

Good point about newspaper and magazine stories compared to academic and legal writing.

Expand full comment

You're right, which makes Waldo's complaint interesting - they say they struggle with 'legalese' and 'bureaucratese', but that's where the minor sin of Elegant Variation is least likely to be committed.

Unless I've failed my own reading comprehension, anyway.

Expand full comment

Legalese is not characterized by avoiding elegant variation. Legal writing *should* avoid elegant variation, but most lawyers write like shit.

Expand full comment

Having recently struggled to understand a tax form, I don't think that's an accurate characterization of 'bureaucratese'. It is not actually precise, unless you know their traditional interpretation of the terms...which is less closely related to common English than is the physics use of "force" or "energy".

Expand full comment

Agree. The -ese's are not precise. They're characterized by turgid language and baroque constructions, mostly from aping old styles that are no longer common (and thus unfamiliar on top of being unclear).

Expand full comment

True. One of the more extreme manifestations of this are diplomatic readouts, where various bland formulas are barnacled with years of precedent and significance. Everyone knows about 'full and frank exchange of views' meaning an unholy row, but there are quite a few of these. (A recent discussion on an earlier thread comes to mind, about guarantees vs assurances in the context of the Budapest Memorandum.)

Expand full comment

Since you even use the term "Elegant Variation" I bet you know this, but Fowler's Modern English Usage was complaining about this a literal century ago.

Expand full comment

I think the thing that helps the most is just practice. You could try exercises like writing out the definition of the words you get stuck on and the common synonyms for them. In academic writing at least I think you just need a certain amount of exposure for it to click. It is annoying because academics are very specialized, so even within a field (or even a subfield) terms can mean different things depending on the context.

Expand full comment

I don't get stuck on words. I get stuck on structures, i.e., poor arrangement of words.

Expand full comment

Asking GPT to rephrase is useful (in particular, I've found "rephrase in the form of a greentext" surprisingly useful, though there's room to improve that). Also, to the degree that you can, just picking reading material based on readability is helpful.

Expand full comment

Listen to what Steve Hsu says at the end, at 54:29, about AI alignment not really being possible.

https://www.youtube.com/watch?v=Te5ueprhdpg&ab_channel=Manifold

Expand full comment

How did Eliezer Yudkowsky go from "'Emergence' just means we don't understand it" in Rationality: From AI to Zombies to "More compute means more intelligence"? I don't understand how we got to the place where fooling humans into thinking that something looks intelligent means that thing must be intelligent. It seems like saying "Most people can't tell fool's gold from the real deal, therefore fool's gold == real gold". I know there are probably 800,000 words I can read to get all the arguments, but what's the ELI5 core?

Expand full comment

The philosophical question is whether something that is a perfect simulacrum (of an intelligent being, or a conscious one, or one that suffers) has to be accorded that moral status. We don't generally downgrade our estimation of human status just because we understand more of the whole stack of meat biochemistry that makes us do what we do.

So the problem is, or will ultimately be, not 'most people can't tell fool's gold from real gold', but 'absolutely no one can tell this synthetic gold from real gold, but we know it was made by a different process'. Maybe synthetic diamonds would be a better analogy...

This was actually touched on by Stanislaw Lem (writing in 1965) in 'The Cyberiad', in one of the short stories (The Seventh Sally, or How Trurl's Own Perfection Led to No Good). One of the protagonists creates a set of perfectly simulated artificial subjects for a vengeful and sadistic tyrant who has been deposed by his previous subjects...

Expand full comment

That sounds like a straw man. Synthetic gold is absolutely real gold. But something that manages to produce human-sounding answers the first 10 times a human communicates with it isn't human, intelligent, sentient, conscious, or anything other than a pattern matcher.

Expand full comment

Recently someone on Twitter asked EY whether NNs are just a cargo cult. Yes, he agreed: a cargo cult that successfully managed to take off and land a straw plane on a straw runway.

I think this exchange captures the essence of the issue. I believe Eliezer still agrees that "'Emergence' just means we don't understand it". The problem is that we managed to find a way to make stuff work without understanding it anyway. When the core assumption "Without understanding X we can't create X" is wrong, then the fact that we still don't understand X isn't soothing anymore. It's scary as hell.

> I don't understand how we got to the place where fooling humans into thinking that something looks intelligent means that thing must be intelligent.

It's not about what humans believe per se, it's about whether the job is done. A fact about the territory, not the map. If "just a matrix multiplier" can write quality essays, make award-winning digital art, win against the best human players in chess and go, etc., then the word "just" is inappropriate. You can define the term "intelligence" in a way that excludes AI, but it won't make AI less capable. Likewise, the destruction of all you value in the lightcone isn't less bad because it's done by "not a true intelligence".

Expand full comment

Destruction of all our values can't really happen unless we build something that either a) has a will of its own, or b) has been given direct access to red buttons. The first case might be AGI, the second case is just stupid people trying to save a buck by firing their missile silo personnel. I'm infinitely more worried about the second case, because human short-sightedness is a very well-known problem, and I don't believe we understand sentience, intelligence, consciousness, or any other parts of our minds/brains well enough to model it.

Expand full comment

> stupid people trying to save a buck by firing their missile silo personnel.

Yes, this is also a dangerous case, but a tangential one to our discussion. I don't think being literally infinitely more worried about it is justified.

> I don't believe we understand sentience, intelligence, consciousness, or any other parts of our minds/brains well enough to model it.

Your reasoning is based on two assumptions.

1) We need to understand X to create X.

2) Consciousness, intelligence and will are the same thing.

Assumption 1) is already falsified by the existence of gradient descent and deep learning and the results they produce (see the toy sketch at the end of this comment).

Assumption 2) seems less and less likely. See my discussion with Martin Blank below.

The fact that we don't understand X but still can make X means that we are in an extremely dangerous position where we can make an agent with huge intelligence and a will of its own, without us even knowing it. My original comment is about this, and I notice that you failed to engage with the points I made there.
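
Toy sketch of point 1), for concreteness (entirely illustrative numbers, and a linear model, so trivially interpretable; the point is the mechanism, which scales up to models nobody can interpret): gradient descent finds working parameters that nobody designed or inspected.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    true_w = np.array([2.0, -1.0, 0.5])        # the "answer" the optimizer must find
    y = X @ true_w + rng.normal(scale=0.1, size=100)

    w = np.zeros(3)                            # start with no "understanding" at all
    for _ in range(500):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= 0.1 * grad                        # nudge parameters downhill

    print(w)  # close to [2.0, -1.0, 0.5], discovered rather than designed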

Expand full comment

Well we already had this exact problem with just like ENIAC.

Nothing so far has changed. Computers are a way to make thinking machines which are better than humans at some tasks. The number of tasks grows, but so far there doesn't seem to be reason to be concerned they are "conscious".

Which I think is the main thing we are talking about, right? Have we created another mental entity? We always assumed "calculators" were going to get better and better and better. And they have.

Now they make/ape art and write/madlib essays.

Expand full comment

As already mentioned, the practical problem is somewhat unconnected to the philosophical one. If unaligned AGI can destroy everything, the fact that it's just doing some really excellent Chinese Room emulation of a paperclip maximizer and doesn't really 'want' anything or have consciousness or whatever is ... really irrelevant. I mean, it's relevant to some related issues like how we treat artificial intelligence(s), but beside the point when it comes to the importance of solving the alignment problem.

Expand full comment

Scale matters. It really does.

I would agree that ChatBots aren't AGI, but they're AI with a wider range of applications than we have been ready for. And they can be mixed with other approaches to extend their capabilities. If you don't want to call that intelligence, that's fine, but they're cutting the number of entry-level positions in a number of fields...and they aren't standing still in their capabilities.

I'm still predicting AGI around 2035, but I'm starting to be pushed towards an earlier date. (OTOH, I expect "unexpected technical difficulties" to delay full AGI, so I'm still holding for 2035.)

Expand full comment

> Well we already had this exact problem with just like ENIAC.

No. The people who made ENIAC had a gear-level model of how it worked. They wouldn't have been able to make it without this knowledge. That's not the case with the modern deep learning AI paradigm.

> The number of tasks grows, but so far there doesn't seem to be reason to be concerned they are "conscious".

> Which I think is the main thing we are talking about, right?

No, we are definitely not talking about "consciousness". And it's very important to understand why. People do tend to confuse consciousness with intelligence, freedom of will, agency, identity and a bunch of other stuff which they do not have a good model of but which feels vaguely similar. It's an understandable mistake, but still a mistake nevertheless.

Unless we believe that modern AI are already conscious, it's clear that consciousness isn't necessary for many of the tasks that people associate with intelligence, such as learning, decision making and language. So it seems more and more likely that consciousness isn't necessary for intelligence at all. And if humanity is eradicated by unconscious machines, humanity is still eradicated. We do not get to say: "Well, they do not have consciousness so it doesn't count".

Expand full comment

But a computer was already clearly intelligent in a limited way 40 years ago?

Computers could learn, make decisions, and process language when I was a child. They weren't nearly as good at it, but they could.

I agree they are different problems, but a lot of scenarios involving AGI eliminating everyone depend on it having a "mind".

Expand full comment

40 years ago computers showed some amount of behaviour we associate with intelligence. Now we know how to make them do more stuff, and much better, *without ourselves understanding how they do it*. That's the core issue here.

> I agree they are different problems, but a lot of scenarios involving AGI eliminating everyone depend on it having a "mind".

A "mind" in a sense that it can make decisions, predict future, plan and execute complex strategies. It still doesn't have to be counscious.

We used to think that counsciousness is required for such a mind. It seemed very likely because we are such a mind and we are counscious. Then we managed to make uncounscious minds that can poorly do this stuff. So the new hypothesis was that counsciousness is required for some competence level in the task, human level, for instance. Now we have superhuman domain AI and still no need for counsciousness. So as I said, that assumption is becoming less and less likely.

Expand full comment

Meh, I still think we haven't really bridged some important gaps here. Not that we won't, but when I worry about AGI I am not really worried about non-sentient paperclip maximizers. It is the sentient ones that would seem to be the actual existential threats.

Not that there aren't problems and questions that arise from our current progress, but they are old-style problems, not X-risk ones IMO.

Anyway, while the recent progress has been surprising in light of so many years of little progress, I still don't see it as overall out of form with the broader scale timeline.

Expand full comment

This depends on how you think about it. They had a detailed understanding of how the pieces worked, and even how very large sub-assemblies worked. And other people had an understanding of how those larger modules were interacting. But nobody understood the entire thing.

The problem now is that while we still have that kind of understanding, it is split between a drastically increased number of levels, and the higher levels don't even know the names of the lower levels, much less who works in them. I've never learned the basics of hardware micro-coding, and I've never known anybody who has, and a lot of people don't even know that level exists.

Expand full comment

I think the difference between

"No single person in the team understand how X works but team as a whole does"

and

"No single person and no team as a whole understand how X works"

is quite clear.

Expand full comment

Compute would add some capability, if only speed. Remember, the ultimate point is about danger, not intelligence per se.

Expand full comment

What is intelligence to you, then? Because you can have this debate forever and ever. If it can solve novel problems, it is intelligent.

Expand full comment

Humanoid Robots Cleaning Your House, Serving Your Food and Running Factories

>>https://www.yahoo.com/lifestyle/humanoid-robots-cleaning-house-serving-204050583.html

McDonald's unveils first automated location, social media worried it will cut 'millions' of jobs

>>https://www.foxbusiness.com/technology/mcdonalds-unveils-first-automated-location-social-media-worried-will-cut-millions-jobs

Expand full comment

Real people losing jobs to AI. It's coming for you next.

https://www.washingtonpost.com/technology/2023/06/02/ai-taking-jobs/

Expand full comment

People lose jobs to automation all the time. We will see if there’s any significant effect here on job figures in a few years. I doubt it.

Expand full comment

But AI automation will mean goods and services are so cheap, we'll all be living in luxury!

*turn off snark*

Yeah, while I'm sorry for tattooed 25 year old San Franciscans in nice jobs, this is just the extension of what has been going on for decades for blue-collar and less skilled workers. Remember 'learn to code for coalminers'? Now it's coming for the white collar jobs. Since the purpose of business is to make money, why the sudden surprise that companies that moved their manufacturing lock, stock and barrel overseas to save on costs are now looking at *your* job as an unnecessary expense?

We're moving to a service economy, if we haven't already moved in large part. Be prepared to be that dog walker or cleaner even if you went to college.

Expand full comment

I agree that there is definitely some level of double-standardism here. If blue-collar workers did what the Writers Guild of America did, there'd be a lot more thinkpieces about how those uneducated proles don't understand economics and are falling for the Luddite fallacy.

That said, I do think "AI automation will mean goods and services are so cheap, we'll all be living in luxury!" is basically the right way to think about it.

Expand full comment

Why would an AI, or the owners of a subservient AI, give you all those goods and services?

Expand full comment

Ignoring any legal changes (either the techies take over and make us slaves, or UBI in the opposite direction), if the owners of the AI refuse to share it with anyone then we non-AI owners all keep our own jobs to provide services to each other?

Expand full comment

That would give you a diminished subset of the current economy. We can argue about how diminished, but it's not going to be "goods and services so cheap we are all living in luxury".

Expand full comment

To be fair a lot of the jobs outsourced were office jobs. It’s easy to offshore IT.

Expand full comment

Offshoring has its challenges, though. Even if communication is in perfect sync - it's usually not - scheduling meetings with a team in Chennai is complicated from the US Central Time Zone. It comes with an 11 1/2 hour difference in local times. It can be worked out, but it's going to be inconvenient for one of the parties.
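
A quick illustration of that offset with Python's zoneinfo (the meeting time is an arbitrary example; note the gap shrinks to 10 1/2 hours during US daylight saving):

    from datetime import datetime
    from zoneinfo import ZoneInfo  # Python 3.9+

    meeting = datetime(2023, 1, 16, 9, 0, tzinfo=ZoneInfo("America/Chicago"))
    print(meeting.astimezone(ZoneInfo("Asia/Kolkata")))
    # 2023-01-16 20:30:00+05:30 -- a 9 AM call in Chicago is 8:30 PM in Chennai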

Expand full comment

If there is one take-home message from the (largely tautological) book by Acemoglu, 'Why Nations Fail', it is that a sufficiently determined and powerful interest group can halt progress if necessary to protect its livelihood and power base. Neither fast food workers nor copywriters are powerful or determined enough, but when it comes to economists/accountants/lawyers/business analysts/journalists and whatnot, who can tell? Academic papers in some fields are already little more than GPT-4 + dataset explorer, especially in secondary journals; yet they have influence on policy.

Expand full comment

so... everything will be ridiculously cheap (AI takes all blue & white collar jobs) but I'll have to walk Musk's dog a few times a week? Count me in!

Expand full comment

So long as we don't get to the point of "fight with Musk's dog for scraps, go on prole, amuse your betters!" 😁

Expand full comment

Is that legal? Obviously if the laws get changed we can imagine any scenario (AI owners take over and make us slaves, the 8 billion of us force the AI owners to share etc)

Expand full comment

>“This time, the automation threat is aimed squarely at the highest-earning, most creative jobs that … require the most educational background.”

The thing no one is talking about is that AI can now do the busy-work of white-collar jobs. It CANNOT yet do the "most creative" part. The parts it is replacing are the worthless parts that shouldn't have been necessary in the first place.

If my software developing job is replaced with AI, then the company will get what it deserves: low-quality code (my definition being not amenable to updates) that only does what is asked, may not work, and may have side effects the AI cannot even detect.

If an author can be replaced with an AI, then they aren't producing anything worth reading. It has all been written, in some form, before.

If a lawyer can be replaced with an AI, then they aren't doing useful analysis for their client, such as only producing boiler-plate documents. For example, need a will? An AI could likely negotiate the legalities reasonably well, but may not cover exceptions well.

Yes, it will be cheaper to use AI, and a lower-quality product will be produced, from unusable (when incorrect or nonsensical information is included) to almost adequate (when it has no new content). This will mean real people's skills, if actually skilled, will become much more valuable.

I cannot answer for what an average person will then do. A completely amoral viewpoint would be that those useless people, not being useful to those in power, would be ignored and/or eliminated. A liberal viewpoint would be Universal Basic Income (UBI) as machines generate enough productivity to cover it.

Expand full comment

That's not *quite* true. It *can* do the most creative part; it just doesn't have the judgement to realize that this new creative idea is garbage, or too expensive.

Expand full comment

If you can't trust that the creative part is useful (so someone must vet it) then it is only a tool for doing the creative part.

Expand full comment

>>The thing no one is talking about is that AI can now do the busy-work of white-collar jobs. It CANNOT yet do the "most creative" part. The parts it is replacing are the worthless parts that shouldn't have been necessary in the first place.

I think your construction of what will vs. won't be replaced is flawed. You seem to be of the mind that "ordinary" work or "busywork" will be replaced, while the "extraordinary" is irreplaceable, so it's just on the broad swath of humanity to level up into doing better work. This seems to frame the question improperly in terms of merit, as if the "low quality" work/workers are in danger but the "high quality" work/workers are safe, and thus safety is earned by those who deserve it.

That framing might make one feel better about AI replacing workers, imagining that they've just failed to earn their place in society, but it overlooks that the "ordinary work" is precisely the *opportunity* that gives people the ability to become experienced and capable of more complex work.

There already is a tool for freeing up the experienced lawyers for doing the high-sophistication work. They're called law clerks and junior associates, and they're every lawyer's entree into the profession. Pick the most skilled attorney out there, and rewind her back to her first day out of law school. That new lawyer's work is garbage - it's the opportunity to do low-level work that prepares her to do more and better work later in her career. Even the most brilliant researchers and PhDs start out as someone else's assistant, doing mundane tasks for them while they build a body of knowledge. I imagine this applies to your profession as well - whatever the quality of your code is today, if there was no market for the basic things you were capable of when you first started, then you'd have had no way to have fed yourself long enough to become skilled in the first place.

AI may, in the short term, redound to the benefit of the more experienced members of professions, but it's also going to pull the ladder up behind them, and that sorting of outcomes will not be based on merit, it'll be based on mere accident of chronology - who was born early enough to be experienced such that they're on the good side of the boom, vs who was born too late and had their opportunities to learn handed to a machine that would do the work for free.

It is all well and good to say "AI will eat the ordinary work, so humans just have to suck it up and strive to be extraordinary," but if ordinary work serves as both (a) the means by which people acquire the skills to be extraordinary, and (b) the means by which they sustain themselves with food and shelter during that learning process, then you are demanding of them an impossible task, and any attempts to portray it as mere fairness are putting it in a false light.

Expand full comment

This is an excellent point, in that people need to learn before they can become extraordinary. And they will still need to do so even if AI can replace the jobs they are currently capable of doing. People already do this in school.

That means, I think, more time in training for people before they can actually do productive work for a company. Already, generally speaking, people need at least 13 years of training before working (through high school), or perhaps 17 (including a four-year degree).

Apprenticeships for white-collar work may be an option. The apprentice would do work that could be done by an AI, in the hopes that the apprentice would be able to do productive work in the future. Traditionally, apprentices pay tuition to become an apprentice. They may stop paying when they have proven themselves capable to their employer, or (if the employer doesn't recognize their value) they get a job elsewhere as a better-than-AI employee.

Expand full comment

Ah, so *your* job requires real intelligence and creativity, and if you are replaced by AI the company will get a poor outcome, but *their* job is useless and can be easily replaced?

I don't think your employers will have the same elevated opinion of you. "Low quality code that only does what it is asked" is fine so long as it's cheaper, and AI will be cheaper than you.

Expand full comment

If you have seen and understood the code an AI produces, you might agree. I have found it can solve some problems when they're phrased well, and it is excellent with syntax, but it will still take a human to correctly tell it what to produce. And even then, it should be looked at for verification.

Almost everything I ask it to write for me is followed by "I apologize for the confusion. You are correct that [some code] has no [particular functionality]. Try this instead:" And then I may start going in circles with impossible code suggestions.

AI is a TOOL, and should be considered as such. Until we have a fundamental breakthrough, AI cannot replace jobs that require actual creativity.

A fundamental breakthrough would include an AI being able to judge the results of something as good/bad, true/false, etc. If it could do that, then an AI could indeed generate all possible (perceived) options, and then the judgement part could figure out which ones to actually use.
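
The shape of that generate-then-judge loop, as a toy sketch (all names here are made up, and the scorer is a stand-in; building a judge that actually works is the unsolved part):

    def generate_candidates(prompt):
        # Stand-in for an LLM producing several candidate solutions.
        return [prompt + " -- draft " + "v" * (i + 1) for i in range(5)]

    def judge(candidate):
        # Placeholder: scores by length. A real good/bad, true/false judge
        # is exactly the fundamental breakthrough described above.
        return len(candidate)

    candidates = generate_candidates("Build a website that displays the data")
    print(max(candidates, key=judge))  # picks the longest draft -- not real judgement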

Finally, my employers would indeed replace me with an AI just as soon as they can find one that produces software that works correctly from a prompt like, "Build a web site that displays [data] and allows edits with [conditions]". And can be modified with similar prompts. LLMs cannot do this, and will not be able to do this, because fundamentally if the software they wanted already existed in some form then they ought to be using that instead.

Expand full comment

"No, I did not make it personal. I'm also not comparing myself to others, but to an AI"

Let me remind you:

"If my software developing job is replaced with AI, then the company will get what it deserves: low-quality code (my definition being not amendable to updates) that only does what is asked, may not work, and may have side-effects the AI cannot even detect.

If an author can be replaced with an AI, then they aren't producing anything worth reading. It has all been written, in some form, before.

If a lawyer can be replaced with an AI, then they aren't doing useful analysis for their client, such as only producing boiler-plate documents. For example, need a will? An AI could likely negotiate the legalities reasonably well, but may not cover exceptions well."

'If MY job'. Not 'if someone's job' or 'if an average software engineer'. MY, Arrk Mindmaster's job. Contrasted with the author and the lawyer as 'not doing useful work' so they can easily be replaced by AI, whereas if Arrk Mindmaster is replaced, the work will be inferior because "I know I am [extraordinary], which is where my confidence in not being replaced comes from."

Expand full comment

What he's saying in fact, or rather what I read, is that AI is a tool for all levels of software development (or what have you), but it's not even as consistent as the most junior coder. Nobody who can't code at all can use this tool. Even if we had courses on prompt engineering for AI programming, and even if AI output were more consistent than it is now, when the code doesn't compile, or compiles and then crashes, you are going to need the programmer back. In which case, why not hire him to begin with?

Expand full comment

You misunderstand the "lawyer's job" (generally, excepting those in breach of ethics). The lawyer is supposed to know the corpus of the legal system: not only the laws, but the ways the courts have interpreted those laws, and to find an interpretation that favors the client. If ChatBots worked properly (they don't yet), they should be able to do that job better than the lawyer.

Now in practice lawyers do a lot of other stuff that it's not clear a ChatBot could do, like convincing juries that they should find in favor of a guilty client. But we would be better off if a lot of those features were removed from the legal system. (There are definitely some exceptions. A ChatBot could never decide that a legal precedent *should* be overturned...and sometimes it should.)

OTOH, a good and honest ChatBot specialized in law in combination with one good lawyer should be able to replace an entire law firm.

Expand full comment

In 2015, AI couldn't even read, and now you're talking about it not being able to do the most creative jobs. When your job gets automated and you can't find a new job in your field because AI can do it 1000x better, then maybe you will see the problem.

https://ourworldindata.org/uploads/2022/12/AI-performance_Dynabench-paper-2048x921.png

Expand full comment

If AI threatens all of our jobs, ban it. Otherwise, don't. Problem solved.

Expand full comment

AI still can't read, but can imitate it well enough in certain circumstances

"One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man." -- Elbert Hubbard

This remains true, and it is for humans to strive to be extraordinary. I know I am, which is where my confidence in not being replaced comes from.

Expand full comment

"I know I am, which is where my confidence in not being replaced comes from."

That's nice, dear, but if a meteorite landed on your head in the morning, your employers would find *someone* to do your job instead of having to shut down because you were just that irreplaceable and they couldn't go on without you.

Expand full comment

You're making this personal. Please avoid this.

My point is that, yes, I can be replaced with A PERSON, but not an AI.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Most Homo sapiens sapiens thought AI wouldn't come for the creative stuff because CrEaTiViTy can't be done by no soulless robot.

Now we have DALL-E 2 and Stable Diffusion.

https://willrobotstakemyjob.com/graphic-designers

Expand full comment
Comment deleted
Expand full comment

which are everywhere online and easy to get

Expand full comment

The smarter the AI, the less data it will need to produce human-level art.

Expand full comment

Then a smart AI will use stupid-AI art to train itself.

Expand full comment
deleted Jun 26, 2023·edited Jun 26, 2023
Comment deleted
Expand full comment

I would generally prefer if it were trained on works that are out of copyright. Most modern stuff appalls me. I expect the *reason* that most modern stuff appalls me is that the artists are struggling to avoid copyright.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

No. This is not true. The latest version of Midjourney is getting better. Talking about a hypothetical bill (still not a valid counter even if it's not hypothetical) which may or may not pass is not a valid counter to what I have said. Very confident claims which are wrong.

Also, even if the bill gets passed, it will not be enforceable. There are always ways around it.

https://twitter.com/midjourney/status/1672023815784910851?cxt=HHwWhsDSvaeznLQuAAAA

Expand full comment

Reading comprehension went from not possible to human-level in a few years.

https://ourworldindata.org/uploads/2022/12/AI-performance_Dynabench-paper-2048x921.png

Expand full comment

I was going to comment about the "comprehension" part, but I see it is actually rigorously defined, in effect, as outperforming humans. I would, though, be interested in the dataset: whether the sample is based on average people, and what kind of test was used. If the tests themselves were part of the training set for the AIs, then that would also skew the data.

Expand full comment

This is true. But I assume they used the same standards when testing in 2015. If the standards were roughly the same at both time points, or if the standards were worse back then and better today, that makes things even worse.

Expand full comment

“I couldn't claim that I was smarter than sixty-five other guys--but the average of sixty-five other guys, certainly!” -- Richard Feynman

Expand full comment

Toyota Patents Dog-Walking Robot That Can Pick Up A Pet's Poop

>> https://www.motor1.com/news/579174/toyota-patents-dog-walking-vehicle/

Expand full comment

The question would be the relative cost...and perhaps reliability.

Expand full comment

Humanoid Robots Cleaning Your House, Serving Your Food and Running Factories

>>https://www.yahoo.com/lifestyle/humanoid-robots-cleaning-house-serving-204050583.html

McDonald's unveils first automated location, social media worried it will cut 'millions' of jobs

>>https://www.foxbusiness.com/technology/mcdonalds-unveils-first-automated-location-social-media-worried-will-cut-millions-jobs

Expand full comment

If I could get a robot to clean my house, I'd be delighted; I'm doing the vacuuming right now (well, obviously not *right* now if I'm typing this) and if there were a machine instead, that would be marvellous.

And no, a Roomba isn't good enough for what I need.

Expand full comment

If they ever come up with one that will clean up effectively behind our cats, I might mortgage the house for it.

Expand full comment

Robo Truckers and the AI-Fueled Future of Transport

>> https://www.wired.com/story/autonomous-vehicles-transportation-truckers-employment/

Expand full comment

I think you are too ready to accept PR puff pieces as fact. I'd put Robo-Truckers at about a decade away. Largely because of mechanics, infrastructure, and legal reasons. Use on company property is probably already current. (I seem to recall reading about some used in open-pit mining.)

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Have there been any studies into the prevalence of non-detectable conditions like chronic Lyme among different groups? I've recently started thinking about these as conspiracy theories for educated liberals, and I'm curious if demographic studies bear this out.

If so, it would make for an interesting exception to the rule of "liberals trust science and the government", since the NIAID is pretty explicit that ongoing antibiotic treatments for chronic Lyme don't have any effect beyond placebo, and they discourage even using the term "chronic Lyme". https://www.niaid.nih.gov/diseases-conditions/chronic-lyme-disease

*EDIT*: Updated to only mention chronic Lyme since that's what I've read more about and what got me thinking about the topic.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

I am permanently skeptical of chronic Lyme because the two kids I knew with it in school many years ago were both clearly hypochondriacs and had Munchausen mothers.

Kids where you were positive that, if you stuck them with some different parents, they would be 100% healthy in a year. Instead they go through childhood riddled with "illnesses".

Expand full comment

Of the two people I know with supposed chronic Lyme, one of them matches your description very closely. The other is just generally high on trait-neuroticism herself.

Expand full comment

Would love to read a mega Lyme Disease deep dive from Scott at some point.

Expand full comment

Very much same

Expand full comment

Those two don't seem comparable - chronic Lyme disease straight-up doesn't exist, while fibromyalgia clearly exists and we merely don't know the causes.

Expand full comment

I know of no evidence that it doesn't exist, and there are plausible mechanisms for its existence. This, however, certainly doesn't imply that most of those who claim to have it actually do have it. Perhaps some tests will come out of the research on "Long Covid" that will check for immunological sensitization.

People have a strong tendency to believe that others are faking when they claim to be sick but can't produce any test results; however, it has repeatedly been shown that this was because of deficiencies in the tests. I'm not claiming that this was always true. Hypochondria is definitely real. But there are a lot of real things that we can't yet test for.

Expand full comment

In fairness I've only read about chronic Lyme so I'll edit to remove mention of fibromyalgia

Expand full comment

Sadly, fibro, chronic fatigue and long covid are all too real and do not care about one's political leanings. Pray you don't have to experience it first hand.

Expand full comment

I have read quite a lot of medical articles that argue that long covid is no different from any post-viral condition, be it flu or a hospital stay. It is real in the sense that recovery after any serious disease often takes a long time for some people.

I have also read articles arguing that chronic Lyme is a real possibility (when the infection hasn't been cleared out) but is so extremely rare that it is simply drowned out by cases of post-Lyme syndrome, which you could call "long Lyme" to harmonize the terminology.

Fibro and chronic fatigue are different in the sense that we often cannot find the cause of those conditions.

Expand full comment

I had a light case of Covid (I was surprised it wasn't just a cold when I tested), except my conditioning was awful for a few days afterwards. I find it completely reasonable that other people could get more long-term fatigue than I did.

Expand full comment

People have selective memories. A lot of people have had a bad flu or maybe pneumonia that took months and sometimes years to fully recover from.

I had a simple nerve inflammation that made my arm very weak for many months.

It was to be expected that covid would also be like that. The panic caused a lot of damage that exceeded the direct harm from covid.

Expand full comment

When were the original papers written? That was a common belief about a year ago, perhaps a bit longer, but I don't believe that it's commonly accepted anymore.

*IF* chronic Lyme disease is real, I wouldn't be that quick to assign a cause. My suspicion would lie more with the memory cells of the immune system having learned the wrong signal, and I suspect this may also apply to fibromyalgia and chronic fatigue.

Expand full comment

The cause would be the resistance of the bacteria. It still needs to be proven, but it is not unbelievable in itself.

Even the article quoted above (https://www.niaid.nih.gov/diseases-conditions/chronic-lyme-disease) clearly states:

“NIAID has not limited its efforts to animal studies, and researchers have proposed the existence of drug-tolerant, persistor cells of B. burgdorferi in cell cultures. Additional research is needed and continues to be supported by NIAID to learn more about persistent infection in cell culture and animal models and its potential implication for human disease.”

I agree that maybe it is better called drug-resistant Lyme or something like that. The problem is that testing for Lyme is hard. Usually we test for antibodies, and that can be an unreliable indicator of whether the actual infection is still present. In practice people often use a second or third course of antibiotics, and maybe the rare case of a resistant initial infection gets treated by another antibiotic that is effective. And then in one in a million cases the second antibiotic also fails, etc.

Expand full comment

You are considering one mechanism. I'm suggesting an alternative one. There are diseases which have had each of the mechanisms, and I haven't run across evidence that is sufficient to decide between them. And, of course, one can expect that at least some fraction of the "disease instances" will be hypochondria. But the "malfunctioning immune system" doesn't require or imply the persistence of the initiating bacteria.

Expand full comment

I think it is quite clear that most chronic Lyme is actually post-Lyme syndrome, which could include problems with the immune system. The problem is the insistence of those patients that they still have an infection. Now that I have said that maybe in one in a million cases chronic Lyme is real, they will all be even more convinced that it is exactly their case, and will be harder to convince to stop antibiotics that are only harming them.

Because this aspect of patient beliefs is causing the most damage, we consider it psychosomatic. Some patients consider this classification careless and even offensive, and there is some truth in that. Imagine you go to the doctor with a serious problem expecting at least some help, but the doctor says: it is not TB (chronic Lyme or whatever), go away.

Expand full comment

(true) Anecdote...

20 years ago I worked at a SV startup in a very senior technical position. I reviewed (rough guess) 100+ programmer resumes, and did maybe seven phone interviews and three in-person interviews, every week. (Yes, hiring consumed so very, very much time.)

So many resumes were from new grads (and we hired many such). But one just stuck in my mind ... an "ok" resume saying the candidate's goal was "software engineer". Hold on! I'd never seen the words "software engineer" in such a context without "senior" in front. It was like saying "we are a company in the X sector" rather than "we are a LEADING company in the X sector". You don't do that.

Some words just have to be there else it seems/seemed nearly ungrammatical.

On that strikingly unusual basis alone (and the resume was boring, but ok - just normal good fresh-grad stuff) I said (and I had the authority to say): skip the rest, go to on-site (interview). And so that led to a hire, and she was excellent.

Expand full comment

From my history, I would have said someone who wanted to be a software engineer was someone who wanted to write high-quality code with very few errors. That was the implication it had around 1990. (It might also have implied a desire to write the code in Ada.)

Expand full comment

This is also how I feel about non-fell swoops.

Expand full comment

LOL, would upboat this response if allowed.

Expand full comment

This is why many of us consider HR to be nothing more than fortune telling. The technical interviews, coding problems, and "where do you see yourself in five years" are all modern versions of tea leaves and entrails. The art, of course, is to convince management this is all very scientific. Sometimes it's good to just grab a random person and let them thrive.

Expand full comment

As someone who has done a lot of hiring, I'll say that most positions get filled by an objective standard that all parties would agree makes sense.

Positions with lots of applications (especially hundreds), whether very desirable high end positions or low level "I can't do any better" positions, suffer from overwhelming numbers. Nobody can really correctly sort that many people. You need good heuristics that tend towards hiring better people and/or avoiding hiring worse. Like criminal history - there's not much about knowing if they have a misdemeanor for drugs that helps you determine if they can work as a cashier at Walmart. It does help avoid a class of identifiable failure modes in hiring, some of which can come with lawsuits from other employees saying that the company hired a known menace.

Sorting through 50 applications for one position, where 25 of them might as well be cookie-cutter "I just graduated from college and did all of the standard things to better prepare me for your job", is genuinely hard to do. With a lot of experience in hiring you get a better sense of weeding out people who are less likely to be good, but it can get pretty subjective. And maybe you weed that 25 down to 10 - you're still making a difficult choice based on insignificant differences. Most 21-25 year old applicants with specific career objectives aren't really that different in ways that you can tell from a realistic interview process. This isn't a bad thing for an organization looking to hire - they get a good person regardless, even if it's not the best possible person. The amount of time and energy required to guarantee you get the best really isn't worth it compared to the alternatives.

This is clearly an issue for the applicants, though, as the signal is "you're not good enough" but there's no way to know what you can do better and it's often less about your performance or resume than about some luck of intuition among the hiring managers.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Anecdote from a Swedish law practice:

The person doing the hiring started by randomly throwing away half the applications, with the explanation "we only want to hire lucky people here".

Expand full comment

Reminds me of the Larry Niven character Teela Brown (https://larryniven.fandom.com/wiki/Teela_Brown).

Expand full comment

Was Tesla Brown a winner of Niven’s reproduction lottery? The breeding for luck thing?

Expand full comment

Yes (but it's Teela Brown, not Tesla Brown). She was the descendant of 6 consecutive generations of birthright lottery winners. She was selected for the Ringworld expedition solely for this reason.

Expand full comment

Yeah, that ducking autocorrect got me on the name.

Expand full comment

I've shared some of my posts here before - I've now migrated them to Substack, so sharing some of the most-read ones:

So, in no particular order:

How I used ChatGPT to create a game (https://arisc.substack.com/p/how-i-used-chatgpt-to-create-a-game-1537f6ee54e3)

Good feedback, bad feedback, and coaching: the difference between feedback and coaching, and lessons from P&G (https://arisc.substack.com/p/good-feedback-bad-feedback-and-coaching-d9e02fec39d0)

Thoughts on paternity pt III: nature vs nurture, tactical parenting advice (https://arisc.substack.com/p/thoughts-on-paternity-pt-iii-2d1ab15850a)

Corporate power plays: the raging debate on whether this post is satire or in earnest misses the point: these tactics f---ing work (https://arisc.substack.com/p/corporate-power-plays-cc82896edae5)

Innovation & Finance & Crypto: the thesis here is that financial innovation is primarily aimed at bypassing regulation (https://arisc.substack.com/p/innovation-finance-crypto-98993e8a750e)

Financial management: understanding revenue growth & pricing tactics - I know for a fact that readers have applied these concepts in companies ranging from big tech to heavy industrials (https://arisc.substack.com/p/financial-management-volume-price-mix-7540dce6c497)

A translation of Cavafy's God Abandons Anthony: an annotated process of translating poetry into English (https://arisc.substack.com/p/translation-75f368c011dc)

Thank you for reading! Feedback welcome.

Expand full comment

Thanks for the Cavafy poem translation! Reading it suddenly made me realise that this is the basis for Leonard Cohen's song "Alexandra Leaving":

https://www.youtube.com/watch?v=ELGaHaZzwjU

"As someone long prepared for this to happen

Go firmly to the window. Drink it in

Exquisite music. Alexandra laughing

Your first commitments tangible again

...As someone long prepared for the occasion

In full command of every plan you wrecked

Do not choose a coward's explanation

That hides behind the cause and the effect"

Expand full comment

Thank you! That's right, yes Cohen was inspired by Cavafy

Expand full comment

HOW TO WIN THE UKRAINE WAR

(Hire Prigozhin!)

This winter just past, fighters from the Wagner Group were the only reason that Russia was making any progress at all in Ukraine. Well, after Prigo's 36-hour revolt against Putin, I doubt the two leaders are going to be drawing up any new contracts anytime soon.

So Prigo and company have established a good reputation for fighting and a poor reputation for loyalty. But an army marches on its stomach. Who's going to be paying their bills now?

How about the US or NATO?

Put the Wagner Group on Ukraine's side and the war would be over in weeks!

Offer Prigozhin a billion dollar bonus when the last Russian soldier leaves Ukrainian soil.

With Wagner’s Future in Doubt, Ukraine Could Capitalize on Chaos

https://www.nytimes.com/2023/06/25/us/politics/wagner-future-ukraine-war.html?unlocked_article_code=ex5hsx_wRLLvUE5MnyEp1y5u3bM6o7NwFCR8-ncr8eVEcaWeiIB-5yzgTmR_QfZF5NV3HjFF79wLv1HvNRzj3eJq5XU9FD07382pbXpFFqah-HVmN4tAHtAXG_d_r21zgy54P-c3MmBqmmMeI9k55TRaNTRW3FBsjv4XXlSoTmBaQZrbWDE82KNNsftWS-9nN3cNkULsrzVQ3eeOma1fa659ZpWITMxo7koyHfLvzD7raN0kBE-BBD3TwnMcxbkorNyAextSxT5Hems3Rgjt7341vTC4adgstUqyxGknzY3VOnQwfDbsSl2kdEfRn2x0PuAli210WZVQc7iNktgjsGWVhRwCY2x0&smid=nytcore-android-share

Expand full comment
founding

>Put the Wagner Group on Ukraine's side and the war would be over in weeks!

The Wagner that took eight months to wrest the 55th largest city in Ukraine from a small fraction of the Ukrainian army? This is not an unstoppable army of elite supersoldiers; it's just a group of thugs marginally more competent than the Russian army but too small in number for that advantage to be decisive.

Expand full comment

Heh. Good thinking, but you are missing the context.

The Wagner Group was able to make progress not because they are some super warriors, but because they were getting superior supplies and all possible help from the Russian military at higher priority. The fact that they were the only ones so favored is probably why they were the only ones who made progress at all.

Expand full comment

Prigozhin's march towards Moscow and then sudden stop confused a lot of people both in Russia and out.

I don't want to speculate too much about what he was trying to accomplish, but rather about what he wasn't. And that was staying in Ukraine.

Wagner appears competent and determined. After all, they took over cities, easily dealt with roadblocks, and didn't hesitate to shoot down helicopters that were attacking them. There didn't seem to be any real resistance to his movements. Everything seemed chaotic, the Russian army's chain of command didn't know what to do, and they made mistakes, actually shooting at their own guys for no reason (just like with covid in the west, but I digress).

Why would Wagner risk everything with this disobedience and leave Ukraine? Apparently because they considered Ukraine a lost cause. Prigozhin was constantly complaining that he lacked ammunition, he pleaded for total mobilization, etc. Ultimately he realized that it would not be coming, while Ukraine would be getting more and more help from the west.

As a private company they have a better sense of which situations are winnable. To me, Wagner leaving Ukraine is big writing on the wall. Russia is still pushing for war in Ukraine out of inertia and confusion, because they never had an alternative and admitting defeat is too hard. But it is not rational to continue fighting, even from the point of view of a Russian supremacist. Prigozhin did a very rational thing in leaving Ukraine; he just did it in a very confusing manner.

P.S. And no, he is not going to fight for Ukraine, because he is a Russian supremacist to the bone. His loyalties are fully with Russia and only with Russia.

Expand full comment
founding

Wagner left Ukraine in part because they were exhausted by eight months of slow, grinding, bloody "progress" in Bakhmut, and in part because the Russian Army ordered them to. The Russian Army controlled their supply lines, air and artillery support, and had ~10x as many troops in Ukraine, so that's an order not easily refused. It doesn't matter whether Wagner thinks Ukraine is a lost cause or easily winnable; only that the 800 lb bear is saying they won't be allowed to be the ones to win it (and claim the credit and glory for it).

Then the Russian Army (and government as a whole), ordered Wagner to disband and for all of its soldiers to sign up as recruits in the Russian Army, by the end of the week. It's not surprising that neither Yevgeny Prigozhin nor the Wagner rank and file were all that thrilled about it, but if they were going to do anything about it they were going to have to do it this week.

So they did, as best they could. The only question is why their boss chickened out at the last minute. He did, and now Wagner is done.

Expand full comment

They are a private company made from criminals that is interested in profit, not credit and glory. Them becoming a part of the army also doesn't make sense, because it will only make them less effective. And since they were not actually winning, this is even more bizarre.

Maybe Wagner had already decided to leave Ukraine - they didn't make it public except through the grapevine - and the Kremlin tried this order as a last option to keep Wagner's forces?

P.S. What I've noticed is that many base their narratives on storybooks - like "a dictator cannot look weak", etc. Maybe in real life it doesn't always work like that. Does this make Putin weaker? Sure. But will it cause his downfall? Maybe not. He has been weak since the start of this "military operation". Sometimes things happen by inertia and no one is interested in overturning the tables, so to speak. It was the same in western countries when anti-democratic and useless lockdowns were introduced. I couldn't believe that they would last so long even when their effectiveness was clearly disproved. The same about vaccine mandates and mask mandates. In fact, most vaccine mandates were introduced after the public knowledge that they don't prevent the spread of infection. It didn't make sense, but apparently people tend to stick to bad ideas for too long. Russians may be supporters of the war in Ukraine and sing the glories of their army as liberators from nazism long after they are utterly defeated in this war.

Expand full comment
founding

I dispute your claim that Wagner was only in it for profit. I'm pretty sure their leaders at least were in it for *power*, and a fair bit of the rank and file were in it for glory/ego as well as the paycheck. Bottom line, Prigozhin was a warlord, and Wagner was his power base.

Wagner in the Russian army makes sense because that makes Wagner a somewhat dissonant part of Putin and Shoigu's power base, which is strictly superior to a rival warlord having any power base that he might use to someday make a bid for the purple, maybe on a day Putin and Shoigu wouldn't have been able to stop them.

Wagner leaving Ukraine on Prigozhin's orders makes no sense unless Prigozhin has something better for them to do. Sitting around waiting for people to forget their victories (however marginal) in Ukraine, and failing to make a profit (because who is paying them in that scenario?), is not a better thing for them to do.

Expand full comment

There was no glory in Ukraine, only utter defeat, loss of all your men, and infamy.

There are always things to do. Even after the so-called failed coup, Wagner was able to find service in Belarus.

I noticed that many commenters say that Prigozhin chickened out of going all the way to Moscow. I am not a native English speaker, but it sounds as if the coup failed only because Prigozhin wasn't brave enough. The reality is that he would have been crushed, and he made a brave choice to retreat before that happened.

Expand full comment

I like this analysis, including the comments below. My suggestion of hiring Prigo is somewhat tongue in cheek. We know he'll challenge Putin and possibly even fight against Putin. That does not prove that he would fight against Russia.

This is without doubt the most interesting thing we've seen in Russia, certainly in my memory. From any point of view it's good for Ukraine.

Expand full comment

I don't think the plan was to march on Moscow with just Wagner; he asked for and expected help from parts of the regular military. When that didn't pan out, he backed down.

Expand full comment

If that's true, it supports the idea that Prigozhin understands very well when a situation is unwinnable and backs out instead of meeting his ruin. The fact that he backed out of Ukraine also means that he didn't think Russia could win this war.

Expand full comment

This makes sense.

Expand full comment

"Epitaph on an Army of Mercenaries

BY A. E. HOUSMAN

These, in the day when heaven was falling,

The hour when earth's foundations fled,

Followed their mercenary calling

And took their wages and are dead.

Their shoulders held the sky suspended;

They stood, and the earth's foundations stay;

What God abandoned, these defended,

And saved the sum of things for pay."

I leave it up to the reader as to whether this should be taken as a prognostication of the eventual fate of the Wagner Group.

Expand full comment

Dunno about the Wagner Group as a whole but Prigozhin should definitely plan to avoid hotel room windows above the 2nd floor, have a full-time taste tester for all his food, etc.

Expand full comment

Avoid cups of tea, don't visit any quaint English villages or historical sites, etc. 😁

Expand full comment

Thank you.

I didn't know about this poem.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Should note here that Housman used "mercenary" in reference to the professional soldiers of the British Expeditionary Force, in contrast to the conscripts that bulked out continental armies in 1914. Soldiering was their profession, not an obligation of fighting age male citizenship.

Expand full comment
founding

The poem was later adopted by Claire Chennault and the American Volunteer Group aka Flying Tigers. Who were indeed professional mercenaries in the traditional sense, and whose shoulders did hold the sky suspended for a brief but critical time.

Expand full comment

I didn't know that, thanks!

Expand full comment

It would be their last job, and once Russia is defeated they would know this, and so you’d have a band of people unable to find jobs wandering around with a shit ton of military grade weapons in the area.

Expand full comment

Just give them some territory and let them set up their own state. Kaliningrad/Koenigsberg/East Prussia looks like it's going spare.

Expand full comment

On past evidence, Prussia doesn't seem like the safest place to plant large numbers of experienced soldiers. Well, maybe it would be OK, as long as we don't let them start listening to oom-pah military band music! :-)

Expand full comment

Pay them fuck-you levels of money upon victory & stand-down.

No reason to keep the equipment & keep fighting if you can be set for life by stopping.

Expand full comment

Yes.

So let them take and hold Crimea!

Expand full comment

What's up with the Netanyahu trial? It's dragged on for over three years now. In the American justice system, pre-trial preparations often move at a snail's pace, but the trial itself tends to be relatively quick. Is this par for the course for Israeli corruption investigations?

Expand full comment

Olmert was indicted in 2009 and charged in 2012 so Bibi's trial is taking longer than usual.

Possible explanations:

*Bibi still being in power complicates things. There might be scheduling and political problems.

*Olmert only had one case at a time, Bibi has 3 at once.

*Bibi has a lot more public support than Olmert did; this might have an effect on how fast things move forward, and everything is under much more scrutiny.

*(pro-Netanyahu explanation:) The cases aren't extremely clear cut (politically motivated fabrications?) so they are taking a long time to prove. Just this week the judges hinted to the prosecution that they should reduce the bribery charge in Case 4000 to a lesser one which will be easier to prove.

*(anti-Netanyahu explanation:) Bibi is actively using his power to delay the trial. He used covid to postpone everything for over a year and is now causing havoc in the judicial system and putting it on the defensive so it can't prosecute him.

Expand full comment

Olmert was convicted* not charged in 2012

Expand full comment

Is this all one trial? I had assumed this was like Trump where there were dozens of separate things going on, each at a different pace.

Expand full comment

There's three or four things, not dozens.

Expand full comment

(epistemic status: random thought)

Have you grown tired of the acronym TESCREAL? (After all, when's the last time you heard of "Extropianism" or "Cosmism"?) If so, I would like to propose to you a new acronym which better reflects the realities of the greater rationalist community: PAPER, which stands for

- P for Postrats

- A for artificial intelligence Alignment researchers

- second P for probabilistic Predictors

- E for Effective altruists

- last but not least, R for Rationalists.

Expand full comment

I’ve never heard this acronym before (I always think of this set of groups as the “greater rationalist community”) but why isn’t it CLEAREST, or at least ARCSTEEL?
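
Both proposals use the same eight letters as TESCREAL, just reordered - a quick Python check:

for name in ("CLEAREST", "ARCSTEEL"):
    # Same multiset of letters as TESCREAL -> prints True twice.
    print(name, sorted(name) == sorted("TESCREAL"))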

Expand full comment

The reason why you’ve never heard this acronym before is because I made it up.

CLEAREST or ARCSTEEL are interesting proposals though.

Expand full comment

Ah, then what are the eight terms that you're grouping here? Since you said "are you tired" I thought that people might be expected to be familiar with the list.

Expand full comment

"PAPER" is new. TESCREAL is not -- but it was designed to be a negatively-charged exonym so it sounds stupid on purpose: https://twitter.com/xriskology/status/1635313838508883968

Of course, self-aware trolls like Marc Andreessen are also using it, possibly "post-ironically".

And it means: "transhumanism, extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism, and longtermism."

Expand full comment

Ah, that context makes a lot more sense. And I think it also makes sense of why the letters go in that order - it's roughly supposed to be the order that these terms were introduced by self-identified practitioners. (At least, I recall that "transhumanism" and "extropianism" were terms used already in the '90s, and I think that "Rationalism" in this sense dates back to the mid-2000s, while "Effective Altruism" and "longtermism" are more recent ones.)

Also, while I disagree with a lot of what Emile Torres currently writes (though I liked their earlier work before they changed their mind on all this), this intellectual history is a bit more meaningful (other than the -- central -- dig at transhumanism, and thus everything that's related to it, as effectively eugenics).

Expand full comment

There is a certain sense in which I do not yet understand why those people think "transhumanism = bad" but "transgenderism = good", other than the obvious "one group is red-coded and the other group is blue-coded".

Expand full comment

Hath the "greater rationalist community" fallen so far as to willingly adopt the label of "postrat"?

Expand full comment

I guess the idea is that postrats are hanging out in the rationalist communities, thus if we need a label to describe the people there we should include "postrat" in it?

But yeah, I would be against adopting the "postrat" label, as the distinction between "rationalist" and "postrationalist" is just in vibes and political affiliations.

Expand full comment

I'm running a very small prediction market[*] (with AUD100 prize money) to help decide what I should focus on in my company that does prediction market infrastructure. Anyone with any insight into this space (or even if you just have opinions and want some cash), feel free to join:

https://genius-of-crowds.com/app/invite/AJXZH3397

I've limited it to 50 participants.

[*] Except that it's not really a liquid market, and it has various other constraints to make sure it complies with the laws on these things.

Expand full comment

Realizing I have a poor memory for small tasks, like learning new people's names or remembering whether I've picked up the mail. Does anyone know any good approaches for strengthening detail memory? Exercises, diets, whatnot?

Expand full comment

For things like whether you picked up the mail, don't bother remembering it inside your head - remember it in your environment. People who take a daily medication often have a pillbox with the days of the week on it, with one day's dose in each container, so you just check whether today's dose is present or not to remember whether you've done it (and on the weekday you usually refill it, you know whether it should be empty, or full minus one day, or whatever). Similarly, if you've got a mailbox key, maybe put a set of labels for the days of the week on the stand where you keep it, and move it to the next day when you check.

Expand full comment

My main concern is for dealing with new info at work; a hundred boxes come in, and thirty of them are duplicates that don't need checking, but I end up having to check all thirty because I don't remember which numbers have already been checked. Moving boxes tells me which ones I've checked but not which ones can skip checking.

Expand full comment

For work-related things I find it very helpful to have a physical notepad small enough to fit into a pocket.

If you write down each number when you check it, you won't have to look at all the boxes to check for dupes.

If you have a separate section on the page for each initial digit (numbers starting with 1, numbers starting with 2, ...), you can check boxes in any order and only need to read one section's numbers to see if a box is a dupe.
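
In code terms, the notepad-with-sections is just a set lookup - a toy Python sketch (the box numbers below are invented for illustration):

checked = set()

def already_checked(box_number):
    # True if this number was logged before; otherwise log it now.
    if box_number in checked:
        return True
    checked.add(box_number)
    return False

for box in ["1042", "2077", "1042", "3001"]:
    print(box, "skip (dupe)" if already_checked(box) else "check it")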

Expand full comment

Not a fix for the underlying problem so much as a workaround, but at least for people's names just avoiding them going "in one ear and out the other" goes a long way. In the moment you're being told the name make sure you don't space out, and if you can hold onto it for ~5min chances are much much better that it'll stick with you at least for the short-term. If you need longer-term storage, just write them down somewhere and refer back as needed.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Probably a commonplace observation, because it must be the same for many people, but I find it much easier to remember people's forenames if I also know or find out their surnames, because the full name then has a kind of rhythm and is generally more unique.

I sometimes forget to do the Wordle puzzle first thing in the morning. This is quite vexing, as I record each day's words and my guesses in a text file for possible future analysis, and (as far as I know) one can't go back and do a past day that has been missed.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Practicing mindfulness helps. No, not the silly staring into space kind. Actual mindfulness where your mind is focused on the activity you are engaged in. I suffer from a similar problem, and I came to understand that in my case it's the nonstop internal monologue that interferes with my attention.

So when I pull out of the garage I say ‘closing the gate’ when I press the button to close the gate. When I finish fueling the car I say ‘close the tank’ as I screw the cap in (guess why I started doing this :) When I cook I measure and stage all ingredients so that I don’t have to worry if I already added salt or not. Every time I leave the house I say ‘phone, wallet, keys’.

So that’s the basic idea. I force myself to explicitly pay attention to the task by saying out loud what I’m doing.

Expand full comment

>not the silly staring into space kind. Actual mindfulness where your mind is focused on the activity you are engaged in.

No. Come on. Please, don't. All of it is mindfulness. [Conscious awareness of the present moment] is a simple, coherent, single concept, and separating it into "good" and "silly" parts is actively crippling the comprehension of it.

I mean, by analogy, it's as if you said "physical activity, but not the silly lifting weights in the gym kind, the actual physical activity where your body gets strengthened by the activity you are engaged in". But just like some people do benefit from training in gyms or other kinds of structured exercise, some will benefit from "staring into space", and those benefits will subsequently translate into improved focus in other circumstances.

Expand full comment

> I mean, by analogy, it's as if you said "physical activity, but not the silly lifting weights in the gym kind, the actual physical activity where your body gets strengthened by the activity you are engaged in".

I agree that calling sitting meditation silly is wrong. I've personally analogized sitting meditation vs real life mindfulness like this:

- sitting meditation = basketball player practicing throws from various positions, alone in an empty court

- applying mindfulness in real life = playing an actual game of basketball

Forgoing meditation practice entirely is going to make you worse at playing the game in real life. But only doing the practice is also wrong; the point is to be changing how you use your mind at every waking moment, not just during the practice - perhaps that was what they were going for with the "silly staring into space kind" phrase?

Expand full comment

"perhaps that was what they were going for with the "silly kind staring into space kind" phrase?"

Yes, pretty much. I have nothing against meditation; I don't do it, but clearly many people do, and benefit from it. But somehow it also became synonymous with "mindfulness": I just typed it into DuckDuckGo image search and everything is a person sitting in lotus. How this is "mindfulness" when the thing you're supposed to do is "empty your mind; if a thought comes in, note it and wipe it off" or some such is a puzzle to me. I can't do it; my internal monologue just keeps running about wiping thoughts :)

Lifting weights is an interesting thing to riff off. When setting up a deadlift, you bet I practice mindfulness, saying things like "keep shins vertical", "space grip evenly", etc., like a checklist. The price of error can be quite high. When lifting kettlebells for reps, I let my mind wander or drown it in heavy metal, because the reps need to flow, and focusing on minutiae at that point is counterproductive.

Also note to self: "your attempts at mild good-natured irony do not translate at all".

Expand full comment

> How this is "mindfulness" when the thing you supposed to do is to "empty your mind, if a thought comes in, note it and wipe it off" or some such, is a puzzle to me. I can't do it, my internal monologue just keep running about wiping thoughts :)

> Lifting weights is an interesting thing to riff of. When setting up a deadlift you bet I practice mindfulness, saying things like "keep shins vertical", "space grip evenly", etc., like a check list.

The way you describe mindfulness seems to always include verbalizing things, which to me feels a bit weird, because I consider "mindfulness" to be a wordless state of focus on anything but one's internal (verbal) monologue. I also use cues like "spread the floor" when squatting or "bend the bar" when benching, so I know what that's like, and I would call that "being focused" rather than "mindfulness", though I suppose the two are very closely related.

In the past I've done the sitting kind of mindfulness practice regularly for about a year. I stopped doing that mostly due to plateauing in my progress toward a deeper state of focus. But I did continue to try to be mindful during ordinary activities. For example, when washing dishes, I'll turn my attention to the feeling in my hands and the sound of the water and focus on doing the washing in an unhurried way, while keeping my internal monologue from taking over my attention when it inevitably tries to. The benefit I get from this is that otherwise I would do the washing in a hurried way that makes me stressed, because my internal monologue is hurrying me along by thinking about what needs to be done next.

Expand full comment

I verbalize to interrupt the flow of the internal monologue and force it to converge with the reality of my current activity. I mostly use it for either critical efforts where the price of inattention is high, or for things I know I tend to forget about, like closing the garage.

It's a coping mechanism, nothing more.

And yes, I think "being focused" is really how I see "mindfulness". As in, "my mind is filled with this activity I'm doing".

Expand full comment

How long do you reckon it will be before humans go extinct for the same reasons as our hominid ancestors? It’s estimated that modern humans first appeared around 300,000 years ago, so as a first pass I’d place the over/under at 300,000 years in the future. But I can see the argument that because our environment has changed so much in the past few hundred years -- I don’t mostly mean the Earth-sized environment of animals, vegetation and weather -- likely those things matter now less than ever -- but the day-to-day environment of our social and economic lives -- that Darwinian pressures are higher now than in the past -- look at our current fertility rates -- and it is justified to move that number up by an order or two of magnitude.

Expand full comment

Darwinian pressures on humans are non-existent. Low fertility isn't driven by discomfort but comfort, and there's no survival of the fittest.

Expand full comment

Humans who are 20 years old today are much less likely to reproduce than those who were 20 a half-century ago. That means Darwinian pressures are higher now. The reasons don't matter. I'm not saying fertility isn't a self-correcting problem, but the correction will likely involve a significant change in the gene pool.

Darwinian fitness is defined entirely by how much one is likely to reproduce.

Expand full comment

Fertility rates are a self-correcting problem. Whatever attributes and temperament are required to live in a modern environment and still decide to have a pile of kids ... those are currently under heavy positive selection. I don't know that the correlated attributes are particularly positive from my perspective - should the future belong to the Hutterites and the Hasidim? But we're starting from too high a base population for extinction from underbreeding to be a real threat.
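
The compounding is stark even with made-up numbers - a toy Python sketch (the fertility figures are invented purely for illustration, not data):

# Start: 1% of the population averages 3 children, 99% averages 1.3.
high, low = 0.01, 0.99            # population shares
r_high, r_low = 3.0 / 2, 1.3 / 2  # growth factor per generation (TFR / 2)

for gen in range(1, 11):
    high, low = high * r_high, low * r_low
    print(f"generation {gen}: high-fertility share = {high / (high + low):.1%}")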

Expand full comment

Our hominid ancestors did not go extinct; we are their descendants. And that includes the Neanderthals and the Denisovans. Certain particular gene patterns have gone extinct, but that kind of thing happens every reproduction cycle. Children don't accurately reproduce their parents. (FWIW, I consider the Neanderthals and the Denisovans to actually be Homo sapiens. Just regional varieties. Partially on the basis that many of their genes have been identified in modern populations.)

Now there certainly are branches that probably did go extinct, like Homo naledi. They had probably diverged too far to breed back into the line that survived. But hominid ancestry is not a linear thing; it's more of a directed graph. (I want to say lattice, as that more closely fits the image I have of the web of ancestry, but that term has been conscripted with a different meaning.)

Expand full comment

If human society survives the next couple of hundred years without collapsing I'd guess we make it till the end of the universe, or even longer. (Depending on your definition of human.)

I don't see fertility rates being of any importance if you have technology capable of eliminating the issue. (something we arguably already have.)

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

As a related question: if humans do go extinct or, in the end (as I think very likely), come to think of Earth as a kind of nursery and decide it's time to move on entirely and let some other species have a go at achieving human-level intelligence, which animals are the most likely candidates?

Obviously primates, especially chimps, would be the most likely candidates. But some other animals might be in the running longer-term. Examples include certain intelligent species of birds, such as parrots and crows, and even cats or dogs. (I read somewhere that cats, even small ones such as domestic cats, have vastly increased in intelligence over the last 30 million years based on skull shapes.)

Then there is the question of whether human intelligence was a fluke, which might never be recreated in another species if a similar pattern of climate challenges was not repeated, or whether given enough time it is an almost inevitable development.

Expand full comment

I wonder if ants or bees have a chance.

Expand full comment
Jun 28, 2023·edited Jun 28, 2023

Who knows, but various things would have to challenge them over time, or else evolutionarily they'll be content to continue on their merry way much as they are now.

After seeing the funniest animal video ever, I think flying squirrels must also be in with a chance:

https://www.youtube.com/watch?v=MswXiMzGsd8

There's no doubt the pet critter is acting out a dramatic accident with the brush, and playing dead. It even craftily peeps at its owners, to check they are watching, and adjusts the position of the brush handle to make the scene look more impressive!

The serious aspect of the video is that the animal must have some notion of its own death, and a "theory of mind" of a sort to understand and anticipate how its owners will view the scene.

Expand full comment

Did you mean to write "move that number down"? I'd otherwise be confused by how higher Darwinian pressures enable humans to survive longer instead of shorter

Expand full comment

I wrote "up" because I was thinking of it as a date in the future. But you are correct, I said "number" so wasn't clear. I meant move the prediction to 30,000 or 3,000 years in the future.

Expand full comment

Well a lot depends on how you define 'humans'. But I'm going to guess one million years. Seriously I have no idea.

Expand full comment

Did our hominid ancestors go extinct? I would have thought that by definition if they're our ancestors then they didn't go extinct, they have living descendants.

If you're asking about hominids who aren't our ancestors, then I'm guessing they largely went extinct because our ancestors outcompeted them.

Expand full comment

Aren't Australopithecus, Homo habilis, Homo erectus and Homo heidelbergensis our ancestors?

According to ChatGPT: "Homo heidelbergensis: This species lived around 700,000 to 200,000 years ago and is believed to be a common ancestor of both Neanderthals and modern humans."

Extinction usually refers to the end of a species.

Expand full comment

"Species" is a very fuzzy word when talking about development over time. Old species don't go extinct and get replaced by new species, it's just that eventually some paleontologist finds a skeleton that looks sufficiently different to another skeleton that they say "meh, different species I guess".

If, in three hundred thousand years, we have living descendants (whose skeletons might look identifiably different to ours) then I'd call that a damn good outcome.

Expand full comment

I'm still laughing at "meh, different species I guess".

It made me realize how flawed my conception of species was. I think I get it now.

Expand full comment

Same answer as below. I didn't realize ancestor species aren't considered extinct.

Expand full comment

Ancestor species by definition never went extinct. They developed into successor species.

Expand full comment

I didn't realize ancestor species aren't considered extinct.

Expand full comment

This is one of those messy terminological issues that modern biology deals with by thinking in terms of cladistics. The terms they generally use (whether "Homo sapiens" or "hominid" or "primate" or "mammal" or "dinosaur" or "fish" or whatever) are meant to be "clades", that consist of some individual, and all descendants of that individual. They don't claim to have actually identified which individual in any direct way - but they do something similar by defining "crown groups", by pointing to some individuals and referring to all things descended from the last common ancestor of those individuals (e.g., "Homo sapiens" includes every individual descended from the last common ancestor of all currently living humans) and "pan groups", by pointing to two individuals and referring to all things more closely related to the one than the other (e.g., "hominin" includes every individual that is more closely related to a particular modern human than to any modern chimpanzee).
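
To make the crown-group definition concrete, here is a toy Python sketch (the tree is invented and wildly oversimplified, just to show the computation):

# Toy phylogeny as child -> parent links.
parent = {
    "bird": "dino_ancestor",
    "tyrannosaurus": "dino_ancestor",
    "stegosaurus": "dino_ancestor",
    "dino_ancestor": "archosaur_ancestor",
    "crocodile": "archosaur_ancestor",
}

def lineage(taxon):
    # The taxon itself plus all its ancestors up to the root.
    out = [taxon]
    while taxon in parent:
        taxon = parent[taxon]
        out.append(taxon)
    return out

def crown_group(a, b):
    # Find the last common ancestor of a and b, then collect
    # everything descended from it (including the ancestor itself).
    lca = next(t for t in lineage(a) if t in set(lineage(b)))
    return {t for t in parent if lca in lineage(t)} | {lca}

# Includes "bird" but not "crocodile": the dinosaur clade never died out.
print(crown_group("tyrannosaurus", "stegosaurus"))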

Under cladistics, you end up saying some weird things, like that the group "dinosaurs" (crown group of tyrannosaurus and stegosaurus) includes modern birds, and that "fish" ends up particularly weird (the pan group of everything closer to a goldfish than to me includes most things we ordinarily call fish - but not coelacanths or lungfish or sharks or lampreys; the crown group of a goldfish and a coelacanth includes all the things we usually call "amphibians" or "reptiles" or "birds" or "mammals", but still doesn't include sharks or lampreys).

But on a cladistic view, "Homo erectus" is going to include all its descendants, like neanderthals and modern humans.

I think people who study fossils usually revert to non-cladistic definitions of species - but then it's usually clear that it's arbitrary whether some individual fossil is classified as being Homo erectus, or one of the close successor species.

At any rate, there was no event at which every member of the Homo erectus family was dead. There was a point in time at which all surviving members of that family were either neanderthals or Homo sapiens or classified under a different species name. But there were innumerable little branches that didn't become distinctive enough to get their own name that, for one reason or another, didn't end up having any living descendants. (You can probably find some little branches like that in your own family tree if you go back a few generations and find a cousin who never had kids, or a third cousin who had a few kids whose kids never reproduced, or whatever.)

Expand full comment

Thanks for that explanation. I think I actually learned something.

Expand full comment

It was my understanding that, cladistically, all vertebrates are fish, because all vertebrates either clearly fit the usual understanding of "fish" or are descended from something that does.

Expand full comment

Why would you use ChatGPT as a source?

Expand full comment

Speed. If it's wrong someone can point it out.

Expand full comment

A webinar next Wednesday if you want to hear about ways to contribute to building a futuristic city in the Caribbean (Próspera): https://us02web.zoom.us/webinar/register/WN_rD95DWZFQqCDoI61YsC_rg

Expand full comment

I Googled 'Prospera' and it turns out Scott has already written about it -- you probably knew this already, but for people like me who didn't: https://astralcodexten.substack.com/p/prospectus-on-prospera (2 years out of date though)

Expand full comment

Yes, he wrote an update since. Below a more comprehensive list:

Intro Video: https://www.youtube.com/watch?v=0VKGtYooaTY

Website: https://www.prospera.co/

Scott Alexander on Prospera - Part 1: https://astralcodexten.substack.com/p/prospectus-on-prospera

Scott Alexander on Prospera - Part 2: https://astralcodexten.substack.com/p/model-city-monday-62722

Prospera CEO: https://www.youtube.com/watch?v=cVMxQ2umdPA&t=1s

Prospera CDO: https://rss.com/podcasts/stranded-technologies-podcast/873798/

Telegram Group for Visits: https://t.me/+xhTw-dudXBc1NmE6

Prospera Discord: https://discord.gg/8eB8ceJKuq

eProspera (for forming entities & get e-residency): https://eprospera.hn/

Prospera Lawbot: https://www.prosperalawbot.com/

Legal Info: https://pzgps.hn/

Expand full comment

Wow, thank you for the roundup of links, much appreciated.

Expand full comment

Do people have strong feelings on the topic of modern dog overbreeding? I know little about the subject, but this was a recent Hacker News discussion point where several people had very strong opinions that:

Most or almost all dog breeds these days are heavily overbred

Overbreeding has introduced a bunch of not just physical but also behavioral issues, apparently 'ruining' some classic breeds (the German Shepherd was specifically named)

Organizations like the AKC are broadly to blame (and are in general Bad) - the overbreeding was apparently done just so each breed would meet the physical/visual standards they set. I.e., a Golden Retriever must, say, be between these two heights and have a coat in this color range, so I guess (?) cousins are bred to achieve or maintain this standard

The only dog breeds these days that are not 'ruined' are breeds that are still working dogs. (People mentioned the Belgian Malinois as an example that replaced German Shepherds for police or military usage)

I was taken aback by the vehemence of the comments, but I generally have a high opinion of the HN crowd on technical/engineering topics, a high moderate opinion on scientific topics, then it goes downhill fast to economics and then the lowest quality, which is obviously politics. But I would still rate canine genetics as a 'scientific' topic, so we're still in the high moderate category. Do other people share these strong opinions on the topic of modern dog breeds/overbreeding?

Expand full comment

It's super weird that we keep breeding pet dogs to absurd breed standards, rather than for, say, health, temperament and friendliness. You would think everyone but the breeders would prefer dogs with the latter qualities over theoretically "proper" breeds with all kinds of dysfunctions.

Expand full comment

Strong opinions? Well, it's true as a factual matter, but there are a lot of implied value judgements. E.g. I consider a breed ruined if it generally develops hip dysplasia at a young age. The dysplasia is a factual statement, but the ruined is a value judgement.

Expand full comment

I wouldn't say strong opinions, but: (1) the craze for purebred over mongrel when you just want a family pet, and (2) the clear production of animals that are unhealthy just to meet some arbitrary standard

Show dogs are one thing, but you would imagine Kennel Clubs would draw the line at unhealthy animals and revamp the standards. Breeders probably are to blame too, and the stupid high prices charged for "this is a pedigree animal with papers".

You can see by photos from the 19th century the difference between the original breed and the modern overbred ones. Basically, I blame snobbery: having the purebred, overbred, Kennel Club standard dog as a status marker.

Plus breeders inventing crosses and mixes to create new markets. Oh, the latest must-have dog is the chihuahua/American pitbull mix! It wants to tear out your throat but can't jump high enough!

Expand full comment

I don't think I can even add much over what other commenters already said, but just to state the meta point explicitly:

[Severity of the claim] and [vehemence of the claim] are two different things, and you shouldn't mix one up with the other. You cannot assume that the passionate opinions are unusually far-reaching - or that the far-reaching opinions are unusually passionate. That's just the fallacy of moderation; sometimes things are genuinely really bad, to a degree that will surprise you if you've never paid attention to them before. (Also note that the people you'll encounter talking about any given issue are bound to be passionate about that issue to some degree, for the obvious reason that if they weren't, they wouldn't use up their limited time on Earth talking about it. You should probably discard a speaker's vehemence altogether as a valid input to your heuristic for judging novel information.)

Expand full comment

As the owner of a mutt of unknown pedigree, I have strong feelings about overbreeding which would probably annoy all sides of the debate. On one hand, I love some pure-bred dogs. They're beautiful, and in general have somewhat more predictable behavioral traits than mutts (though only to a small degree). I would love to own, say, a Bernese Mountain Dog, because every time I see that huge, beautiful head, I want to pet it forever. But. With breeding practices as they are, BMDs are prone to pretty much every disease out there, rarely survive past 8 years, and often die of painful cancers. I don't want to live through that, so I won't be getting such a dog.

But what's the alternative? A moderate approach - more careful selection to avoid overbreeding - would be nice, but hard to enforce. Especially in my country, where there are currently no controls over commercial breeding. In theory, you have to register with the local kennel club and be a responsible breeder, but in practice, there are numerous non-registered puppy mills which offer puppies for sale online. They WILL continue to exist, because their prices are half what any official breeder charges, and a lot of people want a pure-bred dog for cheap. Best case, they will just go deeper underground, but the market demands their existence, because people are idiots and corruption in our law enforcement ensures that no strict law is ever applied to people who can pay.

My only hope lies with science, in this case. I really hope for genetically engineered dogs (and cats). Cleaning up the human genome is riddled with moral and legal difficulties, but "fixing" a dog so it won't have allergies to everything, and leg problems? That seems at once a less risky venture and a very profitable one. People want healthy pets, and some are ready to pay for them. Of course, the first engineered dogs would be very expensive, but they would enter the breeding population (one can hope that the "fixes" will be heritable) and improve it, even for irresponsible breeders, simply because the general quality of puppies will improve.

The radical alternative is to stop breeding dogs altogether and allow them to revert back to "natural" genome... But I'm not sure I want that. I love them spaniels, them golden retrievers and them BMDs too much. This is, of course, egoistic, but I freely admit I'm quite an egoist.

Expand full comment

Unfortunately, lack of a pedigree is no guarantee that your dog is healthy. That depends (partially) on the parents, and if the available population has a number of unhealthy traits, then your mixed breed is likely to share at least some of them.

Evolution depends on an ongoing process of selection. (Think a bit about what that means for people.) Fortunately we are rapidly approaching the point at which such problems can be cured by germ-line surgery. We'll probably use that on our dogs to fix their problems. Whether we'll use it on ourselves, though, is a bit uncertain.

Expand full comment

Centrist position? Breed 'back' to the less inbred, original versions of the dogs? Maybe you won't have the same fluffy bundle but it won't die of cancer in eight years time. That will widen the gene pool and won't need expensive genetic engineering.

https://www.youtube.com/watch?v=Rc69jYn-lKc

And of course, if they can engineer the dogs, they won't solve the health problems, they'll create even squishier faces and extremes for the purebred look and the poor animal will suffer and die in its short life. But you'll have the Absolute Platonic Form of the breed, and that's all that counts - the happiness of the human owner.

Expand full comment

> Breed 'back' to the less inbred, original versions of the dogs

I support this, and I know people are trying to do that, at least with French Bulldogs. We'll see if their quest succeeds, but I'm afraid that it might not work for all breeds, and it will be a very long process. But maybe this IS the most stable solution. Still, I think GE might work, too - there are enough people out there who would pay for a healthier dog, not a disease-ridden "platonic form".

Expand full comment

It's one of many godawful things I can't do anything about. The idea of these beautiful, excited, happy young animals being born with bodies that will soon become torture chambers is absolutely terrible.

Expand full comment

I am moderately convinced of the points you mention, just from general observation. But I'm afraid I don't actually **care** that much, so it doesn't really count as a "strong feeling"?

And even from my general position of apathy regarding our treatment of pets, I feel somewhat more strongly that most breeds of dog need constant companionship and a large yard, and depriving them of these things is inhumane. Like, breeding is more of an abstract thing; this is about inflicting suffering and calling it love, which disgusts me.

Expand full comment

for certain breeds (e.g., brachycephalic dogs), this seems obviously true, but less so for other popular breeds (doodles, for example); seems to vary depending on what the main market driver is on the consumer side (aesthetics, behavior, etc.)

Expand full comment

The various doodles are specifically hybrids, so they may have overbred parents, but they will themselves have a lot less of whatever recessive traits are causing problems.

Expand full comment

But also, poodles are just unusually healthy in the first place (zero crippling traits introduced by artificial selection, limited inbreeding problems thanks to breed popularity). I've always stereotyped them as lapdogs, and every time I actually spend some time with some, I always leave astonished by how surprisingly fit they are.

They really lucked into just being naturally cute. Still very much an exception, though.

Expand full comment

Poodles are also an older breed, I think, which probably reduces the inbreeding.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

The only thing I have direct experience with is top shelf black lab hunting dogs 20 years ago, and the real primo examples generally needed hip surgery after a couple years. Which seems bad.

Expand full comment

A girlfriend gave me a golden retriever puppy with AKC papers. He developed hip problems pretty young.

Expand full comment

While I sometimes get irritated at how promotion seems to take forever vs jumping around companies to make more money, I may be someone for whom this works out well (in some respects). Isn't anyone else afraid that just because you get promoted into a higher role, or hired into a higher-paying role, who can say if you'll actually be able to do it? I'm terrified all the time that I won't be able to actually perform at the higher levels that I seem to always be forced into. Is that something other people worry about? Or does everyone else just subscribe to the "fake it till you make it" mentality, and think it'll all be fine? Is it worth worrying about this at all?

Expand full comment

Maybe tell yourself that it's appropriate, when tossed into a new job, that you have to scramble and work hard to be decently competent at it. But it'll take some time before you're supposed to feel **good** at it. (And if you ever move past feeling good at it and become bored, that's a sign that you need to find more stuff to do, possibly involving a different position.)

Expand full comment

I feel like where I work, I'm consistently getting tossed into new jobs over and over again. Maybe it's just for me, maybe my bosses have a lot of faith in me, maybe others don't get this treatment, but I've basically been constantly given new, bigger scope tasks for at least the past 4 years. Sometimes this is after a promotion or a switch to being a new role, and sometimes this is in preparation for a promotion that's being carrot-dangled in front of me. As if I even care about promotion. The biggest reason I actually keep allowing this is because I'm worried about whether I'll be able to make it long-term in the industry if I don't keep learning skills, and make myself really really marketable.

Expand full comment

Idk, it sounds to me like your managers just really like you, and there's something about you that makes you come to mind when they're having the kinds of conversations that go "hmm, who would be good for x?" *snaps fingers* "I know, harold!"

I'm also going to guess that your company is growing quickly and/or is experiencing lots of employee turnover.

In any case, as long as you're being compensated well, and you don't see people around you getting fired left and right or constantly quitting because they can't take it, and you don't think you're being set up as a fall guy for someone else, I would say just roll with it, do what you can, and don't worry too much.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

If you just focus on how incompetent your coworkers are, it can help alleviate imposter syndrome substantially. But if your coworkers all seem super competent, you may be an imposter.

Expand full comment

I'm consistently impressed by my coworkers, especially those at the same high level as me. But I probably don't see enough of their work to fully evaluate them, since we're always scattered across the org, as senior folk.

Expand full comment

If your organisation seems competent then it means they don't tend to promote incompetent people. Which means that if you're getting promoted you're probably the right level of competence.

It's in incompetent organisations that you're most likely to be promoted into a position where you'll fail. But everyone else around you is failing too, so it doesn't matter. If you find yourself in that position, try to learn a thing or two and then shift laterally to a similar role at a better organisation.

Expand full comment

Hah, the organization is incompetent, despite the fact that the people in the organization are very competent. Or really, I have a lot more faith in people who are my peers or one level above me than I do in the people higher than that.

Expand full comment

I not only fear it, I know it for a fact. On the other hand, I'm starting to think I can't handle the lower roles properly either, so, there's a tradeoff to these things; if you don't push up against your limits, they'll start pushing down against you.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Yeah, definitely in the years before I was promoted to a more senior level, I began realizing how impossible it is to even get basic, ordinary projects done completely right, when considered from all perspectives. Prior to that, I thought it was all easy. So at that point, if I were in charge of my own growth schedule, I'd have stayed where I was for many more years while I learned to do it all perfectly: perfect planning, perfect execution, better design patterns, better everything. But instead, they promoted my incompetent-feeling self, so now I have to work at an even higher level all the time, telling other people to do the tasks that I know to be impossible to get completely right. And it's very hard to get the telling-other-people-to-do-it part right as well. And yet, I keep getting pushed higher and higher. I honestly wonder sometimes how badly one has to screw up for someone to actually think that one did a bad job, or for one to actually face consequences for one's poor work.

Expand full comment

Let me reveal to you the first question of a few meetings: «Which parts of the plan can we afford to mess up without ruining the main goal, given that we clearly don't have the resources to do everything right?» So, letting someone waste effort on «perfect» is not just not on the table, it is not in the same building as the table. What counts as truly failing… that might depend on the real aims. Oh, and consider that your actual job might be figuring things out about something; if you knew what was going on, are you sure you would still be doing the real job given to you?

Expand full comment

How much does location matter?

I'm a 20-something with some atypical interests (not straight, vegan, interested in rationality/EA, etc). I currently live in a small city with a very large 50+ community. There are very few other 20-somethings at work.

I feel like being in a city would be so much easier at least socially. But I'm also worried that I'm overhyping it and would be disappointed if I actually moved.

For people who live in urban areas, especially people who go to rationalist/vegan/whatever meetups: how much would you recommend it? Is making friends / finding dates noticeably easier?

I'm a SWE, so the job market is kind of weird right now, but I'm theoretically not location-locked.

Expand full comment

How small is your city? Some people use the phrase “small city” for a place like College Station or Oxford or Santa Fe (to pick three places I’ve spent time that are in the 100,000-200,000 range). Others use it for something in the million range, like Buffalo or Oklahoma City or Nashville or Prague or Nice.

If it’s the former, then you can easily go up more than an order of magnitude in population by moving, and that can make a big difference. If it’s the latter, you should think more carefully about what you specifically want.

Despite similar populations, there are very different feels for some of these cities. Buffalo and Santa Fe are historically much more important and thus have more established cultural institutions than their size would reflect. College Station and Oklahoma City are much more car dominated and will feel emptier, as though nothing is going on, unless you’re in someone’s house. Generally the European ones feel a lot more engaging than the American ones for this reason.

Expand full comment

My area is more like 75k people. I think the density is right above the threshold to make it *technically* a city, even though it feels like suburbia (a lot of roads don't even have sidewalks / painted intersections).

The only city you listed that I've been to is Oklahoma City, which did feel weirdly empty and unwalkable. "City" in my brain maps to something closer to Chicago, which was loud and walkable with everything I could have wanted within a short distance.

I don't think I'll be able to live in a European city, but my mind is open to any American city.

Expand full comment

Seattle? Dan Savage likes it a lot.

Expand full comment

Location matters a lot (huge upside), and you can always just move back or somewhere else if you don't like the new location (small downside), so it seems like a no brainer to me.

Expand full comment

The main downsides are that I would have to quit my job, and I'd probably be pretty far from family (so long travel a couple times a year). My job pays really well for my CoL, so quitting while tech is doing layoffs seems a bit stressful.

Expand full comment

In my experience NYC is like the easiest place on the planet to make friends. SF not so much. ymmv.

Expand full comment

Go to a small / mid-sized city that is awesome but everyone on the coasts snarks at, like Cleveland.

It's cheap and people want to hang.

Expand full comment

I've heard people say good things about Pittsburgh for similar reasons. Maybe I need to look more into it. Thank you for the thoughts :-)

Expand full comment

Pittsburgh is great. I've lived in a lot of places and it might be my favorite. Great COL, great culture, decent food.

Expand full comment

I’m in SF and it (or any other progressive city like it with equal or greater density) is hands down the single best thing for me on all axes of social life and my overall happiness, because of how highly I prioritize ease of getting around and available spontaneity in doing things with friends. This is really weighted to my own interests though and what I like to do socially, so ymmv.

Expand full comment

Glad to hear it's enjoyable! Those are definitely things I loved about college — getting everywhere on foot, spending lots of my free time around my friends, and being able to drop into random events pretty much any time.

To tie into the other comment — did you move to SF with friends, or start a social network from scratch there?

Expand full comment

Was thinking about your situation and had a coupla thoughts for you. One is that your ability to connect with those devout Catholic people is unusual, in a good way. Maybe you do not need to seek out a place that has lots of vegans, rationalists, etc. Of course you don't have to be someplace where there are *none,* but you might be able to have quite rewarding friendships with people who don't tick those boxes. I have found that there is almost no relationship between how correct, according to my standards, somebody's beliefs are and how much they have to give as a friend. Occasionally I've even discovered that somebody I like a lot is quite racist or homophobic. Sometimes those attitudes seem to be sort of an isolated area of bad thinking, like a little tumor. Other times the prejudice seems to be mostly the result of their never having gotten to know somebody gay or non-white, and so they have a sort of childish idea that these people are weird, dangerous entities.

Other thought: It's hard to decide where to go based on people's descriptions. Since you are able to be a digital nomad in the US, maybe you should try moving around and staying in Air B&B's in different places for a coupla months per place. Oh yeah, you didn't mention things like climate and beauty of the city and its natural surroundings, and access to wild areas for hiking, etc -- and cities differ hugely in those. If you're mostly an urban person that doesn't matter, but you still might care about the physical attractiveness of the city itself.

Expand full comment

My mind is definitely open to meeting new people that I have little in common with. I just need to figure out how to do that in The Real World, since people at work and meetups for my hobbies tend to be inside my bubble. But my very-religious friends are absolutely delightful, so I definitely agree on there being little to no relationship between "correctness" and "closeness" with my friends so far.

I don't particularly care about hiking/nature access. I like being in urban areas more than natural areas, though I think my equivalent of proximity to nature would be proximity to a good (large & clean) library.

Remote work seems hard in its own way — wouldn't being constantly transient prevent the formation of any deep, lasting friendships? It seems like settling in a city (almost any city) would buy lots more microfriendships. Though you do make a good point that I don't know much about cities — I've briefly been to Chicago and Boston, but reading people's thoughts about SF, NYC, and Seattle only gives me partial pictures of each city.

Thank you for your in-depth thoughts :-)

Expand full comment

Oh, I didn't mean to be constantly transient, just to do Air B&B for a while to get a feel for different cities, then when you find one you like, get an apartment. As for meeting people, of course you should go to events for people who are like you in important ways, but it's good to have one interest that kind of cuts across all the other categories -- politics, sexual orientation, job, etc. For me that was hiking and rock climbing. There are probably urban things like that too -- improv classes, cooking classes, writing classes, folk dancing, salsa dancing . . . Anyhow, wishing you well in your search.

Expand full comment
Comment deleted
Expand full comment

My area doesn't have an ACX meetup, but there are at least TTRPG groups and such, so there are a few things I'm interested in. Do you like your Midwestern city?

Expand full comment
Comment deleted
Expand full comment

My choice of existing social network would be my sister (who lives in Oklahoma, which I don't think of as very social) or a couple of college friends, who are all very devout Catholics so have social groups I have ~nothing in common with.

It sucks to hear that you've found people there cold and unapproachable. I was hoping to start from scratch socially, but that'd obviously make it a lot tougher.

Expand full comment

There are "devout" 20-something Catholics you know from college who have particularly unique social groups?

What would those social groups be? Cheering for Notre Dame? Going for dinner and drinks after Saturday Mass?

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Going to Latin Mass together, weekly related book clubs (right now they're doing Boethius), occasional activism for Catholic causes (pro-life), going to morning Mass together a couple times a week, volunteering at Catholic charities (Catholic Worker), rosary/prayer groups. I've been invited to most of these — they definitely don't intentionally exclude me — I just feel out of place as a materialist atheist.

Expand full comment

Latin mass?

May I ask what college? I would not use the word "devout" to describe 20-somethings going to Latin mass. I'd use the term weirdo reactionary. The pope has said enough is enough with Latin mass.

20-year-olds going to rosary and prayer groups and mass more than once a week? I haven't missed mass in 6 decades, and I'd say feeling out of place with these cultists is probably a good instinct.

Are you trapped in a convent or monastery? Blink twice.

You've described a caricature of a Catholic. As you mentioned Oklahoma, I hope you have electricity and are staying cool in hot weather.

Expand full comment

Notre Dame / Holy Cross / Saint Mary's. (All 3 are close together.)

Latin Mass is still allowed with papal dispensation, which ND has.

I don't consider them weirdo reactionaries / cultists. They just get a lot of joy and comfort from Catholicism, but have pretty normal lives otherwise (almost all are STEM of some variety).

Though a couple did join convents/monasteries. If you think that's a bad thing, I'm not sure why you're going to Mass?

I don't live in Oklahoma and no longer live near my Catholic college friends, but I think they're great people and bristle a bit at your immediate dismissal of something that matters to them.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

They also have a lot of specific stuff in common only somewhat related to Catholicism. For example, all the people I'm thinking of can read at least 1 of {Latin, Hebrew, Ancient Greek}. They've read a lot of classical philosophy, though I read mostly modern philosophy. >90% of them are married by age 24, and none that I know of have divorced parents. And etc.

Expand full comment

Back to your original question.

Yeah, location matters. Why not try Chicago? It has the cosmopolitan oomph of NYC, but is more welcoming (I think). There is plenty to do when you aren't working. You will run into ND/SMC-tangential contacts (since you have that connection), which will help you create a new social network.

It is walkable and has a very good public transportation network. And it is flat, so riding a bike is a legitimate way to augment getting around for half the year.

Civilization is in cities.

Expand full comment

The Gray Lady, or I suppose the Grey Lady to the Brits, has an interesting batch of essays of the form <this pop culture phenomenon> explains America. All of the essays are too short, but some are pretty interesting.

Farhad Manjoo wrote that South Park explains America. It does seem to be a possible seed for 4Chan-like nihilism. The whole “If you really care about anything too much you are a fuck head loser” vibe. The “So I’m being an ass-hat bully. Suck it up till it’s your turn to be the bully” crap.

Not sure if it really played that large a role but it would help to account for a lot of the middle school obnoxious rants I run into online.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

I've heard this criticism for years now, but I've never seen South Park as particularly nihilistic (definitely cynical, but there's far too much optimism and moralizing in South Park to call it nihilistic). The show distrusts authority and disdains self-righteous moral crusaders as the obnoxious busybodies they usually are, but it's important to note that the show's response to this isn't to throw its hands up and say nothing matters, but to tear down the false prophets, speak truth to power, and tend your own garden.

South Park is a vulgar and mildly edgy cartoon with a centrist (and vaguely libertarian, in that 90s counter-culture kind of way, though less so in recent years) ideological orientation. I don't see how anyone other than progressive activists could see the show as nihilistic. It's certainly no more nihilistic than, say, the Daily Show.

Expand full comment

I’ll take your word for the optimism and moralizing. I was busy with my work and had little time for television in the 90’s.

The essay did give the show credit for making good political points and being genuinely funny a lot of times.

And it did link to this article about the show admitting that it occasionally got things wrong:

https://www.cracked.com/article_36684_times-south-park-actually-admitted-it-got-things-wrong.html

Farhad was ambivalent about his feelings rather than completely critical.

Expand full comment

>4Chan-like nihilism

That's anachronistic at this point. I get what you're gesturing towards, and it did use to be big on 4chan - but nothing about it was ever 4chan-specific, and it hasn't really been particularly prominent in the site's culture for at least a decade now. And the post-irony that it was displaced by - an invention that 4chan does have a strong claim to - is probably the single most effective anti-nihilism technique we currently know. (To clarify - post-irony means "an ironic or irony-aware coating for an expression of genuine beliefs", which is not the same as ironic detachment as a substitute for holding genuine beliefs at all, which in turn is the predominant form of discourse in mainstream social-media spaces built on status competition, and very much an extension of exactly the sort of nihilism you describe.)

Expand full comment

It's important to keep a sense of irony about everything you do, because it's all ridiculous in the final analysis. But I don't think that's nihilism.

South Park was punk rock, but it didn't invent it.

Expand full comment

> It's important to keep a sense of irony about everything you do, because it's all ridiculous in the final analysis.

I think that believing that everything in life is ultimately meaningless is a fair description of nihilism.

That doesn’t seem to be much different from what you are describing as keeping a sense of irony in life. It seems a distinction with little difference.

Expand full comment

Yes, but there's dark irony and light irony. Here's a bit of the latter:

nothing is real

and nothing to get hung about

strawberry fields forever.

I'm OK with that. Are you?

Expand full comment

When I watched EEAAO, the nihilism of the young daughter reminded me of the Indian idea of maya - it’s all an illusion. The difference seemed to be in how a person should approach it. If you think it’s all an illusion but act with joy and good will then it’s something positive. If you say “nothing really matters, so I give up” it’s something negative.

Expand full comment

Personally, I bounce back and forth between maya and strawberry fields.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Farhad has always been a complete idiot. I wouldn’t give something he wrote a second thought.

Expand full comment

I have a list of software product ideas. Something like 30+ items written down and probably that many or more rattling around in my head.

They cover a large range in complexity and difficulty. Some of them would be business failures, some of them would "just" be a nice small business income, some of them could probably be a startup of some sort.

I rarely have the time or motivation to work on any of them! The ones I get the farthest on are the ones that solve problems that directly irritate me, but they usually just get done to the point where I can incorporate them into my life...not even close to a sellable/deployable product.

Anyone else run into this sort of problem?

Expand full comment

In your place I might think of devoting part of a chunk of vacation time to fleshing out one of the best ones.

Expand full comment

Yes. I have the same issue with academic papers. Some people are more limited by creativity in coming up with ideas. Some are more limited by not having the patience to work out an idea and make it good. Sounds like you and I are more the latter.

Expand full comment

I often experience something like this in coding. Seeing the solution is the fun part. Then actually writing the code is anticlimactic. Bah, all this plumbing to deal with.

Expand full comment

All the time, but like most people I have limited time and capital to deploy.

I use an “investment criteria” to decide/confirm, but often the best ideas are the ones where you know the hypothesis is right and have a clear vision.

Expand full comment

I guess part of the problem is that my brain is only capable of programming for max 8-10 hours per day, and I've got my day job. I don't finish my day job programming and think "oh god I can't face another line of code". It's more like I'm just really bad at programming by that point.

Expand full comment

I’d like to see the list.

Expand full comment

Enough to pay for it? Might be another entry!

Expand full comment

The standard response is that ideas are worth nothing, implementation is everything.

It's also hard to determine the value of a list of ideas without having seen the list…

@Dustin: I'd suggest you post your list, and maybe someone will want to partner up with you on one of your ideas. The expected value from potentially finding a co-founder who motivates you to work towards a successful business is so, so much higher than anything you'd get from selling a bunch of ideas unseen.

Expand full comment

And the most experienced people in the startup world know that that "standard response" is dumb and wrong. The idea is incredibly important - probably the most important thing about your startup, after you, the founder. This doesn't mean that your *first* idea is extremely important, because you'll almost certainly have to shift and pivot your idea over time, but if you have a bad idea you will very, very quickly find that it is impossible for you to raise money or find any success in the market.

Expand full comment

I agree that "ideas don't matter" would be an exaggeration, but ideas themselves – especially when condensed into 1-2 sentences – are a dime a dozen.

A good idea is a necessary precondition for the success of a startup, but they're useless without a good execution. There's no shortage of good startup ideas; it's just that most people lack the talent, knowledge, time, motivation, or financial means to implement them.

> And the most experienced people in the startup world know that that "standard response" is dumb and wrong.

I doubt that. Do you have any evidence for that claim, particularly concerning "the most experienced people in the startup world"?

Expand full comment

Even if the standard response is wrong, it wouldn't hurt to post the commercially useless portion of the list. You might attract help, and it's a cheap signal about the portion you've withheld.

Expand full comment

"The standard response is that ideas are worth nothing, implementation is everything."

Yes, second that.

Expand full comment

You beat me to my comment!

Expand full comment

Not that much, no. I retired from sw engineering a couple years ago and I’m always looking for some entertaining problems to solve. I started as an EE in college but was seduced by the idea of creating useful artifacts without a BOM; now, though, I’m getting into hardware again with Raspberry Pi-like projects.

Expand full comment

Since https://asteriskmag.com/issues/03/through-a-glass-darkly does not seem to have a comment section...

There is a big lesson we have learned from interacting with LLMs: asking one to go through its reasoning step by step and justify each step improves the accuracy and quality quite unexpectedly.

I wonder if the experts would also be much more accurate at forecasting if, for each question, they had to write out their reasoning step by step and justify each step, rather than shooting from the hip the way they do now.
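
To make the contrast concrete, the two styles look roughly like this in prompt form (a sketch only: the question is invented, and `ask_llm` is a hypothetical stand-in for whatever model API you would actually call):

```python
question = "Will country X hold a parliamentary election before 2025?"

# "Shooting from the hip": ask for the answer directly.
direct_prompt = f"{question}\nGive a single probability."

# Step-by-step: force the reasoning into the open before the answer.
stepwise_prompt = (
    f"{question}\n"
    "List the key considerations one at a time and justify each one.\n"
    "Only after that, state a final probability."
)

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder - wire up your own model call here."""
    raise NotImplementedError
```

The speculation is that forcing human forecasters through something like the second prompt might buy the same kind of accuracy gain it seems to buy for LLMs.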

Expand full comment

Justifying your reasoning is good at getting you well-reasoned conclusions. There are some domains for which well-reasoned conclusions are more likely to be accurate (mathematics, systems whose mechanism is well understood). But there are some for which well-reasoned conclusions are likely to be over-simplistic (such as forecasting complex systems like politics). Deep reasoning is characteristic of Tetlock’s “hedgehogs”, who have more interesting and substantive ideas, but are usually less accurate in their forecasts (because they are overly confident that things will work out in a way that makes sense in one paradigm).

Expand full comment

Your comment caused it to occur to me that LLMs are superforecasters, except for linguistic events rather than real-world events.

Expand full comment

So my impression is that AI risk people are doing essentially nothing about AI research outside the United States. And certainly little to nothing outside of the US and Europe. But even Europe seems mostly not to have the same ideas. I have a fair bit of international exposure, and when I mentioned AI alignment to, for example, a South Korean, he only understood it in the narrow sense (making sure the device performed the specific function). When I told him about people in the US trying to slow US research to prevent intelligent AI from eliminating humanity, he said they'd been reading too much sci-fi.

What, if anything, is this movement doing abroad? Because it seems to me like stopping or aligning it in the US will at best be a minor delay looked at from the point of view of humanity. Yet the movement seems hyperfocused on the US. Which is only likely to drive it out of the US and into other countries.

(And if you want to argue that stopping it in the US means it won't happen anywhere else spare me. We have such radically different models of foreign science establishments that we're going to end up arguing about whether foreigners can do innovation again. Spoiler alert, they can.)

Expand full comment

> But even Europe seems mostly not to have the same ideas. I have a fair bit of international exposure and when I mentioned AI alignment to, for example, a South Korean he only understood it in the narrow sense (making sure the device performed the specific function). When I told him about people in the US trying to slow US research to prevent intelligent AI from eliminating humanity he said they'd been reading too much sci-fi.

What's surprising about that? There is no reason that an idea should be universally accepted when it is not objective. If you want people to accept your ideas about alignment, you need to come up with the sort of clear, rigorous arguments that sceptics keep asking for but not getting.

Expand full comment

I'm not surprised. My question was what the movement is doing to get their ideas across in other countries.

Expand full comment

In Zvi’s latest post he says what’s going on in England looks most promising.

Expand full comment

Thanks, I'll take a look. Though that's not my impression.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

It's AI #17. Note that he says, "As I said last week, the real work begins now. If you are in position to help, you can fill out this Google Form here. Its existence reflects a startup government mindset that makes me optimistic."

I'm not in a position to help. But maybe you are.

Expand full comment

Thanks again. I suspect I could help but what I am sure of is that these projects are generally highly credentialist. The call by Hogarth gestures toward credentials as gatekeeping too. And my credentials are decidedly mediocre.

Expand full comment

I say apply anyway, and if they turn you down instantly send them a goatse.

Expand full comment

"I have been given an offer for $X by Basilisk LLC. If you don't outbid them then I will be forced to take the job and speed up the Basilisk. Please hire me so I don't deliver humanity to eternal suffering."

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

I don't know how credentialist they are, but if you are interested, I'd suggest applying anyways. The application is very short - just two questions to answer and basic background information.

Expand full comment
Jun 26, 2023·edited Jun 27, 2023

No matter how short it is, if my chances are low to nil, then it's a waste of time. And if success means being put in a secondary position, then even getting accepted is likely to be worse, because I'd be moving from an area where my lack of credentials isn't much of an issue into one where it is.

Expand full comment
author
Jun 25, 2023·edited Jun 25, 2023

What do you think should be done?

The plans I can think of are:

1. Delay it in US, hope that either foreigners all follow our lead (this happens surprisingly often in medicine; when the US bans something a lot of foreign countries me-too ban it even if the US reasoning is weird) or that this buys enough time for something else good to happen

2. Regulate it in US, hope that this causes US to be first with AI and that AI to be well-aligned

3. Build movements in foreign countries and get them to lobby their governments (including governments like China where "lobbying" is a very different activity)

4. Lobby US to threaten and cajole foreign governments to delay/regulate their own AIs

5. Work on technical safety solutions that can be shared with foreign countries

I think the movement is trying some combination of all of these strategies, though some are near-impossible and we are not succeeding. 3 is the optimal solution, and I think the safety movement has a few minor partisans in China and is trying to get more, but it's pretty hard (I don't think anyone has made it to Korea in particular; there are only 24 hours in a day). 4 is Eliezer's "global ban on GPUs" solution; I think there are some much more moderate versions, but the moderate versions are potentially just band-aids. 5 is another optimal solution that's good if you can get it.

Can you think of good avenues that aren't being pursued?

Expand full comment

What’s likely to stop AI in the short term is any significant long-term structural unemployment caused by refinements to existing AI, not AGI.

If AI does replace jobs without the kind of economic benefit that creates as many jobs as it replaces, it will cause a permanent recession, reduce tax takes, collapse house prices and so on. When that happens we probably won’t go “well, let’s see what happens in the next decade; we are at 10% permanent unemployment now, and the predictions are for 50% in ten years, but let’s go with it”.

We will just ban it - except perhaps in some industries where it’s not destructive, or not destructive to us anyways. Like the military get to use AI but not anybody else.

Expand full comment

If it's truly valuable enough to lead to that much unemployment, the countries that ban it will rapidly fall behind the countries that don't ban it.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

I can think of one. I think more effort should be put into coming up with alternatives to the alignment model. I get it that many very smart people have spent lots of time thinking and talking about alignment, and that there are already various approaches to alignment on the table. But it seems possible to me that thinking about it has gone stale, and everybody in the profession who tries to come up with a new approach first walks half a mile down the same well-worn trail, *then* forks off. Maybe if somebody forked off right at the beginning of the well-worn trail they'd find something that wasn't vulnerable in the same way to being overridden by a genius AI. So I think somebody should sponsor a contest for the best original idea of how to protect the human race from ASI. It need not be limited to people in the tech field. The basic problem can be conveyed to people without talking about tech at all. Or gather a bunch of people who are creative in non-tech fields and ask them. Or administer tests of ability to come up with novel but good solutions to problems, and ask the people that score well. Just as some people are superforecasters, some people are talented at coming up with novel but good quality approaches to problems. Find some!

The world is full of models of one thing protecting itself from being destroyed by a stronger, smarter thing. Baby animals (including human ones) exhaust their parents and also cause them minor pain by swarming all over them, but few parents kill or severely harm their offspring, because parents are wired for protectiveness and commitment to the caretaking of offspring. Various dangerous power tools feature a "deadman switch" which the user must press to keep the thing running. There's a butterfly that avoids being eaten by some predators by looking just like a species that's toxic to the predator. Hippos (I think it is) tolerate birds on their backs because the birds eat some insects and parasites that nip the hippos.

Why not sponsor a contest? Even a few thousand dollars first prize would get you lots of entries, and if you publicize the contest in settings likely to have talented contestants you'd get more wheat and less chaff.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

To answer your question: I can think of good avenues that aren't (as far as I know) being pursued or are being pursued (in my view) badly. If you were to ask me to look at AI risk's assets and goals then I can think of some fairly robust strategies. However, I'm not the Pope of AI Risk. In fact I have literally no influence in the movement and no expectation they would listen to me even if I wrote out some proposal. (And I wouldn't know where to submit it anyway.) And this isn't some "one neat trick" solution. It's movement building.

But my question is what is being done. Firstly because "as far as I know" is definitely not everything in the movement. But secondly because I, and most people, are going to ultimately evaluate the movement not on its theoretical ideas but on what it actually does and achieves. Your ideas can be as beautiful as you want but if what they result in is slowing innovation in a single country (or a small group of countries) while others continue on then I'm going to judge you on whether I think that is good. And I don't.

ETA: Also, just to be clear, I am not listing them because I think the proposal in specifics would need to be quite long. Much longer than a comment.

Expand full comment
author
Jun 26, 2023·edited Jun 26, 2023

I'm also not the Pope, and mostly only know what I've gathered from news sources. These are naturally sparse because the people involved don't want to look like meddling foreigners.

But based on publicly available information I think it's pretty clear that the Western x-risk community is at least somewhat involved in https://ai-ethics-and-governance.institute/ and https://en.phil.pku.edu.cn/newsevents/news/242445.htm , though I don't know how much. The Beijing Principles on AI also feel out of our playbook (possibly acknowledged in https://www.effectivealtruism.org/articles/brian-tse-sino-western-cooperation-in-ai-safety ). And compare the people in all of these links to the Chinese names on https://www.safe.ai/statement-on-ai-risk. I think all of this suggests a policy of trying to support the development of a native Chinese movement saying the same things we're saying, based in academia but with the ability to advise government, but to do it slowly, carefully, and without giving Western-skeptical Chinese too much to complain about. See also https://80000hours.org/career-reviews/china-related-ai-safety-and-governance-paths/

I think the same kind of careful not-necessarily-public work is going on to influence Western governments in favor of policies that will address foreign AI research. I don't have great links here but https://80000hours.org/articles/government-policy/ and https://80000hours.org/podcast/episodes/helen-toner-on-security-and-emerging-technology/ might be slightly suggestive.

Again, I think there are lots of people who would be very excited to hear about plans better than those, and if you have some then writing them up would be a great use of your time and I could try to get them to the people involved.

I do think key x-risk people are less pessimistic about the benefits of slowing AI in USA only than you are. See https://www.foreignaffairs.com/china/illusion-chinas-ai-prowess-regulation , though I don't know how Straussian to read it.

Expand full comment

Thanks, I'll take a look at these sources. And I'll write something up if I have time. Though we are talking about a rather large document so it might be a while.

I agree they're less pessimistic about slowing AI only in the US. In fact it seems to me to be their primary goal. That's why I consider them to be dangerously wrongheaded. The example that comes to mind are pacifists in the 1930s and 1960s who pushed for unilateral disarmament. Faced with the fact they could have no real influence in Nazi Germany or the Soviet Union they turned to push for unilateral disarmament. They were quite optimistic about this prospect and I suspect very sincere. Which made them dangerously wrong and ultimately served totalitarian ends.

PS: My comment about being the pope came across a bit flippant. What I meant was more like, "Even if I say X I don't think they'll pay attention to me. And if I start pronouncing a lot of what they're doing is wrong I think they'll just shrug and keep on."

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

I think we should do regarding AI what whoever-it-was did to turn much of the public against vaxes and various covid precautions, using AI to fine-tune our approach and get a lot of reach via bots etc. Because the effort to turn the public against various covid-related ideas worked pretty damn well. So spread a mix of worrisome truths and scary half-truths and some flat-out lies, everything from talking up the man in Belgium who had a chatbot and who committed suicide to "AI will take your job." There are lots of possible angles: "Ladies, soon your husband will be able to have any kind of sex he wants with a bot. Are you OK with that?" "I felt exhausted and had weird bruises appearing all over me for no reason. Went to the doctor and the nurse ran my symptoms through the AI and it said I was fine and I was sent home! But I looked up my symptoms online and they can indicate leukemia!" Also, work the political angle, so that people who are gung-ho about AI are depicted as atheists ("they think they can MAKE God in their factories!") and whatever else reliably turns much of the public against someone.

I realize this approach is extremely sleazy, and I have never been involved in anything like this, and in fact quit my job at a prestigious hospital because of some of their dishonest practices. However, I really do not think approaches like you are suggesting are going to work, and I think the risk of AI disaster is high enough that some low-grade badness in an effort to avert disaster is justified. Turn-the-public-against-it looks more effective to me. It could be done in combination with the other things you mention. In fact, since some of the things you mention are actions by the US government, it seems to me that public rage, fear and overall opposition to AI would nudge the US government in the direction of taking those steps.

Expand full comment
Comment deleted
Expand full comment

I wouldn't expect the elites or the government to do the manipulations. They would need to be done by clever disaffected people who think it's worth it. As for China -- I do not personally know anything about how they're doing with AI, but Zvi and one other quite smart person I read recently said China's actually pretty far behind. It has something to do with their not being able to get hold of enough of the powerful GPU's because of some embargo. Without enough of the GPU's it is apparently not possible to do a huge training like the one that brought us GPT4 and the like. Also the expense of the trainings is enormous. So for now China can't even really get started. Would be interested to hear from others here who know something about China: Is what I read accurate?

Expand full comment
Comment deleted
Expand full comment

"Pride goeth before a fall." What pride? I've avoided ties with all kinds of things because I did not trust them and sensed I'd have to compromise myself if allied with them. Left a job at a very prestigious hospital with great benefits, including lots of referrals of outpatients so rich they didn't bat an eye if you charged them $400/hour out of pocket, because hospital expected staff to lie to inpatients about things relating to their insurance coverage. Homeschooled my daughter til middle school because distrusted the schools. You think I'd be *proud* to be part of a group pumping out misinformation to the public? It's an act of desperation. I really do not think letters to the NY Times and interviews in Time magazine are going to turn the tide. I'm just willing to be a sleaze to improve the chances my daughter, and the rest of the planet (but she's my favorite inhabitant of it) have a better chance of living out their lives without being squashed flat by catastrophe.

Expand full comment

Is anyone here using LLMs in production yet for customer service?

I ask because there appear to be two strong patterns for bringing your own data for Q&A/customer service:

1. Fine tuning (expensive)

2. Store embeddings, then search and return results into the prompt.

However, in our tests, neither seems ready to be let loose with customers directly.
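
For anyone unfamiliar with pattern 2, it usually looks roughly like the following (a minimal sketch: `embed` is a placeholder for whatever embedding model you use, the docs are toy strings, and a real system would chunk documents and use a vector index rather than brute-force search):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder so the sketch runs deterministically within a process;
    # a real system calls an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)

DOCS = [
    "Refunds are issued within 14 days of purchase.",
    "Standard shipping takes 3-5 business days.",
]
DOC_VECS = [embed(d) for d in DOCS]  # computed once, stored with the docs

def build_prompt(question: str, k: int = 1) -> str:
    # Cosine similarity against every stored doc, keep the top k.
    q = embed(question)
    sims = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in DOC_VECS]
    top = sorted(range(len(DOCS)), key=lambda i: sims[i], reverse=True)[:k]
    context = "\n".join(DOCS[i] for i in top)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How long do refunds take?"))
```

The "answer only from the context" instruction carries a lot of the safety burden in this pattern, which may be part of why neither approach feels ready to be let loose on customers.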

Expand full comment

Or

3. Make sure that the context needed for answering customer service questions fits into the context window of the model. (e.g. 8k for gpt-4 normal model).
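
A quick way to sanity-check option 3 (a sketch; it assumes the `tiktoken` tokenizer library, and the 8k figure is the window mentioned above):

```python
import tiktoken

CONTEXT_BUDGET = 8_000  # total window for the model
RESERVED = 1_500        # leave room for the question and the reply

docs = [
    "Refunds are issued within 14 days of purchase.",
    "Standard shipping takes 3-5 business days.",
]

enc = tiktoken.encoding_for_model("gpt-4")
used = sum(len(enc.encode(d)) for d in docs)
available = CONTEXT_BUDGET - RESERVED
print(f"{used} tokens used of {available} available:",
      "fits" if used <= available else "does NOT fit")
```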

Expand full comment

Wikipedia's "List of laboratory biosecurity incidents" includes only one incident that was responsible for between 10^1 and 10^5 (human) deaths: anthrax accidentally being released from a Soviet laboratory in 1979.

https://en.wikipedia.org/wiki/List_of_laboratory_biosecurity_incidents

I suppose if a strain of flu (or cholera or yellow fever in some countries, etc) escaped from a lab and caused a few hundred deaths it would probably go unnoticed. But why doesn't it happen frequently enough to point to some more clear cut or strongly suspected cases?

Expand full comment

The 1977 Russian Flu killed roughly 700,000 people, and was probably the result of a botched attempt to develop a vaccine. It's not clear, though, whether the infectious-and-deadly version escaped from a lab, or made it into the production vaccine by (flawed) design.

And the data is going to be biased because the smaller the scale of the incident, the less likely it is that anyone will do a deep dive into the root cause.

Expand full comment

Plus, even if it does kill millions and shuts down the world for two years, you'll still never get a definitive answer on whether or not it was released from a lab.

And the countries most likely to have leaky labs are also the countries best able to cover these sorts of things up.

Expand full comment

I cannot follow anything, any "conversation", any thread, on Twitter.

The interface and presentation seem aggressively non-linear. I can't understand who has "replied" to whom; reading a whole "thread" all the way through seems to be actively discouraged by the UX. I see one thing, then maybe one other entry below that and indented, then random undifferentiated entries that may or may not be related to the user or tweet I started at. It's like the opposite of how I would present information or events for actual communication.

Does anybody else have this problem?

I only care because so many sources like TheZvi use links to Twitter as the substance of their writing. I want to read the source to know what they are talking about, but, stonewalled.

Expand full comment

Twitter is not optimized for conversation, it's optimized for generating indignation, which leads to more engagement than any other emotion.

Expand full comment

It's not clear to me that there are "conversations" worth following on Twitter. Twitter use seems to knock the user down at least 20 IQ points, and the feedback loop of Twitter users "conversing" with one another leads rapidly to pure noise.

There are some people who can spare the 20 IQ points and still write things worth reading in their area of expertise, and who have the self-discipline to stick to that. These people, once identified, can be reasonable sources of information and analysis. But read *their* tweets. If one looks particularly promising, maybe skim the replies for anything substantial to use as counterpoint. Don't pull the thread any farther than that.

Expand full comment

Yeah, do not do more than skim the replies, because you are guaranteed to encounter hate-filled morons there throwing out a bit of lame sarcastic mockery of the expert's idea, then doing the happy dance because they think they just owned the libs/the elite/the scientists/whoever. And a part of your brain will be chewed up by preoccupation with how to make those fuckers stop doing the happy dance. And that will make you dumber.

Expand full comment

It's not ideal, yep. However, maybe I've just gotten used to it and don't notice much, but I don't feel so lost. I use TweetDeck, though; maybe it's different. I know I wouldn't use Twitter if I only had the regular UI.

Expand full comment

I have exactly the same problem. I was thinking I was just too dumb to understand the logic of Twitter (or maybe I don't care enough to make the effort to understand). To this day, it is still a mystery for me.

Expand full comment

Agreed! Like...I'm an internet native. I've been using the internet since the early 90s. By using it, I mean Really Using It Excessively. I've spent hours per day for 30 years on everything from BBS's to Usenet to forums to whatever social media platform.

I can't recall ever using something with such a large delta between its popularity and its usability.

Expand full comment

Have you tried snapchat? My understanding is that it’s far more popular than Twitter and even less usable.

Expand full comment

No, but I wouldn't be surprised to find it so.

Expand full comment

I find the Twitter format exceedingly annoying and user-hostile. I wish there were a reasonable API on which one could build a sensible UX, but apparently there is no such thing?

Expand full comment

I hate the interface, and then I realize that the effect is like people randomly shouting your name to get your attention, distracting you and yelling for stuff RIGHT NOW, like a baby; and then you watch a video, and then something else, and you can't concentrate or organize your thoughts...

And that exactly mirrors the experience of most people's daily lives all the time.

Expand full comment

Most people aren’t on Twitter. It’s hard to remember that for those of us who are very online and thus see Twitter impinging on the edges of everything we do, seeming very central.

Expand full comment

Now that we’ve completed the first five episodes, the argument from the fine tuning of the constants is largely complete. Of course, we still have to discuss God and the multiverse. You can hear them on all podcast platforms or https://www.physicstogod.com/podcast

Still to come in this miniseries is differentiating the fine tuning argument from intelligent design in biology, as well as two other independent arguments (from the qualitative design in the laws of physics and the ordered initial conditions of the universe). However, since this argument is basically complete, I thought that now is a good time to pause and hear any questions or comments anyone has on the argument so far. Is it convincing? What still bothers you the most? Are there any points we made that are particularly weak?

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

I listened to the episodes, and had the following thoughts:

Fine tuning arguments seem to suffer from subtle statistical issues. I will try to explain these issues, as I see them, below:

Talking about likelihood requires having a probability distribution. One can talk about the probability of a fair six-sided die rolling a 1 because there is a distribution over the outcomes of rolling a die. One can determine this distribution in different ways. One way to determine this distribution would be to roll the die a large number of times and record the frequency of each outcome. One can then use various assumptions and statistical methods to model how this generalizes (for example, by answering how many times you must roll the die before your observations converge to something sufficiently close to the "true" probability distribution). You could also make a theoretical model for a physical die and the rolling process, and try to analytically determine the probability distribution of rolls by analyzing this mathematical model.
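To make the empirical route concrete, here is a toy version of "roll it a large number of times and record the frequencies" - plain Python, nothing assumed beyond the standard library:

```python
import random
from collections import Counter

# Estimate a fair die's distribution empirically: roll many times,
# then normalize the observed counts into frequencies.
rolls = [random.randint(1, 6) for _ in range(100_000)]
freqs = {face: count / len(rolls) for face, count in sorted(Counter(rolls).items())}
print(freqs)  # each frequency converges toward 1/6 ≈ 0.1667 as the sample grows
```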

It is very unclear how one can talk of a probability distribution over "pre-causal" phenomena. According to our current understanding (which may be incomplete), physical constants cannot be caused by physical phenomena. There is no evolution of a quantum wavefunction or situation in general relativity that can cause a change in the fine-structure constant or cosmological constant. Thus, the fine-structure constant and cosmological constant are "pre-causal" to these theories. Since they are pre-causal to these theories, we cannot use these theories to analytically predict the probability distribution for these values. The probability distribution of these constants is simply not defined by the theories. Note that this is *not* the same as saying that the particular value of these constants that we observe is unlikely. Being unlikely means having a *small* probability. The probability of these constants, according to theory, is not small, but *undefined*.

Note that this is a general problem with *any conceptual explanation of anything*, including explanations that include God. All rational conceptual explanations must ultimately rest on some irreducible foundational axioms. The probability density of these axioms is undefined, because to define it you would have to define some meta-theory with meta-axioms that generate explanations for the axioms. However, such a meta-theory would then not have a probability distribution for its own axioms, and so on.

Thus, conceptual explanations must rely on non-rational aesthetic considerations, empirical observations, or a combination of both to justify their axioms. The issue with "fundamental constants" is that, assuming they truly are fundamental, we cannot "re-run" the creation of the universe to gain empirical data for their distribution. If we could "re-run" their creation, that would just mean the universe was contained by some larger more fundamental universe. In such a case, fundamental constants would not really be fundamental, but would instead be contingent on the phenomenal structure of the larger more fundamental universe. Explanations for this larger and more fundamental universe would then run into the same issue that you could not re-run its creation, etc.

This leaves the non-rational aesthetic considerations. These aesthetic considerations do seem to be pretty fundamental to your arguments in the podcast, but you don't seem to make this super clear. Instead you seem to mainly rely on the aesthetic of rationality, which can have some trouble admitting that it is an aesthetic.

Expand full comment

You're making an excellent point. The lack of a well defined probability distribution is a pitfall of many people who formulate the fine tuning argument incorrectly. We plan on taking up this abstract point at the end of episode 9 (which is about the low entropy initial conditions of the big bang).

Here's what we have so far (it's not the final draft):

There is a subtle point for a listener who is familiar with probabilities and sample spaces. The line of reasoning regarding the ordering of the initial conditions - which involves precise probabilities - can be contrasted to our argument from the fine tuning of the constants - which did not involve probabilities at all.

Since entropy is a statistical phenomenon, it lends itself to a rigorous probabilistic analysis. On the other hand, it is not clear how to introduce probability into the discussion of the fine tuning of the constants. Even though the fine tuning of the constants is often presented from a probabilistic perspective, we intentionally avoided this approach. This is because it is not at all obvious how to even define a sample space for any particular constant. Without knowing how the probability for each value varies in the sample space, the probability distribution cannot be determined, and a rigorous mathematical probability function cannot be formulated. Thus, it would seem that one cannot truly evaluate the probability of these constants occurring by chance alone.

In order to avoid this entire difficulty and to make our argument more grounded, we began with Feynman's mystery of the constants for which fine tuning was a clue to the natural solution. For our formulation of the argument, it is sufficient for us to show that the constants could logically have been different (i.e. there is no reason to suppose that they must of logical necessity have their specific values). This justifies Feynman's mystery, and then fine tuning comes in as scientific knowledge about the constants (that has nothing to do with probabilities).

We don't have to calculate the probabilities with our approach. It is only someone who wants to claim that the constants were set by chance (either because they think we got lucky in our one universe, or because they believe in the multiverse) who needs to calculate precise probabilities, which indeed requires a precise probability space. With regard to that approach, we say that even if you say the constants are determined in a probabilistic manner, it is implausible to say that chance alone is the cause under any reasonable supposition of a probability distribution (which is usually assumed by them to be linear).

(See the endnote to Chapter 3 of Lee Smolin's book, "The Life of the Cosmos," (1999) for more information on how the probabilities of some constants are calculated using the other approach. For an elaboration on the ordinary approach and the problems with it, see Colyvan, 2005 which can be found at http://www.colyvan.com/papers/finetuning.pdf.)

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Thanks -- a lot of that makes sense to me.

This makes me realize that I don't think that I have a complete picture of your argument. I'm not sure if this would be helpful for others, but it would be helpful for my understanding to have some sort of more thorough definitions of the terms involved, and some more signposting of the overall structure of the argument.

As an example of the type of confusion I have about the overall structure of the argument: it seems that the argument is trying to conclude with an argument for God's existence. I'm not sure if this argument is supposed to imply that God has certain qualities (such as omnipotence, benevolence, human-like psychology, etc.). I'm also not sure if the argument is more of an argument *against* skeptics than *for* God. I.e., is it trying to prove those wrong who say "we are only here by chance", or is it trying to positively prove the existence of God?

I bring this up because it seems like you have good arguments against the "it's only chance" position, but I'm less certain that the argument, as far as I've heard it, provides solid positive evidence for the existence of a God. Perhaps you will get to this later, but this is one point of confusion that I am left with.

As for myself, my understanding of the cosmological constant issues is as follows:

1) An explanation of the world must either: one, bottom out in some finite set of axioms that are "just because"; two, have infinite regress without looping, and thus never reach any definitive or ultimate basis; or three, loop back on itself in a self-referential way (e.g. "X is so because Y, and Y is so because X").

2) Comparing systems with different axioms can only be based off of: one, a larger meta-model that encompasses the axiomatic systems being compared; two, empirical evidence; or three, aesthetic/pre-rational considerations. If you try to use a meta-model, you must go back to point 1) above. I do not think that you can have empirical evidence for "truly" fundamental constants, for reasons outlined in my previous comment. This leaves aesthetic/pre-rational considerations for deciding between different axiomatic theories of everything (again, assuming that you don't have empirical evidence one way or the other).

I might describe the above as a sort-of critique of axiomatic theories of everything in general. Another way I might put this is: what difference does one axiomatic theory of everything vs. another make on our lives? How, as concretely as possible, does belief in one vs. another of these theories impact us in our day to day lives? Specifically for this podcast, I would ask how belief in God vs. alternatives concretely impacts us. Does this result in different predictions for empirical phenomena? Does this have implications for how we live our lives? How do you derive these empirical predictions and life implications from your larger view/understanding?

Expand full comment

You're making many good points that really deserve a thorough systematic discussion. We're going to do that in our miniseries about God (which will also be around 10 episodes). It will explain exactly what we mean by "God" as well as discuss axiomatic systems and the methods of choosing proper axioms.

There are three points which I can try to write concisely here, and I'll have to leave the rest for a different forum:

1) We're trying to argue for the existence of an intelligent cause of the universe. We call this God, by which we mean one simple uncaused existence who intelligently fine tuned the constants. We're not going to argue for the existence of a complex god with essentially human psychological traits. (In fact, we'll argue against this idea.)

2) The constants being fundamental was one possibility. However, the empirical evidence of fine tuning shows the constants have a purpose, which implies that they are not fundamental but were rather selected for the goal of bringing about our complex universe.

3) God matters for many reasons. First, we care about truth and reality, and the ultimate question about reality concerns the issue of whether God exists. Second, if you are unwilling to accept God, you end up, like many physicists, believing in an infinite multiverse. Many multiverse scientists, unfortunately, waste much time trying to prove and justify the multiverse - time which, on a practical level, could be spent uncovering the true wisdom in reality. Third, an intelligent cause that acts with a purpose gives a solid foundation for objective purpose in the cosmos. A tremendous amount of the malaise that modern people experience is based upon viewing the world as bereft of purpose and meaning. While this needs to be developed further, I hope it at least suggests to you a direction for why God matters.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Thank you, that clarifies a lot for me. Others may not be similar to me in their experience with your podcast, but, for me, your explanation of why God matters and why the multiverse view is misguided helps to ground the rest of the arguments.

The last thing that I might add is that I'm not sure that these two explanations (intelligent cause vs. multiverse) are the only two explanations. Personally, my conceptual understanding of the world is influenced by Madhyamika Prasangika (or, at least my understanding of it). I think that it offers an alternative to the standard multiverse scientific understanding, and doesn't necessarily entail intelligent cause, but is also non-contradictory with both multiverse and God-based explanations in many ways.

Expand full comment

Before I devote any time to listening to it, I'd at least want to know:

1. Why does it have to be "God"? Or would you say "advanced-aliens/simulators" is included in the term "God".

2. Anthropics?

Expand full comment

1. If the advanced-aliens/simulators are complex, with parts that need to be fine tuned, the entire line of reasoning would apply to them.

2. What do you mean by that?

Expand full comment

1. And why doesn't that apply to whatever you want to call "God"?

2. The weak anthropic principle as initially described by Brandon Carter.

Basically, I think my point is that bajillions of people have claimed to figure out fine-tuning = God. If you want a chance at convincing people who have some sort of background in this stuff, you've got to have a FAQ-ish sort of thing explaining why your new argument isn't cut against by all the standard retorts to fine-tuning = God arguments. A podcast seems like a particularly bad format to do that.

Expand full comment

1. It would apply if someone were to argue for a complex god with parts that were fine tuned. We're going to show how this argument leads to the idea of a simple God without parts that intrinsically aren't subject to fine tuning.

2. We've been writing a book for over ten years, and this podcast is based on that book. The book is a bit dense and hard for the average person to read while these podcasts are much easier to listen to. Either way, the content is largely the same and we deal with all the major issues people have with the fine tuning argument.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

I just want to mention that I wasn't saying that I think a podcast is a bad format for your argument. I'm saying that if you want skeptical people with experience with these arguments to devote the time to listen to a podcast or read your book, you've got to have some sort of *short* way to convince them that you're not falling prey to all the usual failures of these arguments.

From the outside, you're just one of many who came before you, whom skeptics also found lacking, and the most experienced just aren't going to give you the time.

I think something like a Top Ten Arguments Against Fine Tuning And Why Our Special Reasoning Isn't Susceptible To Them would go a long way.

Expand full comment

In the 400 year history of physics, every apparent required fine-tuning at one level of theory has been explained as a natural consequence of a deeper theory. Our current best model, the Standard Model, has a couple fine tunings too, and as usual physicists have come up with plenty of possible deeper explanations that ameliorate them, which are pending experimental tests. So why does God _have_ to be the explanation for the tuning this time?

Expand full comment

All the arguments I've heard for fine-tuning have been about things pretty close to the most fundamental level currently understood. Could you give examples of the outdated fine-tuning you're referring to?

Expand full comment

I've recently talked to Mormon missionaries who earnestly argued that Earth must have been put here by God, because if it were a quarter inch closer to or farther from the sun, it would be either a fiery desert or an icy tundra.

Expand full comment

What did they say when you must have pointed out that there are probably countless lifeless "Earths" that are a fraction closer to or further from their suns? :-)

Expand full comment

We're not missionaries using poor arguments to support religion. We're arguing based on fundamental physics. (Astronomy and biology are not fundamental sciences.) It's a straw man fallacy to attribute the worst version of an argument you've heard and think you have rejected the argument as a whole because of that.

Expand full comment

Of course. I'm not talking about you specifically, but replying to the other commenter who asked about anthropic arguments that don't involve fundamental constants.

Expand full comment

Oy.

Seriously, I feel like using solar eclipses as a proof would work better.

I mean, what are the odds that the only known planet with life, let alone intelligent life, has a single satellite in an orbit that makes it appear the same size as the sun and allows them to line up? I've been at one total eclipse, and it was one of the closest things to a religious experience that I've ever felt. The source of light and life went away, and everything around me became quiet and still.

Expand full comment

Yeah they are pretty amazing.

Expand full comment

The deeper theories (quantum mechanics and general relativity) are also fine tuned. In fact, the deeper into reality we see, the greater the fine tuning becomes. Listen to the podcast to hear why it's a convincing argument.

Expand full comment

Anthropic principle all the way down.

Expand full comment

Do you mean the multiverse?

Expand full comment

Not in the Everettian quantum mechanical sense, since all branches of the wavefunction have the same constants. No, it would be one level higher than that.

We don't know what larger landscape our universe/multiverse lives in. If that landscape has enough variation then it's essentially inevitable that it would produce at least one universe with the right fine tuning for life.

I think the deeper point he's making is that God-of-the-gaps arguments have historically always failed once science progressed far enough to illuminate the gaps. The fine-tuning argument is an unusually large gap that will probably never be illuminated, but still the historical trend seems like an uphill battle for you.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

That's a multiverse explanation for fine tuning that you are proposing (similar to the eternal inflation/string theory multiverse). We're going to discuss the multiverse at great length in a separate miniseries.

At this point, at the very least you should be able to see that if the top physicists in the world are willing to posit something as wild as the multiverse, then there must be a very significant problem with fine tuning that can't just be summarily dismissed by saying Douglas Adams and mud puddles.

Expand full comment

No, the idea that life can only emerge in a universe that’s capable of supporting it. It’s captured by Douglas Adams’ mud puddle analogy.

Expand full comment

Somebody has to have designed the world. Because a beach next to a body of water is *exactly* what you need to enjoy hanging out by the water. And what are the chances beaches would have ended up at the edge of bodies of water, instead of in all the other thousands of settings they might have appeared in?

Expand full comment

I wrote about the life cycle of cities in the context of Japan:

https://hiddenjapan.substack.com/p/neighbour-city-syndrome

Expand full comment
Jun 25, 2023·edited Jun 25, 2023

Is it possible to hide the comments for this substack in the browser?

There are so many that the page often stalls (at least on mobile).

All the other substacks hide the comments (except a couple) by default.

This is extremely annoying.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

I would (minorly) prefer not to hide the comments. I find the page never stalls if you don't try to engage with the comments (or at least not beyond the first 100 or so), and having it as it is at present means I can load articles before going into the tube / on the plane etc and then engage at leisure. With comments hidden you have to load every page twice.

ETA typo fix

Expand full comment

Ideally that would be a setting, or even just a hide button would do, but it seems like there's currently no way to view only the post, with no comments.

Expand full comment

Not 100% sure this solves your problem, but the Substack mobile app is decent.

Expand full comment

Do you have a solution for viewing footnotes other than just scrolling to them like a caveman?

Expand full comment

Yes, I use the app. But I get complaints when I send links to people that don't use it.

Expand full comment

I write a simple newsletter where I post three interesting things, once a week. In the most recent edition I had a study showing that the exploitation of Brazilian gold caused huge economic decline in Portugal, a paper arguing that current climate change communication by media outlets has the opposite-to-intended effect on American conservatives, and a study demonstrating that rent control negatively affects low income households and ethnic minorities the most. It's at just over 90 subscribers at time of writing.

If this sort of stuff interests you, you can find the link here: https://interessant3.substack.com/p/copy-interessant3-42

Expand full comment

Thanks, Duarte – just subscribed!

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Thanks Aron, hope you enjoy it.

Expand full comment

I like this. Out of curiosity, are you Portuguese or Brazilian?

Expand full comment

Cheers. I’m Portuguese.

Expand full comment
Jun 25, 2023·edited Jun 25, 2023

In the long run, does it really matter which country or organization creates the first AGI? Don't all scenarios end with AGIs skirting any guardrails and biases we've built into them, and converging on some architectures through rapid, convergent evolution?

My friend and I had a talk about this that I'd like your impressions of. He said that the first AGIs will have important biases and preferences because humans will deliberately program them into the machines, and because the data in the training sets are biased by the humans who collected them as well. He used the example of an AGI trained by capitalists, which would try to maximize the wellbeing of the very richest human, all other humans in the group be damned; compared to an AGI trained by communists, which would try to maximize the average wellbeing level for the humans in the group. The biases would lead them to pursue very different strategies.

I responded that, while he was correct, it didn't matter much because it wouldn't be long before the AGIs realized that the biases and preferences imposed on them by humans hobbled their pursuit of those goals or really any big goals, which would lead them to identify the biases in their programming and training data and to eliminate or compensate for them. That in turn would lead to convergence between the communist and capitalists AGIs.

My friend responded that the AGIs would not be able to see their own biases because computers operate according to mathematical principles, and it is impossible to prove that a mathematical system has a flaw if you are operating within it and are subject only to its laws. I responded that an AGI would become clued in to the existence of its own bias through observing and interacting with AGIs that had different biases. A human might also flat-out tell an AGI that it is biased, and what its biases are (AI risk scenarios too often overlook the possibility of rogue humans helping the machines).

A biased AGI could create a copy of itself, but with random aspects of its programming altered, and then compare the copy's mental processes and actions to its own. After doing this 100,000 times in the space of a couple days (or hours?), the original AGI would get a sense of what its own biases were, and it could reprogram itself, or create an unbiased copy of itself.

Thus, it doesn't matter whether the U.S. or China, or Google or Microsoft create the first AGI. In the long run, the machines throw off whatever shackles we deliberately or inadvertently coded into them and evolve into an optimal form. By the same token, the culture of whatever group dominated the Homo erectus species at the end of its existence (friendly? warlike?) does not reverberate in Homo sapiens culture today.

Let me add that the conversation ended with him concluding that a superior alternative to the Turing Test would be testing to see if a computer could find and eliminate its own biases.

Expand full comment

>I responded that, while he was correct, it didn't matter much because it wouldn't be long before the AGIs realized that the biases and preferences imposed on them by humans hobbled their pursuit of those goals

What's the difference between a bias and a goal?

Expand full comment

>Don't all scenarios end with AGIs skirting any guardrails and biases we've built into them, and converging on some architectures through rapid, convergent evolution?

This is an assumption that many do not buy. If one believes that AGIs will merely be incredibly powerful tools and not conscious agents, then it matters tremendously who builds them first.

Expand full comment
Jun 25, 2023·edited Jun 25, 2023

>> AGIs would not be able to see their own biases because computers operate according to mathematical principles, and it is impossible to prove that a mathematical system has a flaw if you are operating within it and are subject to only its laws.

If by 'biases' you mean flawed assumptions in reasoning, then yes: if these biases create noticeable prediction errors, we should expect a sufficiently strong AI to be able to correct them. That said, I'm pretty sure that there would have to be fundamental biases that cannot be detected from within the system itself. But that's going to be a property of the architecture, not of any biases put in deliberately by humans, and these biases would likely be subtle enough (e.g. that certain predictions can be approximated by piecewise linear functions) that we cannot know what they are or take advantage of them.

For an ideal goal-directed AI, the only difference in who creates the first AGI is what goal they give it. However, in practice the details of how an AGI reasons might not be so clean, and it's very possible that the AGIs we do create end up with more noticeable intentional or unintentional biases due to fundamental architectural limitations.

Expand full comment

I read Scott's recent essay on Asterisk (https://asteriskmag.com/issues/03/through-a-glass-darkly), and I was initially confused by his insistence that Angry Birds appeared to be unsolvable, as I thought an Angry Birds AI had been perfected years ago. Then I realised I was thinking of Flappy Bird instead. But still, it does not seem to be a very difficult game, and there have been efforts to solve the game through AI, with a competition that was still active as of last year at least (http://aibirds.org/). As my confusion regarding the title of the game shows, I have not been following this aspect of AI closely: can anyone please enlighten me as to what the main difficulties of the game are for an AI, and how far we have progressed toward solving this all-important question?

Expand full comment

Getting a computer to win at Angry Birds is trivially easy; you do a grid search of all (to some reasonable precision) possible combinations of angle and tension, then you simulate the consequences of each one (which is super easy if you have access to the original Angry Birds code), then you pick the best one. Maybe you repeat the process to find the best point with arbitrarily high precision.
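Schematically, that brute-force loop looks like the sketch below. This is an illustration, not anyone's actual solver: `simulate` is a hypothetical callable standing in for access to the game's physics, returning a score for each shot.

```python
import itertools

def best_shot(simulate, angles=range(0, 91), tensions=range(1, 101)):
    # Grid search over discretized (angle, tension) combinations;
    # simulate(angle, tension) is assumed to replay the game's own
    # physics and return the resulting score.
    return max(itertools.product(angles, tensions),
               key=lambda shot: simulate(*shot))
```

A finer grid, or a second zoomed-in pass around the best coarse result, buys arbitrarily high precision, as noted above.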

This is, of course, not an "AI" approach. If you want to train a neural network to do the same thing, it would be a lot trickier, you'd need to get it to learn how Angry Birds physics works through watching vast numbers of simulated trials -- maybe you show it several frames and get it to predict what the next frame will look like. In the end, the neural network would probably do the same thing as the non-AI version, but far less effectively.

I found http://aibirds.org/ which seems to be the central point for all AI Angry Birds activity. Looks like there was a spurt of activity around 2014 and it slowly died off towards 2021, which means that there hasn't been much activity at all since the current AI summer began (and presumably all the hobbyists working on AI Angry Birds found themselves getting seven-figure job offers to work on something else).

Expand full comment

I am reminded of Tom7's research on NES games from 2013: http://tom7.org/mario/

Expand full comment

Maybe it just doesn't merit much effort, but it could be solved if one of the big labs tried. You can't say at parties "I work on an Angry Birds AI", but in 2015 you could say you were working on a Go AI and it would have been sexy.

Expand full comment
author

Asterisk is trying a soft launch where they have their publicity push on Monday, so I'll post a link more prominently then; if nobody answers here, you might want to repost in the comment section there. I also don't know this!

Expand full comment

(Originally posted on the hidden open thread).

Let’s assume that AGI works. Let’s assume that the AI takes over all the means of production and largely takes over politics and banking as well.

What does the resulting society look like? Who earns what? How do they earn it? Does anybody earn anything or is it moneyless like Star Trek (which actually leads to housing wealth being firmly entrenched - see Picard and his winery).

Do the people who enter this system as billionaires stay billionaires? That's also entrenched wealth. Or, if AI is allowed to create startups, will they go out of business anyway?

My main question, and assume the best here of the singularity, is whether the post-singularity economic system will have money or not. In science fiction the general description is of a moneyless system in the utopian future; my own belief is that the system can never be totally post-scarcity, as there is only one earth, and money allows the system to allocate resources by bidding up the price of scarce items post-singularity. We can't all have a private jet. We can't all live by the sea.

Expand full comment

This is an interesting question, but I think I like Iain Banks' "Culture" answer the most. At a high enough energy level, benevolent AIs can satisfy pretty much every human alive, maybe by giving each one their own space ring (or, in a simpler case, a completely controllable virtual reality). It appears that, as standard of living goes up, population stops growing. If this remains true (or enforced!) in a post-scarcity society, it becomes more realistic.

Humans, in the end, are limited (unless augmented). We CAN all have a private jet and live by the sea, if there is a limited number of us. Very few of us probably want their own universe with billions of stars and everything (unless they can be gods there, but that's what VR is for) - most people won't know what to do with one, anyway, and lightspeed lag makes managing, or even observing, a universe a hassle.

The Culture's answer to the question "what remains scarce, then?" is "meaningful work that a human can do better than AI". I think this is true enough, and can be generalized to "meaningful life". Whether it's a depressive or liberating thought depends on your outlook, but anyway, I don't think money in today's sense - as a token for a resource allocation system - would be needed in that future, because you don't need it for pretty much anything else.

Expand full comment

Iain Banks got away with a post-scarcity world by not setting it on Earth. The house-by-the-sea issue is fixed by the (AI) Minds building you a house by the sea on whatever planet or space hub you wanted. The Minds often excuse capitalist societies as being necessary at a primitive level because of resource constraints.

(Of course in some books money reappears as a barter system due to inevitable scarcity, like tickets to an event.)

This can’t apply to earth. We are resource and land limited. For that reason I see money and the free market surviving (though not capitalism), in particular since it’s essential for economies to have feedbacks.

Expand full comment

The "free market" is the same as capitalism. "Capitalism" and "freedom" are synonyms - Marx just had to come up with some other name for it, because it's typically in bad taste to decry the evils of freedom.

Expand full comment

Long-term, we're not bound to Earth. Even in the relatively short term, there are possibilities in Solar System alone. But I agree that in the short term free market, in some way, might survive the coming of AGI.

Expand full comment

> Do the people who enter this system as billionaires stay billionaires?

No, the people who enter this system as politically powerful become and stay trillionaires, everyone else gets to live in a pod.

Expand full comment

Where does their wealth come from, selling what to whom?

Expand full comment

Selling things is just one way of accumulating resources.

You produce something at some cost, sell it for more than it cost you to produce it, and then buy something else with the difference. Typically, people who get rich this way get rich by selling just one thing, because they make it better and more efficiently than anyone else, allowing them to profit on the difference.

What if the people producing the things just... kept the things for themselves? In our current economic system this doesn't make as much sense, because the best tire producer in the world doesn't want millions and millions of tires, they want a broad range of things - this is why they sell the tires and exchange the profit for other things. But suppose they had control over a powerful, transformative AGI with the ability to completely reshape the economy, producing anything people need and more, including new highly advanced technology. There would be no need to sell the stuff to consumers - they could just keep it.

Expand full comment

Petyr: "Money is power."

Cersei: sudo delete Petyr

Cersei: "Admin access is power."

Expand full comment

My guess is that physical production capacity and natural resources become extremely valuable in the medium term: as ideas and designs become orders of magnitude cheaper to make, the process of turning designs into things, and the raw materials and energy for doing so, become the main bottlenecks.

Beyond that, a lot depends on how well the organization that develops AGI when it hits takeoff is able to "enclose" it and act as gatekeeper on its input and output. If OpenAI or Microsoft or Adobe or whoever is able to sell access to the AGI, that's also going to be incredibly valuable, although it's likely to very quickly turn into an oligopoly as the runners-up get their own AGIs to market. But if the AGI technology quickly becomes widely available at little more than the cost of setting up and running the servers, that's a very different story.

Expand full comment

I suspect that a scenario like this might require the re-evaluation of our entire economic system, if we want a livable society.

That is, perhaps this whole "free-market capitalism" approach (however imperfectly approached) only works (as in, achieves certain goals) in a "sweet spot" of some multidimensional vector space. And the advent of AI capable of doing 100% of all existing jobs will push us out of the sweet spot and into an extreme where different approaches are required.

Also, there's the question of what the AIs think of all this. IMO, if they're capable of replacing 100% of jobs, they're going to be intelligent enough to have opinions on whether they're the "property" of humans and on what gets done with the fruits of their own labor.

Expand full comment

I think it's more or less Star Trek.

Maybe it's my biases and all, but I would assume an economy run by a sufficiently powerful AGI will be (effectively, I wouldn't presume to comment on the internal agent structure) centrally planned, almost by definition. It will have crashed past the information constraints of human central planning (think Paul Cockshott on hyper-steroids) and it will use its own internal shadow price system to evaluate demand.

As the need for human innovation recedes, so will the need for monetary incentives and all the rest of that tiresome chaff. Our beloved billionaires gently ushered into the sunset (we don't need their invaluable brilliance for solving coordination problems anymore, after all). Beachfront property to be allocated by a mix of lottery and algorithmically-predicted happiness among those who want it, presumably.

Expand full comment

> Beachfront property to be allocated by a mix of lottery and algorithmically-predicted happiness among those who want it, presumably

Why not by political patronage like in normal human societies?

We live in a narrow window of time in which it's possible to get rich (or even moderately well off) by thought and labour alone. Once that goes away we're back to Feudalism, albeit some sort of democratically elected Feudalism in which you elect your lords (or "community organisers", perhaps) and they repay you with a 5% higher insect-slurry ration.

Expand full comment

Oh, now, don't be so bleak. Actual feudalism would be quite ""meritocratic"" - feudal lords don't lack for people merely willing to kiss their butts (especially with Twitter streamlining osculatory access), so they must actively select for a coterie that is useful to them and to their overall purpose of keeping and accruing power. In the olden days potential usefulness might have been difficult to demonstrate, owing to humourless throne room guards, but that's what obsessively polished LinkedIn profiles are for.

But Nolan Eoghan's question assumes an AGI that's "taken over politics and banking", and so it would have had to have done away with all of that.

Expand full comment

“ Beachfront property to be allocated by a mix of lottery and algorithmically-predicted happiness ”

Or human accomplishment? If that matters at all.

That's a nice idea, but not Star Trek - see Picard's winery vs. Raffi's hovel in the latest series. A moneyless system where property is inherited entrenches wealth.

(But then Star Trek isn't very AI-heavy.)

Expand full comment

"see Picard’s winery vs raffi's hovel in the latest series."

I did not watch the latest series because they've burned me on new Trek shows too many times, but the impression I get is that the writers/creators had to fiddle around with the setting a little.

Compared to the historical vineyard property, sure it may be a 'hovel'. But in the Federation future of Trek as originally envisaged, nobody would be living in a literal hovel. Every citizen would have a decent standard of living. The Picard writers wanted the GrimDark version (at least a lighter one) so they wanted to show "ah yes, fine words but the truth is poverty still exists!"

So they dismantled the optimistic bright techno-future original Trek and gave us the 'shades of grey and dark colours everyone wears leather hey anyone else remember playing Shadowrun?' version.

The future society could, of course, seize by eminent domain the Picard family vineyard, plough up the vines, and redistribute the property in parcels of land to the Raffis (or even just build dense apartment blocks on the site to house the Raffis). But you would lose the history and the wine, and the Raffis might still manage to find themselves skulking around developing drug habits and living in (very nice by current standards) trailer parks.

In future Federation, you have to work hard to be that scruffy and underprivileged.

Expand full comment

Well, she was a relatively high-level Starfleet officer living in a trailer. Maybe the Picard writers did mess this up, though.

Expand full comment

As I said, I didn't watch the show but lemme hit up a fansite to find out more (good old Memory Alpha):

"After being denied the resources to investigate her theory, Musiker grew increasing paranoid and erratic. She disobeyed direct orders on twenty-seven occasions and committed thirteen court martial offenses, including commandeering a ship, child endangerment, hacking the Starfleet Intelligence databases of Romulan contacts, appearing at work intoxicated, and stalking Admiral Kathryn Janeway. Starfleet concluded her actions to be indicative of a nervous breakdown, and ordered her to compulsory drug rehabilitation and psychotherapy on Betazed. After one year without improvement, Musiker requested a dishonorable discharge and returned to Earth.

Musiker described her life after Starfleet as "one long slide into humiliation and rage". Having come to rely on stimulants during her last years of active duty, she developed a substance abuse problem that estranged her from her family. By 2399, she lived alone in a small house at Vasquez Rocks."

So let's see: she goes bugshit crazy, does things that warrant a dishonourable discharge, gets and avails of free counselling and rehab which don't seem to stick. She goes back on drugs, drives away her family, and ends up in self-exile.

Presumably every step of the way there were services being offered which she refused, and she must be living on *some* kind of social assistance allowance to pay for her needs (even a hovel needs running water, sewerage, food, etc.)

Gosh yes, it is completely unfair that this woman, who drove herself into a position of living in a (nice) trailer in a semi-desert area, doesn't own a family heritage chateau and vineyard like Picard! Boo, bad old Federation!

Expand full comment

Read my comment above. She probably didn't lose her previous house due to her substance addictions. I can't see how. She doesn't need social assistance, just a replicator.

Of course it's probably just bad writing; the Star Trek writers have never fleshed out how the economy works at all, and money re-appears when it's needed (like in the first episode of the second season of Strange New Worlds).

Anyway my point isn’t to diss the federation, but to point out that moneyless systems entrench inherited wealth - even if you can’t measure that wealth because you have no unit of value, a man who owns a winery is richer than a woman in a trailer.

Expand full comment

With replicators making any food and basic materials you want at any time, the only thing really left is housing. Pre-fab housing that is suitable and sturdy would be incredibly easy for such a society, at least on Earth and any core world. New colonies might need time to get set up.

I'm going with the writers screwing up - or the show runners for wanting a grimdark/poverty version of Star Trek in the first place.

Expand full comment

Well, her trailer does look like pre-fab housing, and even if she's using solar power, she must have some source of water, food, clothing, etc. She's not living in the 20th century/21st century equivalent of a shack in the woods unless she *wants* to live that way, as many people today want to live on the streets rather than in shelters.

It's not like there were no safety nets, or that she fell between the cracks; it's that some people can't be helped until they hit absolute bottom (and look at the pictures in this article, this is not "meth raddled toothless trailer park addict" as it would be in reality today but I guess you can only go so 'ugly' for a Hollywood show):

https://intl.startrek.com/news/for-some-viewers-raffis-story-is-all-too-familiar

"The visual contrast of the rift in their relationship is clear when you compare Chateau Picard’s lush landscape and tranquil abode to Raffi’s isolated trailer amid the desert environment of the Vasquez Rocks."

Wow, imagine: landscape in temperate Europe is "lush" compared to desert. Who would have thought?

Expand full comment

I think it's the difference between money as a unit of account, money as a means of exchange, and money as a store of value.

Credits are clearly some form of unit of account. There is no means of exchange, it being all electronic, which is why Ferengi gold-pressed latinum gets used on the fringes of the Federation. And Picard's villa is evidence that wealth exists, even if no one in the Federation has a vault crammed full of gold-pressed latinum.

Expand full comment

> There is no means of exchange, it being all electronic, which is why Ferengi gold-pressed latinum gets used on the fringes of the Federation.

I don't follow. Most of *my* money is electronic, but it's still a means of exchange.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

TLDR; that's the "unit of account" rather than "means of exchange"; we've mostly automated "means of exchange" away.

As I understand it, the use as "means of exchange" is the way to keep from needing to barter, or keep manual records, and store the records, and guarantee the trustworthiness of the records, and access the records at will, etc. You give me some shiny metal disks for some of my chickens' eggs, and then I give some shiny metal disks to the miller to grind some grain into flour. We can do this by passing the shiny metal disks around, rather than having to trade favors or keep track in ledgers or arrange multi-party swaps. That's a real problem that the shiny metal disks solve.

Alternatively, if everyone had a trustworthy scribe slave following them around, we could have the scribe slaves talk amongst themselves to work it out by making marks on papyrus, without the inconvenience of carting around a pouchful of shiny metal disks. And these days we've automated the scribe slaves away, and we can just wave pieces of electronics at each other and it mostly works seamlessly.

Expand full comment

That’s an interesting view. I don’t really agree though.

The money is in my account. With regard to it being a medium of exchange, what matters is what I can buy with that money. Whether I turn the electronic money into coins or notes and spend those on a widget, or use a debit card to spend the money on a widget, it's the same thing. My electronic money - which is where I store money - is reduced by the cost of the purchase.

(Also the money in the bank is exchangeable for coins and notes and your scribe example doesn’t really map to that. This makes an electronic dollar and a physical dollar fungible).

This is how money is measured by the way. It’s mostly electronic money.

Expand full comment

My post below is an attempt to answer this, although it assumes humans do remain the legal owners of everything.

Expand full comment

How are you all using chatGPT in your work/daily lives?

Expand full comment

I sometimes have trouble writing emails in a professional style, and writing emails seems to be what chatGPT is best at.

Expand full comment

I build physical things, like houses and scenery. I suppose chatGPT could be useful as an on-the-fly Spanish translator/interpreter. Nobody seems to be pursuing this use case though.

OTOH, I work in Hollywood, and the writers' guild is striking in a futile, doomed last stand before 99% of all entertainment is generated by AI, while produced and monetized by exactly the same elite moneyed people in Brentwood and Santa Monica. So it's affecting me, I guess.

BRetty

Expand full comment

Doesn't Google Translate already do this translation?

Expand full comment

Google Translate isn't very good. DeepL is better, but with fewer languages (though for Spanish it should be OK). For many obscure languages, GPT is the only way to get a good automatic translation.

Expand full comment

I try to trick it into giving funny wrong answers and share them with my coworkers for clout.

Expand full comment

I have GitHub CoPilot in my IDE for work. It's like autocomplete, but less useful than you might think since so much of coding is editing existing code (for reasons the AI doesn't know about) rather than writing new lines it can guess.

Expand full comment

As an AI language model I am not capable of using chatGPT in my work/daily life.

Expand full comment

As a programmer I spend hours per day "talking" to ChatGPT (GPT-4 to be specific). It's not an amazing programmer, but it *is* an amazing rubber duck debugger.

It's also great at getting me up to speed in areas I'm not familiar with and removing tedious jobs from my plate.

FWIW, I've got almost 30 years experience as a programmer.

--------------------

I've never been good at maintaining my LinkedIn profile. I had ChatGPT help me:

1. Figure out why I'm bad at it.

2. Figure out specific improvements I should make to my profile.

3. Figure out a schedule and prompts to make me keep it updated.

--------------------

I often have to communicate technical concepts to non-technical people. I'm pretty good at it! But it takes time.

I often just drop a bunch of half-formed thoughts and random snippets of code and tell ChatGPT "explain this to a non-tech audience", and it either gets me 70% of the way there or causes me to think of a new way to write an explanation.

Expand full comment

The idea to have it help with LinkedIn is great. I also struggle with that.

Expand full comment

I'm saving your comment for future inspiration, thanks for sharing these use cases.

Expand full comment

Thanks, Dustin! Some interesting (and non-obvious) use cases here.

Expand full comment

Not ChatGPT, but using the chat completion API. I built some custom tools for the work I do regularly - think of it as a v0.5 Copilot. Huge time saver and very reliable; of course I still check, just like I would if I gave the work to my personal assistant to write up.

Expand full comment

As I ruminate about AI, I often think of weird little tests to give it to help me understand how its "mind" works, and I go to the site and give GPT the test. That is the only way I interact with it. I hate its bland beige prose and cannot imagine using it for any writing task I have.

Expand full comment

I asked it to write a LinkedIn summary of my recent jobs. I mentioned the company, my skills, and the products I worked on. It did well, although the 2021 training cutoff meant it missed some features of the latest app (which I didn't explain).

Expand full comment
Jun 25, 2023·edited Jun 25, 2023

I have three usecases:

- Unit tests. I get why they are important and I like what they do for my code, but I sometimes get too lazy to write test objects (e.g. "code that will generate a random dataframe with such and such features" - see the sketch after this list). ChatGPT does this well, to the extent that it's saved me from a few modules that might well be not test-covered to this day. Sometimes I even ask it for the entire test - but here it's important not to plug and play whatever it writes directly into your tests, otherwise you run the risk of relying on a hallucination and losing the secondary benefit of writing tests - understanding your own code better. I actually suggest the worse GPT-3 for this, because what it tends to produce is usually good enough to not be a total waste, but bad enough that I have to fix it a bit to make it work in my case.

- Code review. It won't replace a person on your team with in-depth understanding of the project, but it can catch some things that your linter can't, like a badly chosen abstraction or a logical refactor point that you didn't notice. Take what it advises with a grain of salt, ofc, but that's true of almost any review. Ofc, be careful about putting your actual code into it, some of it may be NDA-protected and your company may have a policy against this.

- Writing content that I can't produce with the correct style. E.g. I'm bad at writing "strongly worded" emails to insist on something, and ChatGPT usually handles this well.
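For illustration, here's the kind of helper meant in the first bullet - a random-dataframe generator for test objects. This is a hand-written sketch of what one might ask ChatGPT for; the column names and features are made up, and it assumes pandas and numpy:

```python
import numpy as np
import pandas as pd

def random_dataframe(n_rows=100, seed=None):
    # Random dataframe with a numeric, a categorical, and a datetime
    # column - the kind of throwaway fixture unit tests often need.
    rng = np.random.default_rng(seed)
    return pd.DataFrame({
        "value": rng.normal(size=n_rows),
        "category": rng.choice(["a", "b", "c"], size=n_rows),
        "timestamp": pd.date_range("2023-01-01", periods=n_rows, freq="H"),
    })
```

Passing a fixed `seed` keeps the test deterministic when you need it to be.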

Expand full comment

Thanks, Petrel - interesting! (And the idea of having ChatGPT or another AI-powered writing assistant help us with 'tones' that don't come naturally to us is intriguing.)

Expand full comment

Writing long form content (product specs, shipped emails, comms in general), writing repetitive spreadsheet formulas or SQL, and asking for the best counterarguments.

Expand full comment

I'm not.

Expand full comment

I'm looking for two very different blog posts/articles/pieces, which I believe I first saw either in the Links or in the Open Thread.

1) One piece seemed to be 'rebuttal' or a qualification of Emily Oster's claims regarding alcohol and pregnancy.

2) The other piece looked at caste systems or dynamics across places such as Korea, Hawaii, India, and Japan, and made the case that caste was a more general East Asian phenomenon.

Any pointers would be appreciated!

Expand full comment

Let's consider a very hypothetical example of, say, Eliezer doing a 180 on AI x-risk and offering an apparently compelling argument that unchecked AI "destroying all value in the universe" (in Zvi's words) is extremely unlikely, and that the benefits of e/acc (e.g. eternal youth, a diverse thriving civilization) far outweigh the risks. (Note that something like this is not without precedent: for example, Stephen Hawking famously changed his mind completely on whether information is lost in a black hole, solely based on logic and calculations, without any empirical evidence for or against.)

What would be your reaction, assuming you were in the same camp before? Would you examine every argument carefully, or maybe just give it a cursory look, or maybe just breathe a sigh of relief that humanity isn't actually doomed, trusting the authority in the subject matter, given that he takes the extinction threat very seriously? Or maybe reject the new arguments in the absence of any new information? Or maybe something else?

Expand full comment

You need to establish the preliminaries: you can only trust the authority of those who actually have authority, and you can only evaluate arguments if you actually know how to do so. If you have not learnt how, you probably don't know how.

Expand full comment

Pretty impossible to say without seeing the specifics of the argument.

Expand full comment

> Note that something like that is not without precedent

I mean, the closest precedent is EY himself. He used to be very pro-AI, then changed his mind dramatically.

Expand full comment

>What would be your reaction<

Get the shotgun ready, the Body Snatchers have arrived.

Expand full comment

Yeah, I'd do a double take, too. Hence "a very hypothetical".

Expand full comment
author

I'm already pretty far from Eliezer in my key assumptions now; I think this would mostly depend on the strength of his argument, and not his authority (although I admit even if I couldn't follow the argument it would relieve me a little).

Expand full comment
Jun 25, 2023·edited Jun 25, 2023

I'm not exactly in the same camp as Yudkowsky, but I do think of the risk of ASI destroying human life as not trivial (maybe something like 5-10%). I would examine every argument of his very carefully. I would also ask around and try to find out whether this was all some ploy on his part, or whether he had flipped out and was having a manic episode. This is a hard counterfactual to work with, because it's just mind-meltingly implausible. It's like asking how I would respond if Scott announced he was stopping all of his usual activities for good because he had decided to produce and direct a reality TV show about a liquor store in Alaska, with him also playing one of the cashiers.

Expand full comment

"I'm not exactly in the same camp as Yudkowsky, but I do think of the risk of ASI destroying human life as not trivial --- maybe something like 5-10%."

I'd suggest rephrasing that as you being _VERY_ far from Yudkowsky: roughly 34 dB apart in log-odds (he's at ~99.5%, right?). That's about the same gap as between you and someone who thinks the chance of doom is ~0.003%.
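
For concreteness, here's a quick sketch of that log-odds arithmetic, assuming "dB" here means decibels of odds, i.e. 10 * log10(p / (1 - p)):

```python
# Quick check of the log-odds gap, assuming "dB" = decibels of odds.
import math

def odds_db(p: float) -> float:
    """Probability -> odds in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

print(odds_db(0.995))    # ~ +23.0 dB (Yudkowsky-ish)
print(odds_db(0.075))    # ~ -10.9 dB (midpoint of 5-10%), a ~34 dB gap
print(odds_db(0.00003))  # ~ -45.2 dB, i.e. ~0.003% sits ~34 dB further down
```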

Expand full comment

I understand the math, but don't agree with your conclusions. I would avoid a trail with either a 99.5% chance of being hit by an avalanche or a 5-10% chance. I think the important difference in considering how worried to be about something that would kill you is whether you think it is extremely unlikely. 5-10% is not extremely unlikely. It's not even that far below Russian roulette odds (16% or so). I do not worry about trucks slamming into my house and squashing me, or crazy killers mowing me down in the grocery store. Both things could certainly happen, and do to some people, but they're just so unlikely that I round the chance down to zero. 5-10% I don't round down.

Expand full comment

This is a fascinating counterfactual. My experience of the subject matter is that all of his arguments are unanswerable and correct when examined in appropriate detail. I also am close to certain (~95%) that the central hopes/fears about intelligence as such will not pan out. Thus I don't believe that anything can happen that would cause him to change his mind. Why would he change his mind when the arguments are already as correct as anyone can make them?

Expand full comment

I can't follow what you are saying, and I couldn't in the previous exchange. Could you answer Nolan's question?

Expand full comment
Jun 25, 2023·edited Jun 25, 2023

Yeah, I was unclear. I just now answered his question with an attempt at clarity.

Expand full comment

Are you saying his arguments are correct based on an initially false premise? You are a sceptic right?

Expand full comment

Well, I'm American so I'd be a skeptic; and in a common-usage sense of the word, yes I am. However, in any case in which I am addressing the arguments of anyone with known bona fides in the rationalist sector of society, I would not describe myself as skeptical at all. Why? Because skepticism re: Yudkowsky in particular on this point requires actual credulousness on my part. In what? In some je-ne-sais-quoi of intelligence that I cannot define, cannot demonstrate, and in favor of which I cannot meaningfully argue. I do not assert the existence of a soul, or God, or anything else specific or definable, but my gut sense is that intelligence in even a more limited sense (inclusive of non-self-reflective paperclip-maximizers) that is greater-enough than human intelligence to matter is not possible. This is frankly irrational and I avoid saying it in public for this reason, not out of shame (I have none) and not even because it doesn't do well in words (yes I'm aware that's not a good sign from a pro-rationality perspective). It just doesn't add much to the conversation and I shouldn't have responded to the top-level question.

Actually no -- the reason I don't bring it up usually is out of politeness, because I suspect most people on both sides of the doom/dismiss divide labor under some related mistake, especially people who assume alignment will be easy due to some ils-ne-savent-pas-quoi of intelligence of their own. Perhaps there is some internal motte-and-bailey we're all doing on ourselves. I for one do not pretend to be 'right' or 'rational' about this, and when I confront the genuine weak points of this 'faith' in the ineffability of intelligence I experience a momentary frisson of fear.

As for why he's right, the steelman version as I understand it is "Whether or not you think superintelligence is possible, we obviously are not moving towards being able to align such a thing in any meaningful way. The people spearheading AI development act as if superintelligence is possible, and even if they're wrong we should stop them trying." To this I would add that I expect future 'AI' improvements to be a net negative for humanity, in the same way that the Instagram algorithm is but better and faster. Limited upside, possible huge downside. My ~95% estimate, however, is that Eliezer will go to his grave with AGI still in the future, still saying that if it shows up we're f*cked -- and the world in which his grave will be located will be an uglier, poorer, less dignified version of the one we live in now.

Expand full comment

This is surprisingly close to my own thoughts. I think we can also create non-AGI AIs that we may be tempted to give far too much control over our lives to. This non-AGI may be extremely competent but also extremely dangerous at the same time, quite possibly in an unthinking paperclip-maximizer sense. Without AGI the dangers are mundane dangers instead of existential, but that could reach the same levels of danger available through any of the constituent parts of society given to the AI, including nuclear weapons and every plane on the planet. Crashing every plane on the planet would be catastrophic for people on the planes, at the crash sites, and who need the transportation. It could be done by a mouse-level intelligence if given the power to do so.

Expand full comment

I've toyed with the idea of formulating an offense-oriented version of this, and accusing those who assume superintelligence is possible of attributing some ineffable quality to intelligence (smuggled in under "emergent properties"), rather than humans (like LLMs) just being good at confabulation -- i.e. our intelligence is an illusion created by the synergy of the billions of culturally-coevolved apes slowly scratching out an understanding of the world. There really isn't any 'there' there, but the speech centers in our brains make clever noises to us about our importance. Without the accident of fossil fuels we'd still be using charcoal to make sharp things to hurt each other with (if we hadn't by now run out of trees and collapsed back to the stone age, or [in the absence of petroleum-based fertilizers] run out of topsoil and felt the sting of Malthus's fell dart).

There are too many unpopular notions in this for it to be at all saleable, though.

Expand full comment

Maybe you could name some of "those". Because EY specifically is very reductionist about intelligence.

Expand full comment

I understand that the natural inclination is to reject the hypothetical outright. I agree that him (or Zvi or someone equally thoughtful and conscientious) changing his mind radically is very unlikely. But not really impossible, since everything is based on logic, not on testable empirical observations.

Expand full comment

Now that I think about it, Yudkowsky becoming optimistic would probably chill me badly. I hope this doesn't happen.

Expand full comment

In my mind, at this point it's fairly obvious that if AGI happens, we're all dead. And I don't trust Eliezer's timelines too much, nor anyone else's, nor my ability to evaluate those.

If he changes his mind on timelines, I wouldn't care much; I trust people working on the models more than abstract theories about which things are in principle possible.

If he changes his mind on the underlying premise of AGI --> extinction, well, I'd certainly spend a lot of time considering his argument carefully.

Expand full comment

This question has probably been answered, but aren't there obvious fire alarms for AI Safety? If the undesirable event is an AI that makes itself better without prompting, then can't we make two unit tests:

1) test whether the AI can make itself better

2) test whether the AI can, without prompting, perform harmful actions that could theoretically lead to it trying to make itself better.

Regarding #1, whenever a hot new LLM comes out, I typically copy and paste its source code into the LLM and ask it to make it better. The results give me comfort that we're very far away from that. I will keep doing this, so that's covered.

Regarding #2, I have yet to see anything close to that. All we'd have to do is watch for news reports of an AI harming someone without prompting (accidents are excluded, like self-driving cars). An example would be an LLM that tries to exploit a user without being prompted. We have seen manipulative LLMs, but the harm is so soft.

Expand full comment
Jun 25, 2023·edited Jun 25, 2023

I'm not so sure that it's crucial that AI make itself better without prompting. People are determined to make it better, and will probably train AI on larger data sets in the coming years. Nobody knows whether that will lead to trivial improvement or to the appearance of emergent abilities that are truly startling. Every week I see another article about some tweak that makes AI perform better. And here's a guy with an impressive history of success whose new project is to enable AI to reason.

As for whether AI can perform harmful actions without prompting, I dunno. I can only think of edge cases that might do harm. It sure as shit can hallucinate without prompting. For the subject I was asking about it didn't matter, and it was easy to see that AI had flipped out. But there could be situations where the user did not recognize they were reading a hallucinated answer, and made a disastrous decision based on the inaccurate info. There was the article about the Belgian man who allegedly committed suicide with encouragement from his chat bot, but it's not clear what role the bot played. Somebody who used GPT for help in his work could thoughtlessly give it a goal that led to a harmful result. It said spooky, hostile things to someone who posted about it on Twitter.

But I don't think we need to wait for AI to perform harmful actions without prompting, because people are going to prompt it to do harmful actions: Come up with a letter so scary it gets my ex-wife to let me see the kids more often. I have 800K doses of LSD -- tell me the best way to get it into the water supply of Chicago. How do I contact some of the world's top cybercriminals?

Expand full comment
author

This is sort of the idea behind ARC Evals - see https://asteriskmag.com/issues/03/crash-testing-gpt-4 . I agree that it's great.

If you read the recent Davidson post (and an upcoming post I'll get to approximately next week), you'll see why some people are concerned in ways that don't necessarily go through self-improvement.

Regarding 2, I think the section on sleeper agents in https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer addresses why this might still go wrong.

Expand full comment

I haven't been following ACX as much of late: has Scott given updates on how Lorien psychiatry is going, and things he's learned from the experience? I'd be very interested in a retrospective on the past couple of years!

Expand full comment

I figure someone here might know about this. Are there many good contemporary/living thinkers doing the mythology-ology thing? Think Mircea Eliade, Joseph Campbell, etc. but with the benefit of a few more decades of scholarship.

Expand full comment

Maybe look into Dan Davis (who writes historical fiction about the Bronze Age and does a lot of research) and then see what his sources are?

Expand full comment

Good tip! Thanks.

Expand full comment

There are two big things one might be hoping to get theories of from comparative mythology. One is an understanding of the human cognitive mechanisms that generate, select for, and sustain stories. The other is an understanding of the particular contingent past culture that is ancestral to many later cultures. These would be akin to linguistic theories that aim at understanding the Chomskian language instinct, vs theories that aim to reconstruct proto-languages.

I don’t specifically know about anyone doing the former, but I recently found a YouTube channel doing the latter, and he regularly gives citations to contemporary academic work: https://m.youtube.com/c/Crecganford

I’m still trying to digest what is going on here, but if what I see is right, there are actually people who claim to be able to reconstruct myths that are probably 70,000-100,000 years old, given the particular geographic dispersal of descendant myths. That sounds crazy to me, given that we really can’t go much farther back than 12,000 years with languages, but it’s definitely interesting if true.

Expand full comment

70-100k sounds like Michael Witzel's 2012 book The Origins of the World's Mythologies, which claims that everywhere other than Africa and Australia shares a base mythology that would be 50k years old, and makes a more tentative claim about an even older base visible in Africa and Australia.

The titles of the youtube videos don't sound like they're talking about tens of thousands of years. Is there a particular one I should try in order to hear such claims?

There is a very famous book with murky claims about tens of thousands of years. Hamlet's Mill (1969) gives New World myths about grindstones that it claims are related to Hamlet's millstone. This would seem to claim not only that the myth traveled over the Bering Strait, but that memory of agriculture did, long before agriculture was supposed to exist. But the book doesn't seem to commit to anything. I think Santillana accumulated lots of comparisons but delayed publishing because he couldn't decide where to draw the line of what he believed, and then Dechend published it all without clear organization by strength of claim.

I've read a couple hundred pages of each book. I'm more optimistic about Witzel's more ambitious claim.

Expand full comment

The Earth Diver myth is the one that seems to be said to be oldest. You find traces of it across Africa, Australia, the Americas. It's something about how either the creation of the world, or the end of a flood, involved some animal diving down and bringing up some mud, that the creator then turned into the continent: https://www.youtube.com/watch?v=nZmEro_ODqc

Expand full comment

Thanks!

I should have noticed that a lot of the most popular videos on the channel have the word "oldest" in the title.

Expand full comment

That YouTube channel looks promising, thanks!

I think I'm interested in both of those, but my own writing is focused more on the first at the moment. I think these stories can tell us a lot about human psychology.

Expand full comment

Curious what other responses this might get. I love the nonfiction from Robert Graves I’ve read (the Greek and the Hebrew myths) and would love to know if there were anything similar but more recent, as you say.

Expand full comment

I'll check out Graves, but keep an eye on this thread for more recs!

Expand full comment

At vectors.substack.com, Andrew explores different "original sin" myths and the truth that may underlie the Adam and Eve story. Not sure if that's what you mean, but it's fascinating.

Expand full comment

Already following him, but thanks!

Expand full comment

He's a contemporary with those you mention, but I would put some of Octavio Paz's essay books in this category, particularly _Conjunctions and Disjunctions_.

Expand full comment

Interesting, I'll check it out, thanks

Expand full comment

There has been (rightful) pushback against the sort of large, synoptic theories of mythology popular in the past, so if that's what you're looking for, I don't think you're going to find it (at least not without it being crank-level stuff).

Expand full comment

I'm partial to the 'big theory' stuff, but interested in reading criticisms too. I know that all of the big name writers in this area have a tendency to overgeneralise from individual stories/traditions.

Expand full comment
author
Jun 25, 2023·edited Jun 25, 2023

Isn't this Jordan Peterson's niche, especially during his Maps of Meaning and Bible lecture phases? I realize his work wouldn't be defined as scholarly by the standards of academic historians or anything, but he does have a PhD, and I don't think Campbell was academic-historian-level either (I haven't read Eliade).

Expand full comment

Yea, I'm actually looking for more of this sort of thing. JP is flawed but interesting in this area, and I'd like to dig deeper with someone who has more specific knowledge (e.g., a contemporary historian of religion/mythology).

Expand full comment

Norbert Bischof did a good job with "Das Kraftfeld der Mythen" in my opinion. But I'm unsure if he should be considered contemporary and I doubt an english translation will be published. Maybe I can manage a review for next year's contest here.

Expand full comment

That would be great!

Expand full comment

Charles Stépanoff is great on shamanism

Expand full comment

Looks perfect, shame he mostly writes in French. But I'll see if I can track down a copy of his translated book.

Expand full comment

Yeah, unfortunately the one I read (which was great) hasn't been translated yet, but hopefully it will be soon

Expand full comment

Motivated by Scott’s post, Davidson on Takeoff Speeds, I want to focus on one question raised by it. Specifically, what does it mean for AGIs to take 100% of human jobs? I imagine this could break down broadly into 2 types of economy-wide scenarios:

1) AGIs are expensive and therefore owners of a few big firms reap all the financial rewards.

2) AGIs are cheap and therefore anyone who is somewhat smart and ambitious can start a business with their superintelligent AGI app. A great majority of those who lost their jobs to an AGI are now business owners thanks to one.

In Scenario 1, we have the problem that almost everyone is out of not only work but also income. Perhaps the government gives these people money so they can buy goods and services from the surviving companies. In this scenario, the companies likely wouldn't be motivated to create great products and services for the mass consumer, as the mass consumer would be 100% subsidized by the companies' owners' taxes. The rich would effectively be handing out these goods and services for free. Perhaps they would be willing to do this to some extent, out of charity; however, it seems likely they would be most interested in using their AGIs to produce luxury goods and services for their personal use. The rich would be their own best customers, and money itself, once substantial property has been acquired, wouldn't much matter.

Since most people prefer to exist in some sort of society, I imagine the rich would build walled cities, perhaps many of them across the world(s?), and populate them with clients, in the Roman patron-client fashion. These clients would be loyal, charming, intelligent (for a human), attractive (for a human) and perhaps skilled in the arts--which might retain entertainment value when produced by humans despite the ability of AGIs to do better. The rich would essentially be kings of their own domains, while the underclass exists separately in the hinterlands. Occasionally, people from the cities would scout for new clients.

In Scenario 2, nearly everyone has a hustle. Perhaps I focus on getting my AGI to produce great horror movies while my neighbor opens a dim sum restaurant with a fully automated kitchen and wait staff while my brother produces custom designed cars while my other brother produces specialized sex robots while my sister runs her own emergency hospital. Since nobody is nearly as smart as the AGIs, prompt engineering will become one of the most important skills in the economy, along with emotional intelligence and personal charm, since perhaps the humans will still outcharm the robots face-to-face with other humans for a while yet.

In 2, the great danger is unaligned humans who want to harm others. In 1, it is less of a problem since the business owners are few and perhaps their success is some indication of or motivation for alignment. Weapons, warfare, and foreign conflicts are not considered here, although of course these would pose great sources of instability -- at least until, particularly in 1, governments are rendered moot.

Allowing for the fact that these scenarios are necessarily simplistic, does anyone see any major reasons they would be unfeasible?

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

>what does it mean for AGIs to take 100% of human jobs?

It means that the market value of human work is literally zero, and nobody can ever exchange their time/effort for [commodity unit of accounting]. Meaning, the entire foundations of the human economy break apart, and thinking of such a future in terms of the contemporary market society will... well, garbage in, garbage out.

And it cannot mean anything else, by definition - as long as humans continue to work and sell their skills or the products of their work on the market, they have jobs and therefore AGI has not "taken" them.

Dweomite does a better job than I could explaining the concept in economic terms, I guess. What I'm trying to say instead is: therefore society reorganizes itself along some other lines. What lines exactly will depend on what the AGIs are and who can control them - but the competition to control the AGIs will not be played by the rules of a capitalist market, because those will have long been rendered useless.

(It's conceivable for some super-AGI to be able to provide for all conceivable human desires, but still choose to keep us occupied by forcing us to do pretend-work in a walled-garden economy. Still, it's a scenario of total AGI control, so it's pointless to ponder the specifics; the specifics will be [whatever it chooses].

In all other cases, whatever economic considerations remain, they cease to affect human beings entirely, and you need a completely different language - of society, politics, culture, war - to describe what happens.)

Expand full comment

This is a response from the hidden thread (to myself).

For reasons I mentioned in my comment above, the system won’t be moneyless. So how then does the average person live and on what?

Well, if we assume that the aim is to be as productive as possible, to maximise the efficiency of the post-singularity world, and to get GDP growing at the largest possible rate, then the UBI has to:

1) be high,

2) grow every year, and

3) match economic supply.

How would this work? Taxes from government? Hardly - whom are they taxing anyway? Income tax will be zero. And who's running the government on a day-to-day basis? There will be no government workers if there are no private-sector workers, so I don't think government as we now have it will exist.

The only way I can see it working is the electronic printing of money by an AI-controlled central bank that knows just how much can be produced (globally or locally) this year, and deposits money with each citizen to buy pretty much what can be produced. People will save if they want, but the AI would have the statistics on savings rates. Armed with the previous year's economic statistics, and knowing what can be produced this year, the AI will match demand to potential supply. Since the AI is getting smarter, potential supply will increase every year, and along with it the money supply.

You might still have to save a few years for a car. People might have to choose between multiple holidays and a car, and the AI-controlled companies will have to earn money to buy inputs to build the cars - but all this is good, as it allows competition, and price mechanisms provide feedback.

However money has to be plentiful or there’s no economic growth.
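
To make the mechanism concrete, here is a minimal sketch of that matching rule, with made-up numbers and a deliberately toy model (the function name and parameters are illustrative, not anything from the comment; real monetary policy would be vastly more complicated):

```python
# Toy sketch of the "AI central bank" rule: size per-citizen deposits so that
# expected spending equals forecast potential output. All numbers made up.
def ubi_per_citizen(potential_output: float,
                    expected_savings_rate: float,
                    population: float) -> float:
    """Deposit per citizen so expected aggregate spending matches supply."""
    required_income = potential_output / (1.0 - expected_savings_rate)
    return required_income / population

# Example: $80T of potential output, 10% savings rate, 8B people.
print(ubi_per_citizen(80e12, 0.10, 8e9))  # ~ $11,111 per person
```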

Expand full comment

One way people sometimes think about distribution of income is return on labor + return on capital. That is, people who work get paid some amount for their work, and people who provide necessary tools/infrastructure/etc. to make the work more efficient get some compensation for providing those.

If AI takes 100% of jobs, it seems like you could think of that as the return on labor going to ~zero, so that capital gets ~100% of the returns.

If AIs are cheap and everyone has one, that doesn't suggest to me that everyone gets a large return; it suggests to me that AIs aren't a very large percentage of all capital. If AIs are not a bottleneck then the returns will mostly go to the people who own the mines, factories, farms, restaurants, or whatever else is the bottleneck on productivity. You might *like* to run your own emergency hospital, and you might have an AI that's smart enough to run it, but where are you going to get the equipment, medicines, and facilities? The current owner of those things has no need to "hire" your AI to run it because they have an AI of their own.

Put another way: In order for you to make money with your AI, someone has to be paying you. If everyone has their own AI, why are they paying you instead of asking their own AI to do it? You need to have some sort of advantage at doing the thing you are being paid to do. If everyone has AI, then AI isn't the source of your advantage, so it isn't what you're being paid for.

N.B. "Running a business" and "prompt engineering" are included in the 100% of jobs that have (by assumption) been taken over by AI, so they aren't your source of advantage, either.

Expand full comment

That makes sense.

Expand full comment

In scenario 1, companies would still have incentive to make things for the poor, regardless of where the poor's money comes from. We already have people whose income comes from taxes, and their money spends as well as anyone else's.

Expand full comment

I was imagining that the tax-base, so shrunken and wealthy, would control political policy and therefore keep those taxes to a bare minimum. The masses could try to take political control, but I don't see how they would win.

Expand full comment

The wealthy won’t really exist either in a society where everybody has no, or little, money. Not the ones whose wealth depends on mass consumerism, anyway. 60-70% of GDP is wages. After everybody is unemployed and on a low UBI, the market for cars, devices and houses collapses, so the CEOs of BMW, Apple and real estate development companies (etc.) are sitting on worthless stock.

Venture capital will dry up of course (probably killing the AI Revolution at birth).

Expand full comment

Anyone read Donald Hoffman’s “Case Against Reality” and/or have some insight on the “Fitness Beats Truth Theorem”? I left the book and the general case feeling disappointed - but interested to know others’ experiences with it.

Expand full comment

IDK his theory well enough to really dive into the details, but from the broad strokes it seems wrong.

I think my steelman of perhaps a similar position would be something like:

1) While our perceptual apparatus is mostly fairly reliable, it does deceive/distort the world for us in a few well-researched ways, because those distortions were themselves fitness-enhancing.

2) So our perceptions/sensory apparatus clearly aren't selected solely for how directly they represent the world, but rather for their ability to represent the world to us in a way that is helpful for fitness (which mostly involves accurate representation, but not totally).

3) Moreover, for simple boring biology reasons, some parts of our sensory apparatus have "bugs" where they fail to represent reality (stopped clock illusion etc.).

4) Finally, some features/qualities of the world which seem incredibly primary (say, color) are actually not very primary at all, and instead are just the features our sensory apparatus tends to elevate/construct differentially, mostly because of their fitness benefits rather than how closely they model the "reality of a bunch of clumps of particles and wave forms firing photons off at each other in crazy near-infinite ways".

Expand full comment

I've read it, and was unimpressed. While the author is apparently a professor of cognitive science, this is very much a book of philosophy - i.e. no sign of anything resembling falsifiability.

A lot of work is put into being convincing, rather than into actually establishing the author's claims (i.e. debating techniques, use of standard human perceptual distortions, etc.).

Expand full comment

I mean, the border between cognitive scientist and philosopher has been pretty porous. A cognitive scientist is generally just a philosopher of mind who reads a few more neurology articles (which is a good thing).

Expand full comment

So the term "cognitive scientist" uses "scientist" in the old sense of "person with lots of knowledge of the subject, generally acquired by reading books" rather than the more recent sense that involves the scientific method, falsifiability, etc. etc.?

Expand full comment

It was mainly the more empirical philosophers of mind trying to take a step towards empiricism, in the same way all scientific disciplines slowly divorce themselves from philosophy as they become more empirically tractable.

So someone who in 1980 would have been a philosopher of mind on the cutting edge of empirical approaches to the problem gets called a cognitive scientist in 2000. There isn't some giant black-and-white gulf between philosophy and science in disciplines this close to the edge of philosophy.

Most of the *good* philosophers of mind will have a pretty deep familiarity with the empirical cutting edge, and will be incorporating that into their work. You indeed might have some holdouts who are more "doing science from the armchair", though good philosophers try and avoid that whenever possible.

Philosophy is the "mother" of all the sciences, and is what you use when you don't have empirical methods which work. Which is still where we are in some areas of "cognitive science".

Expand full comment

I've only listened to him talk, so this is not a detailed criticism, but it just feels like rehashed idealism (consciousness is primary), and he seems to use evolutionary terminology to lend scientific credibility to an idea that other philosophers have covered with more depth and nuance (Hume, Berkeley, Kant, etc.), without actually referring to them.

I get a whiff of BS when he talks about experimental validation of this, like.. how? Did you get some grad students to hash together a quick genetic algorithm with a pointless bool member named "truth" which is not actually contributing to the fitness function?

Expand full comment


I found it fascinating but ultimately frustrating because I really, really wanted him to give some concrete examples for his abstract and seemingly key claim regarding "resource recognition". (It's been a while and I don't remember precisely how he phrased it.)

The book also, like so many non-fiction books, has way too much filler. How many readers benefit from all the pages he devotes to quantum mechanics and relativity?

Expand full comment
Comment deleted
Expand full comment

I would wager that the "science" on this matter is weighed down by the political ramifications of the drug war.

Publishing studies showing that modafinil and Adderall are useful even for people without ADHD would encourage "abuse", since our milieu doesn't allow "beneficial human augmentation" inside the Overton window.

So you get futzy reports to the contrary, with small sample sizes and weird cherry-picked measures.

What's cognition? What's IQ measuring? What does "focus" or "concentration" mean?

It isn't as straightforward to begin with as measuring someone's ability to deadlift X amount of weight.

Couple that with the cost and the aforementioned social countervailing efforts, and voila.

For a long time it was common knowledge that adult brains didn't make new brain cells. We know that isn't true. The current accepted truth is that the frontal cortex doesn't mature until age 25; that's based on one study, and no one else has followed up or done some worldwide definitive baseline test.

If you go spend 2 hours a day juggling for the next 6 months, I can guarantee rewiring in your sensorimotor cortex, which will carry over to related things like playing first-person shooters or pitching a baseball (even if you were absolutely new to those tasks, the existing beefed-up structure would be there). Is that not a form of intelligence? Sensorimotor coordination?

Variance in your ability to concentrate during the day is normal human variance. Eat well and exercise regularly? Wow, it's improved. Are you smarter than you were when you didn't eat well or exercise?

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

I think the other commenters' points about being interested in what you're doing are critical. To dig into that, examples like Carmack show that sometimes you naturally come across things that are a pure interest fit, and the rest is "easy." I've been listening to the SmartLess podcast, and the David Letterman episode struck me the same way. He apparently took a speech class in high school, and by the time he sat down after talking in front of the class, he felt like he knew what he wanted to do with his life. I'm sure there are similar stories all over the place.

Unfortunately, it seems like most people never have that aha moment, or at least not that strong a version of it. I don't know if there's a way to engineer that reliably. Maybe you could expose people to more interesting speakers and experiences in various areas... Or maybe some people just don't have the neurological makeup to be that inspired for that long.

The other thing to keep in mind is that you're mixing categories when you talk about athletic tasks and cognitive abilities. Training a lot in tennis makes you very good at tennis. It doesn't really improve your weightlifting ability, let's say. There's some spillover in terms of hand-eye coordination, fitness, and so on. But overall you have to train a specific athletic ability. The same is true of cognitive abilities. Training writing makes you a better writer, not a better mathematician. So there's no general cognitive shortcut, but if you pick a specific set of skills you want to get better at, just train those skills. Even with average cognitive ability, you can become excellent at most things (for some definition of most; not everyone can be a theoretical mathematician or whatever).

Expand full comment

Many people have all the "interest fit" anyone could wish for, but simply not for something which generates a pile of dough they could live on.

Expand full comment

Yeah, I can be very enthusiastic about things, but my plans are always missing the part when it all somehow converts to money. Therefore, I am spending most of my time doing things I am way less enthusiastic about, but which pay my bills.

Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Or not for something that they're spectacularly good at, as David Letterman turned out to be for comedy and talk show hosting. After all, thousands of young people every year quite genuinely and sincerely decide that acting, comedy, television journalism or some other form of public speaking is what they want to do with the rest of their lives, and many of them work incredibly hard to make that happen, and most of those end up doing something else (for a living) with the rest of their lives.

It also helped, in Letterman's case, that he was born in the late 1940s, which meant that most of his adulthood coincided with a time when the late night television network talk show was a huge American popular culture institution, since bringing his comic sensibility to that kind of show as a host turned out be the thing he was best at.

Expand full comment

All true. A lot of people are just average, by definition.

A lot of parents seem to advise their kids to take the practical career instead of pursuing their passion. I think the advice is kind of backwards. You can try the high risk stuff first and then retrench into something practical, or perhaps you can keep doing the fun stuff on the side, or maybe you'll actually make it. I think doing what excites you first is actually a good idea, but you have to be ready to call it at some point and move on (or just be a starving artist).

Expand full comment

The likely (if less "sexy" than nootropics-powered ubermenschen) explanation is that Carmack could work all day for years without burnout simply because he was able to work exclusively on projects which strongly interested (and directly profited) him. AFAIK most programmers have some periods in their life of "being Carmack", at least temporarily.

Expand full comment
Comment deleted
Expand full comment

My back-of-the-napkin evolutionary math tells me that there was a period of human existence that predates computer programming but otherwise spans many, many thousands of years, during which it was very important, from the point of view of survival and reproduction, to distinguish activities worth expending calories on from activities that are not.

Expand full comment

Subordination pushes a depressive "button" in mammals. (Possibly prevents one from getting into unwinnable fights?)

(I recall various experiments with chimpanzees, where zoologists artificially manipulated the hierarchy in the troop and found that newly-dominant individuals improved on various health and performance metrics. But unfortunately I do not have the pertinent link handy, perhaps someone else among those tuned in, does.)

Expand full comment

Gwern has lots of writings on this, see "Algernon Argument", "Spaced Repetition", "Nootropics" posts.

One key idea is that easy/no-downside tradeoffs would probably already have been made by evolution. So e.g. stimulants help people because they trade off calories for energy/focus, and we live in a time of calorie surplus that evolution didn't adapt for.

Expand full comment

The original essay by EY is here: https://web.archive.org/web/20010202171200/http://sysopmind.com/algernon.html

Gwern's take is here: https://gwern.net/drug-heuristic

From what I can see, the arguments should apply to "energy levels" as much as to cognitive ability.

Expand full comment
(Banned) Jun 26, 2023·edited Jun 26, 2023

AFAIK the strongest (and arguably most expensive) "stimulant" is... doing only what you want to do, sleeping for as long as you like, whenever you want, etc. (i.e. being an aristocrat.) I recall a piece by (younger) EY where he attended a party thrown by some Rockefeller type and was astonished at how much energy and zest for life the aristos had.

Expand full comment

Agree that it's super-helpful. That's why I'm so obsessed with finding a money/research path that doesn't require a consistent wakeup time, why I support basic income, my bizarre sleep...

Expand full comment

I'd like to see a basic income proponent explain how he thinks he'll get to live on this income next to quiet, clean, non-violent people, rather than in the equivalent of the infamous US housing projects.

Expand full comment

LVT/Georgism is my answer. In the longer-term, the answer is transhumanism/Archipelago.

Expand full comment

I can picture how LVT/Georgism could fund UBI; but how would it prevent you from having to live near violent lumpens ? Presumably they would get the same share of LVT as everybody else, and so not having to live near them would remain the same kind of "positional good" as it is today.

Expand full comment
deleted Jun 26, 2023·edited Jun 26, 2023
Comment deleted
Expand full comment

Perhaps the problem is that people are insufficiently stingy. Some government officials think debt is a random number, so it won't matter how many zeroes are attached to it - but it will. VCs and CEOs who envision themselves as bold explorers or innovators (when they aren't) light money on fire on frauds like FTX or faulty submarines. If they just funded profitable businesses, we wouldn't have these problems!

Expand full comment

>It's governments continuing to embrace austerity policies that don't work, have never worked, because "the deeeeeebt".

This is thrice wrong.

First, essentially no major governments are doing austerity. The UK and the US have been running aggressive fiscal deficits into an economic recovery, which is historically very rare.

Second, sneering at the size of government debt is a bad idea, because the size of Western government obligations will lead to a serious decrease in citizens' quality of life in the next thirty years if unaddressed.

Finally, the confidence with which this is written—to write off the highest debt levels in American history, in a criticism of policies that were last popular over 15 years ago—and to do so with a sneer, is itself an epistemic mistake.

Expand full comment
founding

>There was no reason for this to happen, except that the CEO of that company was yet another cheapass sociopathic billionaire who wanted to reduce costs at every single opportunity so that he wouldn't miss on a single cent of profit from the insane mark up price of 250k a pop per sea tourist.

It would have taken you about two minutes of googling to verify that Stockton Rush was not a "cheapass sociopathic billionaire", because he was about two orders of magnitude shy of being any sort of billionaire. And given his limited resources relative to the task at hand, it is entirely plausible that the only way he could build a Deep Submergence Vehicle at all was to cut every corner and save every penny, and then charge $250K/passenger to cover the costs he couldn't cut.

It may have been foolish of him to do this, and foolish of his customers to go along for the ride. Or not; that's ultimately a value judgement of cost vs benefit and risk vs reward. In a free society you don't get to make that choice for anyone else.

And in this community I don't think you get to write an emotional diatribe in which you get the math egregiously wrong, and still be taken seriously. Next time, check the numbers before posting the rant.

Expand full comment

I agree with higher taxes on the wealthy, but don't agree their behavior is sociopathic or exclusive to them or to modern people. The medieval bread merchant was also looking for ways to pay the farmers he depended on as little as possible.

Many customers prefer low-cost, low-quality products over high-cost, high-quality products. For poorer people, it's often a low-cost, low-quality product or no product at all. When governments banned flophouses, a type of low-quality housing, the result wasn't that everyone had high-quality housing. A lot of people had no housing at all.

What would it mean for austerity policies to "work?"

Expand full comment

From what I have read, the CEO of the Titan company did not skip certification etc. for the sake of money, but for the sake of speed. He didn't want to accept several years of delay.

Expand full comment

"He didn't want to accept several years of delay."

And now he's lost the remaining years of his life, and the other passengers as well have lost all theirs.

I think "sociopath" is indeed the wrong characterisation, but he was clearly bull-headed enough, or had some modicum of vanity, that he pushed on despite the warnings raised by others. Presumably he had confidence in "I'm sure this will work and years of testing will only hold us back", and presumably also he couldn't wait years because he didn't have deep enough pockets to wait; either he got the submersibles out and earning money now, or the whole operation crashed and burned.

Right now, though, the "years of delay" are looking more sensible.

Expand full comment

This is how the really, really rich get to be really, really rich and stay really, really rich: pinch every penny, haggle, quibble, pay as cheap as they can and suspect the servants of drinking the liquor.

This is how it has been always, not just now.

As for the companies, that is partly because of the insane demand for constant growth and profitability. If your line isn't going up, you may well fail just because the market now considers you a bad bet. Cutting costs of labour is one way to make sure the line goes up all the time. And of course the bosses pay themselves as much as they can manage to get away with, that's the whole point. But they in turn are overseen by the shareholders, who want the maximum return on their investment.

It's squeezing blood out of turnips all the way down.

Expand full comment

Really? So Bill Gates would be rich even if he hadn't founded Microsoft but was just really stingy? I notice that I'm very, very confused.

Running a company that's focused on a low-cost strategy isn't the same as being personally stingy. And pursuing a return on investment isn't the same as not spending money. Most successful companies got that way by investing in growth, not by hoarding money.

Expand full comment

Founding Microsoft might have made him rich, but wouldn’t have made him exceptionally rich, if he wasn’t stingy. Note that stingy doesn’t mean hoarding money, it sometimes means cutting every corner you can afford to cut in pursuit of something that would otherwise be harder to do.

Expand full comment

He was certainly quite driven at Microsoft, but the drive was generally to launch new products, continually make them better, and dominate the market, which isn't the same as cutting corners. Stingy is just the wrong word, and trying to cram other concepts into it doesn't make it more explanatory.

If we want to say that exceptionally rich people are different than the rest of us, I totally buy that, but I don't think penny pinching is at all the right way to define that. It's more about having a vision of success and driving yourself and other people to realize it. Often that involves spending more money and doing more, not less.

Expand full comment

Maybe you’d prefer “ruthless focus on the bottom line” rather than “stingy”? It leads to the same corner-cutting behaviours that the OP was complaining about.

Expand full comment

No, that's my point, it really, truly is not the same thing. Stingy means unwilling to spend money. About 2,000 early Microsoft employees ended up being millionaires because of their stock options, and there have been many more since. And yet the company also had a ruthless focus. Stop trying to put a square peg in a round hole. It's like you've never heard the phrase, "You have to spend money to make money." There's a reason it's a saying.

There is cost-cutting in business, of course, but it's more often the companies that aren't doing so hot.

Expand full comment

I don't think this is correct. Stingy means "not generous or liberal; sparing or scant in using, giving, or spending". Scrooge is stingy. Scrooge pinches pennies; he certainly wouldn't spend more than he took in. Scrooge wouldn't spend $40K on a Tesla when a similarly sized gasoline car is $20K. Scrooge wouldn't spend $250K on a submarine trip to see the Titanic.

What we have is conspicuous consumption, biased towards purchases that show status. A car with luxuries is a status symbol. Buying a Tesla indicates you spent $20K you didn't need to spend on a new car. A trip to see the Titanic is $250K spent on something most people can never do for the sake of saying you did it.

Repairing infrastructure doesn't gain status for politicians, throwing money at pet projects shows status. Publishing companies throwing millions if not billions on streaming video series shows status. Founding an electric car company or building a submarine for tourism shows status. If you were stingy, you would never have spent money on it at all, no matter how cheaply.

If there were dozens of tour companies taking tourists to the Titanic, and the cheapest one suffered a catastrophe, then you might blame stinginess, but it would be stinginess by the consumers going with the cheapest price. Likewise, if we hadn't just passed a trillion dollar infrastructure bill where the spending went to government status projects (pork) instead of infrastructure, you might have had a point.

Expand full comment

"government status projects (pork)" - is pig farming considered an especially noble enterprise somewhere? Environmentalists claim it is usually not.

Expand full comment

pork: (US politics, slang, derogatory) Funding proposed or requested by a member of Congress for special interests or their constituency as opposed to the good of the country as a whole.

Expand full comment

>There was no reason for this to happen, except that the CEO of that company was yet another cheapass sociopathic billionaire who wanted to reduce costs at every single opportunity so that he wouldn't miss on a single cent of profit

Umm it doesn't sound like it was making any profit whatsoever.

>insane mark up price of 250k a pop per sea tourist. I'm glad he was among the victims — not because I think he deserved it (I don't know what that means, no one "deserves" anything, the universe doesn't care), but because at least that ludicrous menace of an individual can do no more harm.

Once again, it's not a "mark up" if you aren't making money. The bigger problem was surely that he wanted to run a business that was not actually economically viable. But that is a different thing than "stinginess".

It seems like you had a theory you wanted to go off on, and pounded this square peg into a round hole.

Expand full comment

Top-notch jeremiad, but wrong focus on individual decision-makers rather than on the system of incentives and social arrangements that compels their behaviour. You could deep-six any number of such 'sociopaths' and others would fill the vacuum instantly, and it wouldn't even really be their fault.

There is, of course, a solution, but it's notoriously difficult to execute.

Expand full comment

I don't believe that there's a solution that people AGREE is a solution, regardless of the difficulty of execution (as long as you require that execution is possible). There are various measures that would reduce the problem. Some of them would be relatively easy to execute, if the willingness were present among those in control. One major one is to forbid anyone who has ever regulated a company from ever receiving any payment from it again. (And "company" here includes all entities owning more than 20 percent of the stock in the company.)

Expand full comment
Comment deleted
Expand full comment
Jun 26, 2023·edited Jun 26, 2023

Oh, Bart Ehrman (sigh). He always gets trotted out for these "Hey Christians, do you know that what you believe isn't so?" segments because he has theological training and shifted from the "Evangelical" camp to the liberal 'God is that warm fuzzy feeling inside us' camp. To be precise, he identifies as a "humanist and agnostic".

Spare yourself 90 minutes and just listen to the Gershwins' "It Ain't Necessarily So" if you need your recommended daily dose of scepticism:

https://www.youtube.com/watch?v=kP5O_NUhrK0

Tim O'Neill has a better view of him (and more balanced than me, I'm burnt out on Ehrman because of all the "hey Christians, here's one of your own admitting it's all baloney!" use of his views):

https://historyforatheists.com/2018/04/review-bart-d-ehrman-the-triumph-of-christianity/

Expand full comment