Regarding the .01% AGI argument, I think you're making some contentious assumptions. Basically, I'd argue that argument fails for the same reason Pascal's wager does: a tiny probability of an enormous payoff (or catastrophe) can't single out one cause for attention once you admit there are many such possibilities.
I mean, if the world is filled with risks of that kind (nuclear war, bioweapons, secular decline, etc.), it becomes much less clear that attention to AGI doesn't take away from efforts to reduce those other risks.
Also, for the AGI argument to have force, you need to think that working on AGI risk is more likely to reduce that risk than to increase it, and that it won't increase other risks either.
For instance, my take on AGI is much like my take on law enforcement use of facial recognition. It was always going to happen (if technically possible), and the choice we had was whether to handwring about it, so that it ended up being sold by the least responsible company (Clearview), or to encourage somewhat more responsible and technically proficient companies (Amazon/Google) to offer it.
Basically, I don't think you can avoid the fact that public concern about AGI will create pressure for Western countries to regulate and for prestigious computer scientists not to work on it, and that seems like a very bad thing. So even if there is a serious risk there, we may want to STFU about it if speaking up plausibly makes the outcome worse.
Also, I fear that concern about AGI trades off against taking other concerns about AI seriously.