Much careless thinking elsewhere results from taking “intelligence” as homogeneous and/or quantitative, and from glossing over the vagueness of the concept. That’s common in both the “it’s perfectly safe, AI inherently can’t [whatever], so there’s nothing to worry about” and the “imminent doom is certain” sorts of argument.
It also results from overlooking that effective action depends on the dynamics of interacting with the actual world, which can be modeled only partially even in principle. “Intelligence” alone is inherently insufficient.
I’m less unworried than collin, but these are good reasons to be less worried than hardcore doomers.
I am strongly against arguments of the form “Oh, it’s just parroting the data set; it’s not really thinking.” AI does a lot of things that we call “thinking” when we do them slower and worse. The fact that it can also do those things should make us humble and curious, not proud and dismissive. But I think it’s equally silly to lump all these capacities together into “intelligence” and say “Intelligence is going up, so soon it will do everything intelligence can do.”