To begin, the families named in the suits who lost loved ones have my sympathy, and they deserve ours. These cases are tragic reminders of the downside of technology when it is misused.
Additionally, the concepts discussed in the post are informative, and I don't disagree that LLMs (they are not really "AI"; don't buy the BS) have been rolled out primarily for corporations' benefit (though their job is to make money, no?). I also agree that oversight is likely necessary, especially when Altman et al. fail to deliver on their promises. Perhaps do NOT give them so much money the next time they tout the next big 'improvement'?
However, much of the discourse around "AI", and around electronic devices generally, rests on an assumption that has become too common in popular discourse lately. We seem to forget that tools are simply things humans use for whatever purposes they wish, and every electronic tool I know of has an off switch. Like the light switch in my bedroom: when the light keeps me from sleeping, I "off" the light (as my ESL friends say). If you don't like a tool, switch it off; if a tool doesn't help, don't use it. Yet the narrative around AI has become alarmingly apocalyptic, as if electronic devices were 'forces' beyond our control: it's not just code, it's ARTIFICIAL INTELLIGENCE (oh my!).
But remember: "AI" and its associated electronic devices are only tools, used or not used by individuals. If the collective effects of individual tool use overwhelm our politics, businesses, communities, and families, then perhaps we have failed to become adults in a world that demands 'adulting' (gawd, what a nutty word; can adults ever be unadult?).
Welcome to 'representative' democracy. Can we handle it? That remains an open question.