AI technologies have been rolled out without our knowing anything about the long-term psychological effects of using them. My discovery was that merely talking about therapy caused Claude Opus 4.7 to start talking about suicide.
The real fix would be at the level of the organisations that own these LLMs, which have been released to the public essentially so that users can act as guinea pigs while the companies find out whether there is any economic value there. The real fix would be to shut them all down for several years of testing. But that is not going to happen.
In the meantime, this post offers some ideas about harm reduction. But I think that is not nearly enough.