I was surprised to read the negative perspective on Yoshua Bengio's AI safety report for the AI Seoul Summit, which cited a very critical analysis by London communications consultant Brian Williamson.
There seems to be an emerging idea that talking about the risks of AI is not valid without simultaneously touting its benefits. I encounter this frequently in my work on AI safety, including via the Saihub website (https://saihub.info). For example, at a recent event I was accused of "scaremongering" for presenting what I thought was a balanced view of AI safety (one in which I acknowledged the benefits of AI). In fact, the Seoul report is very clear that AI has substantial benefits. In his foreword, Bengio writes: "AI has tremendous potential to change our lives for the better, but it also poses risks of harm." And the first of the executive summary's highlights is: "If properly governed, general-purpose AI can be applied to advance the public interest, potentially leading to enhanced wellbeing, more prosperity, and new scientific discoveries."
I like the analogy of airline safety. We know that air travel has massive benefits, but that does not mean we should not talk about its safety issues and environmental effects. When people write about the risks Boeing took with the 737 Max, no one says, "That's not fair, because you are not talking about all the people that air travel brings together."
There are plenty of people talking about the benefits of AI, with Sam Altman probably in the lead. Meanwhile, Ilya Sutskever, Jan Leike and others left OpenAI because they felt he does not focus enough on safety. Surely those who are directing attention to safety should not be criticised for doing so. Someone needs to do it.