GPT-LLM-ML: This “we fired Sam Altman not because we thought he had made any major mistakes but because we did not think we could control him if we ever did want to because he had made a major mistake” line emanating from the friends of former OpenAI board members Tasha McCauley and Helen Toner is, from one perspective, simply weird, and from another perspective obviously true. True, because it turned out that Sam Altman had the backing of the loyalty of OpenAI’s staff and of the money of Satya Nadella’s Microsoft, and they indeed could not control him. Weird, because Helen Toner and Tasha McCauley’s AI worries now stand revealed as fear of shadows—as taking big actions based on phantasms far down the strategy-and-circumstances tree that do not yet have even a shadow of reality. Thus the cause they thought they were supposed to advance is now much weaker. And also weird because not even Ilya Sutskever’s and Adam D'Angelo’s friends will hint at what they thought they were doing—other than that, as of some Thursday in November, they suddenly decided that Sam Altman ought to go:

Ezra Klein: Interview with Casey Newton & Kevin Roose <nytimes.com/2023/12/01/…>: ‘People saw—I saw—Altman fired by this nonprofit board meant to ensure that A.I. is built to serve humanity. And I assumed—and I think many assumed—there was some disagreement here over what OpenAI was doing…. [But] I think, I can say conclusively… that was not what this was about. The OpenAI board did not trust and did not feel it could control Sam Altman, and that is why they fired Altman. It’s not that they felt they couldn’t trust him on one thing, that they were trying to control him on X but he was beating them on X. It’s that a lot of little things added up. They felt their job was to control… that they did not feel they could control him, and so… they had to get rid of him. They did not have, obviously, the support inside the company to do that….

All the OpenAI people thought it was very important… that they had this nonprofit board… building A.I. that served humanity that could… shut down the company… if they thought it was going awry in some way or another. And the moment that board tried… now, I think they did not try to do that on very strong grounds… it turned out they couldn’t. That the company could… reconstitute itself at Microsoft or that the board… couldn’t withstand the pressure….

[The new] board members do not hold the views on A.I. safety that… Helen Toner… and Tasha McCauley… held…. These are people who… serve on corporate boards in a normal way where the output of the corporate board is supposed to be shareholder value, and that’s going to influence them even if they understand themselves to have a different mission here…
