The Man Who Tried to Overthrow Sam Altman
Ilya Sutskever, bless his heart. Until recently, to the extent that Sutskever was known at all, it was as a brilliant artificial-intelligence researcher. He was the star student who helped Geoffrey Hinton, one of the “godfathers of AI,” kick off the so-called deep-learning revolution. In 2015, after a short stint at Google, Sutskever co-founded OpenAI, and eventually became its chief scientist; so important was he to the company’s success that Elon Musk has taken credit for recruiting him. (Sam Altman once showed me emails between himself and Sutskever suggesting otherwise.) Still, apart from niche podcast appearances and the obligatory hour-plus back-and-forth with Lex Fridman, Sutskever didn’t have much of a public profile before this past weekend. Not like Altman, who has, over the past year, become the global face of AI.
On Thursday night, Sutskever set an extraordinary sequence of events into motion. According to a post on X by Greg Brockman, the former president of OpenAI and the former chair of its board, Sutskever texted Altman that night and asked if the two could talk the following day. Altman logged on to a Google Meet at the appointed time on Friday, and quickly learned that he’d been ambushed. Sutskever took on the role of Brutus, informing Altman that he was being fired. Half an hour later, Altman’s ouster was announced in terms so vague that for a few hours, anything from a sex scandal to a massive embezzlement scheme seemed possible.
I was surprised by these initial reports. While reporting a feature for The Atlantic last spring, I got to know Sutskever a bit, and he did not strike me as a man especially suited to coups. Altman, in contrast, was built for a knife fight in the technocapitalist mud. By Saturday afternoon, he had the backing of OpenAI’s major investors, including Microsoft, whose CEO, Satya Nadella, was reportedly furious that he’d received almost no notice of Altman’s firing. Altman also secured the support of the troops: More than 700 of OpenAI’s 770 employees have now signed a letter threatening to resign if he is not restored as chief executive. On top of these sources of leverage, Altman has an open offer from Nadella to start a new AI-research division at Microsoft. If OpenAI’s board proves obstinate, he can set up shop there and hire nearly every one of his former colleagues.
As late as Sunday night, Sutskever was at OpenAI’s offices working on behalf of the board. But yesterday morning, the prospect of OpenAI’s imminent disintegration and, reportedly, an emotional plea from Anna Brockman—Sutskever officiated the Brockmans’ wedding—gave him second thoughts. “I deeply regret my participation in the board’s actions,” he wrote, in a post on X (formerly Twitter). “I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.” Later that day, in a bid to wish away the entire previous week, he joined his colleagues in signing the letter demanding Altman’s return.
Sutskever did not respond to a request for comment, and we don’t yet have a full account of what motivated him to take such dramatic action in the first place. Neither he nor his fellow board members have released a clear statement explaining themselves, and their vague communications have stressed that there was no single precipitating incident. Even so, some of the story is starting to fill out. Among many other colorful details, my colleagues Karen Hao and Charlie Warzel reported that the board was irked by Altman’s desire to quickly ship new products and models rather than slowing things down to emphasize safety. Others have said that the board’s hand was forced, at least in part, by Altman’s extracurricular fundraising efforts, which are said to have included talks with parties as diverse as Jony Ive, aspiring NVIDIA competitors, and investors from surveillance-happy autocratic regimes in the Middle East.
This past April, during happier times for Sutskever, I met him at OpenAI’s headquarters in San Francisco’s Mission District. I liked him straightaway. He is a deep thinker, and although he sometimes strains for mystical profundity, he’s also quite funny. We met during a season of transition for him. He told me that he would soon be leading OpenAI’s alignment research—an effort focused on training AIs to behave nicely, before their analytical abilities transcend ours. It was important to get alignment right, he said, because superhuman AIs would be, in his charming phrase, the “final boss of humanity.”
Sutskever and I made a plan to talk a few months later. He’d already spent a great deal of time thinking about alignment, but he wanted to formulate a strategy. We spoke again in June, just weeks before OpenAI announced that his alignment work would be supported by a large chunk of the company’s computing resources, some of which would be devoted to spinning up a new AI to help with the problem. During that second conversation, Sutskever told me more about what he thought a hostile AI might look like in the future, and as the events of recent days have unfolded, I have found myself thinking often of his description.
“The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,” Sutskever said. Although large language models, such as those that power ChatGPT, have come to define most people’s understanding of OpenAI, they were not initially the company’s focus. In 2016, the company’s founders were dazzled by AlphaGo, the AI that beat the world’s best players at Go. They thought that game-playing AIs were the future. Even today, Sutskever remains haunted by the agentlike behavior of the ones the company built to play Dota 2, a multiplayer game of fantasy warfare. “They were localized to the video-game world” of fields, forts, and forests, he told me, but they played as a team and seemed to communicate by “telepathy,” skills that could potentially generalize to the real world. Watching them made him wonder what might be possible if many greater-than-human intelligences worked together.
In recent weeks, he may have seen what felt to him like disturbing glimpses of that future. According to reports, he was concerned that the custom GPTs that Altman announced on November 6 were a dangerous first step toward agentlike AIs. Back in June, Sutskever warned me that research into agents could eventually lead to the development of “an autonomous corporation” composed of hundreds, if not thousands, of AIs. Working together, they could be as powerful as 50 Apples or Googles, he said, adding that this would be “tremendous, unbelievably disruptive power.”
It makes a certain Freudian sense that the villain of Sutskever’s ultimate alignment horror story was a supersize Apple or Google. OpenAI’s founders have long been spooked by the tech giants. They started the company because they believed that advanced AI was coming soon, and that, because it would pose risks to humanity, it shouldn’t be developed inside a large, profit-motivated company. That ship may have sailed when OpenAI’s leadership, led by Altman, created a for-profit arm and eventually accepted more than $10 billion from Microsoft. But at least under that arrangement, the founders would still have some control. If they developed an AI that they felt was too dangerous to hand over, they could always destroy it before showing it to anyone.
Sutskever may have just vaporized that thin reed of protection. If Altman, Brockman, and the majority of OpenAI’s employees decamp to Microsoft, they may not enjoy any buffer of independence. If, on the other hand, Altman returns to OpenAI, and the company is more or less reconstituted, he and Microsoft will likely insist on a new governance structure or at least a new slate of board members. This time around, Microsoft will want to ensure that there are no further Friday-night surprises. In a terrible irony, Sutskever’s aborted coup may have made it more likely that a large, profit-driven conglomerate develops the first super-dangerous AI. At this point, the best he can hope for is that his story serves as an object lesson, a reminder that no corporate structure, no matter how well intended, can be trusted to ensure the safe development of AI.