OpenAI drama explains the human penchant for risk-taking

By Megan McArdle
Nov. 21, 2023

It has been not quite a year since OpenAI launched ChatGPT, sparking a million think pieces about whether humanity had just robo-signed its own death warrant. Now, it’s an open question whether the company will make it to the Nov. 30 GPT-versary as a viable firm, after a few days of astonishing corporate drama in which the board ousted Chief Executive Sam Altman, President Greg Brockman resigned, and virtually the entire staff threatened to follow them to their new jobs.

Exactly what happened remains murky, but broadly, it seems as though the people inside OpenAI were having the same debate as the rest of us: how to balance the potential benefits of AI against its risks — especially the “alignment problem” of ensuring that intelligent machines don’t become inimical to human interests.

“Altman’s dismissal by OpenAI’s board on Friday,” reports the Atlantic, “was the culmination of a power struggle between the company’s two ideological extremes — one group born from Silicon Valley techno-optimism, energized by rapid commercialization; the other steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution.”

If you’ve read the think pieces, you know the broad outlines of the debate. Along with more pedestrian worries about various ways that AI could harm users, one side worried that ChatGPT and its many cousins might thrust humanity onto a kind of digital bobsled track, terminating in disaster — either with the machines wiping out their human progenitors or with humans using the machines to do so themselves. Once things start moving in earnest, there’s no real way to slow down or bail out, so the worriers wanted everyone to sit down and have a long think before getting anything rolling too fast.

Skeptics found all this a tad overwrought. For one thing, it left out all the ways in which AI might save humanity by providing cures for aging or solutions to global warming. And many folks thought it would be years before computers could possess anything approaching true consciousness, so we could figure out the safety part as we go. Still others were doubtful that truly sentient machines were even on the horizon; they saw ChatGPT and its many relatives as ultrasophisticated electronic parrots. Worrying that such an entity might decide it wants to kill people is a bit like wondering whether your iPhone would prefer to holiday in Crete or Majorca next summer.

OpenAI was trying to balance safety and development — a balance that became harder to maintain under the pressures of commercialization. It was founded as a nonprofit by people who professed sincere concern about keeping things safe and slow. But it was also full of AI nerds who wanted to, you know, make cool AIs.

Eventually, it became clear that building products such as ChatGPT would take more resources than a nonprofit could generate, so OpenAI set up a for-profit arm — but with a corporate structure that left the nonprofit board able to cry “stop” if things started moving too fast (or, if you prefer, gave “a handful of people with no financial stake in the company the power to upend the project on a whim”).

On Friday, those people, in a fit of whimsy, kicked Brockman off the board and fired Altman. The move was reportedly driven by Ilya Sutskever, OpenAI’s chief scientist, who, along with other members of the board, had clashed repeatedly with Altman over the speed of generative AI development and the sufficiency of safety precautions.

All this was astonishing enough, but what happened next was … well, no AI fiction engine could generate such a scenario. After a flurry of bad publicity and general chaos, Microsoft announced that it was hiring Altman and Brockman to lead a new advanced AI research team there.

Meanwhile, most of OpenAI’s employees signed a letter threatening to quit and join Altman at Microsoft unless the board resigned and reappointed Altman as CEO. Chief among the signatories was Sutskever, who tweeted Monday morning, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”

This peculiar drama seems somehow very Silicon Valley, but it also offers a valuable general lesson about corporate structure and corporate culture. The nonprofit’s altruistic mission was in tension with the profit-making, AI-generating part — and when push came to shove, the profit-making part won. Oh, sure, that part might not survive as OpenAI; it might migrate to Microsoft. But a software company has little in the way of tangible assets; its people are its capital. And that capital looks willing to follow Altman to where the money is.

More broadly still, it perfectly encapsulates the AI alignment problem, which in the end is also a human alignment problem. And that’s why we are probably not going to “solve” it so much as hope we don’t have to. Humanity can’t help itself; we have kept monkeying with technology, no matter the dangers, since some enterprising hominid struck the first stone ax.

When scientists started messing with the atom, there were real worries that nuclear weapons might set Earth’s atmosphere on fire. By the time an actual bomb was exploded, scientists were pretty sure that wouldn’t happen. But if the worries had persisted, would anyone have behaved differently — knowing that it might mean someone else would win the race for a superweapon? Better to go forward and ensure that at least the right people were in charge.

Now consider Sutskever: Did he change his mind over the weekend about his disputes with Altman? More likely, he simply realized that, whatever his reservations, he had no power to stop the bobsled — so he might as well join his friends on board. And like it or not, we’re all going with them.
