Can we trust OpenAI?

Artificial Intelligence is the most important technology in history. We ought to hold the companies developing it to an exceptionally high standard.


Employees who care about safety, or who stand up to Altman, tend not to last at OpenAI.

Jan Leike

Co-lead of Superalignment

Departed early 2024

“Over the past years, safety culture and processes have taken a backseat to shiny products.”

Jeffrey Wu

Member of Technical Staff

Departed mid 2024

“We can say goodbye to the original version of OpenAI that wanted to be unconstrained by financial obligations… It seems to me the original nonprofit has been disempowered and had its mission reinterpreted to be fully aligned with profit.”

Miles Brundage

Head of Policy Research

Departed late 2024

“In short, neither OpenAI nor any other frontier lab is ready [for the arrival of powerful AI], and the world is also not ready.”

Carroll Wainwright

Member of Technical Staff

Departed early 2024

"AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this."

Todor Markov

Member of Technical Staff, Safety Systems and Preparedness

Departed early 2024

Reported by the NYT to have resigned in protest of OpenAI’s restrictive and threatening NDAs, accusing leadership of misleading staff and alleging that the company could not be trusted to build advanced AI.

Gretchen Krueger

Policy Research

Departed early 2024

"We need to do more to improve foundational things like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.”

Ilya Sutskever

Co-founder, co-lead of Superalignment team

Departed early 2024

Departed to form a new company focused on safe superintelligence, after reportedly approaching the nonprofit board with concerns about Sam Altman and OpenAI’s lack of focus on safety.

Geoffrey Irving

Member of Technical Staff

Departed late 2019

While noting that Altman was nice to him personally, Irving alleges that the OpenAI CEO “lied to me on various occasions” and “was deceptive, manipulative, and worse to others, including my close friends.”

Jan Hendrik Kirchner

Member of Technical Staff

Departed mid 2024

Left in 2024 to join a competitor with a stronger reputation for taking safety concerns seriously.

Amanda Askell

Research Scientist

Departed early 2021

Part of a mass departure widely rumored to have stemmed from disagreements about safety at OpenAI.

Jack Clark

Policy Director

Departed late 2020

Part of a mass departure widely rumored to have stemmed from disagreements about safety at OpenAI.

Paul Christiano

Member of Technical Staff

Departed early 2021

Part of a mass departure widely rumored to have stemmed from disagreements about safety at OpenAI.

Benjamin Mann

Member of Technical Staff

Departed late 2020

Part of a mass departure widely rumored to have stemmed from disagreements about safety at OpenAI.

Pavel Izmailov

Member of Superalignment Team

Departed early 2024

Focused on alignment research throughout his tenure at OpenAI.

Lilian Weng

VP of Research
Head of Safety Systems

Departed late 2024

The most recent safety-related departure, Lilian Weng had just been promoted to VP and worked on safety issues throughout her tenure at OpenAI.

William Saunders

Member of Technical Staff

Departed early 2024

“OpenAI will say that they are improving. I and other employees who resigned doubt they will be ready in time.”

Helen Toner

Former member of the Board

Departed late 2023

“My experience on the board of OpenAI taught me how fragile internal guardrails are when money is on the line, and why it's imperative that policymakers step in.”

Leopold Aschenbrenner

Member of the Superalignment Team

Departed early 2024

“I’m most worried about things just being totally crazy around superintelligence, including things like novel WMDs, destructive wars, and unknown unknowns.”

Cullen O'Keefe

Research Lead

Departed early 2024

“While the risk of nuclear catastrophe still haunts us, we are all much safer due to the steps the U.S. took last century to manage this risk… AI may bring risks of a similar magnitude this century.”

Daniel Ziegler

Member of Technical Staff

Departed early 2021

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”

Mira Murati

Chief Technical Officer

Departed late 2024

The New York Times reported that the year before departing, Murati wrote a private memo to CEO Sam Altman “raising questions about his management” and shared her concerns with the board.

Suchir Balaji

Member of Technical Staff

Departed late 2024

“If you believe what I believe, you have to just leave the company… it is time for Congress to step in.”

Yuri Burda

Member of Technical Staff

Departed late 2024

Left in 2024 to join a competitor with a stronger reputation for taking safety concerns seriously.

Mati Roy

Member of Technical Staff

Departed mid 2024

Left to join a competitor, as part of a self-professed long-term focus to “make the transition to AGI go well.”

Chris Olah

Member of Technical Staff

Departed late 2020

Part of a mass departure widely rumored to have stemmed from disagreements about safety at OpenAI.

Sam McCandlish

Research Lead

Departed late 2020

Part of a mass departure widely rumored to have stemmed from disagreements about safety at OpenAI.

Tom Henighan

Member of Technical Staff

Departed early 2021

Part of a mass departure widely rumored to have stemmed from disagreements about safety at OpenAI.

Chris Clark

Head of Nonprofit and Strategic Initiatives

Departed early 2024

It’s not clear if Clark’s departure had anything to do with OpenAI’s ongoing attempt to abandon its nonprofit status, though given his role, one might suspect this played a part.

Sherry Lachman

Head of Social Impact

Departed mid 2024

Lachman contributed to safety efforts, including Superalignment Fast Grants. The reasons for Lachman’s departure are unclear.

Daniel Kokotajlo

Member of Governance Team

Departed early 2024

“I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen … I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”

Elon Musk

Co-founder

Departed 2018

"OpenAI's path from a non-profit to for-profit behemoth is replete with per se anticompetitive practices, flagrant breaches of its charitable mission, and rampant self-dealing.”

Rosie Campbell

Trust and Safety + Policy

Departed late 2024

“I’ve been unsettled by some of the shifts over the last ~year, and the loss of so many people who shaped our culture.”

Richard Ngo

Research Scientist

Departed late 2024

“While the ‘making AGI’ part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the ‘go well’ part of the mission.”

Tasha McCauley

Former member of the Board

Departed late 2023

“We also feel that developments since [Altman] returned to the company — including his reinstatement to the board and the departure of senior safety-focused talent — bode ill for the OpenAI experiment in self-governance.”

Jacob Hilton

Researcher

Departed early 2023

“Given that OpenAI has previously used access to liquidity as an intimidation tactic, many former employees will still feel scared to speak out.”

John Schulman

Head of Alignment Science

Departed late 2024

Announced he was leaving to focus on AI safety work, but also claimed that company leaders had been supportive of safety efforts.

Steven Bills

Member of Technical Staff

Departed mid 2024

Left in 2024 to join a competitor with a stronger reputation for taking safety concerns seriously.

Dario Amodei

VP of Research

Departed late 2020

Part of a mass departure widely rumored to have stemmed from disagreements about safety at OpenAI.

Daniela Amodei

VP of Safety and Policy

Departed late 2020

Part of a mass departure widely rumored to have stemmed from disagreements about safety at OpenAI.

Tom Brown

Research Engineer

Departed late 2020

Part of a mass departure widely rumored to have stemmed from disagreements about safety at OpenAI.

Nicholas Joseph

Member of Technical Staff

Departed early 2021

Part of a mass departure widely rumored to have stemmed from disagreements about safety at OpenAI.

Collin Burns

Member of Technical Staff

Departed mid 2021

Burns contributed to safety efforts, including Superalignment Fast Grants. The reasons for Burns’s departure are unclear.

Jonathan Uesato

Member of Technical Staff

Departed mid 2024

Uesato contributed to safety efforts while working at OpenAI. The reasons for Uesato’s departure are unclear.

What happened to the nonprofit mission?

2015

OpenAI is founded as a 501(c)(3) nonprofit.

Its articles of incorporation define its mission as “advancing digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” Explaining why OpenAI chose to be a nonprofit, Sam Altman would later say, “we don’t want to ever be making decisions that benefit shareholders. The only people we want to be accountable to is humanity as a whole.”

2019

OpenAI spins out a for-profit subsidiary to raise money

OpenAI needs additional capital to keep building large AI models, so it creates a for-profit subsidiary. How is this compatible with its nonprofit history, and with the donations it received on that basis? The justification given is that majority ownership, and therefore primary oversight, remains in the hands of the non-profit entity, an organization legally required to prioritize the interests of society over private financial gain.

2023

The non-profit board tries (and fails) to gain control of the for-profit subsidiary

In late 2023, the non-profit board votes to fire OpenAI CEO Sam Altman for dishonesty. However, Altman and his allies, as well as Microsoft (a major investor in the for-profit) and employees whose equity relies on future fundraising for the for-profit, put up a fight. After a fierce contest between the nonprofit board and the investors and employees, Altman returns, and two of the board members who voted to fire him resign.

2024

OpenAI begins efforts to fully abandon its non-profit status

In June, rumors circulate that Sam Altman is considering converting OpenAI to a for-profit entity and that he may be granted an equity stake in the company. By October, OpenAI officially begins talks with regulators to convert to a for-profit entity.


The conversion would keep the non-profit entity in existence but, crucially, would give control over OpenAI’s operations and intellectual property to a for-profit entity. In exchange, the non-profit may be compensated with a non-controlling equity stake (reported to be at least 25%) in the new for-profit company. The process to determine the value of the nonprofit’s assets, namely control over the company and its intellectual property, is being negotiated behind closed doors by OpenAI and Microsoft.


Whatever the nonprofit’s final compensation, even if it exceeds 50%, it will almost certainly consist of non-voting shares. This means the nonprofit will no longer be able to conduct oversight of OpenAI and carry out its original mission to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

OpenAI has a history of broken promises.

[Expandable cards pair the date each promise was made with the date it was broken: July 2023 → May 2024; December 2023 → May 2024; December 2023 → December 2024; June 2023 → November 2023; July 2019 → May 2024; January 2024 → December 2024; July 2019 → December 2024.]

An Open Letter:

Originally published May 16, 2024

Since pivoting from their nonprofit origins to begin rapidly scaling AI models in 2020, OpenAI and its senior executives have shown repeated, critical failures of oversight and accountability.

While the “move fast and break things” attitude may be celebrated at some tech companies, it is not remotely appropriate for a technology as consequential and dangerous as this one. We must hold the people and corporations building AI to an unusually high standard. “Good enough” won’t cut it for something so important.

This website contains evidence supporting our conclusion. The signatories of this letter demand that OpenAI be held accountable for its past and future actions to ensure that it does not cause further avoidable harm.

Relevant accountability mechanisms may include, but are not limited to:

  • Appointing a nonprofit board predominantly composed of leaders in AI safety and civil society, as opposed to its current overwhelming bias toward industry leaders.

  • Ensuring the nonprofit is fully insulated from the financial incentives of the for-profit subsidiary.

  • Committing to provide early model access to third party auditors including nonprofits, regulators, and academics.

  • Expanding internal teams focused on safety and ethics, and pre-assigning them meaningful “veto power” in future development decisions and product releases.

  • Publishing a more detailed, and more binding, preparedness framework.

  • Publicly announcing the release of all former employees from non-disparagement obligations. (Edit: Fulfilled May 23, 2024)

  • Accepting increased government scrutiny to ensure that OpenAI follows all applicable laws and regulations, and refraining from lobbying to water down such regulation.

  • Accepting clear legal liability for the current and future harms to people and society caused by OpenAI’s products.

Our future is at stake. OpenAI and its leaders must act cautiously and with real accountability as they enter uncharted territory developing advanced AI. Local and federal government agencies, lawmakers, the media, and the global public must work proactively to hold OpenAI accountable.


Signatories:

Lucie Philippon
French Center for AI Safety
Charbel-Raphael Segerie
Executive Director, Centre pour la Sécurité de l'IA (CeSIA)
Geoffrey F. Miller
University of New Mexico
Michelle Nie
CeSIA
Amaury Lorin
EffiSciences
Maxime Fournes
Pause AI
Melissa de Britto Pereira
Student, USP
Roman Yampolskiy
Author of AI: Unexplainable, Unpredictable, Uncontrollable
Thomas Burden
Alberto Reis
Digital Artist
Arturo Villacañas
University of Cambridge
Alvin Ånestrand
Co-founder, AI Safety Gothenburg
Kieran Scott
Doctoral candidate, ML author
Harry Lee-Jones
Process Engineer, BHP
Janusz Kaiser
Translator
Oisin Tummon Swensen
Thomas Emond
Construction Manager
Carlo Cosmatos
Anubhav Awasthy
Granicus Technologies
Noah Topper
Klaus Lönhoff
Digital Solutions Project Manager
Bruce McLennan
Nickster Shovel
Nicole Richards
Diego Dorn
EPFL
Paolo Massimo
Software Engineer
Simon Steshin
DL Researcher
Lewis McGregor
Filmmaker
Michael Huang
University of Melbourne
Søren Elverlin
Founder, AISafety.com
Simon Karlsson
Manuel Roman
CFO at a EU tech company
Quinton Thorp
Machinist
Raphael Royo-Reece
Solicitor
John Slape
IT System Analyst
Jaime Raldua
Software developer working in LLM evaluations
Thomas Moore
Senior Software Engineer
Fredi Bach
Developer
Kai Brügge
Emerson Spartz
Nonlinear
Lawrence Jasud
Retired Educator
Michelle Runyan
Stanford University
Mateusz Bagiński
Ron Karroll
Michaël Trazzi
Host, The Inside View
Aaron Stuckert
Felix De Simone
Organizing Director, PauseAI
Neerav Sahay
Alistair Stewart
Campaign Coordinator, Plant-Based Universities
Matthew Loewen
Student, WWU
D'Arcy Mayo
Copywriter
Carlos Parada
Cambridge MLG
Emily Dardaman
Independent researcher, Ex-BCG
Dawid Wojda
Software and machine learning engineer
Manuel Bimich
Co-founder, Centre pour la sécurité de l'IA
Eric Paice
Market Researcher
Eric Ciccone
DevOps Engineer
Peter S. Park
MIT
Siebe Rozendal
Brenna Nelson
MSP-employed IT professional
Zachary Magliaro
ASU Sustainability student
Del Jacobo
Anthony Bailey
Kylie Turner
Ben Cady
Wellington Financial
Marcus Faulstone
Sean McBride
Rasoul Mohammadi
AI engineer
Alex Kaplan
Pieter Louw
Audit trainee, Moore Pretoria, South Africa
Alex McKenzie
Data Scientist, on sabbatical
William Justin Wilson
Fabian Scholl
Writer
Tara Steele
Writer
Christopher Smith
Game Developer
Mac Burnie Rodolphe
Graphist
Max Winga
UIUC Physics Graduate, AI Safety Researcher
Anne Beccaris
Member of PauseAI
Océane Beccaris
Student, member of PauseAI
Stephen Casper
PhD Student, MIT
Terry Faber
Economist, IBISWorld
Dion Bridger
Wendy A.
Severin Field
PhD Candidate
Alexandra Santos
Startup ecosystem operator
Liron Shapira
PauseAI activist
Piotr Zaborszczyk
Conrad Barski
Medical Doctor, retired
Léo Dana
Student
Florent Berthet
EffiSciences
Johan Hanson
Sjoerd Meijer
Student
Paolo Massimo Veneziani
Web Developer
Tess Hegarty
PhD student, Stanford University
Joseph Miller
PauseAI
Ori Nagel
Nathan Metzger
Co-founder, AI-Plans.com
Patricio Vercesi
Coleman Snell
Global Risk Researcher and President of Cornell Effective Altruism
Joshua Eby
Stockholm University
Jeffrey C Choate
Holly Elmore
Executive Director, PauseAI US
Tyler Johnston
Executive Director, The Midas Project

Add your name to the letter

Submissions will be manually approved and listed on this page soon.