OpenAI’s Surreal Surrender of, and then to, Sam Altman

OpenAI's Surreal Week with Sam Altman

OpenAI’s board has one job: governing the non-profit’s mission to safely shepherd the commercialisation of Artificial General Intelligence. Firing a Founder and CEO is no small exercise of that burden. Now Altman returns in a counter-coup. Given the transformative nature of this technology, and its potential impact on the future of humanity, the world deserves an explanation.

At the beginning of this month, a gathering of the world’s artificial intelligence leadership, corporate principals and government officials attended the first AI Safety Summit at Bletchley Park in the UK. It was, for all intents and purposes, a programme sponsored by the UK Government, but with far-reaching, global implications. Those implications have now become manifest, with the world’s leading proponent of “safe” AI development, OpenAI, firing the one CEO arguably responsible for its own existence.

Eight years ago, prior to the official foundation of OpenAI, Elon Musk voiced concern that artificial intelligence posed an existential threat to humanity. He wasn’t alone among those who understand what artificial intelligence actually is, and what it could achieve. Stephen Hawking, a pre-eminent theoretical physicist and philosopher, told the BBC in a 2014 interview that, “The development of full artificial intelligence could spell the end of the human race.”

Mr. Musk repeated this long-held concern at the Bletchley Park gathering just a few weeks ago. The rationale for the UK’s AI Safety Summit is a growing awareness that advances in AI, if left unchecked, are arriving at an increasingly rapid rate and could result in real political, social and economic damage beyond what we can imagine today.

One would hope that no one wants that.

But preventing it might require more than wishful thinking.

UK Prime Minister Sunak (right) with Sam Altman at the UK AI Safety Summit on Nov. 1, 2023 (image: WPA/Getty).

While Elon Musk made his point at the Summit, one man stood prominently at the side of the UK’s Prime Minister, Rishi Sunak.

That man was Sam Altman, CEO of OpenAI, developer of the now ubiquitous AI consumer product known as ChatGPT.

The Founding of OpenAI

Mr. Altman, it might be remembered, is also a Founder of OpenAI, as is Elon Musk. Among the Silicon Valley heavyweights that Altman helped assemble in 2015, the title of Founder means something more than mere initiator. The group formed around the idea of circumventing the dangerous rush toward AI by the tech giants. The idea was simple: establish a non-profit, open-source group employing top researchers working as much for principle as for a paycheque, to ensure AI is available for the benefit of all.

Artificial intelligence could be a transformative technology that improves human prosperity if developed and guided by that purpose. It would take money and stature to achieve this. Altman was a perfect blend of both. As President of Y Combinator, the leading Silicon Valley tech incubator backed by some of the world’s largest private and State-owned capital pools, Altman had been a significant contributor to the rise of technology as the largest generator of wealth and economic change in human history.

Together with Elon Musk, Altman assembled a core team around Greg Brockman and Ilya Sutskever. Brockman brought corporate experience as the former Chief Technology Officer of the digital payments startup Stripe. Sutskever was, at the time, instrumental in Google’s efforts to build artificially intelligent neural networks and learning systems, known as “Google Brain”.

The group’s objective was simple, if not overwhelming: to build an organisation controlling a private AI lab that operated outside the all-consuming commercial interests of tech giants like Google, Facebook and Microsoft.

“The best thing that I could imagine doing,” Brockman is quoted as saying, “was moving humanity closer to building real AI in a safe way.” The vast majority of the most experienced AI researchers of the day agreed. But enticing them away from some of the highest paid jobs in tech at some of the best financed companies in the world would take money. A lot of money.

Wired published an excellent article in 2016 offering a deep background on the formation of OpenAI.

Over the last two quarters of 2015, Altman went to work on his network of ultra-high-net-worth individuals, most of whom had made their money in tech and knew the stakes. Along with Elon Musk, Altman assembled capital partners, including famed venture capitalist Peter Thiel and LinkedIn’s Reid Hoffman, who together pledged US$1 billion to the new venture.

On December 8, 2015, a Delaware company, OpenAI, Inc., was formed.


By December 15th of that year, the cat was out of the bag. As OpenAI began recruiting in earnest, every major tech company with an AI research department began offering as much as three times salary to acquire and retain employees.

Microsoft’s VP of Research at the time, Peter Lee, is quoted as saying that the cost of a top AI researcher had eclipsed the cost of a top quarterback prospect in the NFL. Microsoft, like Google and Facebook, was determined to keep its AI research talent.

But OpenAI had something more than money to offer. OpenAI had a mission: to ensure that artificial general intelligence benefits all of humanity. That mission helped OpenAI assemble a team of AI researchers and developers who have now delivered generative AI to the world.

A Non-Profit Business

OpenAI, Inc. is registered with the United States Internal Revenue Service as a 501(c)(3) tax-exempt organisation under the U.S. Internal Revenue Code. It is not a typical company. Rather, OpenAI exists for a specific purpose that the IRS agrees is worthy of exemption from tax. No small thing.

Corporate purposes that rise to this level of altruism include scientific research and the testing of consumer products for public safety. These are two of the exemptions underpinning OpenAI’s non-profit status in the eyes of an aggressive government agency tasked with collecting taxes, and one highly familiar with the otherwise taxable revenue streams of both OpenAI’s principal area of interest and its backers.

As a non-profit organisation, OpenAI, Inc. must retain any profits it receives to advance its tax-exempt purpose. There are no shareholders to receive dividends.

The board members of OpenAI, Inc. are specifically obliged to protect the company’s not-for-profit purpose and thereby defend its exemption from taxation. Those governance obligations carry significant legal and financial risk for directors who fail to perform their duties in an objectively prudent manner. The more the board knows, the greater its burden to act on that knowledge to protect the company’s mission.

OpenAI’s mission, as stated on its own website, is “to ensure that artificial general intelligence benefits all of humanity.”

To define its mission further, OpenAI describes artificial general intelligence as “highly autonomous systems that outperform humans at most economically valuable work.”


Sam Altman’s job as Chief Executive Officer was to demonstrate progress in OpenAI’s mission. Within seven years of formation, OpenAI released ChatGPT as a free-to-use example of generative AI. One year later, ChatGPT has spawned its own industry and sparked widespread increases in productivity as well as deep anxiety. That anxiety is practical in nature, as demonstrated by the industrial strike actions that have crippled Hollywood since ChatGPT’s release. But it is also more technical.

The acronym “GPT” in ChatGPT stands for Generative Pre-trained Transformer, the family of large language models behind ChatGPT’s artificial intelligence, which now serves hundreds of millions of users. Each interaction between ChatGPT and a human user can contribute to the further training and development of the artificial intelligence systems OpenAI is building.
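What counts as an “interaction” is concrete: a prompt goes in, a model-generated reply comes back. Here is a minimal sketch of a single exchange using OpenAI’s Python SDK (assuming the v1.x openai client library and an API key in the environment; the model name and prompt are purely illustrative):

```python
# Minimal sketch of one user interaction with a GPT model.
# Assumes: pip install openai (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What does the acronym GPT stand for?"},
    ],
)

print(response.choices[0].message.content)
```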

The ultimate objective is what is known as artificial general intelligence, or AGI. AGI is the form of AI that both Elon Musk and Stephen Hawking have warned us all about.

So successfully has Altman led OpenAI in its mission since inception that Microsoft has become one of its largest investors, with an investment thought to be nearly US$10 billion.

Under Altman’s leadership as CEO, OpenAI has expanded with a for-profit subsidiary rolling out ChatGPT Plus to paid subscribers and larger corporate, enterprise users. The speed at which ChatGPT has been developed into a commercial product with material revenues has been impressive.

OpenAI recently reported over 100 million weekly active users of ChatGPT. If half of those were consumers paying only the lower-tier US$20 per month in subscription fees, then OpenAI could be generating as much as US$1 billion a month in revenue. The likelihood is that OpenAI generates far more, and is only getting started. For a non-profit organisation, this would go a long way toward paying the costs of pursuing OpenAI’s mission and developing a true AGI.
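The back-of-the-envelope arithmetic behind that estimate is simple. A quick sketch, using this article’s hypothetical take-up rate rather than any disclosed subscriber figures:

```python
# Hypothetical monthly revenue estimate.
# The paying share is this article's illustration, not a disclosed figure.
weekly_active_users = 100_000_000  # announced at OpenAI DevDay, Nov. 2023
paying_share = 0.5                 # hypothetical: half of users subscribe
monthly_fee_usd = 20               # ChatGPT Plus lower-tier monthly price

monthly_revenue_usd = weekly_active_users * paying_share * monthly_fee_usd
print(f"US${monthly_revenue_usd:,.0f} per month")  # US$1,000,000,000 per month
```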

Sam Altman announcing 100 million weekly active users at OpenAI DevDay on Nov. 6, 2023 (image: Getty).

Over eight years, OpenAI has assembled one of the most effective and productive research and development teams on the planet, along with a corporate organisation whose generative AI products have already had a profound impact on productivity globally. That organisation includes some 770 highly experienced employees, over 740 of whom threatened on Monday to quit OpenAI.

The reason: Sam Altman was fired on Friday.

Why Was Sam Altman Fired?

This is the question that requires an answer.

To be clear, as of this article’s publication date, the board of directors of OpenAI, Inc. has not provided a detailed explanation for sacking the Founder and CEO of an organisation self-tasked with protecting humanity from the dangers of AI by managing its development and commercialisation.

Questions and speculation have been rife.

Humanity, it would seem, deserves an answer.

In a statement released on November 17th, 2023, the board reported a determination that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” 

In the same statement, it was announced that Greg Brockman would step down as Chairman of the Board.  Brockman has since resigned as President of OpenAI.

As was widely reported on Tuesday, Sam Altman has been offered a triumphant return to the office of CEO at OpenAI, together with a revised governance structure and a new board of directors to oversee his performance. A remarkable turn of fortune for Altman, with profound implications for everyone.

At the time of his firing, the OpenAI board consisted of co-Founder Ilya Sutskever plus three independent directors: Adam D’Angelo (CEO of Quora), Tasha McCauley (a tech entrepreneur) and Helen Toner.  Toner is Director of Strategy for the Center for Security and Emerging Technology at Georgetown University in Washington, DC.

It should be noted that co-Founder Sutskever was the only executive board member, and he could well have been outvoted by the other three in favour of terminating Altman as CEO. Written minutes of the relevant meeting have not (yet) been publicly disclosed.

It should also be noted that Sutskever himself signed the letter with his colleagues demanding Altman’s return, after posting on X (formerly Twitter) that he regretted participating in the board’s actions and professing a desire to reunite the company.

OpenAI’s most prominent investor was caught completely by surprise by the board’s decision. Microsoft’s CEO, Satya Nadella, is reported to have led difficult weekend negotiations to reverse the decision, to no avail. Microsoft reacted by offering Altman and his entire workforce a home at Microsoft.

But why would a board of well-informed, and well-advised, professionals make a sudden decision with the very real prospect of ending OpenAI as a viable organisation?

An attempted coup by Sutskever has been both suggested and dismissed in an article by Ross Andersen in The Atlantic. This explanation may be the most viable, given the deficit of information from the OpenAI board.

However, Reuters reported earlier today that a mysterious letter from OpenAI insiders to the board last week may have revealed a dangerous breakthrough in OpenAI’s development of AGI under a project code-named “Q Star” or “Q*”. If the reports are true, then summarily terminating OpenAI’s CEO over a transformative breakthrough in AGI that might benefit humanity would seem almost criminally negligent for a non-profit organisation tasked with achieving precisely that.

If anything, a competent board would demand a full report from the CEO and provide guidance on acceptable paths forward.

The contradictions and perplexity of the OpenAI board’s behaviour raise serious questions from all quarters. From a commercial perspective, the decision seems foolish at best. From a political perspective, it seems highly dangerous. And from a social perspective, it seems almost existentially explosive.

The only truth about OpenAI known to humanity at the moment is that something forced a highly knowledgeable board of directors to suddenly fire a highly successful CEO who was instrumental both in formulating the company’s mission and in achieving it. The action could arguably be foreseen as necessary to halt the imminent commercialisation of AGI by OpenAI. What was not foreseeable, perhaps, was the resulting exodus of OpenAI’s highly experienced research teams.

That scenario seems unlikely without some additional, undisclosed factor at work.

That mysterious something is likely also responsible for preventing the complete disintegration of OpenAI by forcing the reinstatement of Altman as CEO.

That something seems to be at the heart of OpenAI’s very reason for being, which by extension, matters to all of us. So, what is that something?

Something that Matters

Sam Altman’s appearance with UK Prime Minister Sunak offered a degree of stature that an experienced capital management professional would run with. The Media C-Suite’s sources confirm what is now widely reported: that Altman spent much of the past three weeks actively pursuing capital commitments from investors in the Middle East for an AI chip manufacturing venture code-named Project Tigris.

AI chips are specialised microprocessors, also known as AI accelerators, and are essential to the hardware infrastructure needed to scale generative and other AI systems to meet expanding usage demands. They would be vital for any widespread application of AGI.

AI chips like GPUs (Graphics Processing Units), TPUs (Tensor Processing Units) and FPGAs (Field-Programmable Gate Arrays) have become the workhorses of AI computation, driving advancements in areas ranging from natural language processing to autonomous vehicles. AI workloads are notorious for their high power demands, and AI chips address a critical sustainability requirement by offering greater energy efficiency for those tasks. This is crucial given the environmental impact and operational costs of running large-scale AI models.

Like all semiconductor microprocessors, AI chips require highly sophisticated design engineering and advanced manufacturing facilities. And, like all semiconductor microprocessors, AI chips serve an increasingly vital role in aerospace, military and other defence systems that are crucial to the national security of every modern nation.

Microprocessor manufacturing is more than an economic sector; it’s a cornerstone of U.S. national security policy. As the global landscape evolves, modern nations, and particularly the U.S., find themselves in a semiconductor showdown, with high stakes for military, economic and technological leadership. The centrality of computer chips in national security cannot be overstated.

The combination of a breakthrough in AGI and dominance in AI-specific microchips is at the heart of the UK AI Safety Summit’s resulting “Bletchley Declaration”, which includes:

“Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.”

There is speculation among some, including within the Media C-Suite, that the board of OpenAI, in addition to new information on a breakthrough in AGI, may have been acting on information from inside the national security apparatus of the United States. A warning, perhaps, of the implications posed by large investors with interests that are arguably not aligned with the benefit of all humankind.

This offers a rationale for the secrecy behind Sam Altman’s termination as CEO of a company that arguably cannot function viably without him. It also aligns with his sudden return as a means of putting the genie back in the bottle. But speculation is insufficient for events that purport to affect the future of humanity itself.

Questions of this magnitude should not be left unanswered. That mysterious something needs to be made known.

The question for OpenAI’s new board of directors is simple: can the genie be kept in the bottle long enough to matter? Had Altman and his loyal crew been forced to scatter to the four winds and find employment elsewhere, that answer would certainly have been “no”.

The question for the wider world is this: What happens when the primary proponent of safe AI is no longer the primary influence over its controlled development for the benefit of us all?

Without transparency from OpenAI, which now acts as a form of international public utility with respect to the future application of AGI, the answer may come in the form of a post on X from Elon Musk saying, “I told you so.”

For certain, no one wants that.
