Sam Altman: What on earth is happening at OpenAI?
Picture a boardroom battle at a multi-billion-dollar company whose futuristic tech might either save or destroy the world.
Its chief executive, who has the ear of world leaders, toppled as senior colleagues turn on him – only for the rest of the company to demand that those colleagues should be the ones to go.
No, that's not my pitch for a Netflix drama – that's basically been the past few days at OpenAI.
Tech journalists, enthusiasts and investors have been binge-watching it all unfold – though opinions differ as to whether it was a high-stakes thriller or a farce.
How it started…
The battle at the top of OpenAI, the creator of the AI chatbot ChatGPT, began very suddenly on Friday, when the board of directors announced that it was firing the co-founder and chief executive, Sam Altman.
In a blog post the board accused Mr Altman of not being "consistently candid in his communications", and said as a result they had "lost confidence" in his leadership.
There are only six people on that board – and two of them were Sam Altman and his co-founder Greg Brockman, who quit after Mr Altman was dismissed.
So four people who knew Mr Altman and the business well reached a breaking point of such seriousness that they sprang into action immediately, blindsiding the entire tech community including, reportedly, their own investors.
Elon Musk – also an original co-founder at OpenAI – wrote on X, formerly Twitter, that he was "very worried".
Ilya Sutskever, the firm's chief scientist and a member of that board, "would not take such drastic action unless he felt it was absolutely necessary", Mr Musk wrote.
Mr Sutskever has now expressed his own regret – and is one of the many signatories of a dynamite letter to the board of directors, calling for Mr Altman and Mr Brockman to return and suggesting they may leave OpenAI if the men are not reinstated.
What caused this row?
So what was it that sparked this rapidly rolling snowball? We actually still don't know – but let's consider some options.
There are reports that Mr Altman was considering some hardware projects, including the funding and development of an AI chip, which would have been quite a different direction in which to take OpenAI. Had he made some commitments that the board was not aware of?
Or could it boil down to a very old, and very human tension: money?
In an internal memo, whose contents have been widely reported, the board made it clear that it was not accusing Mr Altman of any "financial malfeasance".
But we know that OpenAI was founded as a non-profit organisation – that is, a company which does not aim to make money for its owners. It keeps enough of what it brings in to cover its own running costs, and any extra gets reinvested in the business. Most charities are non-profits.
In 2019, a new arm of the firm was formed – and this part of it was profit-orientated. The firm set out how the two would co-exist: the for-profit side would be governed by the non-profit side, and there would be a cap on the returns investors could earn.
Not everybody was happy about it – it was said to have been a key reason behind Elon Musk's decision to walk away from the firm.
OpenAI, however, now finds itself in the happy circumstance of being worth an awful lot of money. A staff stock sale, which has not gone ahead, reportedly valued the firm at $86bn (£68bn).
Could it be that there were ambitions to make the for-profit side of the business more powerful?
How will this end?
OpenAI is in pursuit of AGI – artificial general intelligence. It doesn't exist yet, and it is a cause of both fear and awe. It's basically the idea that there will one day be AI tools able to do a wide range of tasks as well as, or better than, humans (that's us) currently can.
It's got the potential to shift the entire way in which we do things. Jobs, money, education – all of that gets thrown up in the air when machines can do stuff instead. It's an incredibly powerful bit of kit – or at least, it will be.
Is OpenAI closer to that than we realise, and does Mr Altman know this? In a very recent speech, he said what was coming next year would make the current ChatGPT bot look like "a quaint relative".
I think it's unlikely. Emmett Shear, the new interim chief executive of OpenAI, posted on X that "the board did *not* remove Sam over any specific disagreement on safety".
He says there will be an investigation into what happened.
But Microsoft, OpenAI's biggest investor, has decided not to take a chance on Mr Altman taking this tech elsewhere. He will be joining the Seattle-based tech giant, it has been announced, to lead a yet-to-be-created AI research team. His co-founder Greg Brockman goes with him, and judging from the number of staff members posting on X today, it looks like he'll be taking some of OpenAI's top talent too.
Many OpenAI staff members are sharing the same post on X. It reads: "OpenAI is nothing without its people".
Is that a warning to Mr Shear that he might have some hiring to do? A BBC colleague outside OpenAI's headquarters has just told me that, at 09:30 in San Francisco, there were no signs of people arriving for work.
Or is it just a reminder that, for all this saga has been about a form of technology that is reshaping the world, it is, at its heart, a very human drama?