OpenAI released ChatGPT a year ago, instantly making it one of the most prominent companies working on artificial intelligence (AI). On Friday, Sam Altman, co-founder and CEO of OpenAI and a figurehead of the AI revolution, was unexpectedly fired by the company’s board of directors. Days later, Altman signed with Microsoft.
“Altman’s departure came after a board review concluded that the CEO had not been consistently candid with the board, greatly hindering the board in exercising its responsibilities,” the company wrote in a statement released on Friday. Shortly afterwards, Emmett Shear, the former head of the video streaming service Twitch, was appointed interim CEO.
Even more surprisingly, the next day Satya Nadella, the CEO of Microsoft, one of OpenAI’s largest investors, announced on X that Altman and a significant number of departing OpenAI staff would be joining the software giant to lead a new advanced artificial intelligence research group.
Since then, various theories have emerged about why Altman was actually fired: although he co-founded the company, its operations are overseen by the board of directors, which has the power to remove even a founder-CEO. Portfolio has collected these theories.
One theory is that Altman was running a very expensive internal project that he allegedly failed to inform the board of directors about. It has also been suggested that the board was dissatisfied with the business model associated with the CEO: offering products extremely cheaply and making them easily accessible to the general public.
According to other assumptions, Altman pushed OpenAI too far in a commercial direction, aggressively introducing corporate and consumer applications and running the company in a profit-oriented way. It has also been speculated that a major cybersecurity incident, such as data theft (or its concealment), could be behind the dismissal. A tech portal wrote, for example, that Microsoft had allegedly suspended internal use of ChatGPT a few days earlier, and that OpenAI subsequently stopped accepting new registrations.
Others believe the firing may be connected to a bold investment move involving Microsoft that Altman prepared in secret but which eventually became known to the board.
What happened at OpenAI points to a broader divide in Silicon Valley
The Economist writes about the case. According to the paper, on one side are the “doomers”, who believe that, if left unchecked, AI poses an existential threat to humanity, and who therefore call for stricter regulation. Opposite them are the “boomers”, who do not share the fear of an AI apocalypse and emphasize that artificial intelligence can accelerate human development.
Whichever camp gains more influence can either encourage or block tighter regulation, which in turn could determine who benefits most from AI in the future.
OpenAI’s corporate structure is somewhere between the two. The company, which was founded as a non-profit organization in 2015, created a for-profit subsidiary three years later in order to finance its expensive developments.
Altman sympathizes with both groups in principle, publicly calling for protective limits to make AI safe while pushing OpenAI to develop more powerful models and launch new tools, such as a platform where users can build their own chatbots. OpenAI’s biggest investor, Microsoft, which has pumped more than $10 billion into the company for a 49 percent stake without a seat on the parent company’s board, is reportedly unhappy because it learned of Altman’s firing only minutes before the announcement. Perhaps that is why it offered Altman and his colleagues a new job.
OpenAI gained about 100 million users within two months of ChatGPT’s launch, and it is closely followed by Anthropic, which was founded by “dissidents” from OpenAI and is now valued at $25 billion. The foundational paper on large language models, the software trained on massive amounts of data that underpins chatbots such as ChatGPT, was originally written by Google engineers; Google produces ever bigger and smarter models, as well as its own chatbot, Bard. Microsoft’s advantage, meanwhile, rests largely on its stake in OpenAI, while Amazon is prepared to invest up to $4 billion in Anthropic.
In May, testifying before the US Congress, Altman expressed concern that the industry could cause significant damage to the world and called on policymakers to create specific regulations for artificial intelligence. That same month, a group of 350 AI researchers and technology executives signed a one-sentence statement warning of the risk of an AI-induced catastrophe, ranking it alongside the threats of nuclear war and pandemics.
Despite facing daunting prospects, no company has paused its own work to create more powerful AI models.
Politicians have tried to show that they take the risks seriously. In July, US President Joe Biden pressured seven leading tech companies, including Microsoft, OpenAI, Meta and Google, to make “voluntary commitments” to have their AI products reviewed by external experts before release to the public. At the beginning of November, participants in a two-day high-level international conference convened by the British government signed a declaration on the safe use of the technology and the exploration of its risks.
Days earlier, the Biden administration had introduced much more serious measures, obliging big tech companies to notify the government of their AI-related developments and share their test results.
So far, regulators have apparently been receptive to the pessimists’ arguments.
How far these rules can actually be enforced, however, is rather doubtful.
Not all big tech companies are taking clear sides. Meta’s decision to make its AI models public is a notably bold move: the company is betting that the innovation spurred by open-source tools will eventually help it create new forms of content. Apple, the world’s largest technology company, is by contrast conspicuously silent about its AI developments. At a presentation in September, it showed off a number of AI-driven features without once using the term artificial intelligence.
The future will decide which company has the winning strategy. In any case, Microsoft shares rose nearly 2 percent on Monday morning on news of Altman’s “signing”.
Featured Image: Sam Altman (Photo: Win McNamee/Getty Images)