The evolution of OpenAI’s mission statement.
OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” according to its 2023 mission statement. But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”
While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed “safely” from its mission statement, among other changes. That change in wording coincided with its transformation from a nonprofit organization into a business increasingly focused on profits.
OpenAI currently faces several lawsuits related to its products’ safety, making this change newsworthy. Many of the plaintiffs suing the AI company allege psychological manipulation, wrongful death and assisted suicide, while others have filed negligence claims.
As a scholar of nonprofit accountability and the governance of social enterprises, I see the deletion of the word “safely” from its mission statement as a significant shift that has largely gone unreported – outside highly specialized outlets.
And I believe OpenAI’s makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm.
a test for whether AI serves society or shareholders
Gee, I wonder which one's gonna win.
Don’t be evil
Dodge v. Ford Motor Company, 1919.
This case established and entrenched in US law that the primary purpose of a corporation is to operate in the interests of its shareholders.
Therefore OpenAI, based in California, would be under threat of lawsuit if it didn’t do that.
This goose is already cooked.
… its new structure is a test for whether AI serves society or shareholders
Gee, I can’t wait to see the results of this test!
Datacenters popping up everywhere, sucking up energy and water like entire cities, the economy crashing…
We are literally watching dystopia being built around us. It’s an interesting experience to watch.
I’m happy I got to experience the ’80s. That was peak humanity. Lots of cool movies and music, and before big tech ruined humanity. People could date and create a family, buy a house.
What is there to test? The answer to this is so clear that just asking that question seems pretty dumb to me.
OpenAI is the same as any other publicly traded corporation: it serves society, but that service primarily focuses on the shareholders. We’re looking at a vehicle designed to take money and give it to shareholders (private ones, in this case, or otherwise).
The focus on growing data centres at public expense, AI slop, the circular nature of some of the investments going into AI, and the productivity (or lack thereof) are all part of it. We are not looking at any exceptionalism. AI isn’t unique in its capability for catastrophic harm. What we eat and drink can easily be on that list.
These American AI companies just want the money train to continue unabated, and any regulation to go away.
The marketing department removed some meaningless word from the marketing bla-bla-bla brochure nobody was even supposed to read.
WE ALL ARE GOING TO DIE!!!
AI serves society? You’re boozing brake fluid.
lol, lmao even
Either they think they’re evil 1337 h4xx0r overlords who are gonna enslave the planet, or they genuinely think their statistical apparatuses do anything worthwhile beyond running statistics on what is by now 70% the output of other statistical machines.
Just wait until the AI bros hear about Zip compression being more efficient at classifying than “AIs”.