Encode, an artificial intelligence (AI) advocacy group, filed a brief in support of Elon Musk’s recent lawsuit against OpenAI, arguing that allowing the company’s conversion to a for-profit entity could endanger the public interest.
In the brief, Encode is described as “a youth-led organization advocating for safe and responsible artificial intelligence (AI)” with “a network of over 1,000 volunteers across 40 countries.”
“If the world truly is at the cusp of a new age of artificial general intelligence (AGI), then the public has a profound interest in having that technology controlled by a public charity legally bound to prioritize safety and the public benefit rather than an organization focused on generating financial returns for a few privileged investors,” the brief said.
OpenAI CEO Sam Altman has admitted that AI poses severe risks to humanity, Encode said. Altman signed a statement along with numerous luminaries, including Nobel Prize winners, saying that “mitigating the risk of extinction from AI should be a global priority.”
People worldwide are already facing challenges from AI technologies including disinformation, algorithmic bias, labor displacement, and democratic erosion, which makes keeping AI safe a “pressing, immediate concern,” the advocacy group said.
OpenAI currently runs a capped-profit subsidiary that is fully controlled by the OpenAI nonprofit parent company, which is expected to ensure the safe use of AGI.
In Delaware, where OpenAI is incorporated, the boards of nonprofit charitable corporations owe fiduciary duties toward their beneficiaries, which in this case would be “humanity,” Encode said.
By transferring operations to a Delaware public benefit corporation (PBC), OpenAI’s priorities would shift from ensuring the “safety of advanced AI” to shareholder interests. Allowing such a transition is harmful to the public interest, the brief said.
OpenAI Plans, Musk Conflict
OpenAI began as a research lab in 2015 with the goal of advancing AI in a way “most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” the company said in a blog post. Of the $137 million in donations it collected initially, less than a third came from Musk. After OpenAI’s management realized the project would require “far more capital,” they created the current for-profit structure controlled by the nonprofit in a bid to raise funds from investors.
According to OpenAI, the new PBC “will run and control OpenAI’s operations and business, while the non-profit will hire a leadership team and staff to pursue charitable initiatives in sectors such as health care, education, and science.”
“In 2017, Elon not only wanted, but actually created, a for-profit as OpenAI’s proposed new structure. When he didn’t get majority equity and full control, he walked away and told us we would fail,” OpenAI said.
“Now that OpenAI is the leading AI research lab and Elon runs a competing AI company, he’s asking the court to stop us from effectively pursuing our mission.”
The Epoch Times contacted Musk for a comment but received no reply by publication time.
Besides the ethical concerns, Musk alleged in the complaint that OpenAI and its investor Microsoft control roughly 70 percent of the generative AI market and engage in “anticompetitive conduct.”
The complaint alleged that OpenAI and Microsoft bar their investors from funding OpenAI’s competitors, specifically Musk’s artificial intelligence company, xAI.
“OpenAI’s path from a nonprofit to for-profit behemoth is replete with per se anticompetitive practices, flagrant breaches of its charitable mission, and rampant self-dealing,” the complaint said.
xAI was formed in July 2023. The company introduced the Grok-1 AI model on the X social media platform in November 2023 and has since released updates to the tool.