Artificial Intelligence ‘Safety Brakes’ Needed: Microsoft President

First lady Melania Trump, right, participates in a discussion with Microsoft president Brad Smith at the company's headquarters in Redmond, Wash., on March 4, 2019. AP Photo/Patrick Semansky
Naveen Athrappully
Microsoft President Brad Smith is pushing for stricter regulation of artificial intelligence, including government-led AI safety frameworks and mandatory “safety brakes” built into AI systems.
There is a need to ensure that “machines remain subject to effective oversight by people, and the people who design and operate machines remain accountable to everyone else,” Smith wrote in a May 25 blog post. To this end, laws and regulations are required, he argued. The first priority is to “implement and build upon new government-led AI safety frameworks.”
Smith pointed out that the U.S. National Institute of Standards and Technology had completed and launched a new AI risk management framework four months earlier. The Microsoft president suggested implementing and building on this foundation.
Smith advocated “effective safety brakes” for AI systems that manage critical infrastructure such as electrical grids, city traffic flows, and water systems.
“These fail-safe systems would be part of a comprehensive approach to system safety that would keep effective human oversight, resilience, and robustness top of mind,” Smith wrote.
“In spirit, they would be similar to the braking systems engineers have long built into other technologies, such as elevators, school buses, and high-speed trains, to safely manage not just everyday scenarios, but emergencies as well.”
Laws should require operators to build such safety brakes into high-risk AI systems by design and to test them regularly to ensure they remain effective, Smith said.
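Smith’s post describes the concept by analogy rather than by implementation. Purely as an illustration, the Python sketch below shows one common fail-safe pattern that such a “safety brake” could follow: a guard layer that checks an AI controller’s proposed action against a hard engineering limit, clamps it, and flags it for human review. Every name, threshold, and function here is a hypothetical assumption for illustration, not anything specified by Microsoft or regulators.

```python
# Hypothetical sketch of a "safety brake" guard layer. All names and
# thresholds are illustrative assumptions, not a real specification.
from dataclasses import dataclass


@dataclass
class GridAction:
    """A proposed change to a (hypothetical) electrical-grid setpoint."""
    substation: str
    load_shed_pct: float  # percentage of load the controller wants to shed


# Hard engineering limit; beyond this, the brake engages.
MAX_LOAD_SHED_PCT = 20.0


def ai_controller_propose() -> GridAction:
    # Stand-in for a model's output; a real system would query the model here.
    return GridAction(substation="substation-7", load_shed_pct=35.0)


def safety_brake(action: GridAction) -> GridAction:
    """Clamp unsafe actions to the hard limit and escalate for human review."""
    if action.load_shed_pct > MAX_LOAD_SHED_PCT:
        print(f"[BRAKE] {action.substation}: requested "
              f"{action.load_shed_pct:.1f}% exceeds {MAX_LOAD_SHED_PCT:.1f}%; "
              "clamping and escalating to a human operator.")
        return GridAction(action.substation, MAX_LOAD_SHED_PCT)
    return action


if __name__ == "__main__":
    proposed = ai_controller_propose()
    approved = safety_brake(proposed)
    print(f"Executing: shed {approved.load_shed_pct:.1f}% at {approved.substation}")
```

The design point matches Smith’s elevator analogy: the brake sits outside the AI controller, so it constrains the system regardless of what the model proposes, and it can be tested independently and regularly, as the proposed laws would require.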
Smith also called for placing different regulatory obligations on actors depending on their role in managing AI. Microsoft proposed placing “specific regulatory responsibilities on the organizations exercising certain responsibilities at three layers of the technology stack: the applications layer, the model layer, and the infrastructure layer.”

OpenAI Chief’s Warning

Microsoft’s push for regulating AI comes as Sam Altman, CEO of OpenAI, the company behind the AI chatbot ChatGPT, called on lawmakers to regulate artificial intelligence.
During a recent hearing before the U.S. Senate Committee on the Judiciary, Altman said that “some regulation would be quite wise on this topic.”
Altman wants the government to establish a new agency to license AI companies, one that would be responsible for ensuring compliance with ethical standards and addressing accuracy issues in the technology. The OpenAI chief executive also acknowledged that AI could pose a threat to democracy, particularly through targeted misinformation campaigns during elections.
“My worst fears are that we—the field, the technology, the industry—cause significant harm to the world. I think that can happen in a lot of different ways,” Altman said. He also wants independent audits conducted on AI firms.
With regard to the EU, Altman has suggested that OpenAI may leave Europe if it cannot comply with the region’s upcoming AI regulations.
One proposed rule would require companies that deploy generative AI tools such as ChatGPT to disclose any copyrighted material used in developing their systems.

AI Regulations

Lawmakers globally are proposing regulations to rein in AI technologies. In the United States, Sens. Michael Bennet (D-Colo.) and Peter Welch (D-Vt.) introduced the Digital Platform Commission Act on May 18; the bill seeks to create a dedicated federal agency to regulate digital platforms, including AI.
“Technology is moving quicker than Congress could ever hope to keep up with. We need an expert federal agency that can stand up for the American people and ensure AI tools and digital platforms operate in the public interest,” Bennet said in a press release.
In the EU, the proposed AI Act would set out rules for regulating artificial intelligence. The law categorizes AI into three groups based on risk. First, applications and systems deemed to create an unacceptable risk would be banned; this would include applications similar to the social credit scoring system used by the Chinese communist regime.
Second, applications deemed high-risk, such as a tool that scans CVs to rank job applicants, would be subject to specific legal requirements.
Third, applications that are neither classified as high-risk nor explicitly banned would be largely left unregulated. Once approved, the AI Act would become the world’s first set of rules on artificial intelligence.
Naveen Athrappully is a news reporter covering business and world events at The Epoch Times.