A Senate working group led by Senate Majority Leader Chuck Schumer (D-N.Y.) released its much-anticipated Artificial Intelligence (AI) policy recommendations on May 15.
The document recommends that Congress allocate $32 billion per year in non-defense spending for AI innovation by 2026, as suggested by the National Security Commission on Artificial Intelligence.
Other recommendations include enforcement of existing AI laws, developing new standards for the technology, establishing a federal data privacy framework, addressing the problems posed by “deepfakes,” and mitigating other potential risks and threats.
Outlining the plan at a press conference, Mr. Schumer stressed that the bipartisan group—which also included Sens. Mike Rounds (R-S.D.), Martin Heinrich (D-N.M.), and Todd Young (R-Ind.)—had decided from the start that their process should “supplement, not supplant” the role of committees in crafting legislation.
“We always knew we would have to go to the committees to get the specifics done, and there’s so many different aspects of AI in so many different areas that it will take many committees to do it,” he said.
Nonetheless, the senators touted their recommendations as the “first steps” toward identifying areas of common ground where legislation could be drafted sooner rather than later.
The working group hosted nine closed-door forums on the benefits and risks of AI last year amid rising concerns about the AI chatbot ChatGPT and its ability to mimic human behavior.
The star-studded “AI Insight Forums” featured testimony from a long list of prominent AI experts and tech leaders, including Microsoft co-founder Bill Gates, Meta CEO Mark Zuckerberg, X chairman Elon Musk, OpenAI CEO Sam Altman, and Alphabet CEO Sundar Pichai.
“In terms of regulatory suggestions, I didn’t hear much,” Sen. John Kennedy (R-La.) told The Epoch Times after the inaugural forum.
“Do we have some sort of overarching regulatory framework that we’re close to agreeing on that addresses the dangers and the potential of artificial intelligence, in my judgment? No. We just don’t right now.”
That view was echoed by multiple groups on May 15, amid the unveiling of the Senate’s policy guidelines.
The New York-based AI Now Institute dismissed the working group’s collective efforts as “a stalling tactic” that puts the AI industry “in the driver’s seat.”
And the advocacy group Accountable Tech called it mere “hand-waving” at AI’s most pressing challenges.
“The AI roadmap released today by Senator Schumer is but another proof point of Big Tech’s profound and pervasive power to shape the policymaking process.
“The last year of closed-door ‘Insight Forums’ has been a dream scenario for the tech industry, who played an outsized role in developing this roadmap and delaying legislation,” said Nicole Gill, Accountable Tech co-founder and executive director, in a statement.
“Lawmakers must move quickly to enact AI legislation that centers the public interest and addresses the damage AI is currently causing in communities all across the country,” she added.
But TechNet, a national network of tech executives, praised the Senate’s recommendations for investments in AI research and development, saying they would strengthen the nation’s global competitiveness in AI and other technologies.
“We applaud Leader Schumer and Senators Rounds, Heinrich, and Young for their leadership in crafting this roadmap,” TechNet President and CEO Linda Moore said.
“We look forward to working with Congress to ensure America remains the world’s leader in AI and wins the next era of innovation.”
Senate committees will now be tasked with crafting specific legislation in the areas outlined by the working group.
The Senate Rules Committee is slated to vote on May 15 on three bills that would ban deceptive AI-generated content aimed at influencing federal elections, require disclaimers on political ads created with AI, and establish voluntary guidelines for state election offices that oversee candidates.
Experts have warned that the United States is falling behind as other nations have moved to rein in the ever-evolving AI industry.
In March, the European Union approved new restrictions on AI products and services deemed to pose the greatest risk, including those used in medicine, law enforcement, and critical infrastructure. The law also includes rules for generative AI systems such as ChatGPT.
“It’s time for Congress to act,” said Alexandra Reeve Givens, CEO of the Center for Democracy & Technology. “It’s not enough to focus on investment and innovation. We need guardrails to ensure the responsible development of AI.”
During the press conference, Mr. Schumer said he and his fellow group members had weighed both the pros and cons of AI while developing their guidance.
“The word for this roadmap is balance,” he said, adding that he was hopeful that the committees would continue the working group’s “bipartisan momentum” as they move forward.