AI Expert Urgently Appeals to Governments to Make Regulatory Oversight a High Priority

Founder and scientific director of the Mila - Quebec AI Institute and professor at the Université de Montréal Department of Computer Science, Yoshua Bengio testifies during a committee hearing on Capitol Hill in Washington on July 25, 2023. (Alex Wong/Getty Images)
Amanda Brown

One of the top minds in artificial intelligence (AI) and an accomplished computer scientist is calling on governments to make the regulation and control of AI a high priority.

Yoshua Bengio—recognized globally as one of the world’s leading experts on artificial intelligence, best known for his pioneering work in machine learning and for winning the 2018 A.M. Turing Award—has voiced his concerns about AI superseding human intellectual capability, a risk to humanity he says is evolving rapidly.

Mr. Bengio, when asked by CTV’s Question Period host Vassy Kapelos if he thought AI posed a serious risk to humans, emphatically replied, “Yes.”

“And it’s been difficult, in some sense, to shift my views on this,” he said. “If you’ve been working on something for decades, and you’ve built your whole career and motivation on the idea that you would bring good to the world, and indeed AI is bringing a lot of good … but now you start realizing that there are much greater dangers than we even thought about, things that I thought about a few years ago, it’s challenging.”

Mr. Bengio said that everyone is a stakeholder in the AI conversation and that we need to deal with the issues objectively.

“I think we have also a responsibility, when we are scientists in this area, to look at things in a neutral way, not just our emotions about it. And if we think that there is a risk and danger, we need to engage with everyone—citizens, other experts outside of AI, and governments—so that we can better understand what are the potential bad scenarios and how we can prevent them.”

Risks are particularly consequential, he said, when systems such as ChatGPT are so advanced it becomes difficult to tell them apart from a real human being in conversation, referring to the kind of correspondence a person might have with an AI chatbot.

“That’s where it gets dangerous. They could easily propagate disinformation in a way that’s much more powerful than we already have with social media,” he said.

The AI specialist explained that the government and its agencies are moving more slowly than the technology is advancing. Mr. Bengio said reactive policymaking is failing to put safeguards in place fast enough to protect society.

“I really didn’t expect that it would come so quickly,” he said. “I thought that the level of competence we see now may happen in 20 or 50 years, if you had asked me just a few years ago, but of course, in the last years, we’ve seen the acceleration coming along.”

The Canadian Press reported in April that the federal privacy commissioner, in conjunction with three provincial counterparts, is investigating ChatGPT, the AI-powered chatbot that has gained considerable popularity in recent times. The investigation follows a complaint accusing the program of collecting, using, and disclosing personal information without consent.

During a U.S. congressional hearing on July 25, Mr. Bengio, who spoke alongside a number of his industry peers, raised concerns about how AI might negatively impact sectors such as banking and wield an unwelcome influence in the democratic process.

“I talked to some people in the banking community about this, small banks, they are going to see AI used to scam people, pretending to be your mom’s voice, or more likely your granddaughter’s voice, actually getting the voice right, making a call for money.

“How can Congress ensure companies that create AI platforms [ensure they] cannot be used for those deceptive [purposes]? What kind of rule should be put in place so that does not happen?” Mr. Bengio asked.

“[It] may sound drastic ... in order to reduce the chances that AI systems will massively influence voters through social media, one thing that should have been done a long time ago is that social media accounts should be restricted to actual human beings that have identified themselves—ideally in person. And right now, social media companies are spending a lot of money to figure out whether an account is legitimate or not,” he added.

Many AI industry experts are pressing governments to rein in companies developing machine learning programs.

“In the short term, one of the easy but important things we need to do is to make it very difficult, illegal, and punish very strongly, to impersonate humans,” he told Ms. Kapelos. “So when a user is interacting with an AI, it has to be very clear that it’s an AI.”

“In fact, we should even know like where it comes from, like which company made it,” he added. “So counterfeiting humans should be as bad as counterfeiting money.”