An expansive federal law proposal to combat harmful online content is not only “fundamentally flawed” but also violates Canadians’ freedom of expression and privacy rights, warn internet law experts, who are calling on the Liberals to overhaul their approach.
The main problem with the so-called “online harms” proposal lies in its ability to filter content and block websites, which endangers the “survival of a free and open internet in Canada and beyond,” reads a submission to the Department of Canadian Heritage by the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC) at the University of Ottawa’s Faculty of Law.
“The online harms proposal combines some of the worst elements of other laws around the world,” the experts say in their submission.
“We are seriously concerned about numerous elements of the proposed law—such as the lack of adequate transparency requirements, the loosened requirements for the Canadian Security Intelligence Service (CSIS) to obtain basic subscriber information, the various jurisdictional issues raised by the law, and whether an administrative body like the Digital Recourse Council should be able to determine what speech is legal under Canadian law.”
In July, Canadian Heritage launched a public consultation to gather feedback on the proposed law, which it said “will be part of an overall strategy to combat hate speech and other harms.”
“The government aims to present a new legislative and regulatory framework this fall, with rules to make social media platforms and other online services more accountable and transparent in combatting harmful online content,” said a Canadian Heritage press release upon announcing the public consultation, which ended Sept. 25, shortly after the election.
Specifically, the new law will target online posts in five categories: terrorist content, content that incites violence, hate speech, non-consensual sharing of intimate images, and child sexual exploitation content.
It’s one of three pieces of controversial legislation related to internet regulation crafted by the Liberals.
The CIPPIC said it finds the scope of the strategy concerning, particularly the stipulation that “platforms block unlawful content within 24 hours of being flagged, as well as alarming requirements for online service providers to proactively monitor and filter content as well as report information on users to law enforcement.”
The group argues that the 24-hour blocking requirement will push social media platforms to remove content overzealously, including vast amounts of lawful content, to avoid the risk of liability under the proposed legislation.
‘Massive New Bureaucratic Super-Structure’
Michael Geist, the Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, said in his submission that one of the fundamental problems in the government’s approach is to treat the five categories of harmful content as “equivalent and requiring the same legislative and regulatory response.”

“It makes no sense to treat online hate as the equivalent of child pornography,” he wrote. “By prescribing the same approach for all these forms of content, the efficacy of the policy is called into question.”
Geist said the proposed approach “envisions a massive new bureaucratic super-structure to oversee online harms and Internet-based services” that would be unwieldy and could jeopardize due process.
“For example, adjudicating over potentially tens of thousands of content cases is unworkable and would require massive resources with real questions about the appropriate oversight. Similarly, the powers associated with investigations are enormously problematic with serious implications for freedom of the press and freedom of expression.”
Part of the online harms proposal would require online service providers to report some kinds of content to the RCMP and CSIS. The CIPPIC said that such reporting requirements, combined with the proactive monitoring obligations, “pose an unacceptable risk to the privacy rights of Canadians.”
“Such measures should have no place in the laws of a free and democratic society,” it said.
“In our view, any legislative scheme that purports to unite all of these disparate kinds of content under a single framework is incoherent, counterproductive, and constitutionally untenable,” said Citizen Lab, whose research includes the areas of communication technologies, human rights, and global security.
“In truth, the categories are united by almost nothing—constitutionally, factually, practically, or ethically—other than the proposed remedy of content removal.”