DOJ Wants Internet Companies to Stop Censorship, Get Tough on Illegal Content

Attorney General William Barr speaks about an initiative to prevent online child sexual exploitation, at the Justice Department in Washington on March 5, 2020. Samira Bouaou/The Epoch Times
Petr Svab

The Department of Justice (DOJ) is calling for legislation that would restrict liability protections for internet companies, saying these companies should lose such protection when they unfairly or arbitrarily censor their users and when they fail to counter illegal content.

In a series of recommendations released on June 17, the DOJ outlined a reform of Section 230 of the Communications Decency Act of 1996. The law largely absolves internet companies of civil liability for content posted by their users.

The law has become a linchpin for debates about social media censorship as well as efforts to curb criminal activities, including child sex exploitation, human trafficking, drug dealing, and terrorism.

“Courts have interpreted the scope of Section 230 immunity very broadly, diverging from its original purpose,” the DOJ stated, and this “has reduced the incentives of online platforms to address illicit activity on their services and, at the same time, left them free to moderate lawful content without transparency or accountability.”

“The time has therefore come to realign the scope of Section 230 with the realities of the modern internet so that it continues to foster innovation and free speech but also provides stronger incentives for online platforms to address illicit material on their services.”

Restrict Restrictions

The law shields companies from liability for restricting access to content that they, in “good faith,” consider “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

The problem is that courts differ in interpreting what’s “otherwise objectionable,” the DOJ stated.

“Some construe the phrase to confer virtually unlimited discretion on platforms to remove any content they object to, for whatever reason,” it stated, explaining that “unconstrained discretion is particularly concerning in the hands of the biggest platforms, which today effectively own and operate digital public squares.”

The vagueness of the statute “risks giving every platform a free pass for removing any and all speech that it dislikes, without any potential for recourse by the user.”

The DOJ proposed deleting the phrase “otherwise objectionable” and “adding a new immunity for moderation of material the platform believes, in good faith, violates federal law or promotes violence or terrorism.”

The department also proposed clarifying what “good faith” means.

In the Section 230 context, it should mean, first, that a company has “publicly available terms of service or use that state plainly and with particularity the criteria the platform will employ in its content-moderation practices.”

Second, “any restrictions of access must be consistent with those terms of service or use and with any official representations regarding the platform’s content-moderation policies.”

Third, the company must have “an objectively reasonable belief” that the content falls within one of the objectionable categories the law enumerates.

Lastly, the platform must give the user “a timely notice explaining with particularity the factual basis for the restriction of access,” the DOJ stated, adding an exception for cases in which the platform “reasonably believes that the content relates to criminal activity or notice would risk imminent harm to others.”

Illegal Content

Section 230 immunity should be stripped from any company that “purposefully facilitates or solicits third-party content or activity that would violate federal criminal law,” the DOJ stated in a document summarizing its key points.

“Platforms that accidently [sic] or even negligently facilitate unlawful behavior would not lose immunity.”

A company would also have to ensure “its ability to identify unlawful content or activity occurring on its services” and to “maintain the ability to assist government authorities to obtain content (i.e., evidence) in a comprehensible, readable, and usable format pursuant to court authorization (or any other lawful basis).”

Such a provision, it appears, would eviscerate the ability of encrypted communication apps to shield their users’ content from disclosure to the government.

Moreover, it added, companies should never enjoy immunity for content that violates terrorism, child sex abuse, and cyber-stalking laws.

Under current legal standards in civil cases, companies “would only have to take reasonable steps” to address such content; “they would not have to achieve perfect success.”

Immunity also shouldn’t apply in cases in which a company has been notified of illicit material on its platform, the DOJ argued.

“If a platform has actual notice of specific material or activity that is unlawful under federal criminal law, does not remove the material, report the activity, and preserve related evidence, the platform should not be entitled to immunity for harm resulting from that specific material.”

The DOJ also recommended giving Section 230 protections only to companies that provide a mechanism for users to easily flag unlawful content.

The immunity also shouldn’t apply to civil cases brought by the federal government, the department stated.

“In some cases, online platforms have argued that Section 230 creates an immunity from antitrust claims,” the DOJ stated, recommending that the law be adjusted to make clear that it does not.
