Commentary
Article 11 of the EU Charter of Fundamental Rights, which replicates part of Article 10 of the European Convention on Human Rights, protects the right of European citizens to “hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers,” and affirms that “the freedom and pluralism of the media shall be respected.” Sadly, the fate of freedom of expression in Europe now hangs very much in the balance, as the European Union has just enacted a law that empowers the European Commission to significantly restrict the ability of citizens to use digital platforms to engage in robust and sincere democratic discourse.
Under the recently enacted Digital Services Act, the commission may apply significant pressure upon digital platforms to curb “hate speech,” “disinformation,” and threats to “civic discourse,” all of which are notoriously vague and slippery categories that have historically been co-opted to reinforce the narrative of the ruling class. By giving the European Commission broad discretionary powers to oversee Big Tech content moderation policies, this piece of legislation holds freedom of speech hostage to the ideological proclivities of unelected European officials and their armies of “trusted flaggers.”
Purpose of the Digital Services Act
The stated purpose of the Digital Services Act (DSA) that has just come into force in Europe is to ensure greater “harmonisation” of the conditions affecting the provision of “intermediary” digital services, in particular online platforms that host content shared by their customers. The act covers a bewildering array of issues, from consumer protection and the regulation of advertising algorithms to child pornography and content moderation. Among the other purposes that appear in the wording of the act, we find the fostering of “a safe, predictable and trustworthy online environment,” the protection of citizens’ freedom of expression, and the harmonisation of EU regulations affecting online digital platforms, which currently depend on the laws of individual member states.
The DSA Is Not as Innocent as It Appears
At a superficial glance, the DSA might look rather innocuous. It places fairly modest procedural requirements on “very large online platforms” such as Google, Facebook, TikTok, and X (formerly known as Twitter) to have clear appeals procedures and to be transparent about their regulation of harmful and illegal content. For example, Recital 45 of the act reads as a fairly light-touch requirement that providers of online digital services (“intermediary services”) keep customers informed of terms and conditions and company policies:
“Providers of the intermediary services should clearly indicate and maintain up-to-date in their terms and conditions the information as to the grounds on the basis of which they may restrict the provision of their services. In particular, they should include information on any policies, procedures, measures and tools used for the purpose of content moderation, including algorithmic decision-making and human review, as well as the rules of procedure of their internal complaint-handling system. They should also provide easily accessible information on the right to terminate the use of the service.”
But if you start to dig into the act, you very soon discover that it is poisonous for free speech and contrary to the spirit of Article 11 of the EU Charter of Fundamental Rights, which guarantees citizens the “freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.” Below, I detail certain aspects of the act that, taken together, pose an unprecedented threat to freedom of speech in Europe:
1. The DSA creates entities called “trusted flaggers” to report “illegal content” they identify on large online platforms. These flaggers are nominated by “Digital Services Coordinators” appointed by member states, and online platforms are required by the act to respond promptly to their reports of illegal content. The act requires that large online platforms “take the necessary measures to ensure that notices submitted by trusted flaggers, acting within their designated area of expertise, through the notice and action mechanisms required by this Regulation are treated with priority.”
2. Strictly speaking, although digital platforms are required to respond to reports of illegal content submitted by trusted flaggers, it appears from the wording of the act that the platforms have discretion to decide how exactly to act upon such reports. They might, for example, disagree with the legal opinion of a trusted flagger and decide not to take down flagged content. However, they will face periodic audits, conducted by auditors working on behalf of the European Commission, of their compliance with the act, and these audits will hardly look favourably upon a pattern of inaction in the face of flagged content.
3. The DSA also requires “very large online platforms” (such as Google, YouTube, Facebook, and X) to undertake periodic “risk mitigation” assessments, in which they address “systemic risks” associated with their platforms, including but not limited to child pornography, “gender violence” (whatever that means), public health “disinformation,” and the “actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, as well as public security.” Platforms have “due diligence” obligations under the act to take appropriate measures to manage these risks. Unlike with a voluntary code of practice, opting out is not an option, and a company that fails to comply with these “due diligence” obligations will be subject to hefty sanctions.
4. The sanctions attached to noncompliance with the act are remarkable. If the commission deems that a large online platform such as X has not been in compliance with the DSA, it may fine that platform up to 6 percent of its annual global turnover. Because the notion of noncompliance is vague and hard to quantify (what exactly is required to meet the “due diligence obligations” of systemic risk management?), it seems likely that companies wishing to avoid legal and financial headaches will err on the side of caution and put on a show of “compliance” to avoid getting fined.
5. The periodic audits envisaged by this act will serve as a tool for the commission to pressure large online platforms into taking action to “manage” the “risks” of disinformation and threats to “civic discourse and electoral processes,” risks that are notoriously vague and probably impossible to define in a politically impartial fashion. The threat lurking in the background of these audits and their associated “recommendations” is that the commission may impose multibillion-dollar fines upon online platforms for noncompliance. Because the notion of noncompliance with “due diligence obligations” is so vague, and because the financial sanctions threatened in the DSA are discretionary, this act will create an atmosphere of legal uncertainty for online platforms and their users alike. It heavily incentivises online platforms to police speech around vague categories such as “disinformation” and “hate speech” in a way that passes muster with the EU Commission, and this will obviously have repercussions for end users.
6. The European Commission has stated: “Hate motivated crime and speech are illegal under EU law. The 2008 Framework Decision on combating certain forms of expressions of racism and xenophobia requires the criminalisation of public incitement to violence or hatred based on race, colour, religion, descent or national or ethnic origin.” It is important to point out that the EU Commission favours expanding the categories of illegal hate speech at a Europe-wide level to include not only “race, colour, religion, descent or national or ethnic origin,” but also new categories (presumably including things such as gender identity). So illegal hate speech is a “moving target” and is likely to become ever broader and more politically charged as time goes on.
The European Commission’s own website reads: “On 9 December 2021, the European Commission adopted a Communication which prompts a Council decision to extend the current list of ‘EU crimes’ in Article 83(1) TFEU to hate crimes and hate speech. If this Council decision is adopted, the European Commission would be able, in a second step, to propose secondary legislation allowing the EU to criminalise other forms of hate speech and hate crime, in addition to racist or xenophobic motives.”
7. The most disturbing aspect of the DSA is the enormous power and discretion it places in the hands of the European Commission—notably, an unelected commission—to oversee compliance with the DSA and decide when online platforms are noncompliant with respect to their “due diligence obligations” to manage risks whose meaning is vague and manipulable. The European Commission is also giving itself the power to declare a Europe-wide emergency that would allow it to demand extra interventions by digital platforms to counter a public threat. There will be no legal certainty about when the EU Commission might declare an “emergency.” Nor is there any legal certainty about how the European Commission and its auditors will interpret “systemic risks,” such as disinformation and hate speech, or assess the efforts of service providers to mitigate such risks, since these are discretionary powers.
8. Nor is it clear how the commission could possibly undertake an audit of the “systemic risks” of disinformation and of risks to civic discourse and electoral processes without taking a particular view of what is true and untrue, and of what is salutary and what is harmful information, thus preempting the democratic process through which citizens assess these issues for themselves.
9. Nor is it clear what checks and balances will be in place to prevent the DSA from becoming a weapon for the EU Commission’s favourite causes, whether the war in Ukraine, vaccine uptake, climate policy, or a “war on terror.” The broad power to declare a public emergency and require platforms to undertake “assessments” of their policies in response, combined with the broad discretionary power to fine online platforms for noncompliance with vague obligations, gives the commission a lot of leeway to lord it over online platforms and pressure them into advancing its favoured political narratives.
10. One particularly sneaky aspect of this act is that the commission is effectively making disinformation illegal through a backdoor. Instead of clearly defining what it means by “disinformation” and making it illegal outright (which would probably cause an uproar), the commission is placing a “due diligence” requirement upon large online platforms such as X and Facebook to take discretionary measures against disinformation and to mitigate “systemic risks” on their platforms (which include the risk of “public health disinformation”). Presumably, the periodic audits of these companies’ compliance with the act would look unkindly on policies that barely enforced disinformation rules.
So the net effect of the act would be to apply an almost irresistible pressure on social media platforms to play the “counter-disinformation” game in a way that would pass muster with the commission’s auditors and thus avoid getting hit with hefty fines. There is a lot of uncertainty about how strict or lax such audits would be and which sorts of noncompliance might trigger the application of financial sanctions. It is rather strange that a legal regulation purporting to defend free speech would place the fate of free speech at the mercy of the broadly discretionary and inherently unpredictable judgments of unelected officials.
The only hope is that this ugly, complicated, and regressive piece of legislation ends up before a judge who understands that freedom of expression means nothing if held hostage to the views of the European Commission on pandemic-preparedness, the Russia–Ukraine war, or what counts as “offensive” or “hateful” speech.
P.S.: Consider this analysis as a preliminary attempt by someone not specialised in European law to grapple with the troubling implications of the Digital Services Act for free speech, based on a first reading. I welcome the corrections and comments of legal experts and those who have had the patience to wade through the act for themselves. This is the most detailed and rigorous interpretation I have developed of the DSA to date. It includes important nuances that were not included in my previous interpretations and corrects certain misinterpretations—in particular, platforms are not legally required to take down all flagged content, and the people who flag illegal content are referred to as “trusted flaggers,” not “fact-checkers.”
Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.