How will the Digital Services Act’s risk management framework navigate the tension between curbing disinformation and protecting freedom of expression?
By July Baltus
On 18 December 2023, the European Commission announced that it had initiated formal proceedings against social media platform X, formerly Twitter, for its alleged failure to comply with obligations under the Digital Services Act (‘DSA’).[1] The announcement followed warnings about the large amount of disinformation present on the platform in the aftermath of heightened hostilities between Israel and Hamas.[2] It is the first enforcement action under the DSA, which has applied to Very Large Online Platforms (‘VLOPs’) and Very Large Online Search Engines (‘VLOSEs’) since late August 2023.[3] The DSA is a landmark European legislative instrument that aims to regulate digital platforms and ensure a safer and more accountable online environment, mindful of the fundamental rights of users.[4] One approach the DSA takes to accomplish these goals is to introduce due diligence obligations that address the spread of online disinformation. This aspect is crucial because the spread of disinformation, including misleading reports about conflicts, deepfakes, or doctored images, can have severe consequences for democratic societies, such as exacerbating tensions and misleading public opinion.[5] Building upon this necessity, Articles 34 and 35 of the DSA establish a comprehensive risk management framework, requiring VLOPs and VLOSEs to actively assess how their services contribute to “systemic risks” and to take mitigating measures. Disinformation campaigns can contribute to such societal risks, as explicitly stated in the DSA.[6] A crucial challenge in curbing disinformation, however, is distinguishing harmful content from legitimate content, which might include critical, satirical, or controversial speech protected by the freedom of expression.
Disinformation can take the form of pro-Russian propaganda,[7] misleading statistics about the COVID-19 pandemic,[8] or manipulated videos about Hamas’ attack on Israel.[9] Especially in today’s technologically evolved media landscape, the spread of disinformation online can have harmful consequences for open democratic societies and can accelerate polarization.[10] According to the European Commission, disinformation is “verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm”.[11] It is important to realize at this stage that the harmful nature of disinformation does not invariably mean it is also illegal.[12] Regulating disinformation is therefore challenging, as it might present authorities with “excessive discretion to determine what is disinformation, what is a mistake [and] what is truth”.[13] Online platforms provide users with a unique stage to exercise their freedom of expression, as held by the ECtHR.[14] However, rather than taking a neutral role regarding the content that is visible, these platforms act as gatekeepers to public debate online.[15] This discretion might lead platforms to over-remove content: strict moderation may inhibit free expression and silence legitimate voices, while lenient policies can allow harmful disinformation to proliferate, misleading users and undermining public discourse. There are several ways in which regulating disinformation could pose a threat to freedom of expression. The United Nations Special Rapporteur on freedom of opinion and expression has warned against “vague and ambiguous” terms to describe disinformation.[16] Ambiguous language can harm legal certainty through unpredictable enforcement and can trigger ‘self-censorship’, whereby individuals refrain from expressing certain thoughts, ideas and information for fear of legal repercussions, also known as a ‘chilling effect’.[17] Therefore, any attempt by the DSA to address disinformation must always be weighed carefully against the protection of freedom of expression.
The risk management framework of Articles 34 and 35 DSA is part of the additional obligations for VLOPs and VLOSEs. Platforms qualify as such when they have an average of 45 million or more monthly active users, roughly 10% of the European Union’s population.[18] These additional obligations find their rationale in the way these large platforms can be used for the “shaping of public opinion and discourse” and in the “societal concerns” raised by their “advertising-driven business models”.[19] The risk management framework is therefore asymmetric by design, operating on the premise that the size of a platform directly correlates with its potential for societal impact and systemic risks.[20] In terms of regulating disinformation, the risk management framework is unique in that it requires platforms to take action not only against illegal content, but also against content that is merely harmful.[21] By taking this approach, regulators acknowledge that platforms themselves are often best equipped to evaluate the risks resulting from their own technology.[22] The European Commission issued a report which analysed the application of the risk management framework to Russian disinformation campaigns surrounding the war in Ukraine using a standardised model.[23] As pointed out in this report, there is no legal guidance on how to apply the risk assessment and mitigation framework or how to assess the effectiveness of the resulting policies.[24] Consequently, the report also shed light on potential methodologies and metrics for assessing and mitigating risks.[25] It suggested considering the following factors when proportionately assessing content for systemic risks: (i) the context of the speech; (ii) the position and status of the speaker, including context about their intent; (iii) the content and form of the speech; (iv) the reach, size and characteristics of the audience; and (v) the likelihood or imminence of harm.[26]
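To make the proportionality logic of these five factors more concrete, the minimal sketch below shows one hypothetical way a platform’s trust-and-safety team might record and combine them for an individual piece of content. It is purely illustrative: neither the DSA nor the Commission’s report prescribes any numerical scoring model, and the class name, 0–1 scales and weights used here are assumptions introduced only for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class ContentAssessment:
    """Hypothetical record of the five factors suggested in the Commission report,
    each scored on an illustrative 0-1 scale by a reviewer or classifier."""
    context: float           # (i) context of the speech
    speaker_status: float    # (ii) position and status of the speaker, incl. intent
    content_form: float      # (iii) content and form of the speech
    audience_reach: float    # (iv) reach, size and characteristics of the audience
    harm_likelihood: float   # (v) likelihood or imminence of harm

# Illustrative weights; the DSA and the report do not prescribe any such numbers.
WEIGHTS = {
    "context": 0.15,
    "speaker_status": 0.15,
    "content_form": 0.20,
    "audience_reach": 0.25,
    "harm_likelihood": 0.25,
}

def systemic_risk_score(a: ContentAssessment) -> float:
    """Combine the five factor scores into a single indicative score between 0 and 1.
    A real assessment would be qualitative and contextual; this weighted sum only
    makes the proportionality reasoning concrete."""
    return (
        WEIGHTS["context"] * a.context
        + WEIGHTS["speaker_status"] * a.speaker_status
        + WEIGHTS["content_form"] * a.content_form
        + WEIGHTS["audience_reach"] * a.audience_reach
        + WEIGHTS["harm_likelihood"] * a.harm_likelihood
    )

if __name__ == "__main__":
    # Example: a doctored video shared by a high-reach account during an armed conflict.
    item = ContentAssessment(
        context=0.9, speaker_status=0.7, content_form=0.8,
        audience_reach=0.9, harm_likelihood=0.8,
    )
    print(f"indicative systemic risk score: {systemic_risk_score(item):.2f}")
```

A score like this could, at most, serve as a triage signal feeding into human review and into the mitigation measures of Article 35; treating it as a removal threshold would reproduce exactly the over-removal risk discussed above.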
While the DSA aims to balance controlling disinformation with preserving freedom of expression, it faces significant challenges. The vague language of Articles 34 and 35 offers flexibility but risks inconsistency and potential overreach, affecting freedom of expression. The focus on large platforms acknowledges their impact on public discourse, but it also leads to issues such as “audit capture” and overlooks smaller platforms, a “regulatory blind spot”.[27] Additionally, the risk-based approach, while modern, could inadvertently create new societal risks and legal uncertainties, termed the “trade-off” and “risk-enhancing” effects.[28] Despite these challenges, the DSA’s introduction of due diligence obligations for platforms marks progress.[29] However, achieving an effective and fair balance in the dynamic online landscape remains an intricate task, necessitating ongoing evaluation and refinement of the regulatory framework.
[1] European Commission 2023, press release 18 December 2023 and Regulation (EU) 2022/2065 (Digital Services Act).
[2] European Commission 2023, press release 12 October 2023.
[3] European Commission 2023, press release 12 October 2023.
[4] European Commission 2020, communication 3 December 2020.
[5] Van Hoboken et al. 2019, p. 11.
[6] Rec. 83 DSA.
[7] Directorate-General for Communications Networks, Content and Technology 2023.
[8] European Commission 2020a.
[9] Sabbagh 2023.
[10] European Commission 2018, p. 1.
[11] European Commission 2018, p. 3-4.
[12] Directorate-General for Communications Networks, Content and Technology 2018, p. 7.
[13] United Nations Special Rapporteur on Freedom of Opinion and Expression 2020, p. 13.
[14] ECtHR 1 March 2016 Cengiz et al. v. Turkey, para. 52.
[15] Leerssen 2015, p. 99-100.
[16] United Nations Special Rapporteur on Freedom of Opinion and Expression 2020, p. 3.
[17] Van Hoboken et al. 2019, p. 41.
[18] Rec. 76 DSA, Art. 33 DSA.
[19] Rec. 79 DSA.
[20] Efroni 2021.
[21] Rec. 84 DSA.
[22] Leiser 2023, p. 5.
[23] Directorate-General for Communications Networks, Content and Technology 2023, p. 10.
[24] Directorate-General for Communications Networks, Content and Technology 2023, p. 13.
[25] Directorate-General for Communications Networks, Content and Technology 2023, p. 12.
[26] Directorate-General for Communications Networks, Content and Technology 2023, p. 16.
[27] Laux et al. 2021, p. 3, 8.
[28] Efroni 2021.
[29] Leerssen 2023, p. 157.