Limiting Fake News: Not a Job for Filters

by Marco Bassini, lecturer in the Department of Legal Studies; translated by Alex Foti
There are many obstacles, legal ones included, to curbing a phenomenon that is increasingly widespread and alarming in its populist political fallout. But the idea of adopting filters or censors is a cultural nonstarter.


Today fake news is no longer merely a problem of digital or functional illiteracy, but a more complex phenomenon whose roots are linked to the spread of populism and correlated with the tendency to offer biased readings of everyday topics, bending interpretation for the purposes of political propaganda. The phenomenon has now reached alarming proportions, to the point of attracting the worried attention of national governments.

There have been proposals for the adoption of filtering mechanisms for news published on the web. The task of applying these filters would be entrusted to the operators of online platforms or, alternatively, to private entities that would make this role their business. These solutions are not immune from criticism. If legislators were to instruct Internet providers (such as search engine operators or social network companies) to apply ex-ante control to news content, this would inevitably clash with both the business model and the legal paradigm of these operators. Both rest on the absence of control over the content they carry, and hence of responsibility for it, which remains with the users who originally spread it.

The reasons for this arrangement are easily understood in economic and legal terms: if platform operators were asked to answer for the flood of content published daily by third parties, they would obviously be discouraged from acting as intermediaries. Administering a platform like YouTube or a social network like Facebook is therefore quite different from managing an online newspaper. If those operators were instructed to select the content published by users, assuming the procedure were humanly and technically feasible, they would inevitably have to adopt their own code, that is, a set of criteria for the removal of information deemed fake. This would imply the de facto adoption of an editorial policy and the abandonment of the principles of neutrality and passivity that characterize digital platforms, to the point of turning web operators into private censors. Nor would the problem go away if the implementation of filtering mechanisms were left to co-regulatory or even self-regulatory procedures. The same difficulties would arise if legislators opted instead to entrust the filtering function to external actors that neither operate as Internet service providers nor manage digital platforms.

Relying on such web scavengers would raise further questions about which intermediary is chosen, about the potentially divergent criteria these bodies might follow, and even about their composition. In particular, if they were not guaranteed a sufficient degree of political independence, these virtual courts of truth could become dangerous tools in the hands of capricious political majorities.

The road to protection from fake news thus appears littered with hurdles. The legal reasons for being wary of applying filters to published news should not, however, obscure considerations of a philosophical and cultural nature. The image of the Internet as an agora, a place open to public debate whatever the thought conveyed and whatever its contribution to the growth of the community, would suffer greatly, to the point of being altered beyond recognition. And that is a price that should not be paid.