
How to Protect User Rights Against an Algorithm

by Fabio Todesco
The right to know the reasons behind a decision made by artificial intelligence is difficult to enforce. Europe therefore aims to strengthen users' position through procedural obligations on platforms, explains Oreste Pollicino

Those affected by a decision made by an algorithm should have the right to know the reasons for it (the so-called right to explanation or right to explicability), and the European Union's General Data Protection Regulation (GDPR), in fact, provides for this.

"The actual applicability of the clause is controversial, however, for at least two reasons," warns Oreste Pollicino, Professor of Constitutional Law at Bocconi, anticipating some of the content of his talk at the Fairness in AI workshop on 27 June. "The first reason is language: algorithms are written in a technical language, which makes them opaque to the vast majority of citizens. The second concerns trade secrecy: when it comes to proprietary algorithms, there may be no disclosure requirement."

European legislation, with the soon-to-be-introduced Digital Services Act, thus aims to strengthen the user's position by imposing procedural obligations on platforms, especially on so-called "very large platforms" which, by collecting more data, also have greater opportunities for profiling.


Among the additional obligations to which these operators will be subject are: a preliminary assessment of the risk of harming users' rights; the possibility of an effective adversarial process, which will take the form of appeal mechanisms and mandatory exchanges between platform and user; and the requirement to take affirmative action (not merely the removal of disputed content) to remedy any damage caused by algorithmic bias.

In some jurisdictions, algorithms are also beginning to be used in the legal field to make decisions, for example, on parole or bail. Professor Pollicino observes different approaches in the United States, where there is considerable trust in digital tools, and in Europe. "In particular," he concludes, "for Italy, I would speak of digital humanism: the jurisprudence of the Council of State makes it clear that an algorithm cannot be the exclusive element of evaluation in any judgment, and always refers the final decision to the 'prudent assessment' of a judge."