Facial recognition technology is redefining the relationship between the individual and the state, shifting the balance between security and freedom. When the law is weak or slow to respond, it risks legitimizing abuse instead of preventing it. Constitutional safeguards must be written not only into the code of law, but into the very code that governs computers.

Artificial intelligence (AI) poses profound and unprecedented challenges to constitutional law. Facial recognition technology (FRT) is among the most emblematic of these challenges, because it reveals how the vertical relationship of power between the individual and the state is being recalibrated in the context of AI. Against this backdrop, my research, “Constitutional Safeguards in the Age of AI. A Study on the Fundamental Rights Impact Assessment of Facial Recognition Technology”, uses FRT as a laboratory to examine the tensions between security, fundamental rights and AI governance.

The central intuition behind this research is that abuses of power may occur not despite the law, but through the law. The real risk of FRT is not only its misuse in practice, but its normalization through legislation that inadequately structures the procedures of authorization, application and oversight. If the law is weak, vague or discretionary, even a formally correct deployment will hollow out rights protections. This perspective guided the thesis: to test whether European regulation, and in particular the AI Act, truly embeds safeguards that can withstand the constitutional pressure of biometric surveillance.

Two cases exemplify the issues at stake. The Clearview AI scandal, in which billions of images were scraped without consent and used to build biometric databases, triggered regulatory interventions and sanctions across Europe and catalysed the drafting of the AI Act. The judgment of the European Court of Human Rights in Glukhin v. Russia confirmed the dangers of the indiscriminate use of FRT in public surveillance, stressing not only its substantial interference with fundamental rights but also its chilling effect on the enjoyment of democratic freedoms.

In this context, the AI Act represents the first regulatory response, introducing a lex specialis for biometric identification systems. Among other obligations, a Fundamental Rights Impact Assessment (FRIA) is required for certain high-risk uses of AI, especially when a public authority is involved. Yet the instrument, as designed, places the burden primarily on deployers, the entities that use the technology. It risks becoming a procedural formality, performed after technologies are already embedded in law enforcement practices. The thesis argues that this approach is insufficient: a FRIA carried out only at the point of deployment comes too late, because it cannot correct a legislative framework that has already failed to set substantive limits.

For this reason, the thesis reconceptualizes the FRIA as a normative tool. It must be embedded at the stage of law-making, guiding Member States as they transpose the AI Act into national procedures. In the year ahead, legislators will have to define who authorizes the use of FRT, under what conditions, within what limits and subject to which forms of oversight. The model I propose provides constitutional criteria for this task. It requires an ex ante test of legality and necessity before authorization regimes are designed. It insists on mapping which rights are structurally most exposed, recognizing the particular vulnerability of certain fundamental rights, such as data protection and the right to an effective remedy (Articles 8 and 47 CFREU). It embeds remedial capacity into the very architecture of the law, ensuring that individuals are able to discover, challenge and seek review of FRT use.

The analogy with wiretapping is instructive. There too, the law requires a prior judicial order and procedural guarantees, precisely because the power is too intrusive to be left to administrative discretion. Biometric surveillance requires the same constitutionalization. To regulate only the “use” by deployers, without embedding safeguards in the legislative design, would repeat the mistake of treating legality as sufficient in form while denying justice in substance.

Facial recognition is, therefore, more than a technological challenge: it is a test of whether constitutional law can prevent rights erosion through anticipatory design. A constitutionalized FRIA, placed at the level of legislation, enables Member States to avoid making the law itself the channel through which constitutional guarantees are emptied out in the age of AI.

FEDERICA PAOLUCCI

Bocconi University
Department of Legal Studies