
The Laws of AI Security
As with every aspect of our lives, the field of national security is increasingly affected by technology, including artificial intelligence (AI). Although technology per se may be assumed to be neutral, neither ‘good’ nor ‘evil’, and this assumption holds particularly true in the field of security, the sensitive nature of security matters calls for close attention to the impact of these technologies on the legal protection of rights and freedoms and, ultimately, on the basic features of the rule of law.
We address the legal consequences of advanced technology in national security from two main angles: on one side, technology is a powerful tool in the hands of terrorists, exploited by them and their organizations to serve criminal purposes; on the other, it is an essential ally for public authorities, and for other actors cooperating with them, in preventing and countering terrorism. Examining these two sides in parallel is crucial to gaining a full understanding of both the bright and the dark sides of technology.
With this in mind, and through a comparative lens, we reflect on what law, i.e. legal regulation, can or cannot (and should or should not) do, what its potential and limits are, and how it must interact with entities different from traditional regulatory bodies (public authorities), such as internet platforms and the so-called giants of technology, or Big Tech.
Regarding the rights-security relationship, we point out that it is increasingly becoming a matter for private actors, thus losing its traditional connection with sovereignty and the public sphere. This is far from a merely theoretical issue, since private bodies follow a completely different pattern from public authorities: they are driven by market and competition concerns, which may distort, or at least change, the modus operandi when it comes to balancing security with rights.
Concerning regulatory aspects, it is well known that several postures exist, from attempts at omnibus, rights-centered regulation, as in the European Union (EU) with the recent AI Act, to US deregulation and to the Chinese state-centric vision. All of these approaches, derived from different legal cultures and political choices, share the same drawback: none lays down clear rules, or at least principles, for cases where AI is pivotal for security purposes. This is why we suggest a more sectoral approach, with some form of lex specialis for advanced technology in counter-terrorism. It should not, however, leave Big Tech behind: these companies cannot be the leaders in regulating AI and security, but they are key actors within a balanced and realistic framework. We highlight that some steps towards these goals have been taken, e.g. within the EU, but the effort could be improved.
Turning to geopolitical considerations, different stances on how to regulate technology, rights and security in different parts of the world have led to a struggle for predominance in the field. The EU has tried to gain leadership through the so-called Brussels effect, which, however, is likely to be scaled down in light of recent events, in which private powers have played an increasingly important role, especially in some areas of the world.
Against this background, it is difficult to foresee a ‘winning model’ for handling technology, security and rights, as this is closely connected with actors’ political power as well as the socioeconomic context. However, we argue in favor of an approach that maximizes the protection of rights globally, rather than merely pursuing market leadership, thus keeping the rights-security relationship within the framework of the rule of law.
To advance this challenging process, we offer some suggestions. For instance, discussion among representatives of the EU and third countries might foster a cultural change towards better extra-EU standards. In parallel, the introduction of incentives such as tax relief for companies providing their services within the EU, alongside the requirement to comply with its rights-protecting legislation, could offset the costs of compliance, which might otherwise make the EU market increasingly unattractive to companies from third countries.
In short, the ultimate goal should be to move beyond struggles for unilateral leadership, in favor of a well-balanced governance approach that takes into account all the stakeholders involved.