The Balancing Act: How the European Union’s Artificial Intelligence Act Properly Frames the Dangers and Benefits of Using Biometric Identification to Combat Human Trafficking

Introduction

As technology advances, States must make important decisions about how to legislate and regulate the use of Artificial Intelligence (“AI”). In making these decisions, States must balance their concerns about privacy and data protection[1] with their interests in innovating for the protection or benefit of their citizens.[2] This balancing act proves particularly difficult when addressing human rights issues, including human trafficking.[3] One AI-driven solution that several States have considered to combat human trafficking is integrating biometric recognition systems,[4] sometimes also referred to as biometric identification systems, into their databases. Biometric recognition is the “[a]utomated recognition of individuals based on their biological and behavioural characteristics.”[5] By tracking individuals based on these unique characteristics, some States hope to make significant strides in efficiently and effectively locating traffickers and trafficking victims.[6] While this technology holds the potential to significantly aid victims, it also raises substantial privacy concerns for everyday individuals who will be tracked without their consent.[7] The question remains: is there a way for governments to implement biometric technology while maintaining their own ethical obligations? The European Union (“EU”) is one governmental body that has legislated effectively on this issue. The EU’s “Artificial Intelligence Act”[8] should serve as a model for other States seeking to implement biometric identification, as it balances the dangers and the benefits of using this technology.

Negative Impact of Biometric Recognition

Concerns about the use of biometric recognition stem from the fear that citizens’ nonconsensual exposure to these large data sets will result in a loss of privacy, particularly for the most vulnerable communities. Countries have already acted recklessly with biometric data, resulting in their citizens’ data being exposed.[9] For example, when the United States had troops stationed in Afghanistan, it collected biometric data on Afghan citizens to build a database of their “fingerprints, iris scans, and facial images.”[10] It did so to ensure that the United States military could “confirm whether or not someone was an ally and identify and track threats.”[11] When the United States withdrew its troops, it left some of this data behind, enabling the Taliban to seize and use the data for predatory purposes.[12] One concern among human rights advocates was that the Taliban would “hijack and use the data to identify and target individuals who worked with opposing forces.”[13] Countries have also shown how easily biometric data can be used to police their citizens, leaving vulnerable communities exposed. In Iran, when the government enacted new laws on women wearing hijabs, it deployed facial recognition technology to track which women broke those laws.[14] As a result, women who defied the laws were identified within seconds, taken into state detention, and forced to make confessions.[15] These serious negative implications have led some States to advocate for a complete ban on State collection of biometric data;[16] a total ban, however, would also foreclose constructive uses of the same information.

The EU Legislation on AI Addresses the Concern

The European Union’s Artificial Intelligence Act balances the data privacy concerns surrounding biometric identification with the positive impact the technology can have on human trafficking victims. It achieves this balance by defining two types of biometric identification: (1) real-time remote biometric identification systems and (2) post-remote biometric identification systems. A real-time remote biometric identification system is defined as “a remote biometric identification system” whereby the data is captured and compared almost instantaneously.[17] A post-remote biometric identification system is defined as “a remote biometric identification system other than a real-time remote biometric identification system.”[18] The benefit of categorizing biometric identification systems this way is that it isolates what makes the technology so concerning while providing a carve-out for the uses that could most benefit trafficking victims.

A real-time remote biometric identification system is the type of AI that poses the greatest threat to individual liberties, as it provides access to instantaneous data obtained without consent. The EU also notes that such systems “evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.”[19] They can also produce biased results that lead to discriminatory effects.[20]

To address these surveillance concerns, the EU placed additional restrictions on real-time remote biometric identification systems in its legislation. Namely, the EU allows the use of this technology only when it is “strictly necessary to achieve a substantial public interest.”[21] Substantial public interests include searching for missing persons, addressing threats to life, combatting terrorist attacks, and locating or identifying suspects of certain criminal offenses, including human trafficking.[22] The EU also requires that real-time biometrics be used in a “responsible and proportionate manner”[23] and only to confirm a targeted individual’s identity.[24] In emergencies, States may use real-time biometrics, but they must, within twenty-four hours, provide reasons why the technology had to be used; if the usage is not approved, it must cease immediately.[25] Lastly, market surveillance authorities and the national data protection authorities must submit annual reports on the usage of real-time remote biometric identification systems.[26]

All these safeguards are put in place to mitigate people’s concerns about biometric systems impacting their privacy. Even so, the EU preserves carve-outs for using this technology to combat human trafficking.[27] The Act thus strikes a careful balance: it ensures that States do not abuse their power while maintaining a way for them to use AI effectively.

Conclusion

Because the EU makes a conscious effort to address data privacy concerns while still allowing the use of biometric technology, its Act offers a strong starting point for other countries seeking to use this technology safely and effectively to combat human trafficking. The use of AI presents myriad data and privacy risks; States can, however, enact legislation that addresses these risks while capturing the technology’s benefits.

  1. See generally Sylvia Lu, Data Privacy, Human Rights, and Algorithmic Opacity, 110 Cal. L. Rev. 2087 (2022) (arguing that AI products have the potential to be invasive to a point that infringes both data privacy and human rights).

  2. See generally Orly Lobel, The Law of AI for Good, 75 Fla. L. Rev. 1073 (2023) (explaining how the implementation of public policy can be used to implement AI in ways that are positive).

  3. Id. at 1122.

  4. Int’l Crim. Police Org. [INTERPOL], INTERPOL Unveils New Biometric Screening Tool (Nov. 29, 2023), https://www.interpol.int/en/News-and-Events/News/2023/INTERPOL-unveils-new-biometric-screening-tool.

  5. Int’l Org. for Standardization [ISO], Information Technology — Vocabulary — Part 37: Biometrics, ISO/IEC 2382-37:2022, https://www.iso.org/obp/ui/#iso:std:iso-iec:2382:-37:ed-3:v1:en.

  6. Kassandra Jones, Tech vs. Human Trafficking (Mar. 4, 2020), https://preventht.org/reference/tech-vs-human-trafficking.

  7. Lu, supra note 1.

  8. European Parliament and Council Regulation 2024/1689, 2024 O.J.

  9. Drew Harwell, Ukraine is Scanning Faces of Dead Russians, then Contacting Their Mothers, Wash. Post (Apr. 15, 2022), https://www.washingtonpost.com/technology/2022/04/15/ukraine-facial-recognition-warfare/ [https://perma.cc/JKR6-6KPQ].

  10. Id.

  11. Anne Dulka, The Use of Artificial Intelligence in International Human Rights Law, 26 Stan. Tech. L. Rev. 316, 357 (2023).

  12. Id.

  13. Id.

  14. Arab News, New Iranian Hijab Law Set to Be Enforced by Facial-Recognition Technology (Sept. 5, 2022), https://arab.news/zapzp.

  15. Id.

  16. Lobel, supra note 2, at 1087–88.

  17. European Parliament and Council Regulation, supra note 8, at 48.

  18. Id.

  19. Id. at 9.

  20. Id.

  21. Id.

  22. Id.

  23. European Parliament and Council Regulation, supra note 8, at 10.

  24. Id.

  25. Id.

  26. Id. at 11.

  27. Id. at 52.