The Inadequacy of the Current International Human Rights Regime for Algorithmic Discrimination
With the development of artificial intelligence (AI), algorithms and big data have entered many decision-making processes in both the private and public sectors. Examples include algorithmic decision-making in sentencing, job applications, and loan applications.[1] During these processes, AI collects and learns from data that reflects the social, historical, and political conditions in which the data was created. As a result, existing patterns of structural discrimination are reproduced and aggravated by the use of these AI technologies.[2] For example, in 2017, the American Law Institute approved a proposed final draft of the “Model Penal Code: Sentencing,” which specifically recognizes the value of input from actuarial instruments that “estimate the relative risks that individual offenders pose to public safety through their future criminal conduct.”[3] A growing body of academic research and news reporting shows that algorithms built on incomplete or biased data can amplify discrimination in the results they produce.[4] This blog post explores why current international human rights law and AI regulations are inadequate to address this problem.
Overview of International Human Rights Treaties Prohibiting Discrimination
Article 26 of the International Covenant on Civil and Political Rights (ICCPR) ensures non-discrimination by guaranteeing equal protection of the law.[5] To enforce the treaty, the State Parties are required to take “necessary steps [. . .] to adopt such laws or other measures as may be necessary to give effect to the rights recognized in the present Covenant.”[6]
In addition, the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) ensures the right of racial and ethnic groups to equal enjoyment of their human rights and fundamental freedoms.[7] Article 1 of ICERD defines racial discrimination as “any distinction, exclusion, restriction or preference based on race, colour, descent, or national or ethnic origin which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise, on an equal footing, of human rights and fundamental freedoms in the political, economic, social, cultural or any other field of public life.”[8] This definition captures both direct discrimination, which concerns the purpose of conduct, and indirect discrimination, which concerns its effect.[9] Under this definition, algorithmic discrimination would violate the protected rights even if the discrimination was not intended.[10]
For the implementation of both treaties, all State Parties are required to submit reports regularly to a monitoring committee on how the rights are being protected. Based on these reports, the committees make recommendations to State Parties to ensure that the covenants are being fulfilled.
State Parties’ Hollow Commitment to the International Human Rights Regime
The current international human rights regime is inadequate to address algorithmic discrimination because the major State Parties’ commitment to the relevant treaties is hollow. Nearly eight decades have passed since the adoption of the Universal Declaration of Human Rights in 1948, the foundation of the modern human rights treaties.[11] Yet many technologically advanced State Parties have either not ratified the fundamental rights treaties or not fully incorporated them into their domestic legal systems.[12] For example, China signed the ICCPR on October 5, 1998, but has yet to ratify it.[13] The United States ratified ICERD subject to several reservations, understandings, and declarations (RUDs), stipulating, among other things, that it accepts no obligation under ICERD to restrict U.S. freedom of speech, expression, and association, and no obligation to regulate private conduct more strictly than existing U.S. law does. Furthermore, the United States declared ICERD to be non-self-executing.[14]
Inadequate Guidance for Application in Problems Raised by Technological Development
The current international human rights regime is also inadequate to address algorithmic discrimination because the covenants offer little guidance as to how the treaties would apply to complex problems raised by technological development. Although, on their face, these treaties cover a broad range of human rights violations, none of them mentions the increasingly prevalent problems caused by technological advancement. What makes the regulation of algorithms particularly complicated is the black box problem: algorithms make decisions in a black box, and it is almost impossible even for their own designers to figure out how those decisions are made.[15] The input data is thus the only portion of an algorithm’s decision-making process that can be subjected to regulation. Yet input data that is not facially discriminatory may still reflect existing patterns of structural discrimination. Given this complexity, the problem cannot be tackled by the broad and vague definitions of human rights that appear in the ICCPR, ICERD, and other international human rights treaties.[16]
Domestic AI Regulations as an Inadequate Solution
The inadequacy of international human rights law cannot be cured by domestic AI regulations. In the past few years, the idea of AI nationalism has arisen, stemming from the concern that normative values will end up encoded in AI applications and thus shape how they work and, further, how their users think.[17] AI nationalism incentivizes governments to advance the interests of companies headquartered within their territory[18] rather than to develop fair AI regulatory policies that promote the ethical use of new technology by humanity as a whole.
The idea of AI nationalism has been widely embraced by the leaders of major powers. Russian President Vladimir Putin once stated that whoever leads in the field of AI “will be the ruler of the world.”[19] Similarly, in two separate interviews with WIRED magazine, then U.S. President Barack Obama and French President Emmanuel Macron acknowledged the significance of AI as an instrument of soft power.[20] Because leaders in this field enjoy such advantages, states engaged in the race in varying capacities do not necessarily share an incentive to develop a common set of rules governing the creation and use of such technology.[21]
To conclude, the current international human rights regime is inadequate to address algorithmic discrimination because the major State Parties’ commitment to the relevant treaties is hollow and because the covenants offer little guidance on how international human rights treaties would apply to the complex challenges raised by technological development. Nor can this inadequacy be cured by domestic regulations, because AI nationalism distorts the interests of major State Parties’ governments.
- John-Stewart Gordon, The Impact of Artificial Intelligence on Human Rights Legislation: A Plea for an AI Convention 46 (2023). ↑
- Kristian P. Humble & Dilara Altun, Artificial Intelligence and the Threat to Human Rights, 24 J. Int’l L. 13 (2020). ↑
- John Villasenor & Virginia Foggo, Algorithms and sentencing: What does due process require?, Brookings (Mar. 21, 2019), https://www.brookings.edu/articles/algorithms-and-sentencing-what-does-due-process-require/. ↑
- Molly Callahan & Jackie Ricciardi, Algorithms Were Supposed to Reduce Bias in Criminal Justice—Do They?, The Brink (Feb. 23, 2023), https://www.bu.edu/articles/2023/do-algorithms-reduce-bias-in-criminal-justice/. ↑
- International Covenant on Civil and Political Rights art. 26, Dec. 16, 1966, 999 U.N.T.S. 171 [hereinafter ICCPR]. ↑
- ICCPR art. 2(2). ↑
- International Convention on the Elimination of All Forms of Racial Discrimination, Dec. 21, 1965, 660 U.N.T.S. 195 [hereinafter ICERD]. ↑
- ICERD art. 1. ↑
- Humble & Altun, supra note 2, at 14. ↑
- Id. ↑
- Onur Bakiner, The Promises and Challenges of Addressing Artificial Intelligence with Human Rights, [Volume #] Big Data & Soc’y 1, 8 (2023). ↑
- Id. ↑
- Ratification Status for China, UN Treaty Body Database, https://tbinternet.ohchr.org/_layouts/15/TreatyBodyExternal/Treaty.aspx?CountryID=36&Lang=EN (last visited Feb. 4, 2024). ↑
- Maya K. Watson, The United States’ Hollow Commitment to Eradicating Global Racial Discrimination, 44 Hum. Rts. Mag. (2020), https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/black-to-the-future-part-ii/the-united-states–hollow-commitment-to-eradicating-global-racia/. ↑
- Anna Su, The Promise and Perils of International Human Rights Law for AI Governance, 4 L., Tech. & Hum. 166, 168 (2022). ↑
- Humble & Altun, supra note 2, at 15. ↑
- Su, supra note 13, at 171. ↑
- Id. ↑
- James Vincent, Putin Says the Nation That Leads in AI ‘Will Be the Ruler of the World’, The Verge (Sept. 4, 2017, 4:53 AM), https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world. ↑
- Scott Dadich, The President in Conversation With MIT’s Joi Ito and WIRED’s Scott Dadich, WIRED (Aug. 24, 2016), https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/; Nicholas Thompson, Emmanuel Macron Talks to WIRED About France’s AI Strategy, WIRED (Mar. 31, 2018), https://www.wired.com/story/emmanuel-macron-talks-to-wired-about-frances-ai-strategy/. ↑
- Su, supra note 13, at 171. ↑