Confronting Catastrophic Risk: The International Obligation to Regulate Artificial Intelligence

While artificial intelligence (“AI”) holds enormous promise, many experts in the field warn that there is a non-trivial chance that the development of AI poses an existential threat to humanity. Existing regulatory initiatives do not address this threat; they focus instead on discrete AI-related risks such as consumer safety, cybersecurity, data protection, and privacy. In the absence of regulatory action to address the possible risk of human extinction caused by AI, the question arises: what obligations, if any, does public international law impose on states to regulate AI's development?

At present, there is no scientific consensus on the exact probability of this threat; however, it is generally agreed that the risk is non-zero. Given the potential magnitude of the harm, we argue that states are under an international legal obligation to mitigate the threat of human extinction posed by AI. We ground our argument in the precautionary principle. Often invoked in relation to environmental regulation and the regulation of potentially harmful technologies, the principle holds that, where there is potential for significant harm, preventive measures should not be postponed for lack of full scientific certainty if delay may result in irreversible consequences.

We argue that the precautionary principle is a general principle of international law and, therefore, that the right to life within international human rights law imposes a positive obligation on states to take proactive regulatory action to mitigate the potential existential risk of AI. This is significant because, if an international obligation to regulate the development of AI can be established, the basic legal framework would be in place to address this evolving threat. Currently, no such framework exists.