Office of the High Commissioner for Human Rights
Türk calls for attentive governance of artificial intelligence risks, focusing on people’s rights
30 November 2023
Delivered by
Volker Türk, UN High Commissioner for Human Rights
At
Generative Artificial Intelligence and Human Rights Summit
Distinguished friends and colleagues,
This is a pivotal moment in history. The 75th anniversary of the Universal Declaration of Human Rights coincides with a period of rapid and transformative changes in the field of artificial intelligence, particularly generative AI.
The emergence of generative AI presents a paradox of progress. On one hand, it could revolutionize the way we live, work, and solve some of our most complex challenges. On the other, it heightens profound risks that could undermine human dignity and rights. This makes it crucial to embed human rights throughout the lifecycle of AI technologies, with a concerted effort by Governments and corporations to establish effective risk management frameworks and operational guardrails.
I am increasingly alarmed about the capacity of digital technologies to reshape societies and influence global politics. Some 70 elections are scheduled in 2024 – covering countries where 4 billion people live – and digital fakes and disinformation campaigns could play a role. It is essential that we stand as an unassailable pillar against disinformation and manipulation.
There must be a comprehensive evaluation of the multiple fields in which AI could have transformative impact – including potential threats to non-discrimination, political participation and access to public services, as well as the erosion of civil liberties. This is why I am pleased to see the release today of the B-Tech 'Taxonomy of Generative AI Human Rights Harms', which can contribute to a broader understanding of current and emerging risks.
Above all, generative AI needs governance. And that governance must be based on human rights. It must also advance responsible business conduct, and accountability for harms to which corporations contribute.
The UN Guiding Principles on Business and Human Rights, and the OECD Guidelines for Responsible Business Conduct – both of which are widely used – offer robust guardrails for States and corporations, and set the stage for responsible development of AI. But the UN Guiding Principles and OECD Guidelines alone will not be sufficient to address the challenges posed by AI. Potential misuse of AI technologies by States, or by criminal gangs, requires a range of legal, regulatory, and multilateral frameworks. All of these need to be anchored in international human rights norms, including the standards that have already established the human rights responsibilities of businesses and investors.
Currently, we are seeing wide recognition of the need for AI governance – but the multiple policy initiatives underway are largely inconsistent, and they frequently fail to give human rights the appropriate emphasis. This risks leading to a fragmented regulatory landscape, with varying definitions of ethical conduct and acceptable risk.
A structured initiative like the B-Tech Generative AI Project can provide a clearer understanding of AI's potential human rights impacts, and clarity about the action needed from States and companies – lighting the road to more coherent governance.
We need all States to protect individuals from human rights abuses induced by AI. This means all States need to align their regulatory frameworks with their obligations under international human rights law.
Corporations need to ensure that their AI algorithms, operational processes, and business models respect human rights. This requires active due diligence that focuses on the most at-risk populations and puts the prevention of human rights abuses at the centre of design and business decisions.
My Office will continue to call attention to the need for effective remedy for victims of AI-induced human rights abuse. Technology companies must recognize their responsibility, and the social benefits of contributing to systems of remedy that work. And ultimately, States have a fundamental duty to ensure the remediation of human rights harms, including by requiring businesses to take appropriate action.
Generative AI is not a local or national phenomenon. It will have an impact on everyone – and it demands a global, collaborative approach. We need to make sure that protecting people's rights is at the centre of that approach. This requires not just dialogue, but action – action that draws upon the collective wisdom and guidance of established frameworks. Our collaboration should unite States, corporations, civil society, and individuals in a shared mission: to ensure that AI serves humanity's best interests, co-creating a world in which technology does not just serve the interests of the wealthy and powerful, but enables the universal advancement of human dignity and rights.
Thank you.