Human rights in the digital age - Can they make a difference?
Keynote speech by Michelle Bachelet, UN High Commissioner for Human Rights
Japan Society, New York, 17 October 2019
Distinguished panelists,
Colleagues, Friends,
My thanks to the Center for Human Rights and Global Justice at New York University, Amnesty International and the Guardian newspaper for inviting me to what promises to be an extremely important and vibrant event.
Focusing on human rights in the digital age is key. Data collection is already happening on an industrial scale. States, political parties, various organizations and, in particular, businesses hold remarkably detailed and powerful information about us. More and more aspects of our lives are being digitally tracked, stored, used – and misused. Just think, all of us here today with a smartphone will have created a digital trail leading right to this room.
Digital technology already delivers many benefits. Its value for human rights and development is enormous. We can connect and communicate around the globe as never before. We can empower, inform and investigate. We can use encrypted communications, satellite imagery and data streams to directly defend and promote human rights. We can even use artificial intelligence to predict and head off human rights violations.
But we cannot ignore the dark side. I cannot express it more strongly than this: The digital revolution is a major global human rights issue. Its unquestionable benefits do not cancel out its unmistakable risks.
Neither can we afford to see cyberspace and artificial intelligence as an ungoverned or ungovernable space – a human rights black hole. The same rights exist online and offline. The UN General Assembly and the Human Rights Council have affirmed this.
Friends and colleagues,
We should not feel overwhelmed by the scale or pace of digital development, but we do need to understand the specific risks.
A lot of our attention is rightly focused on challenges to freedom of expression online and incitement to hatred and violence. Online harassment, trolling campaigns and intimidation have polluted parts of the internet and pose very real offline threats, with a disproportionate impact on women. In the deadliest case, social media posts targeted the Rohingya community in Myanmar in the run-up to the mass killings and rapes in 2017. Human rights investigators found that Facebook – and its algorithmically driven news feed – had helped spread hate speech and incitement to violence.
These grave violations of human rights leave no room for doubt. Threats, intimidation, and cyber-bullying on the internet lead to real-world targeting, harassment, violence and murder, even to alleged genocide and ethnic cleansing. Failure to take action will result in further shrinking of civic space, decreased participation, increased discrimination, and a continuing risk of lethal consequences – in particular for women, minorities and migrants, for anyone seen as "other".
But over-reaction by regulators to rein in speech and use of the online space is also a critical human rights issue. Dozens of countries are limiting what people can access, curbing free speech and political activity, often under the pretext of fighting hate or extremism. Internet shutdowns seem to have become a common tool to stifle legitimate debate, dissent and protests. The NGO Access Now counted 196 shutdowns in 25 states in 2018, almost three times the number (75) recorded in 2016.
Some States are deliberately tarnishing the reputations of human rights defenders and civil society groups by posting false information about them or orchestrating harassment campaigns. Others are using digital surveillance tools to track down and target rights defenders and other people perceived as critics.
Friends and colleagues,
Alongside these very real dangers – under-regulation, over-regulation and deliberate misuse – we are also seeing unprecedented risks to the right to privacy. Safeguards around privacy are failing in far too many cases. Many might be completely unaware of who holds their data or how it is being used.
And because data is held on a vast scale, the risks and impacts of its misuse are also vast. The dark end of the digital spectrum threatens not just privacy and safety; it undermines free and fair elections, jeopardises freedom of expression, information, thought and belief, and buries the truth under fake news. The stakes could not be higher – the direction of countries and entire continents.
This is significant not just as a privacy issue, but in relation to the large-scale harvesting and misuse of data, and the manipulation of voters. We have seen this reported in the US presidential election, the UK’s Brexit referendum, and polls in Brazil and Kenya. Newspapers including the Guardian, along with dedicated public officials, have been instrumental in bringing some of these abuses into the public domain.
As the digital revolution continues to unfold, the use of technology for both legitimate and illegitimate purposes will increase. States and businesses are already using data-driven tools that can identify individuals as potential security threats, including at borders and in criminal justice systems. Artificial intelligence systems assess and categorize people; draw conclusions about their physical and mental characteristics; and predict their future medical conditions, their suitability for jobs, even their likelihood of offending. People’s profiles, “scoring” and “ranking” can be used to assess their eligibility for health care, insurance and financial services.
So alongside the human rights abuses I've described, we find a whole new category – this time not necessarily deliberate, not the result of a desire to control or manipulate, but rather the by-products of a legitimate drive for efficiency and progress.
Real world inequalities are reproduced within algorithms and flow back into the real world. Artificial intelligence systems cannot capture the complexity of human experience and need. Digital systems and artificial intelligence create centers of power, and unregulated centers of power always pose risks – including to human rights.
We already know what some of these risks look like in practice. Recruitment programs that systematically downgrade women. Systems that classify black suspects as more likely to reoffend. Predictive policing programs that lead to over-policing in poor or minority-populated areas. The people most heavily impacted are likely to be at the margins of society. Only a human rights approach – one that views people as individual holders of rights, empowers them, and creates a legal and institutional environment in which they can enforce their rights and seek redress for violations and abuses – can adequately address these challenges.
Friends and colleagues,
Digital technology is being used not just to monitor and categorize, but to influence. Our data is not just digitized, but monetized and politicized. Digital processes are now shaping us as well as serving us. We are right to feel profoundly concerned about how Big Data, artificial intelligence and other digital technologies are impacting our lives and society.
We are also right to highlight the situation of people who work in the digital industry, often in precarious employment or the gig economy, who lose all the benefits that come with secure jobs. It’s essential that they can enjoy their full human rights, including the right to join unions and to strike. In some cases, this may help curb business excesses.
These challenges drive us back to the timeless principles of the Universal Declaration of Human Rights. Each person is equal, an individual with inalienable rights and inherent dignity. Each person has the right to live his or her life free from discrimination, to political participation, privacy, health, liberty, a fair trial. Each person has the right to life.
To respect these rights in our rapidly evolving world, we must ensure that the digital revolution is serving the people, and not the other way round. We must ensure that every machine-driven process or artificial intelligence system complies with cornerstone principles such as transparency, fairness, accountability, oversight and redress.
But whose responsibility is it to tackle these multiple, complex risks that cross cultures, national boundaries and legal jurisdictions? States that hold the primary duty to protect human rights and ensure remedies? Businesses that can change the way they work? International organizations that can seek cross-border solutions? Academics? Journalists? Parliamentarians? Human rights defenders? NGOs and civil society groups?
I believe the answer is all of the above, in partnership, with a sense of shared responsibility and ownership. We need a universal human response in defense of universal human rights.
And do we address these challenges using ethics or human rights? It is very encouraging that some States, regional blocs, businesses, academics and other passionate, far-sighted people have shown great leadership in developing ethical guidelines to overcome injustice and discrimination. But I also believe that guidelines, codes of conduct and voluntary compliance are not, by themselves, a robust enough response to the scale of the challenges we face.
Data is power, Big Data is big power – and all power is capable of being misused. This is true in any context, and the digital world is no different. The international human rights framework takes us further than ethics alone in placing the necessary checks and balances on this power. It provides a concrete, legal foundation on which States and firms can build their responses in the digital age. It provides very clear guidance on acceptable behavior – and equally importantly, it has already been established and agreed to by States. Alongside the Universal Declaration, we have numerous conventions, treaties, courts, commissions and other institutions that can hold States and companies to account.
Human rights and ethical approaches do not run counter to each other. As a recent World Economic Forum publication on the responsible use of technology makes clear, they can work alongside each other, resulting in a powerful combination where human rights reinforce ethics, and ethics reinforce human rights.
In fact, if we are to get the very best from the digital revolution, we need this kind of non-binary thinking in all our responses with the human rights framework as a guiding compass. A human rights framework and ethical standards. Obligations and responsibilities. States and businesses. Artificial intelligence and human dignity. Guarantees of free speech and clear protection from hate speech.
This means robust responses from governments, with policies that incorporate a duty to protect the full range of rights – with due consideration to social, cultural, and economic rights – when laws, guidelines and regulations are drawn up. It means tech giants showing leadership in their business practices. It means empowering people to control decisions on use of their personal data. It means ensuring the marginalized and poorest sections of our societies have access to remedies when their data is misused, or when they are subject to discriminatory decisions from automated decision-making processes. It means conducting human rights impact assessments at every stage of the development and deployment of artificial intelligence systems – this is a very important area where companies and researchers can show responsibility and leadership.
But governments and companies do not need to start from scratch. Alongside the UN Guiding Principles on Business and Human Rights, we have excellent examples of guidance in specific sectors, such as the European Union’s ICT Sector Guidance on implementing the Guiding Principles, the Telecommunications Industry Dialogue and the GNI Principles and Guidelines.
Friends and colleagues,
There is no part of the digital revolution that cannot and should not be viewed from a human rights perspective. We need to constantly seek out and assess gaps in protection. This doesn’t just mean passing new laws that keep pace with digital developments, but also adapting the way we use institutions and processes. We need institutions that keep the power of data-driven companies and States in check. We can protect rights effectively only if we constantly fine-tune our processes to find the right mix of interventions.
Government-led regulation of online space can of course raise its own issues, particularly if the fundamental guarantees of the rule of law are not respected: equality under the law, fairness, and accountability.
Let’s not forget: whenever we regulate social media, we determine what people are able to say, and what they can see and hear, in what has become a dominant arena for public debate and public life. So our interventions must be well-designed and avoid overreach at all costs. If regulation is needed, we should explore focusing on the conduct of platforms rather than viewpoint-based regulation. The best solutions will be found by working in partnership, sharing best practices, and studying the detailed outcomes of national regulatory systems, including any unintended consequences.
Friends and colleagues,
There is already an urgent need for governments, social media platforms and other businesses to protect the fundamental pillars of democratic society, the rule of law, and the full range of our rights online: a need for oversight, accountability and responsibility. As the digital frontiers expand, one of our greatest challenges as a human rights community will be to help companies and societies to implement the international human rights framework in the land we have not yet reached. This includes clear guidance on the responsibilities of business as well as the obligations of States.
At its best, the digital revolution will empower, connect, inform and save lives. At its worst, it will disempower, disconnect, misinform and cost lives.
Human rights will make all the difference to that equation.
Thank you.