

Presentation by Christof Heyns, Professor of Human Rights Law at the University of Pretoria and United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions, at the informal Meeting of Experts organized by the States Parties to the Convention on Certain Conventional Weapons, 13–16 May 2014, Geneva, Switzerland

13 May 2014

Autonomous weapons systems and human rights law

Chairperson
Excellences
Ladies and Gentlemen

It is indicative of the welcome recognition by the international community of the need for a holistic approach to this issue that the Human Rights Council heard me speak in May of last year on the international humanitarian law (IHL) implications of LAWS, and that the CCW, which normally deals with IHL, is willing today to consider the human rights aspects.

You may have noticed that I have not used the word ‘lethal’ in the heading of my paper for today. Instead of addressing the specialized case of lethal autonomous weapons systems I will be addressing the use of autonomous weapons systems to project force in general; lethal and non-lethal force; AWS and not LAWS.  The reason is as follows:

As I understand it, our common concern is with the autonomous use of force against human beings. IHL – and by extension the CCW - deals with situations of armed conflict where the kind of force that is used against people is usually lethal force. The intentional use of non-lethal force is the exception. As a result, this conference deals with Lethal Autonomous Weapons Systems or LAWS.  However, in the human rights context the expectation is that if force is used against humans it will normally not be lethal. Lethal force is the exception under international law enforcement standards.

Since AWS are weapon platforms, they can be used in such a context to deploy various types of lethal or less-lethal weapons. As a result, in considering the possible application of force by autonomous weapon platforms in the human rights context, the discussion cannot be confined to the use of lethal force; all forms of the use of force must be considered, lethal as well as less lethal. In this context the use of force, even if it does not constitute a violation of the right to life, can still violate other rights concerning bodily security.

It is an open question exactly when a system should be described as autonomous. A definition of AWS that is widely used is ‘systems that, once activated, can select and engage targets without further human intervention’. Autonomy is, however, best seen as a continuum of increasing machine decision-making. Some of the problems that arise with fully autonomous systems, however defined, will also present themselves with lower forms of autonomy. It may thus be helpful at this stage – before clear definitions have been formulated – to see AWS as systems at the higher end of the scale of increasing machine autonomy. It should be clear, however, that the autonomy at stake relates to the so-called critical functions – the release of force.

I will consider the human rights implications of AWS under seven main headings.

1) Situations where human rights law may potentially apply to AWS

The legal debate about AWS that has emerged during the past few years has largely left human rights out of the picture, and focused primarily on IHL. Yet, it would be a mistake not to consider the implications of use of force through AWS from a human rights perspective. Given the rapid development of technology, there is certain to be increased pressure for systems for the release of force to become more autonomous.

Human rights law applies to the use of force at all times; it is complementary to IHL during armed conflict, and where there is no armed conflict it applies to the exclusion of IHL. To get the full picture of how to deal with the possible introduction of these weapons, the human rights angle should thus be a central consideration. Is the use of AWS to apply force permissible under human rights law, and if so under what circumstances?

There are three situations where human rights law can potentially apply to AWS:

a) Armed conflict

Human rights law complements IHL rules on the use of force during armed conflict, subject to a number of qualifications, including the proviso that during armed conflict the rules of human rights law are determined with reference to the provisions of the more specialized – and in many ways more permissive – legal rules of international humanitarian law. The important point, however, is that people on both sides of the conflict retain their human rights, such as the right to life and the right to dignity, during armed conflict, even if the contents of the rights may differ according to the context. For their part, the rules of IHL should be interpreted with reference to these rights.

b) Anti-terrorism and other actions in situations that do not constitute armed conflict

Moreover, in situations where the threshold requirements of armed conflict are not met (e.g. in a geographical area that is removed from established battlefields, without a nexus to an armed conflict), the requirements on the use of AWS would be regulated by human rights law only. It has been argued that armed drones have in a number of cases during the last decade and more been used in such contexts, and the same may happen with AWS. This should be treated as a law enforcement situation subject to IHRL and international law enforcement standards, not subject to IHL.

c) Domestic law enforcement

Human rights law, of course, is the relevant legal regime as far as domestic law enforcement – for example by police officers – is concerned. It is conceivable as a practical matter that if AWS were developed and made available to the military forces of a State, those military forces could be deployed in law enforcement operations using AWS. The same State’s law enforcement officials could at some point also decide to use AWS, fitted with lethal or less lethal weapons. In such contexts, the use of force is clearly subject to international human rights law.

There is a burgeoning industry poised to produce AWS manufactured specifically with domestic law enforcement in mind. Possible scenarios gleaned from the marketing literature of some of these companies include the use of AWS in the context of crowd control (for example armored robotic platforms and launchers to disperse demonstrators with teargas or rubber bullets, to inflict powerful electrical shocks from the air, and to mark perceived troublemakers with paint). Such weapon platforms may also be equipped with firearms or light weapons.

Other potential applications of AWS in the domestic law enforcement context include the apprehension of specific classes of perpetrators, such as prison escapees, or rhino or other big-game poachers; or providing perimeter protection around specific buildings, such as high security prisons or in border areas, where stationary systems that spray tear gas may for example be installed. Such systems may also be used to patrol pipelines.

The recently released ‘Riobot’ (not currently autonomous as far as the release of force is concerned) is marketed as being particularly suitable to deal with strikes in the mining industry throughout Africa.

Hostage situations present popular (if sometimes fanciful) hypotheticals – e.g. an AWS can conceivably be programmed to release deadly force, based on facial recognition, against a hostage-taker who exposes himself or herself for a split second, in a situation where a human sniper would be too slow to react.

States and private manufacturers are bound to make such technology available to buyers around the world, in the same way that private security firms have become a global industry. In some cases this will mean that States or other actors that do not necessarily have advanced technological capacity, or experience in dealing with such weapons, may acquire such weapons and place them in the hands of ill-equipped or unaccountable police officers, security guards or other law enforcement officials.

2) Weapons law and human rights law

While international human rights law places stringent restrictions on the use of force and firearms, it poses relatively few limitations of its own on the kinds of weapons that may be manufactured and used. However, in most cases where weapons are illegal under IHL, they may also not be used in a law enforcement context.

IHL has a special branch, weapons law, which deals with the question when weapons should be regarded as unlawful in the context of armed conflict. One of the IHL mechanisms which has a link with human rights law is article 36 of Additional Protocol I to the Geneva Conventions. Article 36 provides that all State parties are required to subject new weapons to a review ‘to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party’. The words ‘or by any other rule of international law’ in the quotation may be interpreted to imply that in order to pass article 36 review, the potential use of such weapons under the applicable human rights law – including the right to dignity – must also be considered.

Another prong of IHL that impacts on the legality of weapons is the so-called Martens Clause, which provides that ‘in cases not covered by the law in force, the human person remains under the protection of the principles of humanity and the dictates of the public conscience.’ Clearly, in the human rights era, the values underlying human rights law will also influence the interpretation given to the Martens Clause. It has been argued that weapons beyond a certain level of autonomy may be considered to violate the principles of humanity and the dictates of the public conscience. Human rights values form an important part of the ‘public conscience’ today, and some of the states that are best known for their research into autonomy in weapons systems pride themselves on their vibrant human rights cultures.

The Martens Clause establishes among other things that the absence of an explicit prohibition does not imply that conduct or weapons are permitted. Does this mean that the burden of proof is on a State that uses such technology to prove that it will be lawful? It is worth making reference here to the approach that is followed in international environmental law. The precautionary principle determines that, in the absence of scientific consensus on whether harm will be caused by an action or policy, the burden of proof is on those wishing to introduce the action or policy.

Given the potential that weapons with high levels of autonomy may end up being used in law enforcement, as well as the increased levels of sophistication and in some cases lethality of so-called less-lethal weapons – and without detracting from the applicability of existing standards – it may be necessary at some point to develop a system analogous to the article 36 procedure for weapons to be used in law enforcement.

3) The human rights that are potentially at stake

The human rights that may potentially be infringed by the introduction and use of AWS to dispense force (lethal or less lethal) most noticeably include the right to life and the right to human dignity. While the focus will primarily be on these two rights, reference will also be made to the right to liberty and security of the person; the right against inhuman treatment; the right to just administrative action; and the right to a remedy.

These rights are widely recognized in the main human rights treaties and in most cases also form part of customary international law. In what follows, I will look at the potential impact of AWS in respect of each one of these rights. For ease of reference I will refer to the articulation of these rights in the International Covenant on Civil and Political Rights (ICCPR) (except in the case of dignity, which is not recognized in the ICCPR as a separate right).

I will also refer to the United Nations Code of Conduct for Law Enforcement Officials (‘Code of Conduct’) and the Basic Principles on the Use of Force and Firearms by Law Enforcement Officials (‘Basic Principles’). The Basic Principles as well as the Code of Conduct set out principles for the use of force by law enforcement officials and require them, in doing so, to uphold human rights as well as ethical considerations.

The cumulative effect of these international standards for law enforcement demonstrates the fundamental incompatibility that exists in many ways between the constitutive values of the human rights regime and the use of AWS. In general it can be said that there is considerably less space for the use of AWS under human rights law than under IHL. The lower the level of control that remains in the hands of humans over the use of AWS (that is, the more human autonomy in this regard is compromised), the greater the concern that the rights under consideration are violated.

a) The right to life

According to article 6 (1) of the ICCPR ‘[e]very human being has the inherent right to life. This right shall be protected by law. No one shall be arbitrarily deprived of his life.’ The term ‘arbitrary’ has a legal as well as an ethical meaning. The primary soft law sources for the interpretation of this right where State agents resort to force are the Code of Conduct and the Basic Principles, mentioned above.

International human rights law imposes a number of rules for the use of force which have titles similar to those used in IHL, but which differ greatly in their content. This includes the rules of necessity and proportionality, which have specific meanings under human rights law. Human rights law does not know concepts such as ‘combatant’s privilege’ or ‘collateral damage’.

‘Necessity’, in the context of human rights law, means that force should only be used as a last resort, and if that is the case, a graduated approach should be followed. Non-violent or less violent means must be used under human rights law if possible. To capture someone who poses a threat and subject that person to a trial is the norm. Force may be used only against a person if that person is posing an imminent threat of violence – normally implying a matter of seconds or even split-seconds. While the hostile intention of the target is irrelevant in the context of IHL, where the focus is on status or conduct, it often plays a decisive role in the human rights context.

‘Proportionality’ also has a distinct meaning in the human rights context. Proportionality in this context sets a maximum on the force that may be used to achieve a specific legitimate purpose: the interest harmed may not exceed the interest protected. The fact that force may be ‘necessary’ does not imply that it is proportionate. Thus, for example, a fleeing thief who poses no immediate danger may not be killed, even if it means the thief will escape, because the protection of property does not justify the intentional taking of life.

Basic Principle 9 deals specifically with firearms:

Law enforcement officials shall not use firearms against persons except in self-defence or defence of others against the imminent threat of death or serious injury, to prevent the perpetration of a particularly serious crime involving grave threat to life, to arrest a person presenting such a danger and resisting their authority, or to prevent his or her escape, and only when less extreme means are insufficient to achieve these objectives. In any event, intentional lethal use of firearms may only be made when strictly unavoidable in order to protect life.

In sum: intentional use of lethal force is only permissible where it is strictly necessary in response to a truly imminent threat to life.

The argument that a deadly return of fire is justified as self-defence, which is often used where police officers employ deadly force, is not available insofar as AWS (or other unmanned systems) are concerned. Intentional deadly force may be used only to protect human life, and not objects such as a machine.

The requirements for the use of force under human rights law are clearly much stricter than under IHL. A case-by-case assessment is needed, not only of each attack, as under IHL, but of each use of force against a particular individual. The same problems that are encountered in the context of armed conflict – whether machines have, or will ever have, the ability to make the qualitative assessments required for the use of force in IHL – exist all the more in the case of the use of force during law enforcement, and additional considerations also apply. It is, for example, very difficult to conceive that machines will be able to ascertain whether a particular person has the intention to attack with sufficient certainty to warrant the release of deadly force. Allowing machines to determine whether to act in defence of others poses grave risks that the right to life will be violated.

It could also be argued that a determination of life and death by a machine is inherently arbitrary, based on the premise that it is an unspoken assumption of international human rights law that the final decision to use lethal force must be reasonable and taken by a human. Machines cannot ‘reason’ in the way that humans do and can thus not take ‘reasonable’ decisions on their own. Article 1 of the Universal Declaration of Human Rights moreover provides that ‘All human beings … are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.’ The language may be outdated, but it is clear that human rights law places a strong emphasis on human reasoning and interaction.

b) The right to human dignity
               
The right to dignity is widely perceived to be at the heart of the entire human rights enterprise. Article 1 of the Universal Declaration of Human Rights provides as follows: ‘All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.’

While dignity is not recognized as a separate right in the ICCPR, it is a constitutive part of a number of the rights contained in that treaty. It is also recognized in several treaties as a separate right, and it is a concept that influences the way in which other rights are interpreted. I would, for example, argue that the notion of the right to life cannot be understood in isolation from the concept of dignity, because it is the value of life that makes it worth protecting.

In the context of the use of force the right to dignity serves primarily to protect those targeted, rather than those who are incidental casualties. This is the case in armed conflict as well as law enforcement situations. It is worth keeping in mind that IHL was established in the first place to protect the dignity of combatants.

As a result of the above, the strong and at times exclusive emphasis in much of civil society activism about ‘killer robots’ on civilian casualties and their right to life in the context of armed conflict could be seen as one-sided. A significant but under-emphasized part of the problem with AWS is their potential impact on the dignity of those targeted. The potential effect of AWS on the dignity of the person targeted comes to the fore even more strongly in the case of law enforcement, where the combatant/civilian distinction does not exist.

One hears too often – and the first few days of this meeting are no exception – that considerations such as the right to dignity may be important ethical concepts, but that they have nothing to do with law and as such do not place legal constraints on the actions of states. This is wrong for a number of reasons.

In the first place, as was set out above, the right to dignity is a legal right, and a central component of the international human rights canon that is enforceable through its mechanisms. This right also plays an important role on the domestic front in the context of the protection of the right to life, in some cases trumping it and in others supporting it. For example, in the well-known German air security case the German Constitutional Court ruled that legislation allowing the Minister of Defence to authorize the shooting down of a civilian aircraft involved in a 9/11-style terrorist attack was unconstitutional, despite the lives that would be saved, inter alia because it would constitute a violation of the right to dignity of those in the airplane.

In other cases courts have ruled, or law makers have argued, that the death penalty (or at least aspects of its implementation) violates the right to dignity. The same applies to life imprisonment without the possibility of parole.

Moreover, as has been stated, the complementarity of IHL and human rights means that human rights rules such as the right to dignity need to be taken into account when IHL rules are interpreted. The Martens Clause, for example, is clearly open-ended and invites such interpretation.

Lastly, it presents a very bleak picture of the international order if ethical norms are explicitly excluded from consideration. An approach that ignores ethical norms presents the spectre of an order that will find itself increasingly unsupported by the fundamental values of the people whose interests it is supposed to serve. Human rights norms such as the rights to life and dignity have to be given content in terms of ethical standards.

Coupled with this is the tendency for people to take the law into their own hands beyond a certain point if their dignity is at stake. As indicated above, the ‘Riobot’ is being developed specifically to control unrest on the mines in Africa. It is not autonomous in its release of force at the moment, but the addition of such a function is technologically not a major step. One can imagine the likely reaction of miners to the indignity of being herded like cattle by autonomous robots; adding insult to injury.

It has been argued that having a machine deciding whether you live or die is the ultimate indignity. This could potentially be extended to the decision by machines to use force in general. Human rights and human dignity are premised on the idea of equal concern and respect for each individual. While AWS may arguably be used to attack property under certain circumstances, humans should not be treated like mere objects.

Death by algorithm means people are treated as interchangeable entities, like pests or objects; as a nuisance rather than as someone with inherent dignity. A decision as far-reaching as the one to deploy force – and in particular deadly force – should only be taken after due consideration by a human being, who has asked the question, in real time, whether there is really no other alternative in the specific case at hand, and who assumes responsibility for the outcome. A machine, bloodless and without morality or mortality, cannot fathom the significance of the killing or maiming of a human being. The use of force against a human being is so far-reaching that each use of force – in IHL language, every attack – requires that a human being should decide afresh whether to cross that threshold.

I am not a psychologist, and do not want to go into this in any detail, but it is difficult not to think about the implications of the claims by psychologists that the proper functioning of the human psyche depends, amongst other things, on the possibility of hope. The harshness of reality is often bearable only because we believe – often against the odds – that the worst will not happen. This is why life-long incarceration without the possibility of parole is seen in many legal systems as unacceptably cruel and inhuman. Knowing that one may at any moment be confronted by a robot that will bring one’s life to an end with all the certainty that science can offer leaves no room for the possibility of an exception; for a rare occurrence of compassion or just a last-minute change of mind. Dignity in many instances depends on hope, and high levels of lethal machine autonomy can do deep damage to our collective sense of worth.

The issue of time seems to play a central role in this context. One of the problems presented by laws that try to regulate such future situations if they arise (such as the German air security law) or computer algorithms that determine when AWS will be allowed to release potentially deadly force, is that they do so in advance, on the basis of hypotheticals, while there is no true and pressing emergency rendering such a far-reaching decision unavoidable. Decision-making about the life and limb of people of flesh and blood by law makers or scientists based on theoretical possibilities contemplated in the halls of the legislature or laboratories risks trivialising the issues at stake; it makes crossing the threshold of using force against another human being easy and routine.

This is not to say that decision-makers who may have to use force in real-life situations should be left with an unfettered discretion on whether to use such force and, if so, how much force may be used; the law should pose certain parameters, such as necessity and proportionality. However, these are general principles, not a priori determinations of how such principles should be applied in concrete cases. That should be left to humans on the ground, with situational awareness; humans who know that they will have to live with the consequences of their actions. Neither laws nor algorithms should make the use of force against people inevitable; it is a responsibility humans cannot shirk.

In addition to the right to life and the right to dignity, several other human rights can be brought into play by the release of force by AWS. They will be discussed in a cursory way below.

c) The right to security of the person

Article 9(1) of the ICCPR provides that:  

Everyone has the right to liberty and security of person. No one shall be subjected to arbitrary arrest or detention. No one shall be deprived of his liberty except on such grounds and in accordance with such procedures as are established by law.

The right to security in this article covers the infliction of life-threatening as well as non-life-threatening injuries, for example during arrest. Using AWS to apply lethal or less-lethal force can thus potentially constitute a violation of article 9.

The Human Rights Committee is in the process of adopting a General Comment (No. 35) on article 9, stating (in paragraph 12) that: ‘The notion of “arbitrariness” is not to be equated with “against the law”, but must be interpreted more broadly to include elements of inappropriateness, injustice, lack of predictability, and due process of law, as well as elements of reasonableness, necessity, and proportionality.’ This understanding of the prohibition on ‘arbitrary’ deprivation of liberty is similar to the one that applies to deprivations of the right to life. There seems to be little reason not to extend it also to bodily security as protected in article 9.

d) The right against inhuman treatment

Article 7 of the ICCPR provides that:

No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment. In particular, no one shall be subjected without his free consent to medical or scientific experimentation.

Since machines are not humans, it can be argued that the application of force by a machine to a human being without direct human involvement and appropriate levels of control is inherently, or by definition, ‘inhuman’ treatment. The same argument can be made about a system that allows animals – such as trained dogs – to be used against people where there is not strict control by humans.

e) Just administrative action

Legal systems around the world recognize everyone’s right to just administrative action. This right requires, at a minimum, that those affected by executive decisions be ‘heard’ and that someone apply his or her mind to the situation at hand.

As is the case in other areas of law, it has rarely been stated explicitly that a human and not a machine must take these decisions, but there has so far not been a need to make this clear. This appears to be a hidden assumption of administrative law. Such an approach would provide support for the notion that there should be a deliberative process when force is used by the authorities, as part of the continuous exercise of discretion by a human, as opposed to machine decision-making. Since the use of force by a law enforcement official is often irreversible, and ordinary appeal procedures do not provide protection, the person affected must at least be able to appeal to the humanity of the person exercising the executive power.

There is an emerging school of thought that the use of force in an armed conflict is an administrative act, which requires the exercise of human discretion; it is much easier to make the case in a situation of law enforcement, where the right to just administrative action is well established.

It should be noted that in response to the emergence of technology, some states are limiting the computerization of executive power. For example, article 15 of EU Directive 95/46/EC provides that every person has a right ‘not to be subject to a decision which produces legal effects concerning him…which is based solely on automated processing of data.’

f) The right to a remedy

The ICCPR in article 2(3) provides that State Parties must ‘ensure that any person whose rights or freedoms … are violated shall have an effective remedy.’ Not having a remedy for the violation of a particular right is in many cases in itself a violation of that right.

In line with this approach, the lack of accountability for a death where there is reason to believe that it was unlawful, is in itself a violation of the right to life. As will be discussed below, an accountability vacuum in the case of AWS is a real possibility.

4) Limitations on rights and the burden of proof

Most human rights may in principle be limited. Any infringement should, however, present as small an intrusion as possible. Those who infringe rights have to show that the infringement was for a legitimate reason and was proportionate to that goal. If a State uses weapons systems that prima facie limit rights such as those listed above, the burden of proof to show that this is justified under human rights law thus clearly rests on the State. In particular, if the level of autonomy of the weapon release system infringes the rights in question, the burden falls on the State in question to show why a human being – or perhaps a remote-controlled system – is not employed instead.

5) Accountability and transparency

In the context of the use of armed drones, accountability and transparency have emerged as central issues under IHL. Human rights law likewise requires states to ‘ensure that any person whose rights or freedoms … are violated shall have an effective remedy…’.  I will not repeat those requirements here, but merely say that the same considerations that apply to weaponized drones will apply to the use of AWS, for example outside the scope of armed conflict. Given the currently high levels of secrecy and impunity concerning armed drones, developing AWS in this context is particularly concerning.

However, it should be noted that accountability in the case of AWS also raises hurdles additional to those presented by weaponized drones.  As many commentators have pointed out in the context of armed conflict, it is uncertain who will be held accountable if the use of an autonomous system has results that would have constituted a crime if a human being was directly involved. The same consideration applies, probably with increased force, in the law enforcement context.

Transparency is also of major concern. One of the problems with AWS is that little is known about the extent to which States are developing these weapons, though they stand ready to change the nature of war and law enforcement and many other aspects of the world we live in. I want to use this opportunity to commend the States present here today on their willingness to engage in a debate about the issue. However, more is required. States should disclose to the world – without necessarily going into technical details – to what extent they plan to develop autonomous systems and for what purposes. At the very least, they should disclose their views on what they see as the limits on such developments.

6) Possible conflicts between the rights to life and dignity

The main concerns expressed earlier were that AWS could violate the rights to life and dignity. What happens if there is a conflict between the right to life and the right to dignity – if the use of AWS protects the one but violates the other?

The argument has been made in the context of discussions of armed conflict, for example by Ron Arkin, that AWS may in specific cases or in the aggregate save lives, for example by allowing the more precise application of force against legitimate targets. Clearly, this is a consideration of significant weight. One can anticipate such arguments also being made in the law enforcement context.

However, even if it can be proven that machines can in this manner save lives, that is not necessarily the end of the debate. The right to life may be one of the supreme rights, but so is the right to human dignity, and saving lives, for example those of civilians, by using AWS may come at the cost of the dignity of those targeted.

The human rights ethos militates against sacrificing the individual for the good of the many; the life or dignity of one for the lives of others. If that were not so, there would be no defence against the crude utilitarian argument that it may be acceptable to kill one person if that person’s body (for example his or her organs) could save the lives of many others. As a result, there should be great caution about following a ‘numbers game’ approach to AWS – there is an important difference between facing, or dying, a dignified death and an undignified one. What, it may well be asked, is the point of preserving the physical continuation of specific lives, if life itself is devalued in the process?

Yet, an emphasis on dignity also cannot end the debate. Given the irreversibility of death, and the foundational nature of the right to life, when real decisions have to be taken, no one will deny the importance of saving life.

Which right should prevail in the event of a conflict between the right to life and the right to dignity? It may be that the choice is as stark as asking what is preferable: that fewer lives are lost but those who die do so in an undignified way, or that more people die but their deaths are more dignified?

I cannot see a quick and easy answer to this question, and trying to find a formulaic answer in terms of which one right is placed higher up on the hierarchy of rights is more likely to do harm than good. There can be no automatic (to use the term in this context) preference for the one right above the other. Both the right to life and the right to dignity are irreducible values; prioritizing the one over the other is bound to lead to losses that will be unacceptable in the long run.

The closest one can get to a solution, it seems to me, is to seek some kind of compromise between the protection of life and the protection of dignity. That is, it should be accepted that humans cannot and should not exercise complete control over every targeting decision. In some cases our decisions need to be enhanced by technology in order to save lives. However, if delegating such powers goes beyond a certain point, the threat to human dignity becomes too high.

Several speakers so far have advanced the idea that what is required is an obligation on the State to ensure that ‘meaningful human control’ or ‘an appropriate level of human control’ is retained over the use of force. This idea, which will be taken up again below, could present a possible compromise position between these two rights, retaining the essence of each.

7) Conclusion

IHL as it currently reads, in its black letter form, presents significant problems for the development and use of AWS. I have elaborated elsewhere on some of the problems that I think AWS will face to reliably meet the requirements of distinction, proportionality and precaution. These problems increase when the unarticulated premise of IHL – that humans will be the main decision-makers as far as decisions over life and death during armed conflict are concerned – is brought to the fore. 

The use of AWS under human rights law raises further reasons for concern. As was argued above, in many ways AWS are antithetical to human rights law, even more so than is the case with IHL.

Much of the debate so far has centered on examples. Those who oppose the use of AWS provide clear examples of the unacceptable use of some forms of AWS, while those who are in favor of it likewise point to examples where there can be little reason not to permit such use.

Most people would, for example, agree that the wholesale delegation to machines of decisions on the use of force, and especially lethal or ‘less lethal’ force, during rapidly changing crowd control operations would not be compatible with human rights law. At the same time, the possibility that advanced technology combined with human control could save lives cannot be discounted. The mere fact that a computer decides to pull the trigger, even where deadly force is at stake, does not necessarily mean that human rights norms are violated.

To illustrate this point, let us develop the hostage-taking scenario provided earlier a bit further: a large group of hostages has been taken, in a situation such as the one presented when the Nigerian schoolgirls were abducted by Boko Haram. After all other avenues have proven fruitless, a group of snipers is deployed to target the hostage takers. Because multiple, moving targets are concerned, it is difficult to get the snipers to fire at the same time, and it is dangerous if they do not. A central computer coordinates the simultaneous release of force at a moment when all of them have a clear shot. I would argue that there is a sufficiently high level of human control over the targeting decisions to make the release of force potentially lawful.

Using examples is useful, since it makes the issue real. However, it is also clear that confining the debate to such exchanges will not on its own take us much further. The question has to be asked how one distinguishes such cases on a principled basis. As has been clear from the discussions at this meeting, ‘meaningful human control’ provides a popular standard for distinguishing acceptable from unacceptable uses of increasingly autonomous systems, and it is worth exploring further the content and implications of using this standard.

If highly autonomous systems were to be used for crowd control there would be little human control over each release of fire, while in the case of the snipers each individual targeting decision is taken by a human. Some level of machine autonomy may help the snipers avoid harming the hostages. Machine autonomy, up to a point, can thus complement and enhance human autonomy; but beyond a certain point (sometimes referred to as ‘full autonomy’) the scale tips and it undermines functions that should be performed only by humans.

Without some form of meaningful human control over every release of force, it is difficult to see how AWS can be lawfully used under human rights law. In order to take the debate on the appropriate response by the international community to AWS to the next level, we urgently need to develop a clearer picture of what ‘meaningful’ or ‘appropriate levels of’ human control would entail.

Finally, I would like to make a comment about the appropriate forum for elaborating this concept and addressing the other challenges posed by AWS. AWS clearly have far-reaching consequences as far as IHL is concerned, and processes such as the one currently undertaken by the CCW are of great importance. It will, however, be important to keep the human rights dimension in mind in these processes as well. It is equally important to address AWS in human rights fora, and I will ask the Human Rights Council in June to stay engaged with this issue, alongside the other relevant UN and international bodies. At the same time, the great significance of this issue being taken up by the CCW must be emphasized.

Technology will keep on developing and will keep on pushing up against the boundaries of human control. Those wishing to retain human control over life and death decisions will have to be equally relentless in their protection of this value which, once lost, cannot be regained.

For more information on the mandate (including this speech) see http://www.icla.up.ac.za/un

A/HRC/23/47  and http://www.ohchr.org/EN/Issues/Executions/Pages/SRExecutionsIndex.aspx

A/HRC/26/36
