
Should we use killer robots to fight our wars?

In this Wireless Philosophy video, Ryan Jenkins (professor of Philosophy at Cal Poly) asks whether the use of lethal autonomous weapons is ethically justifiable. Would the various military advantages of these weapons make them an effective force for good, or would creating rule-following soldiers that entirely lack empathy and moral judgment be a bridge too far? Created by Gaurav Vazirani.


Video transcript

Hi, I’m Ryan Jenkins, a philosophy professor at Cal Poly in San Luis Obispo. There’s an old cliché that says all is fair in love and war. But few people really believe that. Instead, we tend to think that even if war is hell, there are certain kinds of things that are simply too hellish: killing civilians, indiscriminately bombing cities, torture, and so on. And the major countries of the world agree not to do these kinds of things. They make war too hellish, too inhuman, too undignified… These agreements also provide a frame for considering new weapons of war as they’re introduced. Do these weapons do too much to undermine human dignity in warfare? This has, in fact, been a frame that thinkers have used to evaluate weapons for a long time. The ancient Greeks thought the bow was undignified: a “cowardly” weapon that could be fired from far away. Other weapons or tactics, like atomic bombs or bombing cities, have “shocked the conscience of humanity” with the scope of their destruction. There is clearly an intuitive appeal to thinking of warfare as having to maintain at least some kernel of dignity.

Now consider a weapon system that has been the stuff of nightmares for decades: so-called lethal autonomous weapons. These are better known as killer robots, and the paragon of a killer robot is the Terminator. These are weapons that are able to select and fire on human targets automatically, without human intervention. Almost all other weapons of war still require a human to ‘pull the trigger,’ as it were. Not autonomous weapons. They would completely outsource the task of killing to robots. As robots, computers, artificial intelligence, and other advances continue to quicken the pace of war, armies will need weapons that can keep up. This is one of the main advantages of autonomous weapons: they can think and act at lightning speed, and they can operate where communications have been jammed. In short, if these weapons ever see combat, the militaries of the world will probably find them irresistible.

And we should be clear about the potential moral advantages of relying on such autonomous weapons rather than on human soldiers to conduct our wars. The obvious difference is that, unlike humans, autonomous weapons are machines that have no minds. This means they have no emotions like anger, no thirst for vengeance that might endanger a mission, and no prejudices. They won’t feel fatigued from long hours in the field, or confused in the heat of battle.

But notice that this is a double-edged sword. Though there are surely benefits to shifting the risks of fighting wars onto autonomous weapons, this move also presents some moral concerns. Most notably, something seems unsettling, and simply inhuman, about people using robots to kill one another. Ethicists often use analogies to help them think through novel technologies. Even if autonomous weapons are something out of science fiction, are there any real-life examples that can help us understand whether it would be alright to use autonomous weapons in warfare? While presumably there are no human warfighters who literally lack a mind, there are some humans who seem to lack emotions. We call these people sociopaths, and one thing that sets them apart is that they seem to totally lack empathy: the emotion that lets us connect with other people, feel their pain, and appreciate the weight of moral decisions that might harm them. If we think about using a sociopathic soldier in battle, does this make us uneasy?
Imagine a soldier who, from the outside, seems normal; we can even suppose he’s an outstanding soldier. He doesn’t intentionally murder civilians; he doesn’t go marauding around, looting, destroying property, or anything else; he follows all the rules of war. Here is a person who seems to be a well-behaved soldier. But all the while, he feels nothing. He feels no remorse for those he kills, no shame, no guilt, no concern that what he’s doing might be harmful. Does this example of the sociopathic soldier make us morally uncomfortable? Autonomous weapons would be something like this sociopathic soldier: even supposing they follow all the rules of war, they are quite literally mindless, remorseless killing machines. Autonomous weapons follow a list of instructions, an algorithm. This is like doing a math problem and accepting whatever the outcome is; it is not the same as reasoning through a difficult moral situation. Autonomous weapons cannot feel the weight of moral reasons in the way humans do. How about this question: Would it be wrong to recruit an army of sociopaths and send them into battle? If so, this suggests we have some aversion to using soldiers who cannot appreciate the significance of taking a human life in battle. So, the way an autonomous weapon acts in war would be morally problematic in the same way as the actions of the sociopathic soldier: neither of them acts with any moral reasons in mind when killing. Neither of them feels the weight of what they are doing. And this is the case even if autonomous machines do the right thing “from the outside.”

What we ultimately have is a balancing act between the various benefits and drawbacks that autonomous weapons offer. They could clearly offer benefits over human soldiers; maybe they would even do a better job of following the rules of war, again, “from the outside.” Through superior accuracy, speed, and so on, they might even save some civilian lives that reckless or error-prone humans would have taken. Supposing that autonomous weapons can wage war much better than humans could, should we oppose their use because they lack minds and the capacity for empathy? Another way of putting this: should we be more concerned about the ultimate “body count” in war, or about the emotions and motivations behind each act of killing? Would creating soldiers that entirely lack emotion and empathy be a bridge too far? What do you think?