Explore the dissolving boundary between science and science fiction with news from the front lines of discovery and imaginative speculation on how each one could change our world.
Today the United Nations convened a meeting on the use of “killer robots”: autonomous lethal weapons that could select and attack targets without the involvement of a human operator. While some experts believe that killer robots may help reduce collateral damage and loss of life, others argue that such technology is both mechanically and ethically problematic. “Autonomous weapons systems cannot be guaranteed to predictably comply with international law,” says Professor Noel Sharkey, co-founder of the Campaign to Stop Killer Robots and chairman of the International Committee for Robot Arms Control. The campaign calls for a pre-emptive ban on lethal automatons before they can be developed and deployed.
Some machine weapons are already in use. The surveillance robots guarding the demilitarized zone between North and South Korea can detect body heat and fire built-in machine guns without human operators. Military drones have raised worldwide controversy in recent years. Other, more advanced projects are currently under development, including autonomous drones that can travel preprogrammed flight paths and select their own targets.
Alongside the weapons themselves, the debate over whether such technology should be used is also taking shape. Human Rights Watch and Harvard Law School released a report yesterday entitled “Shaking the Foundations: The Human Rights Implications of Killer Robots,” which examines the use of lethal autonomous robots. The paper acknowledges potentially positive applications of killer robots, such as combating crime and terrorism, but cautions that such weapons could run afoul of ethical and legal norms when acting without human intervention.
Ah, killer robots. Staple of summer movie blockbusters. From Terminator to Transformers, we meatbags love our violent machines. Science fiction has canvassed the killer robot concept so extensively, it will be hard for me to contribute any fresh philosophical thoughts, so I’ll approach this from the opposite direction: now that we’re entering territory already explored in theory, will we consider any of those lessons as we proceed? Let’s consider a few.
I, Robot: even tightly crafted systems have loopholes. This applies not just to robot behavior, but to that of the nations or agencies wielding the robots. Even if international lawmakers developed an airtight code of usage for killer robots, someone would certainly find a way to exploit it.
2001: A Space Odyssey: relinquishing control to automation seems to minimize human effort and error, but it has a dark side, especially when technology gets a mind of its own.
Frankenstein; or, The Modern Prometheus: okay, the monster isn’t technically a robot, but the story of an ambitious tinkerer building a humanoid remains a cautionary tale for anyone who risks losing control of their creations.
These three examples, and dozens of others across film and literature, share the theme of human hubris. All our favorite robot stories highlight the consequences of using technology without fully understanding its implications. Will we make the same mistakes with lethal automatons that our fictional forebears did? Those chapters in our own science fiction story remain to be written. But if killer robots are in our future, let’s just hope those frakkin’ toasters are Three Laws compliant.