A group of concerned scientists and thinkers from engineering, computer science, linguistics, robotics, and many other technical and non-technical fields released an open letter yesterday warning of the dangers of so-called "autonomous weapons" - that is, highly advanced weapons that employ artificial intelligence (or "AI") techniques to select and engage targets without human intervention.
This is not a new concern. The potential dangers of computers that can outthink and outfight humans have long been a theme of science fiction and horror stories. One of the earliest (and most horrifying) I read was Harlan Ellison's 1967 story "I Have No Mouth, and I Must Scream"; another (and the inspiration for Skynet of the Terminator franchise) was the 1966 novel Colossus by D. F. Jones (later re-released as The Forbin Project, and now out of print), which became a 1970 science fiction film called (what else?) Colossus: The Forbin Project.
The theme has appeared on television, too: in an episode of Star Trek: The Next Generation titled "The Arsenal of Freedom," the Enterprise investigates a planet whose people sold autonomous weapon systems that eventually killed them all off, except for a holographic salesman who lures passing vessels in for a possible sale that always ends badly* for the customer.
So anyhow, the idea of machines becoming self-aware and deciding people are a threat is not a new one, but it's one that is becoming more real every year. We are very, very good at coming up with very, very bad ways to kill each other, and not always so good at employing them rationally**. Coupling extremely deadly weapons (nuclear, chemical, biological, genetic, kinetic, directed-energy, or what-have-you) with an artificial intelligence program that removes human emotions from decisions to kill does not seem to me to be a very good idea.
Somehow, though, I don't think we're going to get this toothpaste back into the tube. Experience shows that if something can be built, we'll go ahead and build it and worry about the consequences later. I'm with the folks who wrote the open letter ... I can see the consequences of marrying AI and ultramodern weapons going south really fast.
Have a good day. Leave Colossus, Guardian, and Skynet to the movies ... we have enough problems already.
More thoughts tomorrow.
Bilbo
* Red shirt or no.
** After all, the only thing that stops a bad guy with a nuke is a good guy with a nuke ... isn't it?
7 comments:
I never thought about the possibility of autonomous weapons, let alone this implication. They're a bad idea.
The south (Southern hemisphere, southern US, whatever is south of where the weapons and ideas are) would hate us forever.
Maybe with reason.
This is really scary. I can't even get past the idea of a car being driven on its own.
I hear a guy named Murphy is designing the first system.
I didn't know how far they'd gotten with autonomous weapons, but I figured they were certainly in development. Most people just don't think through the unintended consequences of their decisions. Unfortunately...
That is a terrible idea from the start. Someone is sure to do it, because many decision-makers don't conceive of things going wrong. Murphy's Law is so right.