* Update: I wrote this post several days ago. Yesterday, this story appeared on CNN: Uber Pulls Self-Driving Cars after First Fatal Crash of Autonomous Vehicle ...
Well, perhaps it wasn't the butler. Maybe it was the maid. Or the gardener. Or the ne'er-do-well son. You get the idea, though.
But the traditional unmasking of the criminal may be very different in the not-too-distant future, when we learn to our horror that the evil villain is not human, or even a trained animal (as in The Hound of the Baskervilles or The Murders in the Rue Morgue) ... but a machine.
The idea of murder by machine is not new - Jeffery Deaver's novel The Steel Kiss has the murderer using his computer hacking skills to turn common products into murder weapons. But what if there's no human involved at all?
This is the point of a fascinating article I read the other day titled When an AI Finally Kills Someone, Who Will Be Responsible?
If a driverless car runs down and kills a pedestrian, who is at fault?* If a complex power distribution grid managed by an artificial intelligence (AI) program suddenly shuts down power to a hospital and patients die, who is responsible? The programmer? The builder of the AI system itself? The builder of the car or the designer of the hospital systems? Can the AI system itself be held criminally liable for its actions? If so, how would it defend itself? How could it be punished? Here's a quote from the article:
"If an AI system can be criminally liable, what defense might it use? ... Could a program that is malfunctioning claim a defense similar to the human defense of insanity? Could an AI infected by an electronic virus claim defenses similar to coercion or intoxication?"
As if you didn't have enough to worry about in the Age of Trump.
Have a good day. More thoughts tomorrow.
Bilbo
5 comments:
Ugh
I wonder if an AI would have the same interest in proving its innocence as a real life person.
Or think about suicidal people. Maybe the gal yesterday saw the driverless car and thought 'I won't make anyone feel bad if I jump in front of this car since no one is driving it'.
I hadn't considered this. It's a very important question! I suspect that it will not be answered quickly or thoroughly enough, so the law of unintended consequences will run rampant.
In using AI to make decisions, it is important to factor in allowances for human stupidity, cussedness, and so forth. Like pedestrians stepping in front of cars that have the right of way and daring them to keep coming.