Tuesday, March 20, 2018

Murder in the Digital Age


* Update: I wrote this post several days ago. Yesterday, this story appeared on CNN: Uber Pulls Self-Driving Cars after First Fatal Crash of Autonomous Vehicle ... 

One of the stock scenes from old murder mysteries comes when the suspects are all assembled in the library of the old manor house. While a storm rages outside, the detective talks his way through the crime and the clues and - at just the right moment - he whirls about and stabs an accusing finger at the murderer, announcing in his sternest voice that "The butler did it!"

Well, perhaps it wasn't the butler. Maybe it was the maid. Or the gardener. Or the ne'er-do-well son. You get the idea, though.

But the traditional unmasking of the criminal may be very different in the not-too-distant future, when we learn to our horror that the evil villain is not human, or even a trained animal (as in The Hound of the Baskervilles or The Murders in the Rue Morgue) ... but a machine.

The idea of murder by machine is not new - Jeffery Deaver's novel The Steel Kiss has the murderer using his computer hacking skills to turn common products into murder weapons. But what if there's no human involved at all?

This is the point of a fascinating article I read the other day titled When an AI Finally Kills Someone, Who Will Be Responsible?

If a driverless car runs down and kills a pedestrian, who is at fault?* If a complex power distribution grid managed by an artificial intelligence (AI) program suddenly shuts down power to a hospital and patients die, who is responsible? Is it the programmer? The builder of the AI system itself? The builder of the car or the designer of the hospital systems? Can the AI system itself be held criminally liable for its actions? If so, how would it defend itself? How could it be punished? Here's a quote from the article:

"If an AI system can be criminally liable, what defense might it use? ... Could a program that is malfunctioning claim a defense similar to the human defense of insanity? Could an AI infected by an electronic virus claim defenses similar to coercion or intoxication?"


This is not an angels-dancing-on-the-head-of-a-pin philosophical discussion, because the need to consider these things is now upon us. As we've already seen with the advent of e-mail, cell phones, and similar technologies, our laws governing privacy and the criminal use of communication devices are woefully out of date.

As if you didn't have enough to worry about in the Age of Trump.

Have a good day. More thoughts tomorrow.

Bilbo

5 comments:

John Hill said...

Ugh

eViL pOp TaRt said...

I wonder if an AI would have the same interest in proving its innocence as a real life person.

Mike said...

Or think about suicidal people. Maybe the gal yesterday saw the driverless car and thought 'I won't make anyone feel bad if I jump in front of this car since no one is driving it'.

allenwoodhaven said...

I hadn't considered this. It's a very important question! I suspect that it will not be answered quickly or thoroughly enough so that the law of unintended consequences will be rampant.

Heidi O'Rourke said...

In using AI to make decisions, it is important to factor in allowances for human stupidity, cussedness, and so forth. Like pedestrians crossing in front of cars with the right of way and defying them to continue.