Ethical software up for grabs

by Tyler Cowen on November 28, 2012 at 7:05 am in Economics, Law

Does your car swerve off the bridge or plough through the nearby crowd?  In case of an impending accident, which ethical standards will govern your driverless car?  (Which govern you?)  “And over here, at a price discount, is the Peter Singer Utilitarian Model.  The Roark costs $800 more.”

Or perhaps it will be put up to a vote, or handed over to OIRA.  California can run a referendum.  Alex earlier called this the Google Trolley problem, after the famous philosophical conundrum.

Joshua adds in the comments: “Can’t we just give the robot cars like three general guidelines and let them figure out the details on their own?”

For the pointer I thank Gordon H.

axa November 28, 2012 at 7:15 am

page not found, missing link =(

Tyler Cowen November 28, 2012 at 7:32 am

Fixed…

joshua November 28, 2012 at 7:28 am

Can’t we just give the robot cars like three general guidelines and let them figure out the details on their own?

Andrew' November 28, 2012 at 7:54 am

A ‘bot named Sue.

joshua November 28, 2012 at 8:05 am

Now that the link is fixed, this looks less clever…

Andrew' November 28, 2012 at 8:12 am

Yeah, but now I get it.

Anonzmous November 28, 2012 at 9:16 am

Still clever, but Andrew’ should have said a ‘bot named Sally.

http://en.wikipedia.org/wiki/Sally_(short_story)

Geoff Smith November 28, 2012 at 8:33 am

Given how many news stories I tend to see about people accidentally careening into crowds, buildings, and farmers’ markets, and how few I see about people randomly driving off bridges, I would guess that most people probably pick the crowds instead of the cliff.

The truth is probably less interesting: people make snap decisions in these kinds of situations, and morality isn’t considered during the actual accident… things just move way too fast for that kind of introspection. Sounds like people want to hold computers to a higher standard than they hold people to.

Mark Thorson November 28, 2012 at 9:59 am

Most of the news reports I’ve read about people careening into crowds and buildings have been the result of mistaking the gas pedal for the brake pedal. A lot of these people have been senior citizens. It’s not that hard to do — I once mistook the clutch pedal for the brake pedal. I didn’t have time to diagnose the error and fortunately avoided several collisions by superb steering.

Yancey Ward November 28, 2012 at 11:07 am

Maybe they just don’t like people.

Francis November 28, 2012 at 8:46 am

So what if this was an option button on the car? Like the ‘eco’ or ‘performance’ fuel economy setting? Would people be more likely to set it to ‘save the crowd’ or ‘save myself’ (surely the 2nd would win out)?

Would it be a value-add for the car to even provide this option, or a negative? Would you feel comfortable getting in a car as the owner’s guest (since I suppose everyone is a ‘passenger’) where it was set to ‘selfless’ mode?

Bill November 28, 2012 at 8:53 am

I wonder what the insurance rates would be depending on the option. Would the insurance company have a conflict: save the driver, or pay for 20 people killed?

Dan Weber November 28, 2012 at 10:56 am

It would be weird for a car to get into a situation where it was legally at fault for killing 20 people. If a bus of kids illegally runs a red light, I’m not going to be held responsible for hitting them instead of driving off a cliff, provided I was obeying all the other laws.

Given our current laws, that is. And ignoring no-fault insurance, which is actually the case in a few states.

Bill November 28, 2012 at 4:04 pm

No, that’s not true. If you are still in control of the vehicle (and no one is saying that automatic cars will relinquish that responsibility), then you are still liable. According to the post, you had the last clear chance to avoid the injury.

No-fault doesn’t apply to manslaughter. And no-fault usually doesn’t apply to bodily injury.

Dan Weber November 29, 2012 at 10:52 am

Yes, you are required to try to avoid an accident. Every law student knows that even if you have a green light, you can’t just casually plow into the car or the pedestrian or the school bus that was illegally running the red light.

But that doesn’t mean you have to drive into a brick wall instead of the school bus. Those are both accidents. I’d love a citation to a case where someone was held liable for choosing the “wrong” disaster when someone else ran a red light. (And I’m sure that computer-driven car would be slamming on the brakes a lot faster than the human would — plus it’s going to notice the bus speeding towards the red light unless you have a perfect storm of objects screening its view of the intersection.)

Bill November 28, 2012 at 10:24 am

Regarding the insurance, this is an interesting case of internalizing an externality.

Say I purchase the car with the “save me first” option. My insurance rates will be higher, as I will have to pay for the injuries to others.

On the other hand, if I select the “save others and minimize damage to others” option, my rates are lower, I am a more vigilant driver, and I am more likely to purchase a more intelligent vehicle that will do less damage to me.
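Bill’s internalization point can be made concrete with a toy premium calculation. The crash probability, loss amounts, and loading factor below are entirely hypothetical, not actuarial practice; the sketch only shows why an insurer could price the two settings differently.

```python
# Toy sketch of the internalization argument above: the "save me first" setting
# shifts expected losses onto third parties, which the owner's liability policy
# must cover, so it carries a higher premium. All numbers are made up.

def expected_annual_premium(p_crash, own_injury_cost, third_party_cost, loading=1.2):
    """Expected covered loss per year, times a simple loading factor."""
    return loading * p_crash * (own_injury_cost + third_party_cost)

P_CRASH = 0.02  # hypothetical annual probability of a serious crash

save_me_first = expected_annual_premium(P_CRASH, own_injury_cost=10_000,
                                        third_party_cost=500_000)
save_others = expected_annual_premium(P_CRASH, own_injury_cost=100_000,
                                      third_party_cost=50_000)

print(f"'save me first' premium: ${save_me_first:,.0f}")  # higher
print(f"'save others' premium:   ${save_others:,.0f}")    # lower
```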

axa November 28, 2012 at 8:57 am

Most accidents are caused by human drivers driving recklessly, consciously or not. You don’t hit the bridge column and get your car in flames by going at legal speed, sober (alcohol, pot, and many medications), well-rested, focused on the road (not texting), in nice weather (no ice or snow), and with perfect visibility (no fog). A robot car may be able to overcome these human faults.

Anyway, some accidents are still going to happen because no ethics algorithm can defy inertia. In the bus example, the sensors can detect the incoming bus and the onboard computer can translate raw data into “danger,” but if the time between the moment the danger is detected and the crash is shorter than the time needed for the actions (braking, steering) that would avoid it, ethics are useless.

The medical community has experience with this problem. If a patient dies, the surviving family members may claim “medical negligence.” The situation is analyzed by outsiders based on the decisions made: if the doctor followed all recognized protocols, no problem.

Anyone who has crashed while driving (me, twice) can tell you that you’re not making ethical decisions while spinning on a slippery road. You don’t even know if you’re hitting a tree or another car. All you learn from the experience is “I’ll be more cautious next time, watch for water, drive slowly.” Maybe that’s what we need to teach our robots. As for ethics, why are we asking robots (at the outset) to be more rational than average humans?
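axa’s inertia point comes down to a simple kinematics check: if the gap at the moment of detection is shorter than the distance the car needs to react and brake, no decision logic helps. A minimal sketch, with made-up speed, latency, and deceleration figures:

```python
# Minimal sketch of the point above: ethics only enter if physics leaves time
# to act. The latency and deceleration values are illustrative assumptions,
# not real vehicle specifications.

def can_stop_in_time(speed_mps, gap_m, decel_mps2=8.0, latency_s=0.1):
    """True if the car can brake to a stop within the available gap."""
    reaction_distance = speed_mps * latency_s              # travel during sensing/actuation delay
    braking_distance = speed_mps ** 2 / (2 * decel_mps2)   # v^2 / 2a under constant braking
    return reaction_distance + braking_distance <= gap_m

# A bus pulls out 25 m ahead while we travel at 30 m/s (~108 km/h):
print(can_stop_in_time(30.0, 25.0))  # False: no algorithm avoids the impact
# The same bus detected 80 m out:
print(can_stop_in_time(30.0, 80.0))  # True: braking alone resolves it
```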

Sam November 28, 2012 at 9:10 am

The US arms race towards ever bigger cars (SUVs, Hummers), at least when gas is cheap and the economy is booming, seems to be driven by marginal safety. Big cars have better safety ratings because when they careen across the dividing line they’re more likely to crush their oncoming victim than be crushed. These people are already buying Roark-mobiles. I don’t think this bias will go away with driverless cars, even if coordination is improved and the arms race eliminated. Who would choose to drive a Singer-mobile, when it would turn off route after detecting someone drowning whose life it calculated as more valuable than your transmission? I suspect driverless AI will be as quietist as is feasible and just ‘follow orders’, with a reliable black box so the issuer (no longer the ‘driver’) can be made to take responsibility.

Perhaps at most the DOT will require a “qualms” interface: “But sir, Main St. is closed for the Macy’s Parade. I am required to warn you that humans will be harmed if we take that route.” “Full steam ahead, car, and then recalculate for Mexico City.”

On second thought, the above presumes a much more laissez faire government than will ever exist.

Finch November 28, 2012 at 10:39 am

I thought this was largely a myth… Some people don’t like people who drive SUVs for other reasons, so they try to make them sound evil too. Didn’t IIHS put out some research showing that every 100-pound drop in average vehicle weight would lead to 600 additional deaths? I can’t find the reference, so perhaps I’m remembering incorrectly.

Obviously in a single vehicle accident it’s better to be in a heavy vehicle, and a Suburban-Suburban collision is safer than a Civic-Civic collision. The question is whether these effects are outweighed by the problem of asymmetric collisions.

Finch November 28, 2012 at 11:19 am

> Some people don’t like people who drive SUVs for other reasons, so they try to make them sound evil too.

I didn’t mean to say you were doing this, just that this is done.

Sam November 28, 2012 at 4:27 pm

Well, it’s an argument I read in a popular economics book, so it’s almost certainly wrong.

tt31 November 28, 2012 at 9:12 am

How much of this gets driven by Google PR considerations? Is expected PR blowback a monotonically increasing function of loss of life?

Bill November 28, 2012 at 9:16 am

The post is incorrect to say that this is the Trolley problem (one where a person has the choice to intervene, committing an act that kills 1 person to save 13, rather than letting nature take its course and kill the 13).

This is not a trolley problem because the actor, here the driver, chose the initial option (to buy the driverless car with a defect or risk range, or not to intervene when he should be attentively driving, in the sense that he has a continuing obligation to control his vehicle).

Framing it as the Trolley problem removes the obligation of the driver to maintain control of his vehicle.

Cliff November 28, 2012 at 9:29 am

But the drivers will save many lives by not maintaining control of the vehicle…

Bill November 28, 2012 at 10:27 am

Cliff,
Re: “will save many lives by not maintaining control of the vehicle…”

That’s an action the driver takes by not intervening. There are two possible actions: intervene and not intervene, and the agent can choose either.

nix November 28, 2012 at 9:21 am

The best part about driverless cars from a lawyer’s perspective is that, in case of an accident, you’re no longer getting hit by some moneyless rube, but by a billionaire with deep pockets.

Walter McGrain November 28, 2012 at 9:34 am

“Meanwhile, Asimov’s laws themselves might not be fair—to robots. As the computer scientist Kevin Korb has pointed out, Asimov’s laws effectively treat robots like slaves. Perhaps that is acceptable for now, but it could become morally questionable (and more difficult to enforce) as machines become smarter and possibly more self-aware.”

This is a key question for me: how could self-aware robots ever be ethically created in the first place (again, from the robot’s perspective)? Should anyone be allowed to experiment with a sentient “being”? How would that not be equivalent to torture?

Major November 28, 2012 at 6:22 pm

Your questions don’t make much sense to me. Why would merely creating a self-aware robot necessarily be unethical? And why is experimenting with a sentient being equivalent to torture? We do experiments with sentient beings (both animals and humans) all the time.

John Batey November 28, 2012 at 10:02 am

This is silly. No more ‘ethical standards’ will govern your car than govern your microwave, television, or airplane. It will follow rules (otherwise known as software) that dictate what happens. Explicitly:
1) You give it a destination. It figures out an overall route, similar to how your GPS currently works. The software ‘may’ allow you to select toll roads, avoid traffic, etc. These options may have ethical impacts, but the car doesn’t care.
2) The car will have scanners (likely LiDAR and dual-camera infrared) to look ahead. Software will calculate a ‘travel cost’ for the areas ahead of the car. A brick wall will have a high travel cost, since it’ll cost a lot to go through. Same with sidewalks/potholes/cliffs. People/dogs/birds/balls/etc. will also have high costs, likely related to the apparent size of the object. Assigning these costs has ethical impacts, but the car doesn’t care. It only knows that “that area to my left is more dangerous than the area in front of me”.
3) The car will follow a local path of lowest cost… generally the middle of the road. It’ll have some threshold at which it just applies the brakes if the costs are too high (for example, a one-lane road and a car pulls out parallel to you).

https://www.youtube.com/watch?v=YXylqtEQ0tk&t=397
Notice that people/cars/etc are segmented, but only at the level of ‘things to avoid’.
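A rough, purely illustrative sketch of the ‘travel cost’ idea described in the comment above, assuming a toy one-row cost grid and an invented braking threshold; it is not how any actual self-driving software is written.

```python
# Toy sketch of the cost-map idea described above: the planner does not reason
# about ethics; it assigns higher traversal costs to riskier cells, steers
# toward the cheapest one, and brakes when every option is too costly.
# All cell costs and the threshold are invented for illustration.

FREE, SIDEWALK, PEDESTRIAN, WALL = 1, 50, 500, 1000  # hypothetical cell costs
BRAKE_THRESHOLD = 400  # if even the cheapest option exceeds this, just brake

def choose_action(costs, threshold=BRAKE_THRESHOLD):
    """Steer toward the cheapest cell ahead, or brake if nothing is acceptable."""
    best = min(range(len(costs)), key=lambda i: costs[i])
    return "brake" if costs[best] > threshold else f"steer to cell {best}"

# One "slice" of road ahead of the car, left to right:
print(choose_action([SIDEWALK, FREE, FREE, PEDESTRIAN, WALL]))  # steer to cell 1
print(choose_action([PEDESTRIAN, WALL, WALL]))                  # brake
```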

Carolus November 28, 2012 at 10:03 am

Like most ethical problems, this one does not have a unique answer. If you’re a real, dedicated utilitarian, you need more information. What’s the age and expected lifetime income of persons inside the car v. those in the street? What’s the value of the damage done if driving off the bridge? What are the dynamic incentives for behavioral modifications arising from each option, both for crowds in the vicinity of cars and for drivers of the cars? And if you’re a Rawlsian utilitarian, what’s the effect on the most disadvantaged in society of each option? My own solution is this: each driver gets to program his/her own car, and take the consequences in court. Then every person can apply whatever ethics they have — which seems the only fair utilitarian solution because that would maximize the expected utility of all involved. It also avoids the intractable problem of having to make interpersonal utility comparisons or having to assume that utilities are cardinal.

Saturos November 29, 2012 at 12:48 pm

No, the real utilitarian has to be prepared to make these decisions under uncertainty. You know, the way reality works.

Jwill003 November 28, 2012 at 10:19 am

Funny, the author of this article is known in Cognitive Psychology for taking an extremist position that the mind is composed of nothing but rigid (and therefore brittle) rules. It’s understandable that if you believe everything in the universe must be this way, you would be worried about automated cars…

Affe November 28, 2012 at 10:38 am

The Pete Singer model will crash into a tree instead of hitting a squirrel, and the “seatbelt on” warning will consist of harangues about how the car cost more than $10K and as a result its existence is unethical to begin with, in a world of starving people.

Saturos November 29, 2012 at 12:49 pm

+1

Alan November 28, 2012 at 10:55 am

Who cares? The most important point here is that driverless technology will replace truck drivers with software. Trucking companies, software companies, and truck manufacturers will pay for legislation that indemnifies them.

JSIS November 28, 2012 at 11:24 am

If our robots are that intelligent, they should design our tax policies too. What ethical guidelines should they be using ?
-Vyas

hanmeng November 28, 2012 at 12:01 pm

This doesn’t answer the question, but we should consider further options, like a mechanism to blow up the car (and driver, natch) before it hits anyone. We need this just as we need a mechanism to blow the earth to smithereens before it hits any harmless asteroids.

tom November 28, 2012 at 12:14 pm

I’m always stunned by ethical dilemmas that aren’t. The chances of an accident killing a school bus full of children without also claiming the driver of the passenger car are pretty damn low, and the car’s computer isn’t going to be anywhere near advanced enough to calculate the difference between the chances of surviving a head-on collision with a vehicle 4x its weight versus the chances of surviving hitting the guardrail of a bridge and potentially going over. As others noted, fatal accidents are generally caused by bad drivers, not by impossible-to-escape scenarios of death. The only ethical thing to do is to make sure that once auto-driven cars are safer than humans, you get them onto the market as effectively as possible. That means not spending thousands of hours trying to figure out whether you should swerve into the bus full of nuns or the bus full of schoolchildren.

JWatts November 28, 2012 at 3:30 pm

“As others noted, fatal accidents are generally caused by bad drivers, not by impossible-to-escape scenarios of death.”

+1. Robot-driven cars will be so much more reliable than human-driven cars that this kind of scenario will be very rare. Traffic deaths in the US per mile traveled have already declined by half since the mid-’80s. It would be pretty reasonable to assume that deaths would decline by a much larger factor with no or limited human control. And since humans would generally rather talk on the phone, eat, sleep, or play a game than drive, they’ll be more than willing to fork over the extra money to avoid the tedium and aggravation of actually driving.

Gunnar Tveiten November 29, 2012 at 4:27 am

Very much true. These kinds of situations are theoretical constructs that essentially don’t exist in the real world, so this question is of interest to philosophers, but not to engineers.

Situations where there was no reasonable way to detect the danger early enough, and you now genuinely must choose between your own death and the deaths of several innocents, essentially don’t exist. Oh sure, it may have happened somewhere, at some time, but such cases certainly don’t account for even 0.01% of traffic deaths.

When we’ve eliminated the other 99.99% of traffic deaths, we can worry about this I guess.

Techie in SiliconValley November 28, 2012 at 3:32 pm

The question is important in the abstract, but, speaking as an outsider who has seen several lectures/demos of the g-car, it doesn’t apply to how autonomous cars actually work. The car doesn’t need to make a split-second decision, because it has already been taking the bus into account for many seconds.

“Coming out of nowhere” is a perceptual problem for humans, which the cars avoid by:
* having 360° “vision” that knows velocity and changes to velocity or direction of travel (in ways we can’t do well [or at all]), and
* always calculating travel paths where other objects cannot be.
Other moving objects will have a calculated cone of possible locations, and if the car’s travel path would intersect this cone, the car starts to change its own path by moving over or slowing down.
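A stripped-down sketch of the ‘cone of possible locations’ check described above, reduced to one dimension with invented bounds on the other vehicle’s braking and acceleration; the point is only that the planner tests for possible overlap over a time horizon and adjusts early rather than at the last instant.

```python
# 1-D sketch of the "cone of possible locations" idea above. For each future
# time step, the other vehicle could be anywhere between "it brakes as hard as
# possible" and "it accelerates as hard as possible"; if our planned position
# ever falls inside that interval, we slow down or replan. Numbers are illustrative.

def possible_interval(pos, speed, t, max_brake=8.0, max_accel=3.0):
    """Interval (lo, hi) the other vehicle could occupy after t seconds."""
    t_stop = speed / max_brake  # time for it to brake to a complete stop
    if t < t_stop:
        lo = pos + speed * t - 0.5 * max_brake * t * t
    else:
        lo = pos + speed * speed / (2 * max_brake)  # already stopped by time t
    hi = pos + speed * t + 0.5 * max_accel * t * t
    return lo, hi

def path_conflicts(our_positions, other_pos, other_speed, dt=0.5):
    """True if any of our planned positions falls inside the other's cone."""
    for step, ours in enumerate(our_positions, start=1):
        lo, hi = possible_interval(other_pos, other_speed, step * dt)
        if lo <= ours <= hi:
            return True
    return False

# Our planned positions (meters along the road) every half second, versus a
# vehicle 40 m ahead doing 10 m/s along the same line:
print(path_conflicts([10, 20, 30, 40, 50], other_pos=40, other_speed=10))  # True
```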

Bill November 28, 2012 at 4:08 pm

An interesting question would be what happens if the human took over the car when the car had selected a less risky option and the human chose the riskier one, causing death and injury to others. There is an interesting issue about the standard of care: that of a reasonable human, or of a reasonable machine?

Dick King November 28, 2012 at 5:52 pm

I’ll repeat a comment I made in the cited article [ see http://marginalrevolution.com/marginalrevolution/2012/06/the-google-trolley-problem.html , cited in the main post as well ]:

“Indeed the reference is to driverless cars, but not to a silly artificial problem when driving a single car [or at least I don’t think it is]. The decision that Google and the rest of the nation really faces is whether to field driverless cars when they are accurate and safe enough to substantially reduce the total number of accidents, or only when they are accurate and safe enough to never kill anyone.

If a certain number of cars driven by a certain number of human drivers over a certain distance would yield five traffic fatalities, and Google believes that the same amount of usage will yield but one, whether to release the system is the google trolley decision.”

Alas, I don’t see how our legal system gets us there. In 99 out of 100 civil trials, Google indeed has to pay for the true cost of the accident, and charge more for the cars accordingly, but that’s fine, because the buyers’ insurance companies saved money on the 495 avoided death claims and will share a substantial part of that money with the car owners. Just as homeowners’ liability insurance decreases your rate if you promise not to own a dog, auto liability insurance will decrease your rate if you promise only to own self-driving cars and to use that feature, once they get that good. And if they are that good, then with normal jury verdicts the payments Google has to make, which it will recover in license fees on the software, will be cheap enough for the buyers to recover that money on their own insurance bills.

In the hundredth civil trial, however, the jury will decide that a 60-odd-year-old produce manager at Wal-Mart is worth $25 million, just so they can pick Google’s pocket to the tune of a quarter-billion dollars while still meeting the O’Connor standard [that punitive damages cannot exceed a single-digit multiple of compensatory damages]. The one jury that acts like that will kill the cost savings from the avoided accidents.

-dk

Stan November 28, 2012 at 10:56 pm

If the driverless car, like the trolley, is out of control, no code a programmer writes will apply to either of these scenarios; the programmer is completely out of the picture once the car’s systems fail. If a passenger can take control of the car, then, since it is not on tracks, they can steer along any open path to avoid a collision. They are not constrained by trolley tracks, and the passenger would probably also have brakes to stop the car.
This problem is nonsense.
