Will we understand why driverless cars do what they do?

July 9, 2016 at 6:11 am in Law, Philosophy, Uncategorized, Web/Tech

A neural network can be designed to provide a measure of its own confidence in a categorization, but the complexity of the mathematical calculations involved means it’s not straightforward to take the network apart to understand how it makes its decisions. This can make unintended behavior hard to predict; and if failure does occur, it can be difficult to explain why. If a system misrecognizes an object in a photo, for instance, it may be hard (though not impossible) to know what feature of the image led to the error. Similar challenges exist with other machine learning techniques.

That is from Will Knight.  This reminds me of computer chess, especially in its earlier days but still today as well.  The evaluation functions are not transparent, to say the least, and they were not designed by the conscious planning of humans.  (In the case of chess, it was a common tactic to let varied program options play millions of games against each other and simply see which evaluation functions won the most.)  So when people debate “Will you buy the Peter Singer utilitarian driverless car?” or “Will you buy the Kant categorical imperative driverless car?”, and the like, they are not paying sufficient heed to this point.  A lot of the real “action” with driverless cars will be determined by the non-transparent features of their programs.
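As a rough illustration of that tuning tactic, here is a minimal Python sketch in which candidate evaluation functions simply play each other and the biggest winner survives. It is only a sketch: `play_game` is an assumed stand-in for an actual engine match, not code from any real chess program.

```python
def tournament(variants, play_game, games_per_pair=1000):
    """Let each pair of evaluation-function variants play many games and
    keep the variant that wins the most.  `play_game(a, b)` is assumed to
    return 0 or 1 for the index of the winner, or None for a draw."""
    wins = {name: 0 for name in variants}
    names = list(variants)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for _ in range(games_per_pair):
                result = play_game(variants[a], variants[b])
                if result == 0:
                    wins[a] += 1
                elif result == 1:
                    wins[b] += 1
    return max(wins, key=wins.get)   # the surviving evaluation function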

How will regulatory systems — which typically look for some measure of verifiable ex ante safety — handle this reality?  Or might this non-transparency be precisely what enables the vehicles to be put on the road, because it will be harder to object to them?  What will happen when there is a call to “fix the software so this doesn’t happen any more”?  To be sure, adjustments will be made.

More and more of our world is becoming this way, albeit slowly.

For the pointer I thank Michelle Dawson.

1 iamreddave July 9, 2016 at 6:39 am

It is not just driverless cars. Advanced machine learning creates a technical debt of not understanding why something is classified a certain way.

Why is one x-ray classed as cancer and another not? Who gets blamed for errors of type 1 or of type 2?

Not getting a job or a college place because "the computer says no," in an opaque way, is infuriating.

Laws to prevent racism and the like can be easily gamed. Often the algorithm isn't allowed to predict on race, but it can predict on the ratio of vowels to consonants in a name, and that ratio can correlate with race.
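A tiny, hypothetical sketch of that proxy-feature point, with arbitrary example names: the feature is trivially computable from data the model is allowed to see, even when race itself is excluded.

```python
def vowel_consonant_ratio(name):
    """The kind of proxy feature described above: a model barred from
    using race directly can still latch onto a correlated signal."""
    letters = [c for c in name.lower() if c.isalpha()]
    vowels = sum(c in "aeiou" for c in letters)
    consonants = len(letters) - vowels
    return vowels / consonants if consonants else float("inf")

# Arbitrary example names, only to show the feature is easy to compute.
features = {name: vowel_consonant_ratio(name) for name in ["Anna", "Bob", "Oluwaseun"]}
```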

2 Handle July 10, 2016 at 2:55 pm

Yes, but those who need opacity – a big class if you think about it! – will find it extremely convenient to be able to deflect responsibility and blame to the decisions of an algorithm that is both (1) provably reliable and (2) incomprehensible and non-interrogable.

Uh oh. A lot of bureaucracy exists to create Kafka-esque, defense-in-depth layers of impenetrable and invisible gatekeepers. For good reason! But now maybe we don't need these people and procedures because, if the lawyers let it happen, we'll have these 'provably innocent' algorithms that perform the same function. The penultimate frontier in automation. The final frontier is general coding. After that, the deluge.

3 prior_test2 July 9, 2016 at 6:43 am

‘will be determined by the non-transparent features of their programs’

No, it will be determined not by the features themselves but by the interactions of their programs, interactions which are not precisely non-transparent.

For example, several detailed articles concerning the Tesla fatality have said that the image recognition software considered the trailer profile to fit within the parameters of a street sign – which is completely plausible, especially on a highway with a regular sized object. The problem being that the other system was not designed for such a collision scenario. Two programs, each acting essentially as designed, were not sufficiently integrated, so that the vehicle did not recognize that a ‘street sign’ lower than the top of the Tesla was a danger requiring braking.

And let us be honest – that is an extremely basic mistake in system integration, not the programming of individual functions or of a ‘neural network’ – something which one can safely assume Tesla does not claim to provide its drivers.
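To make the kind of integration gap described here concrete, below is a deliberately toy, hypothetical sketch (not Tesla's actual architecture, and every function name and threshold is made up): a perception stage labels any flat, sign-like return as an overhead sign, and a planning stage trusts that label without ever checking whether the object actually clears the car.

```python
def classify_return(profile_match, underside_height_m):
    """Hypothetical perception stage: anything matching a flat, sign-like
    profile gets labeled an overhead sign, regardless of its height."""
    return "overhead_sign" if profile_match == "flat_rectangle" else "obstacle"

def plan_braking(label, underside_height_m, vehicle_height_m=1.5):
    """Hypothetical planning stage with the integration gap: it trusts the
    label and never checks whether the 'sign' clears the roof of the car."""
    if label == "overhead_sign":
        return "no_brake"   # unsafe whenever underside_height_m < vehicle_height_m
    return "brake"

# A trailer side at ~1.2 m underside clearance is labeled a sign and ignored.
label = classify_return("flat_rectangle", underside_height_m=1.2)
decision = plan_braking(label, underside_height_m=1.2)   # -> "no_brake"
```

Each stage behaves as designed; the failure lives in the seam between them.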

4 mulp July 9, 2016 at 3:51 pm

Two other vehicles had been recalled because their obstruction detection systems were triggering panic stops from radar reflections from overhead signs and bridges in the first case, and in the second case from plates in or on the roadway such as expansion plates in bridges or temporary construction plates.

False positives can be as dangerous as, or worse than, false negatives in a system under the supervision of a driver or pilot. A false positive results in taking control from the driver and making a mistake, while a false negative simply requires the driver or pilot to take control and correct the problem.

If several cars are cruising along with the first using a collision avoidance system, will a human driver be able to detect the car ahead doing a panic stop for no reason soon enough to initiate a panic stop, or will the driver first slow, then search for an explanation for the behavior, see nothing justifying a panic stop, and thus fail to really panic immediately? Generally the panic comes when it's clear you will rear-end the car that stopped for no apparent reason.

And a panic stop due to a plate in the road would occur at low speed, say while traveling under the supervision of a flagman. He's trying to get cars to move at 5 mph, but a car keeps hitting the brakes because it thinks it's detecting a curb or a parking-space barrier, as if it were parking.

Note, advocates of self-driving cars argue roads will carry more traffic because cars can travel at 60 with only a car length between them. Of course, passenger trains are made up of self-driving cars so close you can walk between them. The driver of the lead car, which is only now getting a system (positive train control) with the authority to override the human, can make, and has made, mistakes harmful to the self-driving cars and passengers following behind.

5 Ray Lopez July 9, 2016 at 6:52 am

More fundamentally, since computers deal with integers, I think Gödel's Incompleteness Theorem (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems) says there will always be 'errors' in any computer program.

So, it amounts to this: do you trust a human pilot in a storm more than an autopilot? I prefer the inflatable autopilot of Airplane! Your mileage may vary.

6 Rick Hyatt July 9, 2016 at 11:01 am

It does not. Do you think that it is impossible to write an error-less program which computes 2+2?

7 Gorobei July 9, 2016 at 11:55 am

Gödel's Incompleteness Theorem has nothing to do with this. It just says that in a sufficiently powerful, and consistent, system there are true things that you cannot prove to be true using the system. It has nothing to do with errors, and it doesn't even apply to computers, which are just big Finite State Machines.

8 Tagore Smith July 10, 2016 at 3:34 pm

On some level you could, I guess, say that computers are finite state machines, in that they have bounded memory. That's not normally how we think of them though, even (perhaps especially) in theory, and it's certainly not how we tend to think of software systems. The number of possible states a computer can take on is so large (and was, even given a few k of memory decades ago) that we tend to think of them not only as more powerful than FSMs, but as powerful, in theory, as any possible physical machine.

Ray's comment is… well, perhaps odd would be a polite way to put it. The Halting Problem (which could be seen as a bit analogous to the Incompleteness Theorem) does tell us, to be very loose about it, that it's not possible to write a program that can, in the general case, determine the correctness of other programs. It's an important (in fact, central) point of theory, but I'm not sure it's all that important to the point under discussion.

9 prior_test2 July 9, 2016 at 6:53 am

Though not exactly a high-quality article (though timely, one must admit), at least its author explores a couple of concerns regarding how robots, if not precisely 'neural networks,' are likely to function in the future in American society – https://www.washingtonpost.com/news/the-switch/wp/2016/07/08/dallas-police-used-a-robot-to-deliver-bomb-that-killed-shooting-suspect/

10 anon July 9, 2016 at 8:56 am

I warned of drones chasing soldiers with bombs in these pages a week or so ago. Sad to see it needed and deployed here.

11 Ray Lopez July 9, 2016 at 11:03 am

@anon- you may have warned a week ago, but around 1999 the NYPD was the first law enforcement agency in the USA to use a robot to shoot (via a shotgun mounted on the robot) and kill a suspect hiding in a closet. Since then they modified their protocol to avoid this (sadly IMO). Anyway such “robots” are not true AI robots since there’s a human being in the loop, directing the action from a safe distance.

12 anon July 9, 2016 at 11:13 am

I did not know about the 1999 case, but it is strange that people protest what are obviously converging technologies. Per Wikipedia a Claymore mine weighs 3.5 pounds. That’s at the edge of lifting capacity for commercially available semi-autonomous drones. A Phantom lifts about 3 pounds?

As the old story says “Somebody set up us the bomb.”

13 Mark Thorson July 9, 2016 at 11:19 am

If it’s operated by a human, it’s not a robot. It’s a teleoperator.

14 anon July 9, 2016 at 11:23 am

As you probably know, real systems are increasingly dividing intelligence between the "robot" and the "operator." "A human in the loop" is often a moral design decision, as when military drones acquire information and then send it to an operator for a "kill decision." As AI improves, the choice to keep the human in the loop will be a moral question. Perhaps in a hot war zone we will drop fully autonomous killer robots, and accept collateral damage. We do that with dumb munitions, after all.

I very much doubt that systems, even AI systems, would be deployed Stateside without that human in the loop though, no.

15 derek July 9, 2016 at 11:49 am

A PID loop controller has now been promoted to AI.

16 anon July 9, 2016 at 11:53 am

Go look up Rodney Brooks, and chart his path to the Pack Bot.

17 The Original D July 9, 2016 at 4:21 pm

A couple years ago during a tour of NASA’s Ames Research facility the guide showed us a small helicopter that could be programmed to lift off, fly several miles to a field, dust its crops, then return with no human intervention.

They of course only showed us the declassified stuff. Autonomous bombers surely exist, but they may not be in use in the field yet.

18 Daniel Murfet July 9, 2016 at 6:58 am

There is recent work in computer vision where two neural nets are trained simultaneously: one to classify an image, and one to output an explanation for why the classification was made the way that it was.

I'm not an expert, but I see no fundamental technical reason this couldn't be extended to other domains where neural nets are used to make decisions (although, to be sure, this is probably nontrivial research which is yet to be done).

You could object: how do you know the neural net’s explanation has anything to do with how it “actually” works? Well, how do YOU know the answer to that question about your own decisions? That seems hard to answer in a satisfying way; it seems plausible the Explainer net could be trained so as to give internally consistent answers, stable across a range of inputs.
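A minimal sketch of the "classifier plus explainer" idea described here, assuming PyTorch and not reflecting any particular published architecture: a shared encoder feeds one head that classifies and a second head that emits a distribution over input features as a crude explanation.

```python
import torch
import torch.nn as nn

class ClassifierWithExplainer(nn.Module):
    """Sketch: a shared encoder feeds a classification head and an
    'explanation' head that outputs weights over the input features."""
    def __init__(self, n_features=32, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.classifier = nn.Linear(64, n_classes)
        self.explainer = nn.Sequential(nn.Linear(64, n_features), nn.Softmax(dim=-1))

    def forward(self, x):
        h = self.encoder(x)
        return self.classifier(h), self.explainer(h)

model = ClassifierWithExplainer()
logits, explanation = model(torch.randn(4, 32))   # explanation: (4, 32) weights over inputs
```

Whether such an explanation tracks what the classifier "actually" does is exactly the objection raised above; the sketch only shows that the two outputs can be trained jointly.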

19 Axa July 9, 2016 at 7:18 am

Even if the word is not used explicitly, it seems they are trying to describe non-deterministic (stochastic) behavior. The models/algorithms are not so much complex as computationally intensive. The defining characteristic is that they produce different results each run, but if you run them 10K times you get the distribution of outcomes.

It's a very interesting question. Automated cars have data loggers. What happens when the data logs show an outcome that is far from the average behavior? Whose fault is it?

Today we are comfortable with a very well distributed liability for accidents: the other driver was drunk, I fell asleep at the wheel, errors following signs, car component failure, etc. We deal with this uncertainty by "punishing" the individual/organization at fault. We are fine because accidents have multiple origins and liabilities. What happens when the cause of accidents tends to be a single one? Will we lynch them?
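A minimal sketch of the "run it 10K times and look at the distribution" idea, with made-up parameters standing in for whatever actually varies run to run:

```python
import random

def simulated_stopping_distance(speed_mps, rng):
    """Toy stochastic model (made-up numbers): reaction latency and
    achievable deceleration vary from run to run."""
    latency = rng.uniform(0.1, 0.4)      # seconds
    decel = rng.uniform(5.0, 7.5)        # m/s^2
    return speed_mps * latency + speed_mps ** 2 / (2 * decel)

rng = random.Random(42)
runs = sorted(simulated_stopping_distance(25.0, rng) for _ in range(10_000))
median, p99 = runs[len(runs) // 2], runs[int(len(runs) * 0.99)]
# The liability question above is about a logged outcome landing far beyond p99.
```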

20 Rick Hyatt July 9, 2016 at 11:04 am

Most NNs are fully deterministic, but deterministic doesn't mean explainable or understandable or predictable. Run a CNN on one image and you'll get the same answer every time. This is critical to Google's training program, as a matter of fact, as it lets them record all the data from their cars and rerun scenarios back home in the datacenter – for example, in the dozen or so cases where a human took over manually, they can simulate what would have happened if the human hadn't (often, bad things).
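A small numpy sketch of that determinism-and-replay point (not Google's actual pipeline; the "policy" and logged frames are stand-ins): rerunning the same logged inputs through the same fixed weights gives bit-identical outputs.

```python
import numpy as np

def policy(observation, weights):
    """A purely deterministic function of its inputs: the same logged
    observation and the same weights always give the same output."""
    return np.tanh(observation @ weights)

weights = np.random.RandomState(0).rand(8, 3)
logged_frames = [np.random.RandomState(i).rand(8) for i in range(100)]  # stand-in for recorded sensor data

replay_1 = [policy(f, weights) for f in logged_frames]
replay_2 = [policy(f, weights) for f in logged_frames]
assert all(np.array_equal(a, b) for a, b in zip(replay_1, replay_2))  # bit-identical reruns
```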

21 spandrell July 9, 2016 at 7:40 am

http://mashable.com/2015/07/01/google-photos-black-people-gorillas/

Just imagine if the Google car is coded to kill animals rather than put a human in danger.

22 Bill July 9, 2016 at 7:45 am

Regulation often sets a standard, not how you get there.

So, you set a standard of so many accidents per mile traveled; so many deaths per hour, etc.

Certainly, you fix it if you know how to fix the problem (say, the car running into an object, such as a child, less than 3 ft high); but if it is a unique situation, it might just fall into the general category of accidents.

The deeper question is whether you envision a driverless car without a human attendant able to take over.

So, define driverless. You might have one standard for one situation (totally driverless) and one for another.

23 mulp July 9, 2016 at 4:35 pm

For several decades, planes have had the ability to take off and land on autopilot as well as cruise from one place to another, but these actions must be initiated by a human in response to commands from the ground given in non-standard signaling, with the pilot filling in lots of blanks based on training and experience.

In some cases, ground operations are as difficult as parking to go shopping from Black Friday to New Year's. How well will a Google car find a parking space in a filled parking lot with snow banks that overflow into parking spaces? But to say "no problem, the car lets us out and just keeps driving around in the traffic jam" means Google cars make holiday traffic worse.

It occurred to me that if wages are set at the price of a substitute, which is the argument for why AI will drive down wages, then the opposite should also hold. If an AI capable of handling holiday-rush shopping, or caring for an infant, or caring for your elderly parent suffering from dementia costs the equivalent of one million dollars per year in wages, then child care, elder care, and chauffeur work should produce million-dollar-per-year incomes.

24 Nylund July 10, 2016 at 11:04 am

There’s a pretty serious flaw in the logic of that last paragraph.

25 Ezequiel July 9, 2016 at 7:58 am

I do software for a living. Basic business stuff, nothing fancy like AI. Even here, “fix it so it never happens again” is a fantasy that managers sell to each other and scream at developers. Works about as well as “don’t forget your keys ever again”.

26 anon July 9, 2016 at 8:54 am

Yes, the conceptual idea of software is better than software itself.

27 jim jones July 9, 2016 at 8:17 am

MRI software has just been found to have been buggy for the last fifteen years; how can you ever trust autonomous cars?

http://www.theregister.co.uk/2016/07/03/mri_software_bugs_could_upend_years_of_research/

28 anon July 9, 2016 at 8:52 am

Good citation, as would be a reminder about emissions cheating, or about every time a car company tried to bury a safety issue (or succeeded?).

29 Dude July 9, 2016 at 10:45 am

how can you ever trust autonomous humans?

30 derek July 9, 2016 at 11:00 am

It isn't that there is a bug; that should be assumed. It is that the results were trusted without checking.

I know a building mechanical system that is DDC (building automation) controlled and monitored remotely. I walk into the mechanical room and hear things going on and off, noises that mean imminent failure, etc. The operators have fooled themselves into believing that what they see on their screens is a representation of reality.

I've been poking around in web programming recently, and a characteristic of the rather remarkably nice frameworks is a hard-and-fast rule: if it can't be tested, it is a bug. These smart people tear down and rebuild the whole paradigm of web programming to make it possible to quickly and easily write a testing routine to verify your application.

If vehicles have programming routines that generate untestable results, that is a bug. If it can't be logged and the path can't be verified, that is a bug. If the system is so complex that it can't be debugged, you are criminally liable if you put it into operation in a life-safety application.
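A minimal illustration of the "if it can't be tested, it is a bug" rule, using a toy stopping-distance routine that is purely hypothetical and stands in for any vehicle function whose output must be testable and loggable:

```python
import unittest

def required_stopping_distance(speed_mps, decel_mps2=6.0):
    """Toy function standing in for a vehicle routine whose output
    must be testable and loggable."""
    if speed_mps < 0 or decel_mps2 <= 0:
        raise ValueError("invalid inputs")
    return speed_mps ** 2 / (2 * decel_mps2)

class TestStoppingDistance(unittest.TestCase):
    def test_known_value(self):
        self.assertAlmostEqual(required_stopping_distance(12.0), 12.0)  # 144 / 12

    def test_rejects_negative_speed(self):
        with self.assertRaises(ValueError):
            required_stopping_distance(-1.0)

if __name__ == "__main__":
    unittest.main()
```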

From what I understand the Tesla driver trusted the software. A bad mistake. I’m certain there is a massive body of work describing the human/machine interaction, especially the necessity of keeping the human side involved and attentive.

There are lots of first-generation AI devices showing up in reasonably priced vehicles now: collision avoidance, braking, and steering. A friend bought one recently, not sure what make or model, but it wasn't a Mercedes or Tesla. What these things do is something akin to the AWD SUVs: an unskilled driver will drive faster and find a very sharp line between under control and catastrophe. There will be lots of avoided accidents and the like, but the accidents that do occur will be dramatic.

Does anyone here remember the transition between bias-ply and radial tires in auto racing? The bias ply, with its inherent flexing, would give the driver gradual feedback up to the limits of adhesion. A radial has far less flexing and would hold adhesion up to the limit, then let go. The limits were higher, but the process was far more dangerous.

31 Lord July 9, 2016 at 8:58 am

Why is less important than what. If humans can't predict the machines' behavior, we won't be able to live with them. We can't always predict people, but we are stuck with them.

32 Troll me July 10, 2016 at 3:50 am

Life would be so much better if everything were predictable.

33 Michael Gardner July 9, 2016 at 9:20 am

“I’m sorry, Dave. I’m afraid I can’t do that”…HAL 9000

34 Yoav Hollander July 9, 2016 at 10:16 am

While the actual implementation of driverless cars may be opaque (e.g. if much of it is done via neural networks), its verification (i.e. the set of scenarios it can handle and their parameters) can be transparent. I (try to) explain this in my post: https://blog.foretellix.com/2016/07/05/the-tesla-crash-tsunamis-and-spec-errors/ . See also https://blog.foretellix.com/2015/10/09/verification-challenges-of-autonomous-systems/ about verifying machine-learning-based systems in general.

35 anon July 9, 2016 at 10:24 am

Good stuff, a lot of knowledge and effort put into those.

I wonder how insurance companies will read such things, and what notices they will send to customers.

36 Yoav Hollander July 9, 2016 at 10:51 am

Thanks

I think it is now a common assumption that driverless cars will be insured by their manufacturers. But this leaves “regular” drivers and their insurers.

I guess if there really is a common (perhaps open-source) catalog of possible driverless-car scenarios and a per-manufacturer assessment of their expected outcomes, then insurers could use it for computing risks and thus premiums.
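A toy sketch of how an insurer might turn such a catalog into a price; every scenario, frequency, failure rate, claim cost, and loading factor below is made up purely for illustration.

```python
# Hypothetical shared catalog: annual scenario frequency, the manufacturer's
# assessed failure rate in that scenario, and an expected claim cost.
catalog = [
    {"scenario": "crossing truck, bright sky", "per_year": 20,  "fail_rate": 1e-4, "claim": 250_000},
    {"scenario": "stop-and-go traffic",        "per_year": 500, "fail_rate": 1e-6, "claim": 15_000},
]

expected_loss = sum(s["per_year"] * s["fail_rate"] * s["claim"] for s in catalog)
premium = expected_loss * 1.3   # plus a 30% loading for expenses and margin
```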

37 anon July 9, 2016 at 10:59 am

Am I correct in thinking that Google, and manufacturers who test platforms, self-insure? And that Tesla pretty much assumes that a driver’s current insurance covers “Autopilot?”

If so, I think the second group might get some letters.

38 stephan July 9, 2016 at 10:29 am

It's ironic that as we try to make machines approximate general intelligence, they seem to suffer from the same limitations as humans.

Do we understand how humans arrive at all their decisions? Can we make them predictable and reproducible?
How many times have humans been in a collision and all we get is a confusing set of explanations: "I didn't realize there was a stop sign," "I must have drifted out of my lane," "he came out of nowhere," etc.

39 Jim July 9, 2016 at 12:04 pm

A great general won’t be able to explain why he, in the heat of the moment, sent his troops one way over another. He just knew, because some part of his brain served up the answer to him.

A great poet won’t be able to explain why she chose one word over another. It just came to her while she was writing, or on a walk, or in the shower.

If we limited either of these people to only decisions they could explain, we’d be missing out on most of their greatness. Once computers get good enough — and in some realms they already have — it should be the same.

40 carlolspln July 9, 2016 at 4:55 pm

“A great general won’t be able to explain why he, in the heat of the moment, sent his troops one way over another. He just knew, because some part of his brain served up the answer to him”

‘He just knew’, because that was the intelligence served up for him.

As with every other great battle in history, the history of military warfare is the history of intelligence.

https://www.amazon.com/Intelligence-Wars-American-Collections-Paperback/dp/1590170989

41 Troll me July 10, 2016 at 3:55 am

What, the side with the largest number of weak-willed dumbasses who can be chained into war wins? I guess you must be assuming something altogether different …

As though the fact that we were not in fact the “noble savages” some had envisioned implies we were the precise opposite, or some such thing.

Anyways, certainly a smart general can make a difference.

42 anomdebus July 9, 2016 at 1:46 pm

I experienced a sort of spoonerism with the title of this post, somehow reading "Will we ever understand why careless drivers do what they do?", which is actually an interesting topic: how to automate reactions to human drivers.

43 Craig July 9, 2016 at 2:03 pm

What a fitness function is, and how machine learning works:

There is a confusion in the post between the means of making a decision and the evaluation of that decision.

In chess, we don’t grasp why it’s making certain decisions, but we do know what it’s optimizing toward: winning the chess game. The fitness function evaluates an algorithm as +1 for a win, 0 for a draw, and -1 for a loss.

I don't know the sorts of simulations run on the self-driving cars, but it's conceivable that you could specify the fitness function toward which the AI optimizes however you please.
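A minimal sketch of that distinction: the chess fitness function described above (+1 win, 0 draw, -1 loss), next to a purely hypothetical driving fitness function whose weights are the designer's choice, not anything any manufacturer actually uses.

```python
def chess_fitness(results):
    """Score an evaluation-function variant from tournament results:
    +1 per win, 0 per draw, -1 per loss, as described above."""
    points = {"win": 1, "draw": 0, "loss": -1}
    return sum(points[r] for r in results)

def driving_fitness(miles, collisions, hard_brakes,
                    w_collision=10_000.0, w_brake=1.0):
    """Hypothetical fitness function for a driving policy: the designer
    picks the weights, i.e. what the system is optimized toward."""
    return miles - w_collision * collisions - w_brake * hard_brakes

chess_fitness(["win", "win", "draw", "loss"])                  # -> 1
driving_fitness(miles=1_000.0, collisions=0, hard_brakes=12)   # -> 988.0
```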

44 Enrique July 9, 2016 at 3:14 pm

To some extent, the same could be said of human decision-making (do we really understand why people do what they do?).

45 dux.ie July 10, 2016 at 12:10 am

https://arxiv.org/abs/1606.08813

"EU regulations on algorithmic decision-making and a 'right to explanation'"

46 Steve Sailer July 10, 2016 at 5:52 am

Non-transparency is popular and useful.

For example, consider the question of whether or not potential vice-president Elizabeth Warren benefited from affirmative action at Harvard by claiming to be American Indian.

Affirmative action has been federal and elite private policy since roughly 1969, 47 years ago, and yet nobody has the slightest confirmation of who has benefited from it and who has not.

It’s a secret!

47 anon July 10, 2016 at 6:02 pm

Level unlocked!

The reason you and Donald hate that someone would take pride in a family story of Native American heritage is that it is pride in non-whiteness.

(Other than that, another reason to use family-income-based affirmative action.)

48 Rob P July 10, 2016 at 12:29 pm

It's not clear how complex neural networks make decisions, although it's a rich area of research. At the same time, we don't really understand how humans make decisions, and sometimes "mistakes" have drastic consequences (e.g., http://time.com/2902520/child-forgotten-car-deaths/).

With respect to things that have life-and-death consequences, we expect people to never make mistakes (and those who do often suffer legal consequences). For machines, we expect to be able to root-cause the mistake and fix it.

This might not be optimal for a future where machines can make better decisions than people, even if they aren't perfect or transparent in how they make those decisions.
