Will we understand why driverless cars do what they do?

A neural network can be designed to provide a measure of its own confidence in a categorization, but the complexity of the mathematical calculations involved means it’s not straightforward to take the network apart to understand how it makes its decisions. This can make unintended behavior hard to predict; and if failure does occur, it can be difficult to explain why. If a system misrecognizes an object in a photo, for instance, it may be hard (though not impossible) to know what feature of the image led to the error. Similar challenges exist with other machine learning techniques.
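The "measure of its own confidence" the quote mentions is often just the probability a network assigns to its top category via a softmax over its final-layer scores. A minimal sketch of that idea (the category names and logit values are made up for illustration):

```python
import math

def softmax(logits):
    """Convert raw network outputs (logits) into a probability distribution."""
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical final-layer scores for three categories: car, truck, bicycle
logits = [2.0, 1.0, -1.0]
probs = softmax(logits)
confidence = max(probs)  # often read as the network's confidence in its top pick
```

Note that this number says how strongly the network prefers its answer, not *why* it prefers it, which is exactly the opacity the quote is pointing at.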

That is from Will Knight.  This reminds me of computer chess, especially in its earlier days, though the point still holds today.  The evaluation functions are not transparent, to say the least, and they were not designed by the conscious planning of humans.  (In the case of chess, it was a common tactic to let varied program options play millions of games against each other and simply see which evaluation functions won the most.)  So when people debate “Will you buy the Peter Singer utilitarian driverless car?” or “Will you buy the Kant categorical imperative driverless car?”, and the like, they are not paying sufficient heed to this point.  A lot of the real “action” with driverless cars will be determined by the non-transparent features of their programs.
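The tournament tactic described in the parenthetical can be sketched in miniature. Everything here is a toy stand-in of my own devising: each "evaluation function" is a single weight, the "game" is a coin flip biased toward whichever weight is closer to a hidden ideal, and real engines tune hundreds of parameters by playing actual chess.

```python
import random

random.seed(0)

IDEAL = 0.5  # the (hidden) best weight; no candidate "knows" this value

def play_game(w_a, w_b):
    """Toy game: the side whose weight is closer to IDEAL wins more often.
    A real engine would play out an actual game of chess here."""
    edge = abs(w_b - IDEAL) - abs(w_a - IDEAL)   # positive values favor A
    return 'A' if random.random() < 0.5 + edge else 'B'

# Eight candidate evaluation functions with randomly chosen weights
candidates = [random.random() for _ in range(8)]
wins = [0] * len(candidates)

# Round-robin: every candidate plays every other many times;
# whichever wins the most games is kept, no one asks why it wins.
for i, a in enumerate(candidates):
    for j, b in enumerate(candidates):
        if i == j:
            continue
        for _ in range(200):
            if play_game(a, b) == 'A':
                wins[i] += 1
            else:
                wins[j] += 1

best = candidates[wins.index(max(wins))]
```

The selected `best` ends up near the ideal without anyone ever inspecting, or being able to explain, what makes it good, which is the non-transparency point in a nutshell.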

How will regulatory systems — which typically look for some measure of verifiable ex ante safety — handle this reality?  Or might this non-transparency be precisely what enables the vehicles to be put on the road, because it will be harder to object to them?  What will happen when there is a call to “fix the software so this doesn’t happen any more”?  To be sure, adjustments will be made.

More and more of our world is becoming this way, albeit slowly.

For the pointer I thank Michelle Dawson.

