Robot Cars: The Case for Laissez-Faire

Very few people imagined that self-driving cars would advance so quickly or be deployed so rapidly. As a result, robot cars are largely unregulated. There is no government testing regime or pre-certification for robot cars, for example. Indeed, most states don’t even require a human driver because no one imagined that there was an alternative. Many people, however, are beginning to question laissez-faire in light of the first fatality involving a partially-autonomous car that occurred in May and became public last week. That would be a mistake. The normal system of laissez-faire is working well for robot cars.

Laissez-faire for new technologies is the norm. In the automotive world, for example, new technologies have been deployed on cars for over a hundred years without pre-certification, including seatbelts, air bags, crumple zones, ABS braking systems, adaptive cruise control, and lane departure and collision warning systems. Some of these technologies are now regulated, but regulation came after they were developed and became common. Airbags began to be deployed in the 1970s, for example, when they were not as safe as they are today, but airbags improved over time and by the 1990s were fairly common. It was only in 1998, long after they were an option and the design had stabilized, that the Federal government required airbags in all new cars.

Lane departure and collision warning systems, among other technologies, remain largely unregulated by the Federal government today. All technologies, however, are regulated by the ordinary rules of tort (part of the laissez-faire system). The tort system is imperfect, but it works tolerably well, especially when it focuses on contract and disclosure. Market regulation also occurs through the insurance companies. Will insurance companies give a discount for self-driving cars? Will they charge more? Forbid the use of self-driving cars? Let the system evolve an answer.

Had burdensome regulations been imposed on airbags in the 1970s the technology would have been delayed and the net result could well have been more injury and death. We have ignored important tradeoffs in drug regulation to our detriment. Let’s avoid these errors in the regulation of other technologies.

The fatality in May was a tragedy but so were the approximately 35,000 other traffic fatalities that occurred last year without a robot at the wheel. At present, these technologies appear to be increasing safety but even more importantly what I have called the glide path of the technology looks very good. Investment is flowing into this field and we don’t want to forestall improvements by raising costs now or imposing technological “fixes” which could well be obsolete in a few years.

Laissez-faire is working well for robot cars. Let’s avoid over-regulation today so that in a dozen years we can argue about whether all cars should be required to be robot cars.


Take away my self-driving car over my cold, dead body.

If you weaponize it, it might be a "firearm" under the Second Amendment!

A self-driving car bomb? Yes, that might work. An Oklahoma City without needing a man on the scene. Heck, if it's a hybrid the car might cross a couple of state lines before reaching its target.

I agree tort law and insurance companies will set the rules for the controls of the AIs driving these vehicles. Shame for all these philosophy majors who finally thought their time had come for an actual use for their cogitations on ethics. What does tort law say about the trolley problem?

"What does tort law say about the trolley problem?"

It says that no matter what you do, someone is gonna have cause to sue you.

This question has been asked and answered. The future isn't pretty.


Has no one heard of the Tesla Model X that rolled over while on "autopilot" recently?

"[a police investigator] said he likely will cite Scaglione after he completes his investigation, but he declined to specify the charge."

An "autopilot" is a smarter "cruise control" that does not remove the "driver" from responsibility, tort and ... whatever you call traffic law. This would be true in every car with a human watchdog. Leave the word "autonomous" for those vehicles with no human on board. Who would be legally responsible for those under current law? The person who presses the "go" button, on-board or not?

"Reportedly on autopilot." I can think of some incentives why the driver would report that, even if what Tesla reported ("there's no indication that Tesla's Autopilot malfunctioned") were true.

Maybe, but any driver trying to pass responsibility has a problem with recommended usage:

Tesla recommends that drivers keep their hands on the steering wheel at all times while on Autopilot and the automaker adds that drivers “need to maintain control and responsibility for the vehicle”

This reminds me of some "Incredible Hulk Hands" I saw on sale in the toy department at Wal-mart one time. These were big foam hands that made a noise of something being destroyed whenever you hit something with them.

They had a warning on them that they shouldn't be used to hit things.

Or toy lightsabers that include a warning not to use them for dueling.

Tesla and Google are taking divergent paths to the technology for self-driving cars. Google's path would limit the speed for self-driving cars to 25 mph. What's the likely demand for such a car? Tesla's path would not, but would rely on distracted drivers to intervene and avoid collisions. What's the likely demand for such a car? Tabarrok is correct in that consumers will decide if they prefer a safe but slow self-driving car or a relatively unsafe but much faster self-driving car. The problem is that consumers are irrational, and believe they can have it all. I recall many years ago Ralph Nader visiting my college campus promoting his book Unsafe at Any Speed. Two things stick out in my mind: one, Nader's technique of repeating for emphasis the points he wants the audience to remember (sometimes three times), and two, when he asked the thousands of students listening to him to raise their hands if they had a family member or close friend killed or seriously injured in an automobile accident and every student raised his or her hand.

In urban areas 25 mph is not perfect but acceptable. Software-driven cars are an attractive idea for cities with a train station right in the center. The slow and unsexy vehicles provide the last-mile link.

Also, old people. If golf carts are already an option, what if the golf cart drives itself?

Give Axa a cigar. The answer is simple: slow but safe self-driving cars and public transit within the urban area, and unsafe but fast cars plus public transit between urban areas. Of course, this is impossible absent government intervention, anathema to Tabarrok and the like-minded, so it's unlikely ever to happen. Instead, like us college students oblivious to the carnage taking place on the nation's highways, it will be laissez-faire coupled with chaos and carnage.

It's ironic that in one of the most heavily regulated places on Earth, autonomous small buses opened to the public a couple of weeks ago. The 4-wheel pods are in a 2-year test in the medieval part of Sion. If the development of autonomous cars takes 5-10 more years, is it really that bad?

"After successfully completing the first phase of testing, PostBus obtained all the necessary clearances to start the second and most exciting phase: opening the shuttles to the public."

Several weeks ago Cowen seemed (to me anyway) to acknowledge that "self-driving cars" is likely to be limited to self-driving transit; millions of autonomous vehicles, one for every person, going every which way is nice in theory but not in practice. Much like economics.

How is "millions of autonomous vehicles, one for every person" not nice in practice? It may not be efficient, but it is very convenient. And it seems a logical extension of current private car ownership.

Self-driving cars should enable new kinds of car sharing. Perhaps this will eventually bring an end to widespread private auto ownership. But in the short term, I doubt it.

"Self-driving cars should enable new kinds of car sharing. Perhaps this will eventually bring an end to widespread private auto ownership. But in the short term, I doubt it."

I'd give up my car ownership in a heartbeat for door-to-door transit at under 20 cents a mile.

"slow but safe self-driving cars and public transit"

I can't see that. Uber is already putting public transit under pressure. Self driving cars will kill it.

@LA: that is feasible in Suburbania, where your half-hectare grass patch is greener than your neighbor's. In the rest of the world, public transit is an efficient solution.

I should have hedged, "if they work out technologically." I don't believe that's a slam dunk at all.

But, if they work out technologically, they will markedly increase the number of cars, they'll ease commuting by car, and they'll move people who were public transit users towards car ownership or car sharing.

"In the rest of the world, public transit is an efficient solution."

We're getting lower population density as time goes by. So the fraction of the world where it works is falling.

And, as I said, even in urban areas, self-driving cars will displace public transit. Much as Uber is already displacing public transit. View self driving cars as a better Uber, or an Uber where you don't need to ride with a driver you don't know in a car you don't own.

Agreed. If I can get in my car and eat breakfast, brush my teeth, shave, and read the newspaper on the way to work, it would be perfectly acceptable if the ride took twice as long. By overlapping all of those activities, I'm actually saving time. Also, I'm saving fuel by travelling at a lower speed, so there's a public good being accomplished. I vote for the Google model -- 25 mph is good.

Sharing my ride with a smelly stranger? Ugh, no. I vote against the Uber model, but I don't oppose your right to make a buck that way.

Nader seemed to have been confusing (perhaps intentionally) two issues. Most car deaths, particularly those occurring to friends of college students, are due to alcohol, not speed.

I'm wasting my time, but here goes...

Ray, why do you think 'Google' picked 25 mph?

25 MPH is a widely used division in the United States. Laws surrounding Neighborhood Electric Vehicles (NEVs) and Scooters use it, for example.

As a guy who has crashed mountain bikes at around that speed, I'd say it is because 25 MPH is sort of within the limits of the human body. That is for minor injury, broken bones. At higher speeds bad things start to happen.

"25 MPH is a widely used division in the United States. Laws surrounding Neighborhood Electric Vehicles (NEVs) and Scooters use it, for example." -- Ding, ding.

I think that Google has stated that this is the reason.

Well, the number 25 symbolizes grace in the Bible; 20 means redemption and five means grace. Here's Google's explanation (in the NYT article I cited): "Google decided to play down the vigilant-human [Tesla] approach after an experiment in 2013, when the company let some of its employees sit behind the wheel of the self-driving cars on their daily commutes. Engineers using onboard video cameras to remotely monitor the results were alarmed by what they observed — a range of distracted-driving behavior that included falling asleep. “We saw stuff that made us a little nervous,” Christopher Urmson, a former Carnegie Mellon University roboticist who directs the car project at Google, said at the time. The experiment convinced the engineers that it might not be possible to have a human driver quickly snap back to “situational awareness,” the reflexive response required for a person to handle a split-second crisis. So Google engineers chose another route, taking the human driver completely out of the loop. They created a fleet of cars without brake pedals, accelerators or steering wheels, and designed to travel no faster than 25 miles an hour." It appears that Google didn't have faith in divine intervention, so Google elected to go with the next best thing.

If we allow automakers to do whatever they want with the robot cars, this will come to pass:

The real test will be when a car kills someone other than the driver, though it will be hard for Tesla's "beta mode" defense to hold up in any event. What other industries could get away with a "still in testing" loophole, do we think?

"What other industries could get away with a 'still in testing' loophole, do we think?"

The relevant metric is how many deaths per mile traveled the existing auto industry experiences. If Tesla cars in auto mode are safer than the average car without it, why would you argue against it?

The regulators are already on their way.

And apparently, they do not understand marginal reasoning at all: “'I’d actually like to throw the gauntlet down,' Mark Rosekind, head of the National Highway Traffic Safety Administration, said Wednesday at a conference in Novi, Michigan. 'We need to start with two times better. We need to set a higher bar if we expect safety to actually be a benefit here.'"

But by definition, if the safety of a self-driving car is only marginally better than a human-driven car, then safety improves with every new self-driving car on the road. Rosekind's thinking only makes sense if every self-driving car death is worth two human-driven car deaths. Is that loss aversion speaking? As in, losses are twice as costly as gains, and so every self-driving car death is new, and to make it worthwhile, you have to save two lives on the road? That can't possibly be the right way to set national policy.
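The marginal reasoning above can be made concrete with a toy calculation. All numbers here are illustrative assumptions, not real data, and the model is a deliberate simplification: any safety factor above 1.0 reduces expected deaths as robot miles replace human miles, so a "two times better" threshold is not required for safety to improve.

```python
# Toy model (illustrative numbers only): expected annual deaths when some
# share of miles shifts to cars that are `robot_safety_factor` times safer
# than the human baseline of ~35,000 deaths over ~3 trillion miles.
HUMAN_DEATHS_PER_MILE = 35_000 / 3e12

def expected_deaths(total_miles, robot_share, robot_safety_factor):
    """Deaths if `robot_share` of miles are driven by robot cars."""
    robot_rate = HUMAN_DEATHS_PER_MILE / robot_safety_factor
    human_miles = total_miles * (1 - robot_share)
    robot_miles = total_miles * robot_share
    return human_miles * HUMAN_DEATHS_PER_MILE + robot_miles * robot_rate

baseline = expected_deaths(3e12, 0.0, 1.0)   # all-human fleet: ~35,000
marginal = expected_deaths(3e12, 0.5, 1.1)   # half the miles, only 10% safer
assert marginal < baseline  # even a marginal improvement saves lives
```

Under these assumptions, shifting half of all miles to cars that are merely 10% safer already prevents roughly 1,600 deaths a year; requiring "twice as safe" before deployment forgoes those gains in the interim.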

It depends on how much NHTSA regulations increase the incentive for automakers to make their products safe.

In the conservative/libertarian worldview, the incentive for safety mainly comes from free market competition. Automakers already have a very strong incentive to make cars as safe as possible, and so this regulation won't increase the safety of the cars that are produced at all, just delay their introduction to the market. Thus setting the bar higher than a marginal improvement causes harm.

In the leftist worldview, automakers are going to do the bare minimum the government regulators require. So setting the bar higher really does result in safer cars. The benefit of this increased safety would almost certainly outweigh the harm caused by the delay in introduction of the technology.

I think the former worldview is much truer than the latter, but it is unlikely to be perfectly true. The regulations probably at least somewhat incentivize an increased focus on safety by auto manufacturers, leading to safer cars being produced. Whether a particular regulatory bar such as "twice as safe" passes a cost/benefit test is not clear, though.

Overall, I think the article portends well for self-driving cars.

The quality of his specific reasoning aside, the NHTSA head appears to view autonomous vehicles as a solution to reduce traffic deaths, rather than a problem, and the aim of regulations is to help the technology be deployed safely rather than prevent it.

The regulatory bar may still be set too high, and may slow the technology somewhat. But realistically, Alex's hope for laissez-faire was never going to happen. The big risk is killing the technology through regulation, and it seems hopeful that this will be avoided.

What you say in the second comment is very true. And while I agree that regulators incentivizing carmakers to make cars safer is a good idea, I don't see how a threshold of twice as safe as humans helps that along. If anything, the comments would disincentivize the Teslas of the world from doing live-fire testing, and encourage Google to continue along their precaution-over-everything-else approach.

I think how people react to this event will tell us a great deal. Maybe there is some Straussian wisdom to mandating twice as safe: Then when the fatalities happen at scale, the car companies can say "we made them twice as safe, as required" and that will quiet the blowback against self-driving cars allowing swifter introduction after.

@JL: It is the wrong kind of regulator. The use of the Autopilot idea in marketing should make the FTC act... I hope.

Or it could be anticipating measurement issues. What measurement are they going to use? Is it tracked by Google/Uber/someone with a vested interest in the technology? If that were the case, I would assume some inflation and would not accept only "marginally better."

I think the issue with Tesla is not about regulating technology development. As Alex said, lawyers, insurers and actuaries can handle this.

However, Tesla's marketing strategy sometimes can be considered false advertising. When Tesla is confronted with this issue, the answer is also deceptive: "bad anti-technology people try to hurt development." Consumer protection laws have this topic covered. A drug that helps 60% of patients with a problem is very, very good... cheers for science and technology. The only problem is advertising this drug as a cure-all. I think Tesla's Autopilot issue is more an advertising problem than a car safety problem.

Hey, there couldn't be any externalities from this, requiring some form of regulation, could there?

I mean, the market will take care of the funeral expenses of a six-year-old killed by a driverless vehicle.

Bill, that's the point. Regulations often slow down innovation, and self-driving cars will be safer than meat-bag-driven cars. More self-driving cars means fewer funerals for six-year-olds.

Also, I would conjecture that self-driving cars will behave more predictably and less aggressively than human-driven cars, so the gains in safety as they become much more widely adopted will probably accelerate because fewer other cars on the road will be doing dangerous things.

Dude, I know, regulations have also slowed down the development of flammable pajamas and defective drugs.

What does "the market" do for six-year-olds who are currently killed in car wrecks? I don't think anyone is considering eliminating compulsory auto insurance.

Actually, there is one more reason there is no "regulative answer" to driverless cars. Most drivers don't earn money while driving; actually, it is the opposite. They lose money because they drive instead of doing other productive work. So there is "a big lobby" for driverless cars and a very small lobby against. Even a taxi driver could be in favor of driverless cars. He'll buy two instead of one, rent them to clients via apps, and then enjoy drinking beer on the sofa while... checking the incoming revenue...

This assumes that political attitudes perfectly align with rational benefit, and that no one will think "I bet I am a better driver than that stupid machine."

Assuming this all works technically, those drivers will age out of the population.

Sure, there'll be car guys and overconfident old people, but young people will like the idea of watching Harry Potter during their commute. It's not like most driving is the Sunday afternoon jaunt on great roads - it's drudgery.

Bingo. Plus, for a not insignificant portion of the driving public, we could just take away their driving rights and force them to use autonomous vehicles: drunk drivers, people with too many tickets, people we would like to take licenses away from now but who are just on the cusp of being too bad a driver to be on the road. If you think you are a better driver than the machine, your record should bear that out, or forget it.

Yeah, but there are car guys who drive cars, and car guys who talk about cars. The ones who talk about cars will complain about it and adopt anyway; the ones who drive generally accept that the car is already better at driving than they are. I look forward to automation decreasing traffic issues by increasing the potential concentration of cars. That'll free up roadways for me to privatize into race tracks.

We have a regulation for self-driving cars: the driver's manual for a driving test.

Alex seems to ignore the regulatory changes that have actually allowed automobile technology to flourish, such as no-fault insurance streamlining the transaction costs involved with recovering for vehicular damage. A similar regime will almost certainly be necessary to obviate the thorny products liability issues surrounding autonomous vehicles, which itself necessitates regulatory involvement.

Laissez-faire economics, as AlexT puts it, is the same as what is known in FAA parlance as "tombstone engineering." The FAA is a conservative body that does not like to regulate until such time as somebody has died from an airplane defect. Hence (to pick but one of many examples): when square windows were used in jumbo jets, stress cracks after some cycles would blow out the windows, causing fatalities. Finally the FAA mandated that oval windows be used, which remain today. As this happened under a weakly regulated regime, it would also likely happen under AlexT's unregulated regime. In fact, the driverless car fatality occurred because the company in question was "beta testing" its cars using customers rather than professional drivers on a controlled course, as other players in this field are doing (Google this, it's a fact).

You could say "science --in laissez faire economics with new tech--progresses 'one funeral at a time'".

"The FAA is a conservative body that does not like to regulate until such time somebody has died from an airplane defect."

This is the opposite of true. Witness drone regulation and launch vehicle regulation, to name two prominent examples. The FAA is widely known as an aggressive, preemptive, and over-reaching regulator.

Now, they aren't very good at predicting risks, which is why so many regulations are built around risks that manifested in deaths because the FAA didn't see them coming. But that doesn't mean that they aren't eager to regulate based on speculation.

You are saying pilots have filed repeated complaints about lasers and drones because the FAA mandates that they complain about things that will have no adverse effects on them?

Are you saying wildfire fighting pilots should ignore drones in the area of wildfires where they are dropping water and retardants?

No, nobody said that stuff.

Ray claimed that the FAA doesn't regulate until people die. Drones are a counterexample of the FAA implementing extensive regulations before any high-profile accidents happened.

@Lord Action - historically the FAA is conservative, but regardless, here is the Wikipedia entry on tombstone engineering (I'm the authority on this topic, as I wrote the original entry, though Wikipedia does not show it):

tombstone engineering:
The practice of letting accidents or failures (perhaps occasioning death, but not necessarily) identify engineering problems.
2005: The media have the power to drive the FAA to actions that may not directly benefit safety but are very reactive to accidents, then they turn around and accuse the FAA of tombstone engineering for behaving in that very fashion. — Safe Skies International [1]

I thoroughly understand the concept.

I'm saying that the fact that a lot of aerospace practice came about due to deaths does nothing to diminish the FAA as a zealous regulator eager to seize turf.

They are conservative in the sense that they allow the minimum they possibly can, not in the sense that they are reluctant to regulate.

Is there any reason to believe that the FAA would have known square windows were more dangerous? Why would the FAA regulate square windows before it knew they were a defect in the first place? "Tombstone engineering" sounds really scary, but it could just be called "engineering": build the best product you can, test as best you can, fix any issues that come up later. You have the benefit of simply fast-forwarding the Discovery Channel show to the point where they say "later it was discovered that the shape of the windows caused structural weakness"; engineers don't have that luxury.

This fatality occurred because someone misused a feature. It's not autopilot; it's the same lane-following that a lot of cars have now, and they all tell you that you have to be ready to intervene. Blaming Tesla is like blaming Ford because someone turned on cruise control and went to sleep.

@MOFO - another example is air traffic control: before the FAA, pilots could fly according to visual flight rules if it was sunny. Then two experienced WWII pilots collided over Arizona, and the rest is history.

To answer your question: government boffins can make a difference. Take aloe vera as a laxative. Back in 2002, some well-known laxatives were banned by the FDA since their rewards were deemed by the FDA to be outweighed by their risks. I forget the names, but some of the OTC products were well known. And yet, aloe vera is known in the annals of medicine for the last 6000 years. Who to believe? The storehouse of human wisdom from the time of the Pharaohs, or some GS-scale government boffin who never worked a day of their life in industry? I know who I'm betting on.

@MOFO again - of course government delay can save lives; witness Thalidomide. And who is to say that consumers who buy driverless cars (or, for that matter, early passengers on Pan Am airplanes, who, urban legend has it, often had to deplane by parachute since routine landings were not perfected) accepted all the risks when they signed up to buy the product or service in question? I bet you that, ex post, most of the hapless tort victims will claim they did not "assume the risk." By having government boffins drag their feet, defects can be brought out in the lab rather than in the consuming public. Beta testing through consumers is not acceptable when said consumers have not been informed of all the risks. Children who cannot consent may be hurt. Regulate space travel? Yep. Billionaires need protection too.

Are consumers fully aware of the risks? Not always. But in at least some of the cases you cite, neither was anyone else. Here's the thing: we don't live in a perfect state. Making changes involves risk, but not changing has risks too. Having government boffins drag their feet *may* help bring out defects, but it *may* also delay life-saving changes. Everything looks good if you only consider the upside.

The square window problem wasn't the 747; it was the first commercial jet airliner, the de Havilland Comet. Catastrophic metal fatigue caused four to disintegrate in the air. The first, in 1952, was misidentified as due to weather; the investigation of two incidents in 1953 failed to reach a firm conclusion but thought fire probable. Another incident in 1954 led to yet another inquiry, which finally found the problem. It was wholly unexpected that square windows posed such a threat, and aircraft manufacturing has avoided them ever since.

The roads are a particularly poor place to argue for laissez-faire. Road regulations are clearly needed to prevent reckless or incompetent individuals from causing harm to others. It is a certainty that more people would die on the road without things like speed limits and driver licensing.

Also, the use of public roads is a privilege, not a right, which limits the applicability of libertarian arguments (of course people should be able to use any kind of self-driving technology they want on private land).

While over-regulation would be bad, I do think that some regulation of self-driving technology is necessary. Big players like Google and the automakers already have strong incentives to create a safe product. But what sort of things might occur at the fringe of a self-driving car market? Risks could come from smaller companies with inadequately tested software, homemade or modified systems, or even systems designed to drive aggressively/dangerously.

Note too that by owning test cars and paying drivers Google retains all liability. Sending out a software update with "autopilot" to Tesla owners puts the ball initially in their court. The "driver" invokes the "autopilot" and is supposed to cancel it in dangerous situations. Wait, what?

The natural outcome would seem to be that when something sufficiently bad happens someone turns to sue Tesla, to establish their involvement.

Ah, this LA Times story covers those diverging paths and resulting liabilities:

I could immediately get behind regulation that focuses on the expected behavior of vehicles on the road, rather than mandating certain technological features.

Stuff like: vehicles shall travel down the center of the road, except when turning or changing lanes. Vehicles shall indicate a non-emergency turn or change of lane 3 seconds before performing the operation.

But, then I would also expect this standard to be applied to humans. If anything, it will force regulators to address the ambiguous "flow of traffic" allowance for higher speed traffic than is the road's speed limit. They should either match the speed limit to the actual safe speed (an argument of the flow proponents), or not accept that excuse from humans.

What do we mean by safety, and why do we think consumers are able to correctly judge the complicated, probabilistic bottom line?

As to what safety consists of, the insurance industry – or more precisely the private Insurance Institute for Highway Safety – is less concerned about property damage than injuries; healthcare in the rest of the world is not cheap, nor is disability. And then there's the US. So IIHS focuses on crash tests and star ratings, the best ways they've found to sell safety. Now this is expensive, because making accidents more survivable is best done by adding mass to a vehicle, which works against fuel efficiency and lower emissions.

But on the margin, will the costs of autonomy outweigh the benefits? Initially these costs will come on top of the costs of other safety improvements. Oh, when autonomous cars are universal, then perhaps we can get rid of seat belts, airbags and crush zones. But the amount of time required to turn over the vehicle fleet is measured in decades. After all, the average passenger car in the US is 12 years old, and while we may see a few autonomous cars [Level IV] in 2020, even in the advocates' ideal case the ramp-up will take 15 years, given the way in which new technologies are rolled out, full model change by full model change. Vehicles will need not only radars, vision systems, and better windshield wipers but also electric steering. For larger cars – and maybe even small cars – all this additional electric load likely requires moving from 14V to 42V systems. Not a problem for Tesla, but we're still a long way from battery electric vehicles being a substitute for internal combustion engines. So in a recent teleconference of industry experts [off the record, so I won't cite details], the earliest anyone saw autonomous vehicles representing 30% of the fleet was 2035, and I'm less optimistic.
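The fleet-turnover point lends itself to a back-of-the-envelope model. This sketch is built on assumptions chosen purely for illustration (a linear ramp of autonomous cars from 0% to 100% of new sales over 15 years, and cars retiring after 15 years on the road); even under that optimistic ramp, the autonomous share of the whole fleet lags far behind the share of new sales.

```python
# Back-of-the-envelope fleet-turnover model (all parameters are
# assumptions for illustration, not industry data).
def fleet_share(year, ramp_years=15, fleet_life=15):
    """Fraction of the on-road fleet that is autonomous `year` years
    after launch, assuming the autonomous share of NEW sales ramps
    linearly from 0 to 1 over `ramp_years`, annual sales are constant,
    and every car retires after `fleet_life` years."""
    share = 0.0
    for age in range(fleet_life):        # each model-year cohort still on the road
        sale_year = year - age
        if sale_year >= 0:
            sales_share = min(1.0, sale_year / ramp_years)
            share += sales_share / fleet_life
    return share

# Even at the end of the 15-year sales ramp, when every NEW car sold is
# autonomous, only about half the fleet is autonomous.
assert fleet_share(15) < 0.6
```

Under these assumptions the fleet is only about 53% autonomous in year 15 and does not approach 100% until around year 30, which is roughly consistent with the "30% of the fleet by 2035" expert estimate cited above.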

Finally, what's the business case for full autonomy? With lane-keeping and position-keeping (adaptive cruise control), what's the benefit to the purchaser of the next increment? Doesn't that diminish, while the cost rises?

"Finally, what’s the business case for full autonomy?"

Consumers buying them.

" With lane-keeping and position-keeping (adaptive cruise control), what’s the benefit to the purchaser of the next increment?"

This seems silly. A vehicle with largely autonomous control is significantly better than one without. If those two items translate into me safely taking a nap or watching a movie, then they're fine. Otherwise I'll buy from the competitor with a better product.

This strikes me a little bit like Motorola asking why anyone would buy an iPhone. After all, the Razr flip phone makes calls just as well as an iPhone. And you can even text and snap photos.

Notice that, in the examples Alex provided, the regulation eventually REQUIRED the technologies even though those technologies were not safe enough in their original form to meet the current regulatory requirements.

I predict that eventually, self-driving cars will be required, at least in cities. Human-driven cars are extremely dangerous, especially by future standards.

Perhaps jet-packs will be required.

Actually, I get you. I just want to point out the risk in predicting that a future invention will be required. It is pretty much dependent on the future invention being invented.

Ah, similar point to Mercer below.

I just want to drag facts back into the theoretical argument here. Many people assert that humans are bad drivers. 35,000 fatalities annually. That is indeed 35,000 too many. But that is across driving over 3 trillion miles. Thus one fatality per roughly 100 million miles. Not sure this supports humans being awful drivers. But no matter. Tesla has said roughly 100 million miles have been driven on Autopilot. And we have one fatality. Yes, I know statistics tells us that this one data point means little, so it is likely a coincidence. But proponents of (semi)autonomous driving have told us it will be MUCH better than human driving. And so far it seems to NOT be better. Not for Tesla. And Google's cars (with enormous issues around data collection for crashes, I know, I know) are not outperforming humans, either. So, I wonder if we shouldn't move to laissez-faire only after we get a better track record. The technologies are indeed improving rapidly. But it seems premature to go to full LF when they are so far only "just as good as humans." IMHO. I agree with LF, I disagree with the timing.

Aren't most of those Tesla self-driving miles on highways, and therefore a lot more dangerous than the relatively slow driving miles common in cities and accrued by the Google cars?

What are the expected fatalities from 100 million miles of 70 mph driving? Are they more or less than the expected fatalities from 100 million miles of mixed driving?

I'm far from convinced by the technology, but it seems like these Tesla autopilot miles are not directly comparable to generic driving miles.

Again, Tesla recommends that drivers keep their hands on the steering wheel at all times while on Autopilot and the automaker adds that drivers “need to maintain control and responsibility for the vehicle”.

How are these 100 million miles broken down? Do we have data on how many times the driver took control and overrode the autopilot?

As I've said in prior threads, I think there is a contradiction here. You can't treat a smart cruise control, which if used properly is under constant supervision, as autonomous driving.

What is the point of a technology that steers, accelerates, and brakes for you, if not to allow you to pay less attention to driving? It's clearly intended to allow the user to supervise the driving process less (whatever the instruction manual may claim).

That's where I think it gets really interesting. Customers try to intuit what Elon really means, and those with less computer experience might intuit that he means more than he does.

To be harsh about it, Musk sets up the least competent customers of his technology to "explore the limits."

(The worst case customer buys a Tesla and says "It's self-driving" because it brings with it all the hopes and dreams surrounding self-driving.)

Highway driving actually has a lot fewer fatalities per mile than driving in city streets. While the speeds are high, it is safer because of the lack of intersections and oncoming traffic, wide roads, large radius curves, etc.

Is that true? I've been told the opposite by an ER doc friend. But that's not a citation.

I'm wrong! A quick Googling shows my doctor friend is full of crap.

Okay, I'm not sure he was wrong. It may depend on whether you're in the car or not.

If pedestrians are counted in that number, non-highway mile-deaths will skew lower with regard to speed.

Did I miss a zero? 3 trillion/35,000 = 1 in 86 million. Not extremely safer, but safer. Plus you're only looking at Tesla's dataset (cherry-picking the worst set of data). Add in Google's for a more representative sample and it is much safer.

Sorry, brain lapse. That is slightly less safe. But still, when you look at all autonomous miles driven rather than just Tesla's unnecessarily small sample, it is safer.
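The back-of-envelope arithmetic in this sub-thread is easy to check. A minimal Python sketch, using only the round figures quoted by the commenters above (3 trillion miles, 35,000 fatalities, 100 million Autopilot miles, one fatality), not official statistics:

```python
# Fatality rates from the round numbers quoted in this thread.
# These are the commenters' figures, not official NHTSA statistics.

US_MILES = 3e12          # ~3 trillion vehicle-miles per year (quoted above)
US_FATALITIES = 35_000   # ~35,000 road fatalities per year (quoted above)

AUTOPILOT_MILES = 1e8    # ~100 million Autopilot miles (Tesla's figure)
AUTOPILOT_FATALITIES = 1

human_miles_per_fatality = US_MILES / US_FATALITIES          # ~85.7 million
autopilot_miles_per_fatality = AUTOPILOT_MILES / AUTOPILOT_FATALITIES  # 100 million

print(f"Human drivers: 1 fatality per {human_miles_per_fatality:,.0f} miles")
print(f"Autopilot:     1 fatality per {autopilot_miles_per_fatality:,.0f} miles")

# With these round numbers, Autopilot's single fatality works out to a
# slightly better rate than the human baseline -- but with n = 1 the
# uncertainty swamps the difference, as the thread notes.
```

So "1 in 86 million" is correct for the human baseline, which makes one fatality in 100 million Autopilot miles marginally better, not worse, though a single data point supports no strong conclusion either way.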

I doubt the current self-driving technology is better than a good human driver in very many situations, but I think future technology is likely to improve to the point that no human driver can match its safety in any driving conditions. That's probably a couple decades in the future. I very much want to allow developers of self-driving technology to keep improving their technology to get to that point, and I also don't want self-driving technology to go around killing a lot of people (especially a lot of bystanders) while it's being developed. It isn't clear to me exactly where in this tradeoff is best, nor how we can get to the optimal point.

"But proponents of (semi)autonomous driving have told us it will be MUCH better than human driving. And so far it seems to NOT be better. "

10 years from now, humans will still be just as dangerous as they currently are. Does any reasonable person believe that autonomous cars won't be significantly better in 10 years time? And significantly better than that in 20 years?

This transition may well take decades, but it's likely inevitable. This conversation is about as relevant as talking about how a computer will never beat a Grand Master at Chess in 1975. It's possible that autonomous driving will turn out to be a "tough" problem, but I see no indications of it.

"10 years from now, humans will still be just as dangerous as they currently are." Maybe, but driving deaths have steadily declined over time. Will that trend stop? "This transition may well take decades, but it’s likely inevitable." I agree, but we're not talking about in 40 years, we're talking about right now.

I wonder, if the semi had been robot driven, would the nit-picking be about an unsafe left turn across a 65 MPH highway?

A human or autonomous driver who does not see and stop for a large object in the way has committed a major, dangerous error.

The question of who is legally at fault in the accident doesn't change this point.

Oops, reply ended up below @ 11:22.

I think it is an important distinction when we are talking about a driver augmentation system.

It also matters when it comes to fault. Let's say for argument's sake that the truck pulled in front of the car with no possibility of stopping in time. Yes, it is a system failure if the car didn't even try to brake, but the fault is still with the other vehicle.

The truck driver may be at fault, but the autopilot still failed (and pretty badly). The autopilot is what makes this a story of national interest.

In no way did I say that people shouldn't talk about this. What I believe will happen is that the story will be stripped down and eventually it will be "Tesla killed a guy".

One of the more interesting topics is how do you assign liability when an autonomous vehicle is in an accident. If you are put in a situation where you have a chance of avoiding injury (say someone pushes you while you are on a cliff), does it make it your fault if you handle it badly or is it entirely the person who put you in that situation? No matter where you draw a line, there is still uncertainty. The more you can measure, the smaller you can make that gray area, but it doesn't go away.

I don't think we have enough information at this point to say much more. They would have to recreate the situation to see what percentage of time the system fails to detect the truck. We don't have that at this time.

The category error runs deep here. Tesla's system is an enhanced cruise control. It is not an autonomous vehicle. And yet the premise of Alex's argument is that the two are the same. Most replies treat them as the same.

To correct this, just think of the system as what it is. It is a system of computers and sensors that can keep you in a lane, and mostly avoid obstacles. And then decide how you want to treat a system that mostly avoids obstacles.

(Perhaps at some level Elon Musk wants you to make this error, which would be bad.)

Remember, this isn't the first time a Tesla ran into a truck.

That looks like a pretty minor accident! It looks like the truck stopped in the lane was in neutral or something? In any case, I don't see any obvious damage on the truck after the collision, and the hood of the Tesla doesn't seem to move up at all. So it looks like it must have been a pretty slow-speed collision.

Actually the error is bigger than that: the premise of Alex's argument is that autonomous vehicles should be treated the same as any other technology, even something with no expected impact on other drivers, such as airbags.

I believe this system is an attempt to get from here to there. Because of this, it is rational to discuss what will be, rather than just what is. If we expected no progress, then yes, there would be no point in talking about autonomous vehicles since the high speed assisted vehicles would always need human attention.

The incremental attention model may or may not work, but I think their intention is to gradually expand the system's utility while getting a lot of real world experience.

I can see that, but think that trained drivers might be an important component in such a transition, rather than a random pool with varied expectations.

When we get to those random testers with no intuition or training on the behaviour of computer-controlled motion systems, we verge on tombstone engineering.

Perhaps a middle of the road approach is to require a psychological test prior to enabling the extra features, hopefully weeding out people who would put too much faith in the system?

Here is real tombstone engineering:

Invent cars. Let humans drive them on open roads. Observe fatalities. Reckon fatalities are inevitable. Wait for the marketplace to deliver safety technologies.

Inquiring minds might ask why Google? To take you there. Where? To the place to get whatever you tell Google you want. Think of the convenience. You ask Google where to eat lunch and Google takes you there. You ask Google where to buy shoes and Google takes you there. You ask Google for a date with a smart and beautiful woman and Google takes you there. Will Google charge me to take me there? No, Google will charge the businesses where Google takes you. So it is free? It depends on the meaning of "is" and "free". As for the date with the smart and beautiful woman, there might be a slight charge. All legal, of course.

Here's an interesting article by an author (Paulo Santos) who's long been critical of Tesla with detailed description of the incident:

He claims it's likely there are more deaths due to autopilot than we see, because if the driver had disengaged the autopilot just prior to the crash, it would not have been reported as an accident on autopilot. He also claims that Tesla's press release is misleading in the part about autopilot not recognizing the bright sky/white truck trailer in its path. He says the Mobileye technology is not designed to see crossing traffic, so the color of the sky and the time of day are not really relevant to the crash. I thought it was interesting.

Basically it seems an issue of driver attentiveness to me, but from a safety perspective we have to recognize these features are going to drive driver inattentiveness, increasingly.

Nice article, lots of details I hadn't seen before. I almost skipped continuing to read it due to the registration requirement, but found disabling javascript was enough to stop that. Disabling javascript also removed that ridiculous paging.

Yeah, good report. Possibly this story will drive some caution, but I worry too about lay user expectations.

Thinking about the camera issue. I don't see why they would mention the camera as a feint when all the defense they needed was the driver misusing the system. I don't think it is well known that another company makes the camera, so it doesn't move the fault in most people's eyes from Tesla. And where that separation would matter, you get into areas of libel if the camera was irrelevant to the crash.

I have to assume there is something on the Tesla side that we are unaware of. Perhaps due to trade secrecy, they won't add too many details. One possibility is that Tesla is doing in software now what Mobileye is going to implement in hardware in the future.

I think the part about the camera not being designed to see crossing traffic is just one piece of a long-running issue with what the author views as misleading PR from Tesla and Musk (and I am sympathetic to observations of hype). In this case, the argument is that Tesla/Musk would prefer buyers think the autopilot is more advanced than it is regarding crossing traffic it's not designed to see. Otherwise, why mention in the press release that the car failed to observe the trailer due to the sun, the bright sky, and the white trailer? The argument is that it wouldn't have seen it regardless of the color of the trailer or the sky. It's not designed to see in that way, as far as I can tell.

I personally don't see how "our camera didn't see something that you could plainly see" makes it sound advanced. It may be techno-realist, but it is far from inspiring.

I have no problem with pushing them to explain that detail, I just think there isn't any basis for saying the camera isn't used in any way that might have detected the semi. For example, is it absurd to think that they use image processing to try to validate the sonar? If they ruled out an ambiguous signal based on the fact that the video didn't look like anything, then their statement makes sense.

Dang ^sonar^radar ..

Rely on tort law?

Boy, is that a bad idea. Let's see. Massive delays in compensating victims. Huge variations from state to state as manufacturers are more or less successful in lobbying legislatures. Huge variations because of arbitrary awards by inexpert juries. Legal costs eating up 1/3 or more of compensation. Manufacturers concealing defects (Oh, they'd never do that!!) and fighting wars of attrition in court.

Yeah. That makes sense. Much better than just setting nationwide standards.

Yeah the tort law bit really stuck out, especially when calling it "Laissez-Faire". It is an already-existing, extremely messy and inefficient regulatory regime.

That first sentence is a real stumbling point. If anything, robot cars are coming online way slower than widespread predictions.

I think it's a little odd to call tort law Laissez-Faire. It is an abysmal means of compensating people, and an inefficient means of forcing the internalization of externalities. It also varies between States, which seems like a less-than-ideal situation for vehicles.

The uncertainty of litigation (both of the possibility of liability and of the likelihood of truly being made whole) likely reduces investment in these types of technologies compared to a more-certain regulatory environment. Some kind of national, no-fault-like insurance scheme seems superior to operating against a background of tort law that changes over time and across State borders.

Another problem here is that today, most accidents are caused by driver misbehavior - drinking, reckless driving, excessive speeds, etc.

For driverless vehicles they will, I suspect, be caused by a manufacturing defect or electronic malfunction. The manufacturer, not an individual driver, will be the defendant. The delays and costs will be much greater than in individual accident cases.

The Laissez-Faire approach is fine as long as
1.) The author is the one serving as the test passenger
2.) He's doing this far away from me.

Laissez-faire! We are a long way away from that. See today's Irish Times on regulating rickshaws – all of the lessons of regulation are in this article!
