Truly driverless cars

California regulators have given the green light to truly driverless cars.

The state’s Department of Motor Vehicles said Monday that it was eliminating a requirement for autonomous vehicles to have a person in the driver’s seat to take over in the event of an emergency. The new rule goes into effect on April 2.

California has given 50 companies a license to test self-driving vehicles in the state. The new rules also require companies to be able to operate the vehicle remotely — a bit like a flying military drone — and communicate with law enforcement and other drivers when something goes wrong.

That is from Daisuke Wakabayashi at the NYT, via Michelle Dawson.


As a Tesla bear I enjoyed this: zero miles of vehicle testing on public roads in California in autonomous mode. They babble about billions of miles of driving data around the world, but the data is not accessible. Thus, trust the "engineers".

Also, I love the passive-aggressive tone of this Tesla report.

I don't think Tesla has any cars ready to go driverless yet. This is more likely to apply to mobs such as Google.

That Tesla does not have such vehicles was his point. When the rubber hits the road, Tesla's reputation appears to be just a tiny bit inflated - sort of like its stock price, hence his bear reference.

I'm also a Tesla bear, but I notice people hated them about a decade ago, or more, and said they would fail, but they did not. Having said that, I'm long Walmart and I would bet that Amazon will not match Walmart, and further that 'drone deliveries' are overrated due to the high cost of energy (flying is energy inefficient).

Good point. I was quite skeptical of Amazon, particularly Bezos's lack of concern with turning a profit for years, but he has been vindicated.

I'm not sure drone delivery will be more than a gimmick, but I doubt it really is less efficient than a human driver operating a vehicle that weighs thousands of pounds to deliver small packages.

As for Tesla, I'm still somewhat of a skeptic, but they have forced me to shift my position. I used to think that electric cars would not be a viable product at all, and I think their success so far disproves this. However, I continue to doubt whether they can transition to a moderately-priced mass market product.

First they ignore you, then they laugh at you, then they fight you, then you win.

Tesla not having a full autonomous car ready is the point. Google, Bosch and others have developed and tested full autonomous driving while collaborating with California's DMV. Performance statistics are public information.

This post from Mr. Tabarrok brings context. What if a company brings the technology to market first while following burdensome regulations?

How is the laissez-faire approach going?

("full autonomous car" ???)

What is that ?

How does it compare with a "Driverless Car", a "Truly Driverless Car", "self-driving vehicles", an "autonomous vehicle", etc.?

The terminology is so sloppy on this topic that very few people know what they are talking about.

The Federal government (NHTSA) has six official levels of automated road vehicles (SAE Levels 0-5), based upon the degree of automation. You are just bullshi__ing if you don't specify which automation level you are addressing in any discussion of 'driverless vehicles'.

"Level 5" is what most people have in mind with the concept/reality of 'driverless vehicles'.
Level 5 is what Google’s aiming for -- a car that can handle all driving tasks and go anywhere. No human, no steering wheel, no pedals. Climb in, tell it where you want to go.

Nobody yet has even marketed a Level 3 vehicle. Dream on.
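For reference, the automation levels can be sketched as a simple lookup table. The short descriptions below are paraphrased summaries of the SAE J3016 levels, not official wording:

```python
# SAE J3016 driving-automation levels (as adopted by NHTSA).
# Descriptions are paraphrased summaries, not official text.
SAE_LEVELS = {
    0: "No automation: human does all driving",
    1: "Driver assistance: steering OR speed assist (e.g. adaptive cruise)",
    2: "Partial automation: steering AND speed assist; human must monitor",
    3: "Conditional automation: system drives, human must take over on request",
    4: "High automation: no human fallback needed, within a limited domain",
    5: "Full automation: drives anywhere, all conditions; no human needed",
}

def describe(level: int) -> str:
    """Return the paraphrased description for a given automation level."""
    if level not in SAE_LEVELS:
        raise ValueError(f"No such automation level: {level}")
    return SAE_LEVELS[level]

if __name__ == "__main__":
    for lvl in sorted(SAE_LEVELS):
        print(f"Level {lvl}: {describe(lvl)}")
```

The gap the thread keeps circling is between Level 2 (human must monitor) and Level 5 (no human at all); "driverless" claims that don't name a level blur exactly that distinction.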

It's really puzzling how any time driverless cars make a step forward, this army of gnomes comes out of the woodwork to announce that driverless cars are impossible and will never happen. I'm sure this will continue right up until driverless cars are ubiquitous, and perhaps beyond.

So, the only thing required for a car or truck is a driver, and a driver is just a computer with hands?

I.e., drivers never open and close doors and assist passengers, or put things in or take them out of the vehicle, or diagnose and get flats fixed, or clean windows, refuel, ...

Not to mention dealing with confusing construction workers, or accident detour directions.

I don't understand the point of the driverless pizza delivery vehicle recently featured. I don't want to dress to go outside and drive 5 minutes to pick up a pizza, so I call for delivery, and then I need to dress to go outside to get my pizza anyway. And if I wait 3 minutes to finish my game, the truck sits in the street and all the other pizzas get delayed 3 minutes; if I wait to leave to pick up the pizza myself, no one else is delayed in picking up theirs. A driver will keep ringing the doorbell or start pounding on the door, plus I can have the 6-year-old open the door, give the driver the money if it's not already paid for, and take the pizza.

If UPS had set a standard for curbside package delivery boxes decades ago, maybe installing them the way newspaper delivery drivers installed tubes for new customers, I can see a vehicle designed to open, deposit into, and close that standard delivery box, even picking up packages first. But when packages go inside lobbies or businesses, or get placed on porches, inside garages, stuck in the storm door, put in the grill, or hidden behind the bush, the UPS truck will always have a human in it. He might not need to drive most of the time and could be doing package handling between stops, saving a minute per stop.

It's possible to know what you're talking about at the same time as not using the same terminology or categorization as that used in the regulating federal agency.

However, knowing one or more approaches to categorization on a particular matter may be instructive ...

"Truly driverless cars" simply means cars that literally do not have a driver in them. This is blatantly obvious, given that the post is about a law change to no longer require drivers.

And yeah, who cares about the federal government's categorization? When I was a kid, I thought there were actually four food groups, as in this is a property of reality. Then I grew up and learned better.

'The new rules also require companies to be able to operate the vehicle remotely — a bit like a flying military drone — and communicate with law enforcement and other drivers when something goes wrong.'

Considering how famously secure so many data connections to commercial products are, this is not exactly encouraging. Swatting a car seems just like the sort of thing a certain demographic would jump on, for example.

'A feud between two Call of Duty players led to the death of a 28-year-old Kansas man, who was shot and killed by police after a fraudulent 911 call sent a SWAT team to the man’s private home. The news was first reported by local newspaper The Wichita Eagle, which cites numerous now-deleted tweets in which Call of Duty players take responsibility for participating in or observing the intended prank, which came after an argument about an online wagered match reportedly worth just $1.50. One player allegedly provided a fake address to someone with a history of calling in fake threats. That person, later identified and arrested by the LAPD, proceeded to embroil the innocent stranger in the feud, according to independent cybersecurity journalist Brian Krebs.'

Notice that the procedure leading to that shooting death involved a third party making the actual threat - the buzzword at this website is MIE.

And taking over a vehicle means that one does not need to trust law enforcement to use their weapons to uphold law and order - 'In this case, Wichita local Andrew Finch, whose family members say did not play video games and was a father of two young boys, answered his door only to face down a SWAT team-level response. Allegedly, one officer immediately fired upon Finch, who later died at a hospital. It’s unclear why Finch, who is said not to have had a weapon on him, was fired upon. The Wichita Eagle reports that the police department is investigating the issue, which occurred late Thursday night.'

Lying to the police is far easier than hacking a car, even an insecure one. I assure you that random gamers don't have the technical chops or motivation needed to hack a company. The risk is a real hacker creating and distributing scripts that do it for others, but this is still harder than lying to the police - there's no well-known route to becoming a script kiddie.

'The new rules also require companies to be able to operate the vehicle remotely — a bit like a flying military drone — and communicate with law enforcement and other drivers when something goes wrong.'

Skynet approves.

Who in their right mind would want such a thing? Presumably it is so you get your pizza in 20 minutes or less. But what else is it good for?

I can't access the article, but I assumed it was solely related to the vehicles used for testing. In that context, it seems logical. For example, it would enable moving a test vehicle so it isn't blocking traffic, if something goes wrong.

Such a requirement for production vehicles would raise lots of questions, I agree.

'it was solely related to the vehicles used for testing. In that context, it seems logical.'

Yes, because there is no way one operator could oversee several thousand vehicles. Even in this scenario, though, it is a bit of a fig leaf. A certain number of failures are going to occur inside of any possible reaction time from a remote operator. Still much better than nothing, of course. (If reporting is to be trusted, the first fatal Tesla Autopilot crash was in part due to the software incorrectly classifying the side of a trailer as a street sign - the only way a remote operator could have prevented that crash was by being actively engaged with how the vehicle was travelling, which in that particular case the driver was not.)

'For example, it would enable moving a test vehicle so it isn’t blocking traffic, if something goes wrong.'

Or in a less benign scenario, opening a still travelling vehicle that is burning, so as to allow the occupants to escape. There will be a number of interesting trade-offs to be made in the future regarding such decision making, but a test vehicle is more likely to have a variety of problems than a production vehicle.

The feature would not be to prevent accidents (it's clearly infeasible to have a human operator take over quickly enough, as you note) but to enable the car to be moved if the software is unable to operate in a given scenario.

'Preventing accidents' is a broad concept. Obviously there are lots of variables, but there will likely be situations where the problem might be something like a panic button being pushed by an occupant. Assuming that the test vehicle allows for such user feedback, of course.

That first fatal Tesla crash is instructive in that regard. Autopilot apparently was not designed to handle cross traffic, and it failed to properly identify that the vehicle was about to smash into a truck at over 70 mph. If one were merely a passenger, one would want the chance to push a panic button in a comparable situation (for the sake of discussion, one where the vehicle was unable to properly identify a hazard - say a fire after a tanker spill that an occupant would recognize instantly). And that does happen - I have driven by one such accident on the A5 (going the other direction, and as fast as possible), and a friend actually (moronically) drove through a tanker accident on the Beltway in the late 80s.

It's plausible. But the human still needs to recognise the incident, AND recognise that the autopilot HASN'T recognised the incident, and hit the button, and have the car react.

How many accident/incident timelines is this feasible in? Some, certainly, but not many.
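A back-of-the-envelope calculation shows why that window is so tight at highway speed. All the inputs here (network latency, operator reaction time, braking deceleration) are assumed round numbers, not measured figures:

```python
# Back-of-the-envelope: how far a car travels before a remote operator's
# intervention takes effect, then brings it to a stop.
# All parameter defaults are assumptions for illustration only.
MPH_TO_MS = 1609.344 / 3600  # miles per hour -> metres per second

def takeover_distance_m(speed_mph: float,
                        network_latency_s: float = 0.5,
                        human_reaction_s: float = 1.5,
                        braking_decel_ms2: float = 7.0) -> float:
    """Distance covered during latency + reaction ("dead time"),
    plus braking distance v^2 / (2a) under constant deceleration."""
    v = speed_mph * MPH_TO_MS
    dead_time = network_latency_s + human_reaction_s
    return v * dead_time + v * v / (2 * braking_decel_ms2)

if __name__ == "__main__":
    d = takeover_distance_m(70)
    print(f"At 70 mph, roughly {d:.0f} m pass before the car stops.")
```

With these assumed numbers, a car at 70 mph covers on the order of 130 metres between the incident and a full stop via remote takeover, which is why remote operation can reposition a stuck vehicle but cannot plausibly prevent a sudden collision.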


The chance that a human operator could successfully "take over" in the event of machine error in these driving modes is slight. The fall-back wetware is redundant.

Unfortunately I have been reading articles about driverless cars for long enough that I no longer consider them impressive.

What is unfortunate about that?

I'm not sure whether unrealistic expectations for autonomous cars are a good thing or a bad thing. They are a good thing if they accelerate the development and use of them, but a bad thing if unrealized expectations cause most people to abandon or never use them. Realistically, autonomous cars will be a safe and efficient alternative if and only if we replace all non-autonomous cars or build a separate right of way for them (which is the same as replacing non-autonomous cars, since there won't be any on the separate right of way). In any case, I'm surprised that economists would be so enthusiastic about autonomous cars, because they are not an efficient way to move people from place to place compared to the alternatives. I suspect those economists are so impressed with "tech" and the billionaires "tech" has produced that they have convinced themselves that "tech" can perform miracles. I would remind them that not even all "tech" billionaires are happy with "tech" anymore, at least one having moved on in search of Galt's Gulch.

The obvious plan is to structure the use of autonomous vehicles in a configuration similar to cable television. Passenger/subscribers will pay a monthly fee for the service, with an additional fee for each use going to the owner of the vehicles. Passengers won't have any control over the vehicle except for its destination. A similar, and easier to operate, system could have been established years ago for air traffic in the far less congested skies, but no, there are still pilots. The only way autonomous vehicles will be common is if a national bureaucracy like the FAA takes over surface transportation, which is easy to imagine.

In other words, autonomous vehicles is a euphemism for "transit".

Are rental cars "transit"? This seems closer to that.

Like the FAA? Jesus.

Cowen's irony never disappoints. This time he follows a post about nudge theory with a post about autonomous cars and California's "nudge" to make them truly autonomous. Other nudges for autonomous cars include exempting the participants (i.e., the makers of the autonomous cars and the software that makes them autonomous) from liability for the likely mayhem. I fear that this nudge will be a reverse nudge since photos of the casualties will discourage the use of autonomous cars.

Apparently any law or policy is now a "nudge"?

There's a difference between mandating one option instead of another, as opposed to allowing both options but making one of them the default option and/or cheaper.

In direct contradiction to the whole 'tacocopter' claim. Innovation doesn't come from removing regulations; innovation drives regulation. Regulation is cumulative because there's often no value in actually pruning regulations that innovation has made moot.

Innovation flourishes where regulatory constraints are relaxed.

Ahh, the Greek Chorus of MR sings!

Except that's not the case here. No one relaxed laws about self-driving cars. There were no laws about them until recently, and probably 20 years from now there will be a huge amount of self-driving car law. Innovation demands and drives the regulation.

This has been done before. It was called a horseless carriage.

Yep. They didn't have the internet in 1900, but plenty of people wrote letters to their newspapers about the menace of switching from horses to cars. Safety, practicality, cost. All the same arguments.

In 1900, they did not switch from horses to self-driving horses which are potentially remotely operated by their owners and/or the government and/or criminals.

For example, the reins of horses and the steering wheel of a horseless carriage are both firmly in the hands of the person who is taking themselves from one place to another.

If things go well, for practical purposes it won't matter. But in the meantime, history has long shown that there are those who would exploit basically any possibility toward exercising political and/or economic control that may be exploited, were they permitted to get away with directing things toward being set up in a manner that facilitated it.

For example, would you tolerate a regulation which allows an autonomous car company to refuse to make a pit stop for "Starbucks coffee" out of preference for a stop 5 minutes later, due to some predicted efficiency? Without thinking about how the same technology could be hacked and/or repurposed for subtle-yet-in-fact-extreme draconian control over people's movement and activities?

If an operating company or manufacturer can operate an "autonomous" vehicle remotely, so can a determined hacker. Presumably the processors used in AVs have the same Spectre and Meltdown flaws we have been told are in all processing chips heretofore manufactured, so they must be in these vehicles as well.

Leave it to the Calif DMV to do something stupid and make testing of truly self-driving cars nearly impossible. It is as if they want to appear not to be blockers of new technology evolution while blocking it.

The requirement - "The new rules also require companies to be able to operate the vehicle remotely — a bit like a flying military drone" - would require full data streams to a remote location or operator, and that assumes the vehicle's data streams are valid and that it didn't shut down because its software detected an error or inconsistency.

Step 1) require companies to be able to pilot "autonomous" vehicles remotely (by definition, then, not in fact being autonomous).

Step 2) mandate that faux-autonomous vehicles may only transport people with identity verified at the beginning, middle and end of every trip.

Step 3) enable the operator to turn off or otherwise mess with the operations of any vehicle transporting a person with politically inconvenient views, ideally at the maximally troublesome time and/or location.

Step 4) equip vehicles with some means of verifying sufficient progress toward ideological reform before the problem may be rectified on the basis of some faux premise.
