Tuesday assorted links

1. Sam Harris interviews Robin Hanson.

2. I still find the Cambridge Analytica story confusing.  This article is useful, but it heightens my confusion too.  Who exactly did what wrong?  And I don’t think I agree with the framing of this Michael Dougherty piece, but it does pose some useful questions.  Here is the Bershidsky take.

3. “Ask a child to draw a scientist, and she’s more likely than ever to draw a woman.”

4. Megan is skeptical about the greater safety of driverless cars.

Comments

#2: My understanding is that no one did anything wrong. Actually, this is an important discussion because it shows just how malicious and misinformed the media is. They don't like the output (Trump won), so they try to rewrite history and find someone to blame for it.

This even feeds into the whole RUSSIA frenzy. I mean, even if Trump colluded with the Russians, what did they collude on? Nothing! Just fake Facebook posts, which by any measure are mostly innocuous. So yeah, call me partisan (even though I didn't vote for Trump), but this whole thing is just one long, extended and loud session of Democratic crying.

In the meantime, here comes GDPR! We are all saved.

This is also my take, FYI.

More or less agree with FYI. The breaking news on CA seems to be coordinated features dropped at the same time: the Channel 4 video, interviews with an ex-employee, the NYTimes editorial, etc. In reality, nothing is very new or something we didn't know last year: that CA used Facebook data to target political ads based on psychographic profiles. I'd also be interested to know if I'm missing something.

I also agree with this.

Consider as well Obama's 2008 Facebook efforts, which scraped the social network graph in *exactly the same* way CA is alleged to have.

http://swampland.time.com/2012/11/20/friended-how-the-obama-campaign-connected-with-young-voters/

And consider an insider's claim that Facebook endorsed their efforts for political reasons: https://twitter.com/cld276/status/975568208886484997

In that example though, people gave explicit and informed consent.

One charge here is that Cambridge Analytica "repurposed" another, different consent.

"People", phhh..... that's a nice way to put it.

You tell someone (Facebook) all your personal information and then get upset when they tell someone else (CA). Boohoo, what a joke; anyone with half a brain already knew about this. Many people have warned you not to give all your personal information out to strangers, but you didn't listen.

I think Facebook has a different name for these "people": "suckers".

On one level that is true, and it is why Facebook stock took a hit.

Still, they may have tripped some legal barriers, as discussed below.

It's literally illegal to collect information for one purpose and then use it for another. This is the meaning of "consent". For example, if Yahoo sells its email list to North Korean online political operatives, that would be very illegal.

Not always, but when it's 50 million people's data and the other purpose is related to who makes the final decision on the largest nuclear arsenal on the planet, it kind of matters.

Suggesting that it's like someone looked at your Facebook page and then told someone else is beyond absurd.

+1. I highly doubt that the vast majority of people would have refused to give more explicit consent. Just check the box and move along.

You "highly doubt?"

OK. Who cares what you doubt?

The fact is that they did not give consent. You can't change that by asserting, without evidence, that they would have if asked, especially since the user was working on behalf of the Trump campaign.

Troll me, you entered all your information into Facebook's database and acknowledged a form giving them permission to use it however they wanted; nothing about that is illegal in the US.

Facebook gave conditional permission for another company to use their data in a certain way. The company broke their agreement with Facebook, but that's a matter between Facebook and the other company.

Those afflicted with Trump Derangement Syndrome weren't exactly the people his campaign would care about targeting. And, yes, given what people have been willing to hand over to companies for marketing purposes without a second thought, I stand by my claim that most wouldn't bother to read the fine print of any consent agreement.

I have not proposed that the users whose data was taken without their permission have some grounds to sue (although I'm not sure whether that's true, but most likely they will never know who they were). I'm saying that the people who broke laws broke laws.

It is up to Facebook to do what it can to make things right regarding those who abused access beyond the agreement. And it is up to officials empowered by the public to make sure that Facebook is doing its due diligence (and/or to determine whether specific individuals need to be held accountable in some manner).

'Facebook gave conditional permission for another company'

No, Facebook gave conditional permission to an academic researcher. According to Facebook, it never gave permission for that academic researcher to provide that data to a commercial company - see below for what Facebook itself says. Further, Facebook says it revoked that conditional permission in 2015.

That's not true.

Obama's '08 campaign exploited the same API weakness, by which one user's consent to have data collected (be it explicitly for a campaign or by a "personality test") resulted in their social graph being exported. Certainly the friends of the person who surrendered their data didn't explicitly consent to that.

I guess you can have a minor gripe that the person enabling the scraping of the data for the Obama campaign knew it was going to the campaign, but outside of the context of elections this really shouldn't be surprising. You should notice by now that what you do online very strongly influences what you see online - advertisements, political or otherwise, not to mention search results.

It's also not even clear that it's factually true that CA actually used the personality test data set.

And besides all of that, there's no reason to think they were actually that effective at changing minds.

It's OK to break the law, as long as you do it badly, and don't gain anything as a result.

What law did they break? Breaching a contract is not against the law.

On the surface, CA is perhaps a morally problematic actor in terms of electoral research and targeting. There could be additional details we aren't aware of, but regardless I think it's fine to discuss the details of campaign tactics and the use of voter or population data. If you're not interested in those details, cool, consume media on literally any other topic.

More broadly, there is perhaps a legal issue with Facebook's handling of consumer data and privacy. This issue, without question, has been clearly adjudicated and long recognized to be within FTC oversight and joint FTC/DOJ enforcement authority per the Federal Trade Commission Act of 1914 (specifically Section 5, 15 U.S.C. § 45, Unfair or Deceptive Practices Affecting Consumers). The FTC has opened a probe into Facebook and we'll see whether it has any legs, but I wouldn't want to be in Facebook's position. If you don't like the breadth of FTC authority then (a) you're outside of the political mainstream, and (b) you should vote for candidates who would seek to amend the law. It should be noted that the current acting FTC chair was designated by Trump and his current nominee is a former FTC bureau chief who has already voiced intent to aggressively enforce consumer privacy protections with tech companies.

Why weren't these significant issues in 2012 when the Obama campaign was bragging about using Facebook to the same ends?

"But to the Windy City number crunchers, it was a game changer. “I think this will wind up being the most groundbreaking piece of technology developed for this campaign,” says Teddy Goff, the Obama campaign’s digital director.

That’s because the more than 1 million Obama backers who signed up for the app gave the campaign permission to look at their Facebook friend lists. In an instant, the campaign had a way to see the hidden young voters. Roughly 85% of those without a listed phone number could be found in the uploaded friend lists. What’s more, Facebook offered an ideal way to reach them. “People don’t trust campaigns. They don’t even trust media organizations,” says Goff. “Who do they trust? Their friends."

Not 100% sure, but I think the difference is Obama's campaign just got a list of the friends, whereas here CA actually mined the data of the friends without those friends' permission. Still though, I don't think people would have had as much of a meltdown even if Obama's campaign had done that. I think why people are actually annoyed is that CA used the data to spread a lot of fake news stories that were then shared widely, which in and of itself is not illegal, so this is the backdoor method they're using to get at that.

It was discussed at the time; both the Obama and Romney campaigns were using targeted social media campaigns. "Fake news" wasn't a term yet, and I don't believe there were concerns about using the data for misinformation vs. improving outreach and voter activation, etc. -- but you're certainly welcome to write the history of political data mining on social media if it interests you. Certainly the HC campaign was going digital in 2016; you could write a chapter on that too.

I'm not really interested in whataboutism, since that's literally pointless and only serves to avoid talking about the issue at hand. Do you want to talk about the '96 campaign and Chinese money to the DNC? I mean, we all have Google; we could "what about" endlessly and never move on.

CA was engaged in some potentially unseemly tactics. Does it concern you? Should we discuss whether it's gone too far? Or is it really an issue with social media and consumers' ability to understand what they're consenting to? There are lots of things of public concern that aren't legalistically wrong.

That tape was deceptively edited.

It's not whataboutism to ask why the media suddenly perked up its ears.

In fact, it's very troubling. More troubling than CA itself.

If my watchman isn't watching, I'm worried about my watchman not the specific burglar from last night, when I found out he let in a burglar two weeks ago, too.

'Why weren’t these significant issues in 2012 when the Obama campaign was bragging about using Facebook to the same ends?'

Well, the Register contradicts that perspective, in typical Register fashion - 'Another of Cambridge Analytica’s Tweets tried to paint its electioneering activities as anodyne.

Obama's 2008 campaign was famously data-driven, pioneered microtargeting in 2012, talking to people specifically based on the issues they care about. 6/8
— Cambridge Analytica (@CamAnalytica) March 17, 2018

As luck would have it, The Register encountered the Obama campaign’s chief technology officer, Harper Reed, in 2013. Here’s how we reported some of what he had to say.

“Data on what car you drive was not very useful in the campaign,” he said. “We did not use that much private data.” More useful, Reed said, was simple data points like a response to the question “do you support the President?” With a response to that question and information on whether an individual had voted in the past in hand, the Obama campaign was able to identify a voter as someone worthy of their attention.

We also wrote the following:

Reed also cautioned old people – anyone over 25 in his big-beard-chunky-earrings-and-thick-framed-glasses world – not to panic on the topic of privacy. Oldsters are uneasy with the notion that Facebook et al mines their data, he said. Young folk have no such qualms, understand the transactions they participate in and are more familiar with the privacy controls of the services they use.' https://www.theregister.co.uk/2018/03/19/facebook_suspends_account_of_cambridge_analytica_whistleblower_chris_wylie/

Yeah right, this is just playing along:

"Nix said no. 'Just saying we could bring some Ukranians in," he said, adding "they are very beautiful, I find that works very well.'"

Volunteering that *you find* that it works very well.

wait is roughly correct. CA actually mined data of the friends of those who downloaded the "thisisyourdigitallife" app, which is a breach of the Facebook license agreement.

The Obama campaign simply recorded a social graph based on data you get when you use Facebook to authenticate user identities, which is not a breach of the agreement, as far as I know.

These are both morally and legally different from one another, although I would argue both are unacceptable, but most people would not agree with me.

But they gave permission in the context of a campaign, not in the context of some paid online survey "for academic purposes".

Anyway, the matter of apps having access to people who have not consented is a problem, regardless of whether the original explanation of an app was the same as the actual purpose.

Giving me a list of my friends and asking me to contact them (by pushing a button) is not in any way the same thing as CA getting my information because a friend downloaded an app for a purported academic study.

No one did anything more wrong than anyone else... yet this is all very wrong.
We need to be careful of mood affiliation on this. In both directions.

"We need to be careful of mood affiliation on this. In both directions."

That bit of advice is pretty much always useful. ;)

Sure, but at the end of the day the question will become whether we want to outlaw "fake news" or not. That question is, in my opinion, very dangerous. The main problem in my view is that news agencies and Democrats are using roundabout ways to force us toward one specific answer here.

There's a difference between Facebook being required to take responsibility which may reduce the volume of fake news on that platform, and, say, to outlaw anyone exchanging stories without fact checking every detail.

Does this comment of yours mean anything except that, upon reaching a certain threshold of exposure, Facebook profiles will become subject to scrutiny that may result in censorship based on the subjective opinion of a Facebook employee?

A subjective opinion of a Facebook employee to ban someone from the Facebook platform for (what they believe to be) intentionally spreading politically-sensitive disinformation is not similar to throwing people in prison for accidentally sharing misinformation.

But to be clear, the subjective opinion would be biased to the left and the scrutiny will be made de facto or de jure required by leftists in government, right?

It would seem that way if you're on the furthest 0.1% to the right on the spectrum.

Also, as an international business, it would not be surprising if, on average, its commercial interest would not be well represented by being to the right of the median US voter.

FYI is correct. All you need to know is that all the media outlets are using the same vacuous watchword (“harvest”) to describe what I hope they don’t understand but suspect they do, namely that it’s legal and completely ubiquitous. “CA harvested Facebook data” makes it sound evil, gross, and surreptitious, like some necromancer harvesting organs from a cadaver, instead of what it actually is, i.e. what every big data company legally does all of the time because somewhere in your millions of online interactions you agreed to terms of service that permitted them to. We’re not talking about people’s email inboxes, we’re talking about their Facebook profiles. It couldn’t be more of a stretch. I want to be optimistic about the left and the media but I feel I’m left to conclude, as the old saying goes, that they’re either stupid or evil or both. This story is an attempt to deliberately mislead.

'My understanding is that no one did anything wrong.'

Well, Facebook most certainly thinks that Cambridge Analytica did something wrong, as noted by Facebook itself - 'Several days ago, we received reports that, contrary to the certifications we were given, not all data was deleted. We are moving aggressively to determine the accuracy of these claims. If true, this is another unacceptable violation of trust and the commitments they made. We are suspending SCL/Cambridge Analytica, Wylie and Kogan from Facebook, pending further information.

We are committed to vigorously enforcing our policies to protect people’s information. We will take whatever steps are required to see that this happens. We will take legal action if necessary to hold them responsible and accountable for any unlawful behavior.' https://newsroom.fb.com/news/2018/03/suspending-cambridge-analytica/

LOL. FB is doing CYA.

"there is gambling going on here? Why I never!"

+1 for the reference

https://www.youtube.com/watch?v=SjbPi00k_ME

One of my faves...

Sure - but if nothing wrong was done, why would Facebook feel the need to cover its ass?

You really cannot have it both ways - that Facebook data was revealed to be easily exploited in ways Facebook said it shouldn't be is one of the obvious points here. Which in no way contradicts the fact that Facebook data was exploited in violation of the terms of its use, especially given Facebook's statement that data supposedly intended for academic research was sold for commercial purposes.

No, you really can have it both ways. There’s a media firestorm. Facebook needs to do CYA because of that fact alone, regardless of whether the underlying actions were wrong.

Carol Davidsen tweeted that "Facebook was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing."

'They came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.'

This to me suggests they knew exactly what was going on.

"Gambling here?"

"Sure – but if nothing wrong was done, why would Facebook feel the need to cover its ass?"

Because they are going to be subject to retaliatory action by Senate and Congressional Democrats as well as Democrat attorneys general if they do not?

'Because they are going to be subject to retaliatory action by Senate and Congressional Democrats as well as Democrat attorneys general'

This is one of the real weaknesses of being American-centric. The current legal problems are all centered in the UK, where a Conservative led government is going after criminal activities on the part of Cambridge Analytica, and which will likely expand to include Facebook.

And unlike in the U.S., there is just not that much feeling in the EU that Facebook is anything but a for-profit surveillance tool which has no problem ignoring EU data privacy laws, as demonstrated over the years by this Austrian - 'Maximilian Schrems (usually referred to as Max Schrems) is an Austrian lawyer, author and privacy activist who became known for campaigns against Facebook for privacy violation, including its violations of European privacy laws and alleged transfer of personal data to the US National Security Agency (NSA) as part of the NSA's PRISM program. Schrems is the founder of NOYB – European Center for Digital Rights and has published a copy of his personal data received from Facebook and documents relating to legal procedures on europe-v-facebook.org' https://en.wikipedia.org/wiki/Max_Schrems

My own cautious take on it is that the Russians did indeed meddle, quite substantially (though not to the point of altering actual votes), but that Trump himself was not party to any of it.

It's important to note that Cambridge Analytica was involved in both the Trump election and the Brexit vote. From a Russian strategic view, the Trump election was a trivial event, but Brexit has actual strategic great-state alliance ramifications. Clearly, Britain outside of the EU makes the EU weaker. Furthermore, Britain is generally regarded as one of the more aggressive members of the EU. So, Russia may feel that a Britain-less EU is a distinctly more malleable EU.

I doubt the Russians care about the EU that much vs. NATO. Yes, the EU technically is a superpower... but it's not. Maybe a regulatory one.

Also, breaking up the EU might seem like payback for breaking up the USSR.

But having UK out of the EU might be more harmful than having it in: in the EU, France and Germany constrain the UK.

Also note that Cambridge Analytica wasn't used by the Trump campaign for the general election. They used the RNC's better data instead. They only used them in the primary because they weren't sure if they could fully trust the RNC to play fair with them at the time.

Indeed. I don't think putting CA and alleged Russian meddling in the same bracket is correct. CA spreads fake news to targeted audiences; Russia is accused of both spreading fake news and accessing personal and private data to use against one of the parties.

Personally I don't understand the fuss around CA's practices because it was what I expected from political consultants: manipulation. I thought it was part of the game.

4. The vehicle in Arizona was traveling at 38 mph (in a 35 mph speed zone). Google's engineers have stated that autonomous vehicles must be limited to 30 mph to be safe unless they have their own right of way. Why? Well, sometimes pedestrians walk out in front of vehicles from between parked cars as was the case in Arizona. And then there are teenagers texting while driving and soccer moms talking while driving. If we want autonomous vehicles that can travel at an efficient and safe speed, then we will have to create a separate right of way. Two points. One, who will pay for the separate right of way, Uber? Google? Taxpayers? Two, autonomous vehicles traveling in a separate right of way is . . . transit. That's right, autonomous vehicles is a euphemism for transit. Of course, we already have transit, and separate rights of way for transit. Then why don't we just update transit, like they do in Europe and China? Boring. Americans prefer adventure, autonomous vehicles racing around the streets of America while sharing the road with teenage drivers and soccer mom drivers and drunk drivers and whoever else elects to risk her life and yours on the road.

I don't know the details, but if there was an accident involving a regular driver not under the influence of technology, and a pedestrian walking outside a crosswalk at night, who would be found responsible?

Here's a very good and informed summary of the legal issues involved (as opposed to McArdle's gut feeling about it) when an autonomous vehicle is involved in a collision/accident: https://www.theatlantic.com/technology/archive/2018/03/can-you-sue-a-robocar/556007/

As a driver, you are responsible for avoiding an accident if you can. If someone dressed in all black in the middle of the night drops from a freeway overpass in front of your car, you're okay. If someone jaywalks but in clear sight of you, you have to stop, as frustrating as that is.

The jury would be asked what was reasonable for the driver to have avoided.

We probably have a boatload of video data showing exactly what happened here, which will eventually come out. I say "probably" because Uber.

It already came out and the cops say a human couldn't have avoided her either.

Really? Can you link to an article stating this?

The car, seeing a woman walking a bicycle in the road at 10 p.m., did not even slow down.

A human who did not slow down may not have been legally culpable, but I think we should agree that would be a bad driver.

How do you know the car even saw her at all?

So, the autonomous car was no better than a (not so great) human driver.

Say you are driving on a neighborhood street with children playing on the lawns. If you are a careful/great driver, you slow down in case a child darts out in front of you. Was the autonomous car programmed to slow down if it detects a person on a median? Is the autonomous car programmed to look the woman in the eyes and try to detect her intentions? I learned that 100 years ago in high school drivers ed.

When I say see, I mean LIDAR, which works better in the dark than our eyes. But sure, visual systems and processing vary greatly between the "species."

For Dain (reply-depth-limit):
The woman was "pushing a bicycle laden with plastic shopping bags when she abruptly walked from a center median into a lane of traffic." Police Chief Moir concluded that "it's very clear it would have been difficult to avoid this collision in any kind of mode." - https://arstechnica.com/cars/2018/03/police-chief-uber-self-driving-car-likely-not-at-fault-in-fatal-crash/

Sometimes when someone decides to step in front of a moving vehicle at the last second, there is nothing anyone else can do about it.

I don't see how that follows. Everyone should slow way down when driving parallel to parked cars. Much slower than 30 mph.

Google’s engineers have stated that autonomous vehicles must be limited to 30 mph to be safe

Everyone take a drink!

Do we have to take a drink every time Mulp mentions Reagan or prior mentions GMU? If so, I'll need to get some more alcohol.

And a new liver. btw, I thought it was 35 mph.

"And a new liver. btw, I thought it was 35 mph."

rayward has used the numbers 25, 30 and 35 mph over the past 2 years. Apparently he talks to a lot of Google engineers and they all give him slightly different numbers.

I don't need additional incentive to take a drink.

Americans, for the most part, don't want autonomous vehicles because they aren't very useful. Google wants to pretend to work on them because they can use them as a distraction to obtain a higher market valuation, recruit people, and cultivate a particular public image. At the same time, by now they've figured out that, to work more generally, they will need dedicated sensors in the roadway and all vehicles on it, which means dedicated lanes, although their end game was always to ban regular cars. Beyond small-scale experiments, taxpayers would be on the hook.

Transit may or may not have dedicated right of way depending on the area. In some areas it is not available at all, although most areas aren't suitable for autonomous cars, which is why they are only tested where the weather never changes. I'm not sure what you would do to "update" transit. Besides basic stuff like fixing escalators, there isn't much that would change the experience. But the main reasons people don't like it are 1) they want to be alone, 2) they want a place to store their stuff while traveling (which is a reason why ride sharing won't work), 3) they want to get to their destination as quickly as possible, and 4) driving is fun. Autonomous vehicles don't improve any of these features for motorists, and so people other than Tesla longs probably won't pay for them.

How many people in the US can afford a vehicle with autonomous features and technology? I would wait and see how Americans respond once vehicles with autonomous technology are widely available and not that much more expensive than other vehicles before confidently declaring that they don't want autonomous vehicles. Maybe they do, maybe they don't. Right now a Tesla is the only option for those who do, and most people cannot afford a Tesla.

"How many people in the US can afford a vehicle with autonomous features and technology?"

When fully autonomous vehicles arrive in mass production (within 10 years at most), the most logical situation will not be private ownership, but operation of the vehicles as transportation-as-a-service (TAAS, for short). TAAS offers many benefits compared to private ownership, including:

1) Each vehicle will travel many more miles per year (100,000+ will be common), dramatically reducing the depreciation cost per mile driven.

2) TAAS will allow hyper-specialization...for example, one-seat, one-passenger minicars will be common.

3) Need for parking will be completely eliminated. (And with that, all residential garages can be converted to living space.)

4) Electric vehicles will become more common, because "range anxiety" will be eliminated (cars will only be delivered with appropriate charge levels to suit the planned trip).

5) Large fleet owners of electric vehicles will be able to work with electric utilities to charge vehicles when electrical demand is low, and to provide vehicles as a source of electrical power to the grid when demand is high.

6) Large fleet owners will be able to deal with manufacturers for liability in accidents.

7) Large fleet owners will be able to coordinate ride-sharing, whereas individual vehicle owners don't know about other people's travel plans.

I disagree on this part. Nobody really wants to ride in a car that other people ride in all day. We only do it with Ubers and taxis because there are no good alternatives. If you have your own car that can go park itself a mile or two away, and then pick you up when you call it, why deal with a public car? Yes, there will be a big market for this sort of service, but most people will still prefer owning their own car if they can afford it.

@ Doug

Extreme germophobes will still want their own. I don't see any evidence that typical people will want to pay an extra 50% or 100% for transportation to ensure no one else has been in it.

Doug:

The autonomous alternative is a great excuse to make individual driving more expensive and annoying. Raise taxes and insurance rates, increase enforcement and penalties, shame people for SELFISH and DANGEROUS self driving (you think MADD nags are bad now for having one beer, just wait until you don't even need to drink to be nagged!).

Oh it will be a wonderful world.

It's not just about germophobia. There's also a pretty big convenience factor--being able to leave stuff in your car, not worrying about losing your phone/wallet/purse, not waiting around for a car to become available during peak travel times or travel to remote locations. But yeah, on top of that there will be an ick factor--even Ubers can get pretty nasty these days, and that's when there's a human driver there watching the passengers at all times. Once people are alone in the cars, who knows how gross the cars will get. Graffiti, spilled food that never gets cleaned up, people getting sick. I don't think you'd have to be an "extreme" germaphobe to want to avoid that sort of thing.

The future, as the corporate enthusiasts of autonomous cars see it, will resemble cable television. Customers/riders will pay a monthly fee for availability whether they make use of the service or not and a fee for each individual trip determined by time of day, distance, route, etc. The idea that travelers will happily jump into smelly vehicles with empty Starbucks cups and candy bar wrappers scattered around the floor seems wildly optimistic. The current spontaneity of travel will also be affected. No sudden impulse to zip into Goodwill or stop by a favorite deli. The whole concept seems like a socialist wet dream.

"Nobody really wants to ride in a car that other people ride in all day."

I'm too lazy to look up the exact numbers, but: The average car drives about 12,000 miles per year. The government reimbursement rate is about 55 cents per mile. So that's about $6600 a year. TAAS might cut that cost in half...saving $3300 a year. Close to $300 a month. That's a pretty good chunk of change.
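
The same arithmetic in a few lines of Python, using the same rough numbers (both inputs are approximations, and the 50% TAAS saving is an assumption):

miles_per_year = 12_000          # rough national average, per the estimate above
cost_per_mile = 0.55             # approximate reimbursement rate, dollars per mile
annual_cost = miles_per_year * cost_per_mile      # about $6,600
taas_saving = annual_cost / 2                     # assuming TAAS halves the cost
print(f"${annual_cost:,.0f}/yr to own; ~${taas_saving:,.0f}/yr saved, ~${taas_saving / 12:,.0f}/month")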

That's crazy talk. I'm an American, and I would LOVE an autonomous vehicle. Never look for parking again? Never worry about having too much to drink at dinner, or who is going to be DD? Never worry about falling asleep on a long drive? Watch TV or browse the internet during a commute? Take a six-hour drive from LA to Mammoth or Vegas in relative luxury compared to a cramped and miserable flight that eats up almost as much time, once you include security and wait times? All of these things will be amazing once the technology is mature.

Also, most of your own specific points ARE improved in autonomous vehicles when compared to other transportation modes that don't require you to do the driving: (1) you're alone in your autonomous vehicle, unlike on a bus, train, or airplane, (2) you definitely have more storage space in your autonomous vehicle than you would on a bus, train, or airplane, (3) autonomous vehicle would definitely be quicker than a bus or train, and far less miserable than flying, and might eventually allow for faster freeway speeds than normal driving, and (4) true, but it's not so fun when you have to drive in heavy traffic. Also, having a cocktail in your car on the way to dinner would be fun too...

"Take a six hour drive from LA to mammoth or vegas in relative luxury compared to a cramped and miserable flight that eats up almost as much time, once you include security and wait times? "

I'm expecting to see a resurgence of real vans that have luxury captains chairs. Ones that swivel, with a table in the center. And a mini-bar.

Now you're talking!

“Take a six hour drive from LA to mammoth or vegas in relative luxury compared to a cramped and miserable flight that eats up almost as much time, once you include security and wait times? ”

I expect the 270 miles from LA to Vegas to take under 3 hours (i.e., averaging 100+ mph) when computer-driven vehicles comprise the majority of vehicles on interstates.

Peter Thiel to Google's Eric Schmidt: Google doesn't know how to invest its money and is not a technology company. Too funny! I love Peter Thiel!

https://youtu.be/snMWgvMgWr4

Point-to-point transit would be better than European-style transit.

I suppose you harrumph at such improvements, as you're old.

But she was not a pedestrian; she was a bicyclist walking her bike to navigate a road that is hazardous to bicyclists.

Bike riders understand this.

But car drivers who do not ride bikes do not believe anyone without four or more wheels has any right to be on the road.

When I rode a bike regularly, I frequently walked my bike to cross lanes of traffic, because doing so on the bike is too dangerous: car drivers fail to yield right of way.

I.e., if you need to make a left turn on a two- to four-lane road, you must cross at least two lanes of traffic, and no amount of extended left arm will stop drivers from continuing to zoom by on your left, swerving, if you are lucky, to not hit you. My strategy has been to stop between lights and then cross to the median, then travel to the light to make the left turn while cars are forced to move slowly by the red light or the congestion of stopped and turning cars.

I do not know exactly what she was doing, but given there was a bike lane marked starting at the point she was hit for navigating the intersection, she might have been moving into the bike lane from the sidewalk, which required crossing the car turning lane. The bike lane is clearly marked on the street pavement. I often illegally ride on sidewalks for my safety, and to avoid impeding traffic on four- to six-lane streets.

There were no parked cars because there is no parking allowed and the intersection has six or seven lanes marked, including at least one bike lane, starting at the point of the accident.

The Uber car hit a bicyclist, as can be easily seen by the warped front tire. To say she was a pedestrian is to call a car driver outside his car with a flat tire a pedestrian, which on an Interstate is illegal, meaning the car driver with a flat is illegally on the Interstate. In some places, the police will tell you to stay in your broken-down vehicle until the police arrive. But the police officer is then the illegal pedestrian responsible for his death when struck by a car. That's why some police officers park their car in traffic, in some cases "causing" accidents when drivers changing lanes get hit by cars passing on their left instead of yielding right of way.

In any case, Arizona has among the highest bicyclist and pedestrian accident rates, with most accidents happening in well-lighted conditions in urban areas. The accident occurred under street lights.

This is pretty classic. The car driver is not at fault, it's the person without a car that is always to blame.

She was walking, ergo she was a pedestrian. Just because she was "pushing a bicycle laden with plastic shopping bags when she abruptly walked from a center median into a lane of traffic." doesn't make her a bicyclist at the time. If you're walking across a center median where there are signs posted saying not to, you can't blame someone else if you decide to abruptly enter the road right in front of a car. That's why police Chief Moir concluded that "it's very clear it would have been difficult to avoid this collision in any kind of mode." - https://arstechnica.com/cars/2018/03/police-chief-uber-self-driving-car-likely-not-at-fault-in-fatal-crash/

But she can't ride her bike on sidewalks. To ride and cross at intersections she must be in the street.

But less than two years ago there was a confrontation between an enraged driver and bicyclists at the same intersection:

"The video shows a driver berating cyclists for merging into a left-turn lane at Mill Avenue and Curry Road. You can hear the driver swear several times at the bicyclists, telling them to get on the sidewalk.

Even though the driver angrily claims to know the law, it is the bicyclists who were in the right.

Tempe police want to remind drivers and bicyclists that those on two wheels have every right that cars do on the road.

“We’ve all got to learn to get along out there,” said Tempe Police Lieutenant Mike Pooley.

When it comes to riding on the road, cyclists should be in a bike lane if possible, not on the sidewalk and should be going with the flow of traffic.

It is the law that drivers give bicyclists three-feet of room as they are passing.

A biker should make left turns using the turning lane with cars and be sure to use proper, clear hand signals.

Just like in a car, bicyclists also need to remember to give drivers enough time to react when they are merging.

Lt. Pooley said in the case of the YouTube video, the driver did not commit a crime. However, he warned anyone who ends up in a similar confrontation should let it go and report it.

"Whether he's right or you're right, it's not worth getting hurt or injured or sometimes even killed over, it's just not worth it." he said."

https://www.abc15.com/news/region-southeast-valley/tempe/video-captures-confrontation-between-angry-driver-cyclists-in-tempe

When using a bike, I say screw the law, just like drivers do most of the time when they see bicyclists. All that matters is living. She got screwed by getting hit, but she would probably have been hit in the same spot if biking legally on the street, because her profile would have been smaller and she would have seemed to appear out of nowhere.

I currently ride a recumbent trike, so I have two lighted poles plus tail lights all the time, plus flashers from dusk to dawn, but I can't build that for a bike, just the flashers. Safety gear I would use for a bike would cost more than the bike I have or she had. I bet she couldn't afford it, guessing she didn't have a car. She was homeless, but apparently working.

She wasn't riding her bike, and she wasn't on a sidewalk, so it's somewhat irrelevant that she can't ride her bike on sidewalks.

She went from one location it was illegal to ride or walk (the wide median covered in bushes, not even next to an intersection) to another location it was illegal for her to walk (the road). The bike part was virtually irrelevant. If you're driving in the left lane at night where she was killed, there is no way to see or detect someone behind the tree and bushes until they actually step out into the road in front of you. https://goo.gl/maps/AniDQnZZcMR2 That's probably why they put up the signs on both sides explicitly telling people it was illegal to be on that wide median.

If she HAD been riding her bike, and complying with the associated traffic laws, such as riding in a lane and having the correct lights and reflectors, she would have been way more visible, as well as much more likely to be alive today. Also, the street she was illegally jaywalking across the middle of has dedicated bike lanes in both directions. She could have walked across the crosswalk 50' away at the intersection, or used a bike lane and ridden across. She took an illegal shortcut and made the decision to step out in front of a car, and it cost her. It's a sad case of poor decision making, but the causation has nothing to do with bikes or automated cars in general.

Interesting to me is that the laws and behavior vary by country. It seemed to me when I was in London, UK, that the pedestrians had to avoid the vehicles more than in the USA. In Honduras, if you are a pedestrian, it's almost all on you to avoid the vehicles.

So maybe we force Autonomous vehicles to be clearly marked, and give them the right of way, making the pedestrians responsible for avoiding them.

Anyway I would do my testing in the UK not the USA.

Cambridge Analytica / could they be meme zombies?
- the story suggests they represented themselves as an academic organization, as does the cheesy ripoff name, but they were actually collecting data for political reasons, not academic reasons / there is a difference, or mebbe not?

2. Here is some slight supposition, but with fair confidence:

Facebook provided an API for its partners' use. Published elsewhere were some guidelines for that API's use. But programmers being programmers, they simply used the API to its maximum. This allowed harvesting the data of anyone who gave buy-in, plus their friends. Given average fan-out, a few hundred thousand starting points yielded millions once friends were included.
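
As a rough illustration of that fan-out, here is a back-of-the-envelope sketch in Python. Every number below is an assumption chosen only to show the shape of the arithmetic, apart from the roughly 270,000 installs figure that Facebook itself cites further down the thread:

app_installers = 270_000         # installs figure Facebook cites later in the thread
avg_friend_count = 340           # assumed average Facebook friend count
overlap_discount = 0.4           # assumed fraction lost to overlapping friend lists

profiles_touched = app_installers * avg_friend_count * (1 - overlap_discount)
print(f"roughly {profiles_touched / 1e6:.0f} million profiles")   # same ballpark as the ~50 million mentioned above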

Once downloaded, it was their data, by possession, if not by contract or the law.

In my opinion Facebook more gave away your privacy than sold it. They left money on the table.

People who used Cambridge Analytica will get some bad press from this, but I really doubt any customer knew these details (maybe the other salacious ones).

Main concern should focus on social media, really all big data owners, and their partners.

We know this stuff was cross referenced with actual credit information, debt to income, etc. They all bragged about it.

I worked on the Facebook platform team for a while and the confusion in the press is really frustrating. Here's what happened.

1. The Facebook platform originally let apps request information from you. Initially, you were allowed to share any information with other applications that you had access to. That included basic information about your friends, like their names and their birthdays.

2. A lot of apps would request data they didn't really need, and just store it in databases. This Cambridge Analytica survey app is one example. Thousands of third parties are probably hanging on to similar sets of data.

3. This information was not generally useful for anything. In particular, it isn't enough information to contact people, or correlate it with contact information. It wasn't worth money to resell it, it didn't help make applications more useful. It wasn't enough information for third parties to target ads, and there are hundreds of ways to get enough information to target ads.

4. Facebook eventually disabled this API because no applications were using this friend-information for anything that made applications better. In the long run, the whole "Facebook platform" was only really useful as a way to make login faster, or to help put ads in third party apps, and the other parts of the platform were more or less deprecated.

5. Years later, many people are freaking out without understanding what happened.

In general, it is not hard to gather a database with data on millions of people. Every ad network or random website or app is likely gathering data on all its users. Undoubtedly, there are hundreds of companies with a database and one of those database entries is you. But it is very hard to make that data *useful* for anything. Cambridge Analytica was not offering a useful service, and their data is not really infringing on your privacy in any way that affects your life. People do not seem to understand how gathering huge databases of personal information is both (a) pretty easy to do, and (b) in most cases does not let the data-collector do anything with it.
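
For concreteness, here is a rough, hypothetical sketch (using Python's requests library) of the kind of call the old pre-2015 platform allowed once a single user authorized an app. The endpoint shape and field names are approximations from memory, the token is a placeholder, and none of this works against today's API; it is only meant to show how one consent fanned out into stored records about non-consenting friends:

import requests

# Illustrative sketch only: endpoint shape and field names approximate the
# pre-2015 platform and will not work against today's API.
ACCESS_TOKEN = "TOKEN_GRANTED_BY_ONE_CONSENTING_USER"   # hypothetical placeholder

resp = requests.get(
    "https://graph.facebook.com/me/friends",
    params={"fields": "name,birthday", "access_token": ACCESS_TOKEN},
)

# One user's consent yields records about friends who never installed the app,
# and those records now sit in the developer's own database, outside Facebook's control.
harvested = resp.json().get("data", [])
print(len(harvested), "friend records retained by the app developer")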

Facebook seems to disagree with you on several points in general - 'Protecting people’s information is at the heart of everything we do, and we require the same from people who operate apps on Facebook. In 2015, we learned that a psychology professor at the University of Cambridge named Dr. Aleksandr Kogan lied to us and violated our Platform Policies by passing data from an app that was using Facebook Login to SCL/Cambridge Analytica, a firm that does political, government and military work around the globe. He also passed that data to Christopher Wylie of Eunoia Technologies, Inc.

Like all app developers, Kogan requested and gained access to information from people after they chose to download his app. His app, “thisisyourdigitallife,” offered a personality prediction, and billed itself on Facebook as “a research app used by psychologists.” Approximately 270,000 people downloaded the app. In so doing, they gave their consent for Kogan to access information such as the city they set on their profile, or content they had liked, as well as more limited information about friends who had their privacy settings set to allow it.

Although Kogan gained access to this information in a legitimate way and through the proper channels that governed all developers on Facebook at that time, he did not subsequently abide by our rules. By passing information on to a third party, including SCL/Cambridge Analytica and Christopher Wylie of Eunoia Technologies, he violated our platform policies. When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. Cambridge Analytica, Kogan and Wylie all certified to us that they destroyed the data.' https://newsroom.fb.com/news/2018/03/suspending-cambridge-analytica/

I don't think any of that PR statement contradicts what I said.

It's not confusion in the press. It's bias. The media blindly supports the resistance.

You are right about the media facilely duping millions of drooling, knuckle-dragging liberals.

As someone who works in a partnership with FB and in an ad tech company, I think what you are saying is painfully lacking in nuance.

In aggregate, at the database level, the data scraped from places like FB is useful. Ad campaigns can be more targeted to the people who are shown, statistically, to be more likely to be responsive to the campaigns. Some people see this as rather innocuous, others don't like the nudge, and this is only when we are talking about toothpaste or cars. Shift the exact same concept to the political sphere and it definitely rankles, though you'd need a better philosopher than I to argue why that should rankle in politics and not for toothpaste--yes, different outcomes or levels of stakes, but identical mechanisms.

However, at an individual level, yes, I would agree with you--the data is not useful and we data scrapers, aggregators, collectors don't really care much about you.

Until someone accidentally kills Archibald Buttle (which nearly always happens in the long run), and then everyone is shocked, SHOCKED that something like this could happen.

4. I have known a number of brilliant and pragmatic engineers. I have also known some rather unrealistic optimists. The interesting dynamic in an engineering company is that the unrealistic optimists can be promoted, or even put in charge. The more that management is non-technical, or has expertise in a different area, the more likely this is to happen.

I have seen it happen more than once, though in those cases the only damage was blown budget and discarded projects.

Are the unrealistic optimists in charge of on-road testing of self-driving cars? Would Uber management be in a position to know?

The answer to both questions is yes. But for the most part, none of them work for Uber. Musk, Brin/Page, etc. are using the concept to further their economic and political interests. As a consequence, companies like Uber and traditional automakers have to play along and conduct tests to justify investors handing them billions of dollars. A realistic pessimist manager would have a fiduciary responsibility to use deception to obtain this money, as it usually only costs at most a few million to kill someone, and if the technology doesn't work the company and its executives get to keep the money.

Seems to me Uber would do better to bet against self-driving, or derail it .. WAIT!

(To one-up the cynicism.)

lol. That may be the best read of this.

2. Nothing confusing here: social media collects user data because social media is in the advertising business and their product (i.e., the users) costs social media nothing. That's right, social media pays nothing for its "inventory". If social media had to pay for "inventory", either social media would adopt a different business model (such as charging users rather than using users) or social media would soon disappear. In either case, it would be preferable to what we have now with social media. I know, readers will point out that social media is now "free". Yea, right. People will believe anything.

1. Sam Harris has a soothing delivery, but I have a general gripe about him. I just don't understand how he (or, e.g. Yudkowsky) embraces a "no free will" position, but then wants to talk about morality. Yudkowsky wrote some "dissolving the question" piece on LW that I found unpersuasive. I get the scientific rationalist "no free will" view, and I get the "Hitler was evil" view, but I don't think you get to take both of these positions.

Agree somewhat. He seems to think there are moral facts, but I don't quite understand what he means by that. Who "knows" these moral facts, and how? Strikes me as a problem whether you believe in free will or not.

It would be interesting to hear Cowen talk to Harris about politics - hard to think of anyone more in need of Cowen's brand of contrarianism.

I love listening to him but Sam Harris can certainly help with any insomnia issues. Harris is a utilitarian. The great thing about being a utilitarian is you get to make "good" be whatever you want it to be. It is easy to construct a utilitarian framework where Hitler is objectively evil. Whether or not he had free will has nothing to do with it.

I presume this easily-constructed utilitarian framework would classify a lion that ate millions of people as just as objectively evil as Hitler.

In that case, the issue is semantics, because I don't see applying morality to lions.

I see your point. I think you have to add the supernatural to make a human any less a force of nature than a lion or a volcano. And I wouldn't call a volcano evil. The problem with adding the supernatural is that then everything seems pointless to me. I appear to have the opposite feeling toward that as everyone else.

Doesn't Sam purport to get around these areas by defining good and evil along an objective scale of net human suffering? One can define the calculus such that the absolute value of the torture of one person outweighs infinitely many gleeful spectators, which may not be objectively true; mild enjoyment may not even belong on the same scale as the worst torture. That seems like a weakness in the model.

Anyone who has dabbled in "utils" can see the quagmire that the quantification and summing of joy and suffering is headed for.

Sam seems to think this stuff just flows from straight rationalist principles, and people just can't follow the proof.

To me, it seems more like Bertrand Russell's forlorn goal of a complete mathematical system, and some sort of Gödelian critique is in order.

Morality cannot be found inside the scientific rationalist worldview, I don't think.

What’s the confusion on #2? Through 2014, Facebook had incredibly lax standards for harvesting user data. That wasn’t illegal under US law (I’m not sure about the EU) but it was careless and ethically dubious. An app developer harvested a huge amount of user data under that regime and, in violation of his contract with Facebook, sold it to Cambridge Analytica (“CA”). Facebook then failed to ensure that the developer and CA had deleted the data even after they became aware of the illicit sale. Along the way it seems like the developer (and perhaps CA) made false representations to Facebook about their compliance. CA then used this data for political purposes in a way that violated Facebook’s terms of service (in other words, even if CA had rightfully come to possess the data they would not have been entitled to use it the way they did). There is also a suggestion that CA used the data to spread misinformation. All of that is deeply shady.

"As far as everyone else is concerned, it doesn't matter whether an app gets the data for research purposes or for straight-up political ones. Average users worry more about convenience than privacy." -Bershidsky

I wonder if that's really true. I'd be happy to talk to a surveyor for one of the Michigan surveys, I'd never waste my time with a marketing survey.

#4. One of the things she doesn't quite say is that, if autonomous vehicles really have an underlying level of safety as good as humans' (1 death per 100 million miles), then the chance of that first death happening within the first 10 million miles should have been low (roughly one in 10). Or alternatively, given the first death happened that quickly, the chance that autonomous vehicles are actually as safe as human drivers is also low.
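
Spelling out the arithmetic behind that "one in 10", under the assumptions that fatal crashes arrive at a constant rate per mile and that the fleet had logged on the order of 10 million miles:

from math import exp

human_rate = 1 / 100e6        # assumed baseline: 1 death per 100 million miles
autonomous_miles = 10e6       # assumed cumulative miles driven by autonomous fleets so far

expected_deaths = human_rate * autonomous_miles       # 0.1
p_at_least_one = 1 - exp(-expected_deaths)            # Poisson model for rare, independent events
print(round(p_at_least_one, 3))                       # ~0.095, i.e. roughly one in 10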

"Which is not to say that we should pull the plug on autonomous driving. For one thing, regular old-fashioned cars were none too safe when they first arrived on American roads. In 1921, the first year for which such data is available, there were 24 deaths for every 100 million vehicle miles traveled. Over time, improved cars, drivers and roads reduced that figure by nearly 95 percent. Presumably, self-driving cars can also improve."

I don't think that's going to work. Nobody's going to accept safety going backwards until we gradually figure out how to get back to 2018 levels. If autonomous cars truly are less safe, then the best we might do is allow the elderly and disabled to use them at slow speeds and under ideal road and weather conditions (which wouldn't make for much of an industry).

The equilibrium is likely adding features to make cars almost autonomous, but with a driver still required. Things like automatic braking/speed maintenance, hazard avoidance (the car swerves before you even notice the idiot in your blind spot swerving towards you), night vision (the car sees better than you at night), etc. This makes cars much safer than unaided human driving. It will be a long time before truly driverless cars go mainstream.

Right. Driverless cars with an asterisk saying "driver is very much required."

Can you FEEL THE POWER OF THE FUTURE??

>It will be a long time before truly driverless cars go mainstream.

Nice backpedaling. When you finally get to "never," let everyone know.

Why are you triggered by driver-assistance features making us all safer?

Last time this came up, it looked like computer-driven cars were safer than teens and the elderly. So we can accept risk going down by giving the autonomous cars to those over 75 and working our way down the age ladder.

+1, yes.

The evidence indicates that autonomous cars are much better drivers than elderly drivers at night, adults who have been drinking or arguing, and distracted teenagers.

Also, the technology is rapidly improving. Either the technology and/or economics will plateau or autonomous vehicles will become prevalent within the next 20 years.

Correction: I said "much better" when the truth should be significantly better. Conservatively stated, autonomous cars are no worse than those categories.

"Also, the technology is rapidly improving"

How in the hell would you know?

Yes, and autonomous vehicles also operate under cherry-picked conditions. You don't see them looking for black ice or a blizzard with a 2ft LIDAR range, where driving is actually challenging and a sober driver might wreck.

For them to improve they would need 100% sensor coverage of roads, vehicles, pedestrians, trees, potholes, etc. But even with that, there is no reason to think they can beat human levels anytime in the next hundred years or so. And of course, we can't pay to maintain the infrastructure we have now.

We all know how much the elderly love technology. If they aren't well enough to drive, they aren't going to be able to find/hail a vehicle, get in and out of the vehicle, or do much at their destination. You would also have them dying in cars or showing up 300 miles away from where they were supposed to be. There may be three people in the world who could theoretically benefit from such technology, but I have no idea who they are.

So, if conditions are bad, even human drivers face more risk.

"But even with that, there is no reason to think they can beat human levels anytime in the next hundred years or so."

Oy vey! Dan, you really should read up on technology trends before you make statements like this. Do you have any idea how accurate position-measurement technology is today, and will likely be in even 20 years (let alone 100)? What about flash memory capacity per $100? Computations per second per $1000?

Per this site, the fatality rate per 100 million vehicle miles fell from approximately 1.7 to approximately 1.4 between 1994 and 2004.

https://www.researchgate.net/figure/US-Fatality-Rate-per-100-Million-Vehicle-Miles-Traveled-Source-US-Department-of_fig1_228775388

I predict the fatality rate by 2038 will be less than 0.2 per 100 million miles due to computer driving technology. And by 2048, I predict it will be below 0.1 per 100 million VMT, due to nearly all vehicles on the road at all times being computer-controlled.
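For scale, here is a quick sketch of what that prediction implies about the pace of improvement, using the 1994-2004 figures above and assuming (my assumption, roughly in line with recent data) a current rate of about 1.2 per 100 million VMT:

```python
# Compare the historical annual decline in the fatality rate with the decline
# the 2038 prediction would require. The ~1.2 starting figure is a rough,
# assumed approximation of today's US rate, used only for illustration.

def annual_decline(start: float, end: float, years: int) -> float:
    """Average annual percentage decline needed to go from `start` to `end`."""
    return 1 - (end / start) ** (1 / years)

historical = annual_decline(1.7, 1.4, 10)      # 1994 -> 2004: ~1.9% per year
predicted_2038 = annual_decline(1.2, 0.2, 20)  # ~2018 -> 2038: ~8.6% per year

print(f"Historical decline, 1994-2004: {historical:.1%} per year")
print(f"Decline implied by the 2038 prediction: {predicted_2038:.1%} per year")
```

So the prediction amounts to betting that computer driving will speed up the rate of improvement by roughly a factor of four.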

Have you been on a road? Have you seen who we let drive?

I don't know how far away autonomous cars are, but the best automatic driving programs from today will beat their competition and then get even better, while the worst drivers on the road are going to keep on keeping on.

"I don’t know how far away autonomous cars are, but the best automatic driving programs from today will beat their competition and then get even better,..."

Yes, in 2016, *6000* pedestrians were killed by human-driven cars. Each death was not even state-wide news. In contrast, when a computer-driven car kills one person, massive amounts of research will be conducted, and hardware/software modified, to prevent that sort of accident from happening again. The human-driven fleet never learns. The computer-driven fleet learns incredibly rapidly.

#4: "Police Say Uber Is Likely Not at Fault for Its Self-Driving Car Fatality in Arizona"

http://fortune.com/2018/03/19/uber-self-driving-car-crash/

That initial report seems to clear the autonomous vehicle:

"Chief of Police Sylvia Moir told the San Francisco Chronicle on Monday that video footage taken from cameras equipped to the autonomous Volvo SUV potentially shift the blame to the victim herself, 49-year-old Elaine Herzberg, rather than the vehicle.

“It’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway,” ... the Uber car was driving at 38 mph in a 35 mph zone and did not attempt to brake. Herzberg is said to have abruptly walked from a center median into a lane with traffic. "

So, a pedestrian steps from the center median directly in front of a driving car. I'll reserve final judgment till I see the video, but I don't think this should derail a promising technology.

Interesting that they're saying that the center median was in the shadows.

The median is about 15-20 feet wide and covered in bushes, etc... as well as having signs on both sides of the "paved" portion of it telling pedestrians not to cross it, presumably because it's recognized as a location dangerous to pedestrians.

Let me ask you flesh and blood humans a question. If you see a woman pushing a bicycle in the road at 10pm, do you think "this is a little sketchy" and slow down or move over, or do you think "this is completely normal" and buzz right by?

The "artificial intelligence" has no way of knowing what is normal, if a party had signals more caution, etc.

A "party hat" signals more caution, or if it is Saint Patrick's Day, etc.

There was no bicycle.

" Herzberg is said to have abruptly walked from a center median into a lane with traffic. Police believe she may have been homeless."

I believe I saw "walking a bicycle" in some reports, and there is a bent bicycle in the pictures. But heck, a homeless woman is another big signal to slow down!

Ok, fair enough. I googled around and did find a picture with a bent bicycle by the side of the road.

"The female pedestrian, identified as 49-year-old Elaine Herzberg, was walking her bicycle across the street outside the crosswalk when she was struck, police said"

Good point.

I have 2 new drivers (currently 18 and 16) in the family, and I was their primary instructor. One of the points I stressed a lot was being aware of what's on the side of the road - situational awareness. In suburban neighborhoods like ours, that meant being particularly aware of kids and animals (dogs, deer, etc.) In a more urban environment, pedestrians, bikers, etc. present potential hazards.

Also, going by the figures a commenter cited above (assuming I'm interpreting them right): if the general rate of deaths for conventional driving is ~1 per 100 million miles and this death occurred within the first ~10 million miles of real-world autonomous driving testing, then the favorable comments from the Fortune article seem dubious. That is, it's hard to believe that pure bad luck or pedestrian fault (with no fault in the autonomous system) caused this, particularly when one considers that a high proportion of conventional driving deaths occur in really bad situations (drunk driving, bad weather, bad roads), none of which presumably applies here. If we compare to a baseline of a sober driver in daytime and generally good conditions, I suspect the death rate for conventional driving is much lower than the overall rate...
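As a rough illustration of that inference (illustrative numbers only, not measured rates): under a simple Poisson model, compare how likely a fatality in ~10 million miles is if the true rate equals the human baseline versus if the vehicles were, hypothetically, ten times worse.

```python
import math

def p_at_least_one(rate_per_mile: float, miles: float) -> float:
    """P(at least one fatality) under a Poisson model with the given per-mile rate."""
    return 1 - math.exp(-rate_per_mile * miles)

miles = 10e6            # rough autonomous test miles, per the comment above
baseline = 1 / 100e6    # human-level fatality rate
worse = 10 / 100e6      # hypothetical "ten times worse" rate

p_base = p_at_least_one(baseline, miles)   # ~0.095
p_bad = p_at_least_one(worse, miles)       # ~0.632

print(f"P(a fatality | human-level safety): {p_base:.3f}")
print(f"P(a fatality | 10x worse):          {p_bad:.3f}")
print(f"Rough likelihood ratio favoring 'worse': {p_bad / p_base:.1f}x")
```

This is only a crude likelihood comparison, not a full Bayesian analysis, but it shows why one early fatality shifts the odds toward the less-safe hypothesis.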

Do you know whether the car could even see her at all? You keep assuming that it had some view of her and did nothing, but I see no proof that it had a view of her in the first place.

As I say above, none of the cars see in the way we do. They use LIDAR; we are trained by millions of years of 24/7 threat assessment.

Our eyes always trigger that shot of attention when a threat appears at the periphery.

Brilliant! Problem solved. We just equip autonoms with NO sensors, and they can't be held responsible for any accident, since they aren't capable of "seeing" anything "at all".

In my town 2 years ago, they reported 30 pedestrian traffic fatalities, of which more than a third were "people experiencing homelessness" - this as part of a triumphal announcement that they were shooting for zero traffic fatalities of all kinds in 2017. (Interestingly, we have simultaneously had an increase in whatever demographic is confused about which side of the road or even freeway to drive on - it's one of those threats I've gotten used to looking for. But they haven't thought fit to do any PSA's about that yet, perhaps because it would be awkward even to broach it.)

The interests of the homeless, like those of kangaroos, may not align with those of the rest of us, even if self-driving vehicles prove to reduce crashes overall.

Did you mean which side to walk on? I doubt many people are confused about which side to drive on...

Perhaps I was uncharitable in calling them confused - how about "people who do not consider the division of the road important enough to observe at all times"?

I'm not sure fault is ultimately going to be the critical issue in autonomous vehicle crashes. There have been a number of times in recent years where I would have been involved in a crash (a crash that would not have been my fault) but for taking emergency evasive action. If autonomous vehicles are notably worse than humans in avoiding accidents in such situations, that won't be an acceptable result either.

The Guardian has a couple of great articles profiling a key insider, Chris Wylie:

https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump

https://www.theguardian.com/technology/2018/mar/17/facebook-cambridge-analytica-kogan-data-algorithm

Facebook discussion with real tech people here. https://news.ycombinator.com/

So based on your endorsement I took news.ycombinator off of my hosts file to see if it had gotten any better.

Nope. Same old crap.

2) What was done wrong

a) The data was supposedly collected for academic research, but the psychological profiling and models were used to power election interference in the USA (the work was never published in a peer-reviewed journal, submitted to one, or posted on the faculty's or the individual researchers' websites; instead it was used by foreign entities to interfere in elections). -- Definitely illegal

b) The data was obtained by paying several hundred thousand registered US voters, but the app's permission settings also harvested data on the survey-takers' Facebook friends, yielding data on 50 million Facebook users, almost none of whom consented to this data collection. -- Not illegal everywhere, but definitely unethical

c) The company that owns Cambridge Analytica (SCL) has previously interfered in over 200 elections. -- Strongly suggestive that election interference was the intent from the get-go

d) Facebook's security protocol was triggered, but they accepted a ticked box on a form as verification that data on 50 million people had been destroyed. -- That is not due diligence of any sort. If not illegal, definitely bad.

e) The main funder of Cambridge Analytica was also Trump's #1 financial backer in the campaign, again strongly suggestive that election interference had always been the objective.

f) Foreign individuals (Canadian and UK) were employed to interfere in US elections. There are ways in which this can be done legally, but providing false or misleading documents for border-crossing purposes strongly suggests the activities were known to be illegal before they were done.

If any of this is news to you, you might do well to read the full series at The Guardian.

In other news, working for the Trump campaign in any capacity is now "election interference". Apparently you are interfering with the election of Hillary Clinton and that's just not democracy.

Foreign operations paid to work on (or in relation to) US elections campaigns face fairly specific restrictions in what they may or may not do.

What restrictions were placed on the Clinton campaign when they were purchasing Russian rumors for their dossier?

They should not break the law in any territory, they should especially not hack US public and private data systems.

That's the ongoing oops here, buddy. Trump is now linked to a cart load of crimes.

http://www.foxnews.com/politics/2018/03/13/manafort-faces-very-real-possibility-life-in-prison-court-order-says.html

See? I linked Fox news, like Vox they are sometimes fine.

Manafort’s crimes have nothing to do with the claim of Trump-Russia collusion. Your faith-based collusion fanaticism is tearing this country apart and serving the Kremlin well, however.

Indeed, backing up the weasel word "interference" with unspecified "specific restrictions" harkens back to "collusion" in its facile vagueness.

Note that point (b) is precisely what the Obama campaigns were lionized for, back in the day.

And of course this list omits to mention CBS's reporting that CA was not used in the general election.

You missed the part where Cambridge Analytica wasn't used by the Trump campaign in the general election. They used the RNC's data instead, because it was much better for the purpose.

Since moving to WaPo, Megan’s articles have become more winking, less personal, and less ideological.

Having blogged just about everywhere that pays bloggers, she is probably thinking now about a govt job.

Wonkish not winking

4: "Megan" ?? Seriously?

That was the least-insightful I've read in weeks, by the way. Thanks so much.

Wow you're an angry elf today.

I thought she wrote a pretty good article where she covered both sides of the issue rather well.

I'm with TPM.

The article is a fairly banal commentary on the use of statistics in assessing the risks, pointing out the lack of data, etc. Nothing wrong there, but no particular insight.

Tyler likes to throw her some links, so I guess I'll skip them in the future.

What's McArdle's background? Is she mathematically or logically illiterate? What reason did she give (behind the paywall) for her dubious assumption that a national average risk is the appropriate statistic to use in the analysis of the actual conditions of this specific accident? Should we also assume she has average intelligence and average insight... as well as having informed herself about it as much as the average US adult? If so, nothing to see there.

1.

Hanson:

“Many students try to show how good they are at school by studying hard and doing well, and then some students try to show they can ace everything without trying.”

Well, yes. We all learned that in high school, didn't we?

2. National Review's answer to a smug liberal is an insufferable conservative. The disdain is overwhelming. Come on Michael, do better. For once I am more closely aligned with Charles Cooke. Ugh, I feel sick.

The underlying truth of this story is that the surveillance of American citizens is nothing new. We whore our privacy for little to nothing. Free Wifi, convenient apps, unlimited data or mindless entertainment. I am ready for a better conversation on this issue. https://www.theatlantic.com/politics/archive/2015/03/the-privacy-movements-constitution-4th-amendment/386474/.

3. Though the victory is small, Madame Curie and her nameless female predecessors are smiling.

I am not that worried about autonomous vehicle crashes, overall.

I have survived earthquakes (on a higher floor of a swaying groaning skyscraper), missile attacks (the sound of a Scud missile landing and exploding a few football fields away is sort of like a thousand instantaneous cups of coffee - no, not really, you hear it land and you think, well, they missed, the excitement level at having survived is maybe, at best, four cups of tea, if I remember right, sometimes war is a big deal for people, for me it wasn't), and I have survived being T-boned in a little cheap Detroit sedan by a small truck at the speed limit (55MPH) carrying a load of firewood on a slick winter highway: I have survived Lyme disease, random gunshots in late 70s LA, and lots of other nasty stuff, including a fantastic level of turbulence not far above a deadly mountain range, and probably at least one and more likely two or three serious poisoning attempts (that happens when you visit certain countries and you are not a tourist and not really a welcome guest...(the country in question now exists under a different name))...

Undergoing each of these unpleasant experiences, I was never worried for a second about death. Not a single second. John 3:16 and all that...

What worries me is volcanoes. I am possibly not very bright, compared to the brightest among us, but it would not surprise me if, a decade or two from now, the biggest unexpected story would be how many people will have died - fried (boiled? melted? no, that is not it - I need a more lithographic word) or just simply incinerated, at no cost to the funeral directors of the world (and no profit, sadly - one should care about the funeral directors, too) - fried or incinerated in some previously unnamed volcano.

That being said, autonomous cars are definitely a thing you can get rich off of, with the right lawyers and accountants on your side. Even if I knew a thousandfold more about the truths of volcanoes than autonomous cars, I could not monetize that knowledge.

In the very unusual novel by Gene Wolfe, which maybe one in a million people who speak English have read, called There Are Doors, there is a scene where a winter shore is described, in loving detail. It is not a novel about volcanoes, but none of the characters fear death when they don't need to fear it. Looking at the piled up ice on the winter shore of a lake very far to the North in the Northern hemisphere, one understands in a visceral way the fact that the atheists of the world, as well as the people who use religion as something that they can brag about without the hard work of one-on-one empathy (word!), every day for the rest of their lives in this ugly beautiful world, and even many of the Gnostics and almost as many of the "simple old country boys" who are so pleased with their ordinary opinions, achieved in the way the successful of this world have always achieved their opinions - well, one understands in a visceral way that every gleam of all those trillions of square miles of ice (do the math! - I have, and trillions is the right word) that people have looked at over the years on Northern lakes and everywhere else, not to mention the moonlight on all those desert nights that so many people who are no longer with us looked at with wonder and hope - one understands that today is not today, yesterday was not yesterday, that God not only created you but sent His angels to talk to you. Maybe you replied with silence. But you replied. I remember every single one of my replies, which is why I do not fear death.

Thus ends my review of Paul's letter to the Ephesians as viewed through memories of the first letter of the Apostle John.

no matter how happy or sad or how long-ago or how triumphant or classic or iconic any given moments in our past lives were, no matter who we are, they were, those moments, as straw or hay if we cannot imagine that, at that moment in time, there was a chance - not a big chance, obviously, as the long life of brave Stephen Hawking demonstrated, but still a chance - that if an angel had talked to us about why we are here, and we had heard that angel - we, any of us, even the losers in life who looked so unhappy earlier this afternoon at the DMV ( I remember ) - you can imagine what I am about to say: the angel wanted us to take the chance to reply with words of love and gratitude and understanding, and if we had, so much of what we did that was good would have rushed back, in memory, right then - and if we hadn't - angels are patient and loving creatures. They tried again, and again, and again. None of us (that I know of) live in a world where only the past tense exists

(this comment would sound the way it should if you imagine Robert Donat or Peggy Ashcroft reading it, if you are English, Patricia Neal or Jimmy Stewart, if you are American, R. Davies or my pal Annette from Calgary if you are Canadian)

if you are not american, take my word for it, every word in the 8:55 and 9:16 pm comments was completely proofread, free of any grammatical mistakes, and with the exact words in the right places. Mock on , Voltaire and Rousseau, but the Americans (maybe not this one, but maybe some other American) achieved a better prose style than you.

Everyone reading this is more loved by God than "what would ernest borgnine say" is. Trust me. I have prayed for that and God answers those prayers.

Born long ago, knew what evil was before he could talk (very lazy mother and violent - gorilla level violent, not that Magilla Gorilla was not a beloved childhood cartoon - older siblings). You simply cannot understand what it is like to have several people try to "pretend" murder you, in their juvenile way, before your ninth or tenth birthday, if you have not experienced it (and if you can imagine it, with the exact level of compassion God wants you to, and have not actually experienced it, God bless you, let me tell you for the first time, you are one of the saints).

Year after year living with "Kennedy Democrats" (you have no idea, if you are not an American, how nasty those two words sound when used together): so sad.

Made some friends in high school but - and here is what is key - nobody ever makes friends, in this world, who know how to be true friends, unless there are angels helping along that friendship - try and tell me I am wrong, call your friends to ask if maybe I am wrong - go for it .... good luck, my young friends ....

Foolishly went far away to college, learned that people signal and strive and seek to triumph over others, everywhere. Tried those things himself, freshman year, and so on, and one day, after failure after failure, recognized something.

It is no small thing to care about someone who nobody ever cared about, and to make sure, as best one can, that the bad things that happen in this world might not happen to that someone.

That is all.

Please do not respond. There are billions of comments on the internet, even if we only count recognizably construable comments in the English of 1950-2050. Nobody needs you, who are reading this, to say something negative about this comment.

It is no small thing to be the first person to disinterestedly care about someone who nobody ever cared about. I have never met anybody happier than me, because I have never met anybody (still among the living) who has known that longer than I have. Seriously, 1964 was a long time ago

Thus ends today's lesson in how to write in American English.

Please do not respond, I am a big fan of silence (Cardinal Sarah could explain this better than me). Feel free to plagiarize: such a thing would make me happy.

God bless her, Vitamin C liked to "remember that night in June".

(From "Graduation", circa 2000).

God bless you too. Try and remember the last time an angel talked to you so that you recognize the next time. Seriously. And there was a previous time, everybody knows that - it is true for everyone. No matter who you are. And if you deeply think you are right and I am wrong, well God bless you my friend. Nobody is an atheist for more than a few decades - there is a sort of sad lonely splendor to it in those few decades, but the true artist sees time as something more kind and profound than the few sad little youthful decades when we were proud and healthy and thought that God and his angels did not exist. As if.

Try and remember that night in June.

Or remember that time, which you have long forgotten, when you were amongst friends, and none of you knew who had paid for all the food and all the drinks and the tents and the lighting and the music on that June night. (if you are Australian it was likely a December night - I remember). Try and remember. Only three angels have authenticated names in Holy Scripture (Raphael, Gabriel, Michael). More than three have prayed for you, just in the last few minutes. I guarantee you that. Try and remember, wake up, wake up.

way down below the ocean /// wake up wake up wake up wake up ////

Can't believe nobody has mentioned what to me is the most interesting hypothetical in the Cambridge Analytica case: Facebook seems to have a strong case for filing a multi-million (billion?) dollar lawsuit, with certainly Steve Bannon and maybe Robert Mercer as defendants. Would they? Will they? The discovery alone could change the world.

A multi-billion dollar lawsuit? For what, specifically?

38 mph in a 35 mph zone, with good road conditions and a lighted road, should be about as low-risk as driving conditions get.

An awful lot of the fatalities in that 100 million miles per fatality statistic come from people with greatly elevated risk factors -- the elderly, new drivers, speeders, drunk drivers, dangerous roads or dangerous weather.
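A back-of-the-envelope illustration of how much that baseline adjustment could matter (the 50%/10% split below is purely hypothetical, chosen only to show the mechanics):

```python
# Suppose, hypothetically, that half of all fatalities come from high-risk driving
# (drunk, very elderly, new drivers, bad weather) that accounts for only 10% of miles.
overall_rate = 1.2             # fatalities per 100 million vehicle miles (rough US figure)
deaths_high_risk_share = 0.5   # assumed share of fatalities from high-risk driving
miles_high_risk_share = 0.1    # assumed share of miles that are high-risk

low_risk_rate = overall_rate * (1 - deaths_high_risk_share) / (1 - miles_high_risk_share)
high_risk_rate = overall_rate * deaths_high_risk_share / miles_high_risk_share

print(f"Implied low-risk driving rate:  {low_risk_rate:.2f} per 100M miles")   # ~0.67
print(f"Implied high-risk driving rate: {high_risk_rate:.2f} per 100M miles")  # ~6.0
```

If anything like that split holds, the fair comparison for a sober, well-lit, good-weather test fleet is the lower number, not the overall average.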

I'm also not reassured by claims that the pedestrian suddenly walked into the road. Does the sensor not pick up on things happening at the side of the road? Defensive driving should take into account the possibility that people might suddenly move into the road without warning. If you want to pick people up from their homes, it seems like you should pay attention to the possibility that children or pets might wander into the street or step out from behind obstructions.
