Zeynep Tufekci’s Facebook solution — can it work?

Here is her NYT piece. I’ll go through her four main solutions, breaking up, paragraph by paragraph, what is one unified discussion:

What would a genuine legislative remedy look like? First, personalized data collection would be allowed only through opt-in mechanisms that were clear, concise and transparent. There would be no more endless pages of legalese that nobody reads or can easily understand. The same would be true of any individualized targeting of users by companies or political campaigns — it should be clear, transparent and truly consensual.

Who can be against “clear, transparent and truly consensual?”  But this reminds me of those conservatives who wish regulations would be shorter, simpler, easier to write — it’s not always that easy and wishing don’t make it so.  (Try sitting down with someone in the immediate process of writing such a rule.)  That said, let’s think about what maybe will happen.  How about the United States adopting some version of the forthcoming EU GDPR?  That might in fact be an OK outcome (NYT).  But will that be clear and transparent?  Is any EU regulation clear and transparent?  Can anyone tell me, sitting in their seats right now, if it will outlaw the blockchain or not?  Whether it outlaws the blockchain or not, could either of those outcomes be called “consensual”?  I don’t think Tufekci has given an actual proposal yet.

Second, people would have access, if requested, to all the data a company has collected on them — including all forms of computational inference (how the company uses your data to make guesses about your tastes and preferences, your personal and medical history, your political allegiances and so forth).

This is not feasible, as computational inference is usually not transparent and often is understood by nobody.  But even the simpler stuff — what exactly is the call here?  That Facebook has to send you a big zip file?  Is the goal to inform people in some meaningful way?  Or simply to deter Facebook from having the information in the first place?  If it’s the latter, let’s have a more explicit argument that people would prefer a Facebook they have to pay for.  Personally, I don’t think they would prefer that, and they have already shown as much.

Third, the use of any data collected would be limited to specifically enumerated purposes, for a designated period of time — and then would expire. The current model of harvesting all data, with virtually no limit on how it is used and for how long, must stop.

“Must”?  Not “should”?  That is a classic example of trying to establish a conclusion simply by word usage.  In this context, what does “enumerated” mean?  Are we back to GDPR?  Or they send you an email with a long list of what is going on?  Or that information sits behind a home page somewhere?  (So much for simple and transparent.)  You have to opt in to each and every use of the data?  So far it sounds like more bureaucracy and less transparency, and in fact this kind of demand is precisely the origin of those lengthy “opt in” statements that no one reads or understands.

Fourth, the aggregate use of data should be regulated. Merely saying that individuals own their data isn’t enough: Companies can and will persuade people to part with their data in ways that may seem to make sense at the individual level but that work at the aggregate level to create public harms. For example, collecting health information from individuals in return for a small compensation might seem beneficial to both parties — but a company that holds health information on a billion people can end up posing a threat to individuals in ways they could not have foreseen.

Maybe, but there is no example given of harm other than an unspecified speculation.  It also seems to be saying I don’t have a First Amendment right to write personal information into a text box.  And who here is to do the regulating?  Government is one of the biggest violators of our privacy, and also a driving force behind electronic medical records, another massive medical privacy violator (for better or worse), most of all after they are hacked and those who have sought mental illness treatment have their identities put on Wikileaks.  The governmental system of identity and privacy is based around the absurdity of using Social Security numbers.  Government software is generations behind the cutting edge and OPM was hacked very badly, not to mention Snowden made away with all that information.  And government is to be the new privacy guardian?  This needs way, way more of an argument.

I do understand that the author had only a limited word count.  But googling “Zeynep Tufekci Facebook” does not obviously bring us to a source where these proposals are laid out in more detail, nor is there any link in the online version of the article to anyone else’s proposal, much less hers.  So I say this piece is overly confident and under-argued.

What instead?  I would instead start with the sentence “Most Americans don’t value their privacy or the security of their personal data very much,” and then discuss all the ways that limits regulation, or lowers the value of regulation, or will lead many well-intended regulations to be circumvented.  Next I would consider whether there are reasonable restrictions on social media that won’t just cement in the power of the big incumbents.  Then I would ask an economist to estimate the costs of regulatory compliance from the numerous lesser-known web sites around the world.  Without those issues front and center, I don’t think you’ve got much to say.

Comments

There is a lot of talk right now about Facebook, but I'm not sure this actually goes anywhere. The vast majority of Americans simply DNGAF. And why should they? I might be worried about Facebook election ads, but the people being influenced aren't going to be worried or they wouldn't be able to be influenced. The kind of privacy that it actually makes sense to be worried about (as opposed to ad targeting) is not currently at risk due to specific Facebook data aggregation. The biggest problem is people simply oversharing. The second biggest problem is not being able to block information access to 2nd order friends. But that's not being discussed.

This is not a problem that can be solved. It can be legislated such that all the honest/legitimate people cannot legally get your data, and then only the dishonest people will get it. The simple rule is: do not put anything on Facebook or anywhere online that you wouldn't want public. The internet is not safe, cannot be made safe, will never be safe! Anyone, business, individual, government, who puts anything of value online is a dunce!

That's probably about the size of it. Just as the First Law of e-mail is "Never be frank by e-mail" maybe the First Law of Facebook should be "Post nothing that you wouldn't be happy to write on a lavatory wall".

It is possible to influence people who are worried about something. The fact that they are worried about something would be a specific angle to use for the purpose of manipulating them.

Perhaps you are excessively medicated and lacking in ability to rationally evaluate legitimate causes for paranoia?

I don't think you get my point. People are not going to be worried about ads being able to influence them. As Matt indicates below, this is about pundits and opinion-makers. But the public is not going to get behind this and I don't think it's really going to go anywhere as a result.

I hope you're (somewhat) wrong, but maybe you're (more so) right.

Yes, but pundits and so-called opinion makers DO care. So we all must hear about it.

The sky is not falling. But it's good business to convince people that it is.

One thing that many Americans do care about is that Facebook be allowed to exist. We know this because so many Americans continue to use it, even after the Cambridge Analytica "scandal". Some might prefer that this or that Facebook policy change, but Facebook users prefer at least having the option of Facebook, as it currently exists, over no Facebook. One question asked of any new regulation ought to be, "If this regulation had existed when Facebook was first being built in Zuckerberg's dorm room, would it have prevented Facebook from getting off the ground?" So-called "good regulation" would not lead to a "yes" answer.

1. eliminate cookies that track you across your web activity
2. make all "advertising" publicly accessible and tied to buyer
Chips fall on the rest (buyer beware).

Or, like good Coaseans, we could just leave Facebook alone: https://priorprobability.com/2018/04/07/leave-facebook-alone/

"eliminate cookies that track you across your web activity"

Many, many, many browser extensions do exactly that. Others block the requests to read those cookies. The market has already solved this problem.
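For what it's worth, the core of what those extensions do can be sketched as simple host matching against a filter list. This is a toy illustration, not how any real extension is implemented, and the blocklist domains are examples rather than a real filter list:

```python
# Toy sketch of tracker blocking: match a request's host against a
# blocklist, treating subdomains of a listed domain as blocked too.
# The domains below are illustrative examples, not a real filter list.
BLOCKLIST = {"facebook.com", "doubleclick.net", "google-analytics.com"}

def is_blocked(host: str) -> bool:
    """True if the host, or any parent domain of it, is on the blocklist."""
    parts = host.lower().split(".")
    # Check "pixel.facebook.com", then "facebook.com", then "com".
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("pixel.facebook.com"))  # True: subdomain of a listed domain
print(is_blocked("example.org"))         # False
```

Real filter lists (EasyList and the like) add path rules and exceptions on top of this, but the host match is the workhorse.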

I will opt in. I want "smart ads" popping up.

Most Americans don't care about targeted ads, and I would venture to guess that almost no one is influenced by them. That's just another fallacy put forth by people who want to tell themselves that their side lost the 2016 election because voters other than themselves are stupid.

Note how the issue nearly flips the political affiliations of the "voter-fraud" crisis, in that now the people making outlandish claims of voters influenced by comical Russian ads (without any evidence of actual influence) are on the other side from the people insisting that millions of illegal voters are tilting our elections (without any evidence of such voters).

Totally true about it having no real effect. I think one has to realize the attack on Facebook is "political," in the sense that they didn't deliver for a certain group and some part of that group wants to make them pay.

Companies and political organizations spend millions and billions on processes related to targeting ads.

The free market knows best.

But almost no one is influenced by targeted ads.

Facebook is in a tough position. Its obvious and demonstrably truthful defense is “despite what you’ve heard the data is way too noisy to reliably predict whether any person or group will even notice an ad; much less be swayed by it.” But its business model is: “only Facebook can reliably predict who will be interested in, and likely to be persuaded by, an ad for your product”.

And then there’s the unstated premise of this show trial: “while we the elite couldn’t possibly be affected by such ads, the benighted mouth-breathers who we represent are lacking in the ability to think critically and so must be protected from those who would exploit the fact that they lack, as a practical matter, agency.”

Our servants have decided we’re morons.

And of course the media keeps churning out one anti-Trump conspiracy after another (golden showers! 19% of Rosneft! Trump Tower server!) and the people lapping up each one still like to call the Trump voters fools.

2% of people who "liked" post A clicked on ad X.

1% of people who "liked" post B clicked on ad X.

0.01% of all other people exposed to the ad clicked on ad X.

Ad X is not selling electric toothbrushes or Ford trucks. Based on previous data, it is tailored in a way that polarizes and/or de facto brainwashes people.

Is there something complicated about this? It is at about the level of week 3 of Intro to Web Marketing.
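The arithmetic behind that comment is simple enough to sketch. The click rates below are the ones quoted above; the segment names are made up for illustration:

```python
# Click-through rates quoted in the comment above (segment names invented).
rates = {
    "liked_post_A": 0.02,     # 2% of people who "liked" post A clicked ad X
    "liked_post_B": 0.01,     # 1% of people who "liked" post B clicked ad X
    "everyone_else": 0.0001,  # 0.01% baseline click rate on ad X
}

def lift(segment: str, baseline: str = "everyone_else") -> float:
    """How many times more likely a segment is to click than the baseline."""
    return rates[segment] / rates[baseline]

print(lift("liked_post_A"))  # roughly 200x the baseline
print(lift("liked_post_B"))  # roughly 100x the baseline
```

Which is the point: the targeting value is in the ratio between segments, not the absolute click rate, and that ratio is what "week 3 of Intro to Web Marketing" teaches you to chase.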

"Companies and political organizations spend millions and billions on processes related to targeting ads."

Pretty sure the winning 2016 campaign spent a lot less than that on Facebook.

Pretty sure there's more than one thing going on at a time, not all of which related to Facebook, and not all of which related to the 2016 campaign.

TC: "How about the United States adopting some version of the forthcoming EU GDPR? " - right as rain. Readers should be aware that the forthcoming EU GDPR, far from ensuring freedom, actually diminishes it. This is because under the proposed GDPR, any party at any time can complain that somebody is misusing their data, and under the proposed EU regulations, last I checked, a gateway/ISP/FB has two (2) hours to investigate and either remove the data or allow it to stand. Since two hours (or even two weeks) is not enough time to investigate, in practice ISPs will simply take down any offending data, which will result in self-censorship of the internet. Another example of a "good regulation" in theory being hard to write and failing in practice.

Bonus trivia: we killed a kid (goat) over Catholic Easter. I had no idea that in PH, the humane (and quite effective) manner of slaughter is not exsanguination (throat slitting) but forcing the kid to drink vinegar, which makes them choke and in a few minutes they go to sleep, forever. Really was an eye-opener and I was surprised how well it worked. Silence of the lambs (goats)? They are adorable as pets btw, and the meat was delicious.

Then there's the flipside: let's say they give in. Now you get flooded with documents detailing where and how your information is being used. Then you ignore it and life goes on.

That being said, a lot of existing laws and principles aren't being applied to the internet because our lawmakers and judges are all old, and I would bet that well into the 1990s, most judges didn't use computers. I would be shocked if a majority of the Supreme Court regularly used a computer.

For instance, it would be trivial to enact basic "right to delete" rules. Right now if you delete a post it only marks it as "deleted" but does not actually delete it. I know I can't control what happens to it after I share it, but at the very least the direct site I shared it with should have to delete it if I hit delete. I think we can all agree that most people would interpret "delete" to mean "delete" and not "hide from view but still keep."

Simple things like that would be enough to satisfy me. I would like to see the government try to push the internet away from the "click bait" fest it's become. I am not foolish enough to think they (the government) can do it, but the internet now is such a crapfest that it's already teetering on the brink of "how do I unplug from this?"

"at the very least the direct site I shared it with should have to delete it"

In principle this sounds like a good idea. Probably it is a good idea. But if malicious actors are harvesting data, this could also prevent the ability to know what data malicious actors may have harvested, which would tend to reduce the ability to mitigate risks.

For example, if some local criminal organization can use some seemingly-unrelated data to predict with 95% certainty who will pursue a certain type of case, and who will with 95% certainty succumb to extortion or bribes to not pursue the case -- but the police chief has no idea about that -- this would be a big problem.

Undelete being ubiquitous in computer programs, I wonder if the CTRL-Z generation really does believe that "delete" means remove without possibility of recovery.

Another government solution in search of a problem. You give somebody information, they can and should be able to use that information how they want absent an agreement with them to the contrary or some sort of legal protection, like copyright. If you don't want them using that information, then don't give it to them, simple as that.

@CG - This was the traditional legal view, but how do you stop business owners from using your consumer data if credit card data is not anonymous? There's no contractual or technical way to "opt out" of data collection. Hence the outcry that started roughly 25 years ago on this issue. If anonymous credit cards and the like existed back then we'd not be where we are now. The Libertarian would answer that the market should be allowed to develop such anonymous means (maybe Mastercard can create, for their valued or paying holders of their card, a pseudo-anonymous version that can be used online), while the Liberal would answer it's simpler to simply have government pass a law. You see the same thing happen with copyright. There are tech solutions that make infringing copyright difficult to do (starting with the "dongle" but even more high-tech) but the simpler solution is MPAA copyright litigation. It's what society wants, sadly, namely Big Brother cradle-to-grave protection. It's been that way now for over 100 years and short of a catastrophe, I don't see it changing (and said catastrophe may make Big Government even more intrusive).

The above comment was the 64th comment in this post, the same as the number of chess squares. Am I the only one who notices how often that happens? Amazing coincidence, or self-selection?!

How about Andy Kessler's market solution in the WSJ:

"I propose a simple fix. Let’s flip the whole thing—make it about property rights, 21st-century style. America was built on property rights. Congress can deliberate for 90 seconds and then pass the Make the Internet Great Again Act. The bill would contain five words: “Users own their private data.” Finis."

https://www.wsj.com/articles/a-better-way-to-make-facebook-pay-1523209483

That shoves all the hard questions into the definition of "private data."

If I tell you my brother likes chocolate cake, are you allowed to remember that?

I'd love to see Facebook taken down several pegs, which I think is most people's anima, but that doesn't necessarily translate into any sensible legislation or regulation.

If we're implementing regulation, I'd like something done about the fact that Google could suddenly decide, for any reason at all, that I suck and lock me out from a significant portion of my life. I'd want something that says that I have the right to download all my stuff if they decide to lock me out from further services.

That last one is really important. I have copies of most, but if some strange person takes offense at my photos of osprey fucking it would be nice to get my emails back.

Tim Wu believes Facebook is too corrupted to resolve this issue. Wu's solution is another company in the same business but with a different business model (i.e., one not built on advertising). https://www.nytimes.com/2018/04/03/opinion/facebook-fix-replace.html Asking Facebook to give up all that advertising revenue is like asking an addict to give up heroin. Sure, the addict might have good intentions, but the drug (whether heroin or money) is much stronger than good intentions.

Tim Wu has a history of magical thinking.

Wu's argument is predicated on two assumptions that I don't think are at all proven: first, that advertising is inherently bad, and second, that users actually want an alternative paid social network. If either or both are untrue, he's solving a problem that doesn't exist.

If someone told Facebook that its users would be just as - or more - happy with a subscription model, I suspect they'd be totally indifferent. It wants to build a social network, not evangelize ads per se.

Why don't we nationalise Facebook? It wouldn't get rid of all problems, but removing the profit motive for abuse of privacy would go a long way. We don't entrust law enforcement to private contractors because the risk of abuse outweighs potential gains in efficiency. We may have reached the same point with Facebook.

We would lose most of the innovation Facebook would have come up with, but as a user I haven't seen much over the past 5 years or so. The innovation these days seems to be about how better to parse what they know about you so they can sell it on.

I was about to ask that. Why not nationalisation?

Because politicians will now control it. IMHO, that's likely to make every FB problem even worse.

"Facebook is invading my privacy. Let's give all their data to the government."

As Tyler said in his post, the government regularly loses information it compels me to give it. At least with Facebook someone volunteered to give it that information in exchange for lolcats.

Facebook losing information would be a feature!

"Facebook losing information would be a feature!"

How that would work in practice:

Facebook: Yeah, we lost the information where you said to keep your data private. .... Again.

There are non-evil alternatives to most of the big tech products (e.g., Brave instead of Chrome, Gab instead of Twitter, etc). The problem is that the overall ecosystem makes it hard to switch over (e.g., content websites don't always work, Apple blocks Gab, etc).

If government wants to 'do something' here, the easiest thing would be to insist on standards compliance in all of its purchase decisions. Or, if they want something flashier, maybe a "right to side-load" apps onto your devices.

I understand the general "anti-Facebook" argument to be that the status quo has proven to have large social costs. Fake news, echo chambers, etc. However, any individual Facebook user has little reason to care about such things, as you say.

Is this not the perfect situation for government regulation?

Have we exhausted all other options? Have users tried forming a union or society to argue for their collective rights?

Demand for government regulation is a way to argue for collective rights.

An advantage being that government already exists, whereas this society to argue for collective rights does not exist and would take a lot of time and resources to organize.

A preference for non-government solutions does not constitute a good reason to do nothing until decades or centuries have passed in order for every other possible option to be tried.

We have no idea what users actually want. They say "yeah don't invade my privacy" but then answer surveys telling complete randos about their sex lives.

I know you really like the current Congress and POTUS, but not everyone else does.

"They say "yeah don't invade my privacy" but then answer surveys telling complete randos about their sex lives. "

Excellent summary of the problem.

Actually, why stop at regulation? We should nationalize Facebook so the people have direct control of it. The government is always trustworthy and are definitely less powerful than Internet companies when it comes to what they can do with your data.

Pretty much all demands for clear and concise disclosures are also accompanied by demands for complete accounting of terms and an end to "loopholes." These demands are usually in conflict and there's no way to resolve them. It sort of ended up okay with credit card disclosures, where some particular information is in bold at the front, and then they follow it up with endless pages of small print. But I doubt a privacy policy can be distilled to something as simple and concise as an APR.

'That Facebook has to send you a big zip file? '

They already provide that service, actually.

'Downloading Your Info

How can I download my information from Facebook?

Can I pick and choose which information I would like to download?

What security measures are in place to make sure someone else
doesn’t download a copy of my information?

Learn more about what's included in your download. ' https://www.facebook.com/help/131112897028467?_fb_noscript=1

I actually use this on a 1-2 monthly basis to take a local backup of all my Messenger history (and keep the rest because why not). It's a neat enough function and works pretty well, although the huge HTML files it generates for long message histories lag the browser badly, so might need to do some local work to process them.

Mandating it would add a little more overhead to small customised site operations; it's the sort of thing that on its own wouldn't be super onerous if it was the *only* thing needed, though.
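For anyone who wants to do that "local work," the Python standard library is enough to pull the visible text out of the export. The structure of Facebook's HTML and the filename are assumptions here, so adjust them to your own download:

```python
# Minimal sketch for processing a large HTML message export locally,
# using only the standard library. The shape of Facebook's export HTML
# is an assumption; adapt the extraction to whatever your file contains.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text of an HTML document, one chunk per text node."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

def extract_text(html: str) -> list:
    parser = TextExtractor()
    parser.feed(html)
    return parser.chunks

# Usage (hypothetical filename from the export):
# with open("messages.htm", encoding="utf-8") as f:
#     for line in extract_text(f.read()):
#         print(line)
```

Since this streams through the file without building a DOM or rendering anything, it sidesteps the browser lag the huge files cause.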

Facebook themselves are already in compliance with this, though, and as I understand it, looking in this zip file is how people discovered FB was keeping call histories in the first place.

So if we wanted to make sure that anyone else doing what FB was doing there gets discovered eventually and judged for it publicly, mandating that they offer the same "big zip file" approach as FB seems like a strategy with at least one anecdote of it working.

I am not on Facebook, never have been and never will be. I deal regularly with a number of psychotic patients who, when off meds, might seek to do me or my family harm. Making it hard to find my personal data is worth the trade off.

But I cannot just avoid Facebook; they skim data off many sites and have the wherewithal to flag me as a node in other users' social graphs without data from me. In the current regulatory regime, how do I truly opt out?

Suppose some harm does come to me that would not have happened absent Facebook, how do I establish their culpability and get restitution?

Say someone snaps a selfie of my front porch and Facebook can determine both the location and that it is this guy who lacks an account, but shows up in a bunch of feeds. They then sell the social graphs of a bunch of business associates of mine in the local hospitals. A patient gets the "local hospital employee" data and Facebook has a nice set of locations where I tend to show up. They come to my house and light my car on fire.

Facebook received monetary compensation for something that ultimately led to harm for me. Something I have actively tried to avoid. Under the current regime how can I even establish that?

Facebook's business model looks a lot like misrepresentation to me. They claim that they protect our data, that everything is optional, and that we cannot be harmed by it. Yet they routinely sell data without controls, they allowed third parties to gather data without individual user consent, and harm is still an open question.

Whatever regime we decide to put data into, we do need some way to ensure that the representation made by tech firms results in matches between reasonable public perception and their actual practices. Having enough lawyers on payroll to obfuscate what is actually being done with data should not be sufficient to waive off any claim of misrepresentation.

Excellent post - At first glance, I would prefer to see a market driven solution which emphasizes personal data as property, but the issue is one of enforcement. It does make you wonder if the complexity in modern data systems overwhelms the ability to codify laws preserving those rights ---- which then brings into question the paradigm of property rights, in general, for digital data.

Facebook received monetary compensation for something that ultimately led to harm for me

That is way too many legal hoops to jump through to show that they have any responsibility for that harm. Just because they were a component in the string of events doesn't make them complicit. A dealer that sells me a car that I use to run you over has received monetary compensation for something that ultimately harmed you.

Like I said elsewhere, I wouldn't mind Facebook being taken down several pegs, but most of the proposals are worse than the status quo, like giving all the sensitive information they've collected to the government that Trump runs.

We could try to establish that "data about me" is something I have a property right in and can control, but that is a huge, huge shift. Someone proposing that would need to lot of hard work to show what it does and its benefits and drawbacks.

I acknowledge that Facebook may not be complicit, but how can I tell? Unlike with most other goods, we do not have a well established code of conduct that requires businesses to make commonsense efforts at averting harm (e.g. not serving intoxicated bar patrons who they know will drive, not selling guns to patients with obvious psychosis). I know what they do and I can make a determination of what their product is (the car, the gas), how they typically sell their product, and if anything untoward appears to happen.

Facebook is a giant black box. I cannot establish guilt because I don't know what they collect, how they use it, and to whom they sell it.

As is, Facebook can be doing massive harm to you and your options are basically nothing or nothing.

I would be much more happy with just some truth in advertising. I.e. every webpage that lets Facebook skim data has to announce that BEFORE it collects data. I should be able to ask Facebook to delete my data (having never used their service) and have them actually do so.

I am fine if people want to make an informed exchange with Facebook ... but the current setup is not it. I don't know what information they have, there is nowhere I can go to find out, and nothing I can do about it even if I did. None of that is ever mentioned in their corporate advertising.

In many other contexts this sort of opacity becomes sufficiently misleading as to be fraudulent. We need some clear standards to adjudicate this sort of thing on more even footing, not just some Fortune 500 company telling us all to take a flying leap.

'every webpage that lets Facebook skim data has to announce that BEFORE it collects data'

You mean like this one? That Facebook icon is not there as a coincidence.

This is exactly the kind of situation I was referring to earlier with blocking 2nd order friends - you can't actually fully opt out or go private. This doesn't matter for the vast majority of people, but it's a really big issue for a small set. Nobody is really talking about this problem though.

I downloaded the location info that my Android phone has collected over the couple years that I have used it. I want Google to sell me something. I want them to notice patterns, when did I get my truck oil changed, how often I bought ice cream, have I seen the sunrise yet at this fantastic location.

I want to know the physical location of the phone scammer, preferably in a format that an intercontinental missile could use.

I really want to know the phone number of the voice who keeps threatening me that something bad will happen if I don't buy what they're advertising.

There is lots of information that I want. Regulations will in the end only prevent me from accessing information that the government would prefer I not know.
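For the curious, the location download described above arrives as JSON. The sketch below assumes the Google Takeout "Location History" layout (a `locations` list with `latitudeE7`, `longitudeE7`, `timestampMs` fields); verify that against your own file before relying on it:

```python
# Sketch of reading a Google location export. The field names below
# (locations / latitudeE7 / longitudeE7 / timestampMs) are assumed to
# follow the Takeout "Location History" layout; check your own download.
import json

def parse_locations(raw: str):
    """Yield (timestamp_ms, lat, lon) tuples from a Takeout-style JSON string."""
    data = json.loads(raw)
    for loc in data.get("locations", []):
        yield (
            int(loc["timestampMs"]),
            loc["latitudeE7"] / 1e7,   # E7 fields are degrees * 10^7
            loc["longitudeE7"] / 1e7,
        )

# Tiny hand-made sample in the assumed format:
sample = ('{"locations": [{"timestampMs": "1520000000000",'
          ' "latitudeE7": 407000000, "longitudeE7": -740000000}]}')
for ts, lat, lon in parse_locations(sample):
    print(ts, lat, lon)
```

From there, noticing the patterns the commenter wants (oil changes, ice cream runs) is a clustering problem over these points, which is exactly the work the ad platforms already do on their side.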

The cost of not using data that was not collected in the first place is $0.

If the value of using data is less than the cost of telling people what you're going to do with it, what argumentation could support the idea that the activity would create any surplus?

Also, ignorance of users about the diversity of malicious uses of data 1) yes, would result in Americans undervaluing their privacy, but 2) does not constitute any sort of quality argument about whether their ignorant actions constitute an econometrically useful representation of their preferences.

This is the longest thrashing Tyler has ever given to anyone.

This is a guy who will link to a post supporting single-decider health care and can only muster an "I don't find all aspects of this entirely persuasive."

At least now we know what he's passionate about.

For "...a source where these proposals are laid out in more detail", maybe her newsletter on tinyletter could be a good source for you (one you were not able to find in your google search): https://tinyletter.com/zeynepnotes

There are a number of Facebook Alternatives out there, some based on subscription, some shared resources, some with other variations of advertising and privacy.

Still, Facebook users do have a big option before they switch. They can continue to push Facebook and see what accommodations they can extract.

The average user might free ride while approving those efforts.

This really is a case of a load of people writing about a situation who aren't part of the deal.

As you correctly say, most people don't care about privacy. They might care about their banking details and STI test results, but they aren't posting those on Facebook. They're posting holiday photos and brief film reviews and pictures of dinner.

But Trump used that information to steal the election! /s

No he didn't.

It seems likely that the US will get to see the effects of the GDPR and be able to make a more informed opinion.

We could consider social media like roads and highways, which almost no one argues should be privately owned with tolls collected as soon as you leave your driveway.

Given revenue minus profits, a SWAG (scientific wild-ass guess) puts the labor cost of Facebook at $50 a person, maybe less, since much of today's labor cost goes to selling ads, ad-placement tools, etc.

So, for $50 times 300 million, or $15 billion, funding an independent agency a la the Fed or USPS, a commercial-free substitute for Facebook could be created. As a bonus, Congress could dictate 99.999% security while mandating 100% backdoor access to every bit of data covered by a valid Federal court search warrant.

For $20 billion it could integrate Facebook, Twitter, Snapchat, Dropbox, etc., into one free social-media public square.

At a cost to taxpayers likely less than the cost of regulating, investigating, etc., all the current social media services.

And if you can't trust a government-created and -run public square, you can't trust a private, rentier, profit-driven public square with enough money to capture the government officials writing laws and regulations and enforcing them.
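The back-of-envelope arithmetic in the comment above can be checked directly. A minimal sketch using the commenter's own guessed figures (a SWAG, not verified data):

```python
# The commenter's assumptions: ~$50 labor cost per person, ~300 million users.
per_user_labor_cost = 50        # dollars per person, commenter's guess
us_users = 300_000_000          # assumed US user base

base_cost = per_user_labor_cost * us_users
print(base_cost)  # 15000000000 -> the $15 billion figure in the comment
```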

"Congress would be able to dictate 99.999% security"

I'm guessing you're not one of the millions of federal employees whose security clearance files got hacked out of OPM.

Are the parties arguing for a regulatory regime that would lock in the market position of the giant incumbents the same parties who argue for "net neutrality" to prevent the same?

"Are the parties arguing for a regulatory regime that would lock in the market position of the giant incumbents the same parties who argue for "net neutrality" to prevent the same?"

Net neutrality wasn't about hurting the market position of the giant incumbents. Google, Apple and Facebook were all for it. Net neutrality was about them locking in the status quo.

How the heck does neutrality, and the ability of any new entrant to play, lock in a status quo?

Every darn success on the internet today entered in a neutral regime.

Alternate title for this excellent blog post: Pick your poison: Government Failure or Market Failure

Let's face it, people are ignorant and the schools never teach anything about identity theft.

"I would instead start with the sentence “Most Americans don’t value their privacy or the security of their personal data very much"

This is probably true in general, but look at some of the specific information that Facebook let Cambridge Analytica harvest:

https://www.nytimes.com/2018/04/10/technology/facebook-cambridge-analytica-private-messages.html

There's a big difference between Google using your email content in their backend, behind at least a decent layer of security and anonymization, versus letting a minimally if at all vetted third party release a quiz app on your platform that gives them access to your private messages. Regulations can overreach and they can produce unintended consequences, but these hearings wouldn't be front-page news if Facebook had kept its house in order. The best way for the private sector to avoid regulation is often to just not provoke complete outrage, which isn't really the highest of bars.

This privacy business reminds me of my longstanding peeve over junk mail and telemarketing. The postal service doesn't really let us avoid receiving reams of paper waste. Private senders sometimes have procedures to get off a list but it doesn't scale, requires obnoxious effort, often does not work, etc. And things like political messages are exempt.

So if that is the craptastic solution I can expect from government then it can stay home.

Obviously all of that sort of advertising should be specifically opt-in (preferably on an annual basis).

I think privacy can be similarly simple if you approach it with that basic mindset: information collection (or sending mail) can default to reasonable use for the purpose of conducting the service being rendered, but by default not be used for other marketing or for giving/selling the info. The same defaults would apply to business-to-business relationships.

So *starting with "opt-in"* (not the opt-out of the current legalese) is absolutely right from the perspective of people. The desire of companies (or politicians) to market things is irrelevant and carries zero weight in that argument.

Ads don't need targeting beyond the old fashioned kind: geographical, interest based (car ads on car sites) etc.

Sandy Parakilas, ex-FB security engineer:

http://nymag.com/selectall/2018/04/sandy-parakilas-former-facebook-employee-interview.html

Who is it, actually, that is complaining? I see: those who are still pissed off that they lost control of the government and need a whipping boy, even one they previously used to their advantage. Well, la-di-da.

Also, sorry, but the "Facebook Solution" is a pretty hilariously bad title for a blog post. Facebook is a symptom, and it's a one-way trip.

Remember net neutrality? Facebook bravely stood up to defend the free and open internet. What a complete and total scam that was. I'm still irritated people could be so dumb.

"Most Americans don’t value their privacy or the security of their personal data very much,”

I would contend that this argument is not particularly compelling. Most Americans don't value free speech or freedom of religion when it is applied to others. We are content to allow the no platforming of speakers on college campuses. We are content to allow mosques to be no platformed in southern and rural states.

My intuition is that the Supreme Court will see a case in my lifetime concerning privacy and the 4th Amendment. It is coming. Hopefully Sotomayor can rally the other zombies to protect average Americans.

One argument for using the General Data Protection Regulation (GDPR) as the model for US legislation is that the companies are going to have systems in place to comply with that anyway as essentially all of them also operate in the EU. This means that they have already had to put systems in place to comply with GDPR before it comes into force on 25 May 2018. It would simplify compliance as existing privacy protection systems would be extended to US customers rather than having to create a new and slightly different system.

I know a number of people who are dealing with GDPR by disbelieving its existence. It might be a good idea in theory but it's completely crazy to implement post-hoc after the Internet has been built.

That is an obviously bad idea. The EU comprises a group of wealthy countries with a total population of over 500 million so either you comply with GDPR or you can't trade there. Few of the major services would willingly forego the EU market and have therefore already accommodated the existing data protection regulations which were already a great deal stricter than the US scheme. In order to continue operation they must meet the new regulations.

Post hoc regulation was always going to end up happening as it is hardly feasible to come up with a set of regulations that deal with how the internet evolved before the internet existed.

Wouldn't it be easier if they passed a simple law saying "only Democrats can win elections"? I mean, that's the root problem here. I didn't hear about this whole Facebook thingy when Democrats used it to win elections. Let's be honest and call this by its name - they must have a name for countries where only one party can legitimately win an election.
