Why Do Experiments Make People Uneasy?

People were outraged in 2014 when Facebook revealed that it had run “psychological experiments” on its users. Yet Facebook changes the way it operates on a daily basis and few complain. Indeed, every change in the way Facebook operates is in effect an A/B test in which one arm is never run, yet people object to explicit A/B tests but not to giving either A or B to everyone. Why?

In an important and sad new paper Meyer et al. show in a series of 16 tests that unease with experiments is replicable and general. The authors, for example, ask 679 people in a survey to rate the appropriateness of three interventions designed to reduce hospital infections. The three interventions are:

  • Badge (A): The director decides that all doctors who perform this procedure will have the standard safety precautions printed on the back of their hospital ID badges.

  • Poster (B): The director decides that all rooms where this procedure is done will have a poster displaying the standard safety precautions.

  • A/B: The director decides to run an experiment by randomly assigning patients to be treated by a doctor wearing the badge or in a room with the poster. After a year, the director will have all patients treated in whichever way turns out to have the highest survival rate.

It’s obvious to me that the A/B test is much better than either A or B and indeed the authors even put their thumb on the scales a bit because the A/B scenario specifically mentions the positive goal of learning. Yet, in multiple samples people consistently rate the A/B scenario as more inappropriate than either A or B (see Figure at right).
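
To see what the three options amount to in procedural terms, here is a minimal sketch in Python. The infection risks, patient counts, and function names are invented purely for illustration and do not come from the paper; treat it as a toy simulation of the director's choice, not as the authors' method.

    import random

    random.seed(0)

    # Hypothetical infection risks, invented for illustration. In reality no one
    # knows these numbers, which is exactly why the director might want to test.
    TRUE_RISK = {"badge": 0.050, "poster": 0.042}

    def good_outcome(arm):
        """Treat one patient under the given precaution; True means no infection."""
        return random.random() > TRUE_RISK[arm]

    def policy_all(arm, n):
        """Option A or Option B: every patient gets the same precaution."""
        return sum(good_outcome(arm) for _ in range(n)) / n

    def policy_ab(n_pilot, n):
        """Option A/B: randomize for a while, then adopt the better-looking arm."""
        outcomes = {"badge": [], "poster": []}
        for _ in range(n_pilot):
            arm = random.choice(["badge", "poster"])  # the dice roll people object to
            outcomes[arm].append(good_outcome(arm))
        winner = max(outcomes, key=lambda a: sum(outcomes[a]) / len(outcomes[a]))
        good_so_far = sum(sum(v) for v in outcomes.values())
        good_after = sum(good_outcome(winner) for _ in range(n - n_pilot))
        return (good_so_far + good_after) / n

    print("All badge (A): ", policy_all("badge", 10000))
    print("All poster (B):", policy_all("poster", 10000))
    print("A/B then adopt:", policy_ab(1000, 10000))

Under these made-up numbers, committing to A or B alone is a bet on a single guess, while the A/B policy converges on whichever arm is actually better; the paper's finding is that respondents nonetheless judge the third policy the most inappropriate of the three.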

Why do people do this? One possibility is that survey respondents have some prejudgment about whether the Badge or the Poster is the better approach, and so those who think the Badge is better rate the A/B test as inappropriate, as do those who think the Poster is better. To examine this possibility the authors ask about a doctor who prescribes all of his patients Drug A, or all of them Drug B, or who randomizes between A and B for a year and then chooses. Why anyone would think Drug A is better than Drug B, or vice versa, is a mystery, but once again the A/B experiment is judged more inappropriate than prescribing Drug A or Drug B to everyone.

Maybe people don’t like the idea that someone is rolling dice to decide on medical treatment. In another experiment the authors describe a situation where some doctors prescribe Drug A and others prescribe Drug B, but which drug a patient receives depends on which doctor happens to be available when the patient walks into the clinic. Here no one is rolling dice, and although the effect is smaller, respondents continue to rate the A/B experiment as more inappropriate.

The lack of implied consent does bother people, but only in the explicit A/B experiment and hardly ever in the implicit all-A or all-B experiments. The authors also show that the effect persists in non-medical settings.

One factor that comes out of respondent comments is that the experiment forces people to reckon with the idea that even experts don’t know what the right thing to do is, and that confession of ignorance bothers people. (This is also one reason why people may prefer pundits who always “know” the right thing to do, even when they manifestly do not.)

Surprisingly and depressingly, having a science degree does not solve the problem. In one sad experiment the authors run the test at an American HMO. Earlier surveys had found huge support for the idea that the HMO should engage in “continuous learning” and that “a learning health system is necessary to provide safe, effective, and beneficial patient-centered care”. Yet when push came to shove, exactly the same pattern of accepting A or B but not an A/B test was prevalent.

Unease with experiments appears to be general and deep. Widespread random experiments are a relatively new phenomenon, and the authors speculate that the unease reflects lack of familiarity. But why is widespread use of random experiments new? In an earlier post, I wrote about ideas behind their time, ideas that could have come much earlier but didn’t. Random experiments could have come thousands of years earlier but didn’t. Thus, I think the authors have the story backward: random experiments generate unease not because they are new; they are new because they generate unease.

Our reluctance to conduct experiments burdens us with ignorance. Understanding and overcoming experiment-unease is an important area for experimental research. If we can overcome our unease.

Comments

My life was saved in 1997 by an A Only medical experiment that I was worried was going to instead be an A/B experiment in which I might die due to getting a placebo.

There is a Bayesian aspect to this: usually the people running an experiment are not completely agnostic about whether A or B will work best. Instead, they think one will do better than the other.

For example, when I had lymphatic cancer in 1997, I was the first person in the country to be treated with Rituxan in the Phase II trial of what became a billion dollar per year drug. I really wanted to be treated with Rituxan, the first commercial monoclonal antibody, because the Phase I test on dying people had been impressive, and my chance of beating non-Hodgkins lymphoma with just the traditional chemotherapy was not high.

As a marketing researcher who had conducted numerous A/B test markets, I was familiar with experimental design. So I asked several times if this was an A/B test using a placebo. The doctors repeatedly assured me it was an A Only test and that there was zero chance I would get a placebo instead of Rituxan.

When I first got the Rituxan I immediately suffered severe chills and shivering, which alarmed the hospital staff, but overjoyed me: clearly, this was NOT a placebo, but was instead Strong Medicine.

thanks for the personal anecdote. whats your point?

+1. With weak stories like that, maybe he should have died of cancer.

Cancer is tricky, especially detecting the cancerous cells. You would think not, but for most cancers it is. Even what appears to be a wound can be skin cancer. And super-sensitive blood tests often don't detect cancer until too late. Polyps in colonoscopies are excised as a precaution, since there's no way of knowing what they are until after biopsy.

Bonus trivia: today's Google Doodle features Greek Dr. Pap, a cancer detection pioneer. Note he got hardly anything but a steady salary and fame from his invention, and he died childless. TC is right about praising scientists to get them to work hard at inventing things. Triste...

The vast majority of experimental cancer treatments don't work, and cause unpleasant side effects. Patients with deadly illnesses are particularly likely to be overly optimistic about experimental treatments.

Steve S, in a way, this is an A/B experiment, right? The A is the people randomly selected for the drug Rituxan, and the B is the people who used traditional chemo. Comparing the A vs the B gives the delta?

Steve W, don't be a dick.

I don't think there is a treatment for being a dick.

Correct. There is no treatment for being me.

My concern was that this study was going to be an A/B experiment with

A: The new Rituxan plus traditional CHOP chemotherapy
B: Traditional CHOP chemotherapy.

And therefore a 50% chance I wouldn't get Rituxan.

Instead, I was repeatedly reassured that this was an A-Only study in which all participants would get new Rituxan plus old CHOP.

If this had been an A/B study and I had ended up in the Control Group that didn't get the new wonder drug, I would have been sore if I'd died due to the experiment design.

One aspect of the problem is that the medical profession is set up to encourage patients to have only one doctor and to do what the doctor tells them. So, A/B testing is at odds with the general Trust Your Doctor message of the medical profession.

In contrast, when I came down with non-Hodgkins lymphoma, I quickly discovered that there were three experts on NHL in the Chicago area, each pursuing clinical trials. Which one should I pick? But how would I choose among three experts pursuing immensely complex research on a subject I had never heard of before yesterday?

Being a corporate executive, my initial reaction was to hire a consultant to help me evaluate the 3 experts. So I found a 4th doctor, a young general oncologist, who would charge me for my phone calls to him after my meetings with the 3 experts. He strongly recommended I go with the expert who was starting the Rituxan clinical trial.

This seemed like an extremely reasonable business arrangement, but to a lot of people it sounded wholly novel: is it ETHICAL for a patient to hire an M.D. as a consultant to help him choose among other M.D.'s? The medical profession certainly doesn't promote this practice.

I don't think people are particularly freaked out by Facebook trying out various ideas. I think there is an attempt to get people to freak out about everything that Facebook and Google do. This is due to the same reason that Willie Sutton said that he robbed banks.

Remember that emails can sink politicians. Facebook and Google have enough information to blackmail a whole generation that naively uploaded things to their site. There is a reason why the US gov told China they couldn't buy a US gay dating site like Grindr.

If Google or Facebook did this and it became known, their business model would be dead. They have almost a trillion dollars at stake; they would have to have a huge payoff for blackmailing any politician. But in any event we are talking about experiments here, not blackmail.

Ironically, I was all set to get rich in 1996 off of Internet A/B testing except I came down with cancer, and had my life saved in 1997 by an A Only test.

The COO of my marketing research firm wanted to get in on the 1990s Tech Bubble so he negotiated his leaving the firm and one concession he got was that he could take one employee of his choice with him and he chose me. I quickly came up with the idea of a startup doing A/B testing for websites. Our marketing research firm had been running A/B testing for 15 years, so it was an obvious analogy for us.

But then my health collapsed and I eventually decided I'd better stay where I was and hang on to my employer-provided health and life insurance.

So I missed out on the Internet Bubble.

I think Jim Manzi came up with the same idea about 3 years later and did pretty well off it for awhile.

Approximately 95% of every MR post is an experiment.

why do experiments make people uneasy!
https://www.npr.org/2019/05/12/722629025/world-war-ii-veteran-and-navajo-code-talker-fleming-begaye-sr-dies-at-97
"His landing craft was blown up and he literally had to swim to the beach to survive," MacDonald said. At Tinian, MacDonald said, Begaye "got shot up real badly" and "SURVIVED" one year in a naval hospital.

I was thinking along those lines, too.

I was involved in a similar experiment (with a different result). It was called Vietnam.

That was almost two lifetimes ago.

Look at the pseudo tough fake veteran. He owns guns and threatens minorities, but he never served a day outside the wire.

I served outside the wire, metaphorically. My wife cheated on me and our children aren’t biologically related to my spermatozoa. That’s real service. Serving the brave women of this country, by letting other men make the most satisfied.

... mouse.

If you grew up, you'd be a rat.

Dick, I’m a combat veteran also.

I don’t have a Twitter. Can I send you a burn email address?

I guess these are the people who need to believe that "the science is settled"

"Random experiments could have come thousands of years earlier but didn’t."

It can be frustrating looking back at the geniuses of the past and wondering why they didn't follow up on some rudimentary great ideas they had. For example, in the opening pages of Plato's "The Republic," Socrates quickly puts forward as obvious the basics of Adam Smith's division of labor theory and, perhaps, David Ricardo's comparative advantage theory of trade. But then he moves on to a whole bunch of other ideas and never develops these two ideas. My vague impression is that Plato's instincts about economics were more sophisticated than Aristotle's (which were similar to medieval Catholic economics and much appealed to Marx), but he just didn't see the point in thinking rigorously about economics.

Similarly, the invention of statistical reasoning, such as correlation, was ridiculously late, with an elderly Francis Galton doing a lot of the heavy lifting in the late 1880s. It's not like they didn't have data before: there are at least three censuses mentioned in the Bible. So it's puzzling why nobody did much with data. (It's possible the Japanese did some interesting things -- the oldest sports statistics I've heard of go back to sumo wrestling in the late 1600s, but I don't know if they did much beyond simple tabulation of wins and losses.)

Lack of patents? Pretty obvious to me. PCR inventor Kary Mullis wanted to keep his invention a trade secret and only with great reluctance agreed to go along with his employer Cetus' desire to patent it. Turns out later it comprised half of Cetus' $600M valuation when sold (and clearly it was undervalued).

'The lack of implied consent does bother people but only in the explicit A/B experiment and hardly ever in the implicit all A or all B experiments.'

Explicit, implicit - almost as if A is not equal to B.

And some of this seems based on a (very) broad definition of experiment. When I put up an umbrella when going outside, is it an experiment to see whether I remain drier? And if I tell one person that opening an umbrella during a storm will keep them drier, and yet not mention that fact to another person, is that an A/B experiment on the effects of advice in opening umbrellas, even controlling for the fact that opening an umbrella in high winds is bad advice?

A good experiment would be deleting the comments of Clockwork and then checking how he feels about deplatforming. Just a thought.

I think that the issue is loss aversion being stronger than the desire to be a winner. Because in an A/B study, you might come out the "loser" because you got the treatment that doesn't work, and the benefit of getting the working treatment doesn't make up for that loss. I bet people would be ok with A/B testing if they were told one day you get A and another day you get B. People are jealous. That's human nature!

+1 This is an insightful idea and seems likely. Can you think of a way to test whether this is the mechanism?

Make a pain study for two placebos (Tylenol and Advil) and tell everyone that some days they will be getting the real new medication and some days they will be getting the placebo, and see people's propensity to participate and their happiness level. Then do the same study again and tell them they will be getting either the real medication or the placebo for the duration of the study (30 days), and again see their propensity to participate.

Checklists: There are two types of pilots: those who follow lists and those who don't. Which pilot would you fly with? I would prefer the list guy, but some may interpret the presence of the list as an indication that the list guy lacks experience (otherwise he would have the list memorized). Now think about personality types and the professions they are drawn to. The list guy is drawn to engineering, while the not-list guy is drawn to medicine. Maybe that explains why private aircraft are often referred to as doctor coffins.

I'm sure this all sounded brilliant in your head before you tried to type it, but I gotta tell you... No.

Call me after you have flown with a doctor pilot and let me know how it went.

It's about the implied level of respect towards the user/customer/patient, like those studies that said that people prefer confident doctors (even though confidence may not be correlated with better treatment). Confidently stating that A or B is better and providing it shows a higher level of respect towards the recipient - or so people feel. Running the experiment implies that the provider doesn't know which is better, doesn't care which the user gets, and unpleasantly equates them to "lab rats." The implication from users who don't like being the subjects of experiments is that you (the scientist) should've respected me enough to have run this experiment in the past on people I don't care about, in order to have had the "right" answer to apply to me today.

That is my intuition as well: people don't like to be commoditized. And--yes!--I understand it happens all the time, but it's particularly unpleasant to have the fact thrown in your face. Everybody dies, but no one likes to be reminded of the fact on a daily basis.

"Random experiments generate unease not because they are new, they are new because they generate unease."

Prof. Tabarrok, great aphorism =)

It's not only random experiments but all experiments. I work in an engineering context, and people react badly when they hear that a thing needs to be tested. The typical exchange is:

- So.....are you not sure that is going to work?
- We trust the design, but we need to test it for validation.
- So, are you sure or not?
- After validation

There should be an optimal way to handle these situations; I'm still learning.

When I worked in RDT&E, I witnessed the following exchange:

"How do we know this thing will perform as advertised?"
"We don't."
"Then..."
"If we knew for a fact how this would go, we'd be in production, not engineering development."

Alex, that was excellent, and I loved (as usual) your analysis. My worry is that when everything is done by A/B experiment then whims may determine outcomes. Think of politics. The way to win is to "swing the swing voters," and the way to do that is to test, test, and test until you find the message they respond to best, the one that gets your candidate the vote. Is this how we will form a representative government in the future? Is it any worse than the way we did it before? I don't know, but I think "testing everything" creates a new category of short-attention-span activities that may keep us in localized equilibria without being able to reach over the testing barrier to better states.

I wonder about an irrational insistence on ‘fairness’ of potential outcome. Do people get offended at the possibility that those who received the inferior side of the A/B trial “lost”? I know many folks who insist on the same treatment for everyone, even if it turns out to be an inferior treatment, rather than allow for the possibility that some benefit more than others due to “luckily” receiving the more beneficial of the treatments.

A patient must conclude that either: 1. the doctor does not know what is best; or 2. she does, and is OK ignoring it for the sake of 'science.' Neither is reassuring. And why should we look down on people who prefer confident doctors? Medicine is complicated, we have to trust other people, anxiety about getting the right treatment is real, and the pain of anxiety is real pain. The ethics of A/B testing in medicine are not a slam dunk; it is appropriate in particular circumstances, but only in those circumstances. As a patient, wariness is a rational response.

I think there's still some vague negative association left over from experiments done on unaware or captive populations. Some pioneering 19th-century gynecologist was just under fire in NYC for experimenting on slaves.
Not to mention even the perfectly reasonable use of lab animals -- no one wants to feel like Rattus norvegicus in some of these projects where the subjects are deliberately saddled with cancer, obesity or psychological trauma.

And then there's the most famous psych experiment of all time -- where people apparently were just fine with shocking the crap out of each other.

Casual punters, asked in a poll, answer such that they do no harm. They've deprived no one in the search for better answers.

This is why casual punters should not direct research.

I wonder if there is a relationship between this and unease about markets. In both cases, there's a sense that the other parties involved aren't necessarily motivated by optimizing the immediate welfare of all people involved - and that there's something not quite moral about that. Market outcomes are also about interacting with other people who are self-interested and then trusting that the effect is going to work in your own self-interest.

No, market outcomes are about interacting with other people who are self-interested and working out if the proposed transaction is going to work out in your own self-interest. Insisting that self-interest is bad is at the heart of Socialism, but then you end up with highly inefficient and mispriced products, and a cadre of special people whose job it is to make the whole thing work, and who end up (being, like everybody else, including me & you, self-interested) looking after their own interests with their dachas and special shops and the rest. It's all there.

Because "“Humankind cannot bear very much reality.”

I’d be curious to see the results if the word “experiment” was removed from the third prompt. I think it has a lot of negative connotations that are biasing the results.

Right, look at the horror and sci-fi genres, crammed with "experiments" gone terribly wrong. "Research" at least is a less loaded term.

Experiments do not make people uneasy if they are conducted correctly.

You do not even know there is an experiment being conducted or what it is about.

And, experiments conducted with Google Analytics or Facebook data are done without your awareness. If you work with people who do analytics or are in marketing, they do A/B tests all the time. Daily. You don't know it.

Next time you go to a grocery store and decide to buy an impulse item on the end cap display, you are participating in an experiment.

So would you rather that companies and organizations just make decisions without testing them out first? It's also an experiment, but with just one treatment condition. In the grocery store example, are you suggesting that they let you know that they decided to put more candy bars next to the register every time they do it?

Michael, I do not understand your comment. In the grocery store example, for example, marketers will use eye movement studies on consumer exploration on shelf and adjust accordingly.

I just don't get your comment. I am suggesting that companies should and do run experiments. Continuously.

Perhaps you didn't understand what I wrote.

That's the whole point, though. Yes, clearly those experiments occur all the time. It does not at all follow that people don't object to them. These results heavily suggest that people DO object to those experiments. They simply literally do not know about them. Once made aware, they may in fact object.

Now, why is this true? Who knows.

There was a dustup here recently over a gradual realization on the part of some parents that the 1:1 take-home iPads were having certain effects: frequent-enough incidents of viewing pornography and violent images (e.g. first graders watching an ISIS beheading), whether because school administrators can't figure out filters, or how to simply disable the gorram internet for littles, or because the kids can do whatever they want on their devices when they leave school grounds; the iPad having become not merely the instructor, and the work, but also the prize for finishing quickly, with predictable results; the iPad even having taken pre-eminence in art class over old-fashioned things made by hand. Not to mention some social media stuff I didn't follow, and something called Fortnite. There were plenty of laments of tech trouble with the older kids as well.

But the "ask" was simply that the district offer an iPad free option for K - 5.

No one approached it from the angle of whether it was feasible to return to old-school pedagogy, or to find teachers who would be able to get through the day without the iPad - nothing practical like that. And given that the district already has an opt-in pilot program for elem. kids to be educated - "immersed" - in the language of their parents' lawn crews, perhaps that's not a big hurdle.

While many people offered their own anecdotes, and said, let's get the damn screens out of elementary school (yours truly pointed to our actually being behindhand, trendwise, compared to our cooler older brother tech center in CA - where the elite have gone the other direction with tech and their kids) and a few imbeciles seemed to equate manipulating an iPad like a chimp with becoming "proficient with technology for the world they will live in," many others seemed mainly concerned at the idea of there being offered a tech-free or tech-lite option they themselves didn't want. Like, we must not only get the same thing, we must all want the same thing. As though there might be other values, and that in itself was threatening.

Prolly irrelevant to random assignment to an A/B experiment; but interesting to me that a massive social experiment - has there ever been such a one? - with no control would be more acceptable to many people than having a choice, which choice itself could be enlightening in a few years' time.

"Fine, you're the expert, so if you say A is best, we'll go with it."

"Fine, you're the expert, so if you say B is best, we'll go with it."

"Wait, if you don't know if A or B is best, then why are those the only two options? You'd better add my preferred policies C, D, and E to your list before I can support any experiment."

Could this also explain the federal government's scope creep? All parties object to a system where policy A exists in one jurisdiction and policy B in another, overwhelming the supporters/opponents of A/B in their respective locales?

I should say: "the tendency for government power to centralize". I don't think it's a uniquely American phenomenon, and indeed even crosses national boundaries with some regularity.

While there are probably many reasons why people feel unease about being "experimented on", one reason that surprisingly hasn't been mentioned here is that there are a number of famous instances in the past where there have been experiments conducted without participants' permission with seriously negative outcomes:

Withholding treatment for syphilis to monitor the effects - https://en.wikipedia.org/wiki/Tuskegee_syphilis_experiment
Spraying the population of California with bacteria to learn about the spread of biological weapons in a city environment - https://en.wikipedia.org/wiki/Operation_Sea-Spray

There are a number of other examples too, mostly from the 1950s and 1960s, before ethical review standards improved.

I wouldn't be surprised if people's unease in experiments correlated with their trust in public institutions.

"While there are probably many reasons why people feel unease about being "experimented on", one reason that surprisingly hasn't been mentioned here is that there are a number of famous instances in the past where there have been experiments conducted without participant's permission with seriously negative outcomes"

Very much this. Any discussion of experiments involving humans that doesn't address this isn't just inadequate, it's negligent. This hesitation isn't a bug, it's a feature we had to work long and hard to instill in people. It's GOOD that people react with a visceral anger to being experimented on, in exactly the same way it's good when people hesitate before pointing a gun at someone. Human experiments are incredibly dangerous, and require the utmost caution.

The fact that this isn't the immediate reaction of this blog--the authors or the participants--is deeply disturbing to me.

Some theories:

1. Creates unease with the idea that they (or someone) is being "used", in an instrumental way, towards someone else's ends. It feels inhuman, or at least, "uncaring".

2. Unease with the notion that the experimenter might learn something about the experimentee that the experimentee doesn't know about themselves... and this can be used against them.

I've encountered this phenomenon myself in a very different context. Some years ago I created a service that, before releasing to customers, some believed could result in negative externalities. In response, and naively, I labeled the prototype of the service an "experiment", which resulted in much *more* negativity towards the service by everyone involved.

Excellent question, but it has been answered—by Immanuel Kant in the second formulation of the Categorical Imperative in his *Groundwork for the Metaphysics of Morals*. (“Act so that you use humanity, as much in your own person as in the person of every other, always at the same time as an end and never merely as a means.” AK 4:429) What people object to is being used as a disposable tool in the discovery of medical information. If they consented to being part of a study, they would have a buy-in: insofar as the discovery of medical knowledge is one of their ends, the study would be achieving that and they would not be being used merely as a means. Whatever you think about the correctness of Kant's theory, it is hard to deny that he picked up on a very deep intuition about our attachment to dignity. (For the distinction between dignity and price, see AK 4:434.)

Maybe Kant was remembering the time Nollet wired up all those monks to the Leyden jar.

It was a period when the experiments were often of an electrical nature.

I think Agnes Callard's comments about means vs. ends and steven wolf's about winners and losers have some element of truth. But to me they seem to overlap with a more central issue, particularly in a medical context: When we seek the care of a doctor, we need their help, and we must trust them to get it. They have great power over us, and if they chose to abuse it we are helpless against them; thus the Hippocratic oath. If they treat us in ignorance of the effect of their treatment on us, not knowing whether it will help or harm, it is a betrayal of that trust. We want our doctors to always act in the positive belief that they are helping us. Anything else is betrayal. Thus the old crack about surgeons -- "Sometimes wrong but never in doubt."

I'd say you'd see the same effect in any circumstance in which the stakes are high and the subject must trust the experimenter to have their best interest at heart. Would you send your kids to a new type of school if you thought their teachers were indifferent to the method of teaching being used? Would you stake your kid's future on "well, might be better off this way, might be better off that way, don't know, don't care much, try what you like, good luck"?

Consider the specifics of the prompt itself. Both A and B are unobtrusive, and thus few people object to either -- and so one might think to oneself, "if each of these reasonable safety precautions is likely to have positive outcomes, doing only one or the other in order to see comparatively how many more people die is perverse."

The argument assumes the only way to do A vs B is at the same time.

"Surprisingly and depressingly, having a science degree does not solve the problem. In one sad experiment the authors run the test at an American HMO. Earlier surveys had found huge support for the idea that the HMO should engage in “continuous learning” and that “a learning health system is necessary to provide safe, effective, and beneficial patient-centered care”. Yet when push came to shove, exactly the same pattern of accepting A or B but not an A/B test was prevalent."

My HMO, Matthew Thornton Health Plan, run by doctors, did

Lots of patient education on their implicit A v B v C v D...

Took the existing standard as A, picked a new standard, B, told all patients why they chose B, then compared A to B.

Then based on results, picked a new standard of care C, explained the change to patients, continued study.

Specific example. For prostate cancer detection, the A was simple "palp".

The B was palp plus the new PSA giving a lot of weight to the PSA test with patient implicit consent.

Then as the HMO and other international studies found too many false positives, and economies of scale cut PSA test costs drastically, changed to C:

The primary test is the palp test, with PSA tests done annually as standard but effectively ignored unless levels change significantly, and the patient given results, with possible positive tests suggesting watchful waiting unless the patient wants more invasive testing.

The study of prostate care would have continued if Congress had not effectively made HMOs illegal in the US with "tax reform" requiring the IRS to tax imputed "insurer profits". How can you work out what part of an HMO is "insurance" and what part is "delivery" if billing does not occur between the two on a fee-for-service basis using standard codes? The standard codes are those developed for CMS, aka Medicare. This also hit the doctor and hospital prepaid health plans, Blue Shield and Blue Cross, each originally run by doctor and hospital coops. Both the Blues and HMOs were legally NOT insurance, but were regulated by State insurance departments under special laws for public corporations delivering health care, because insurance departments had auditors and actuaries.

And the tax reform created a B or C in regard to health care.

If no public policy on health care was A, say the situation circa 1910, then B was the Texas hospital creating prepaid care plans sold by deducting from employee wages, with implementation rolling out to most States, expanding to include doctors, and States passing laws to enable Blue Cross Blue Shield to operate. A parallel C was the employer-created HMO on the west coast to deal with poorer health workers during the war, e.g., Kaiser. In 1970, Nixon signed the law promoting HMOs by mandating all large employers offer any HMO in the area to employees along with BCBS or self-insurance.

In the 70s, insurers flipped their view, reflected by many conservatives today: "insurance should never pay expected medical expenses", thus creating option D.

So in the 70s, most States had expanding robust B, C, and D options.

Patients increasingly picked C (HMOs), pressuring B (the Blues), while D, the for-profit health insurance, kept failing with patients and with the public, who demanded laws contrary to the profits of the for-profits.

Thus lobbyists got Congress to create E, competing for-profit health insurers, while prohibiting tax-exempt not-for-profit prepaid care, B and C.

Option E has failed the test.

I think the unease generated by A/B testing is that it makes people feel like objects rather than sovereign individuals. You have the sense of a puppet master treating you as an interchangeable object of experimentation rather than a human. This strikes at our sense of selfhood, so we irrationally push back.

In all the controversy over the new Common Core curriculum earlier in this decade, I seemed to be the only one calling for testing it before implementing it broadly.

As far as I could tell, David Coleman made it up and persuaded Bill Gates that it would be good, but there was little research on how it worked in practice. Personally, I find both Coleman and Gates to be smart guys, and if the future of education came down to the hunches of two individuals, well, they were among the best two I could think of.

But still ...

On the other hand, nobody else seemed to share my moderate skeptical open-mindedness. Either they were enthusiasts for Common Core or root and branch rejectionists.

Alex-

As you write, many survey respondents' uneasiness with A/B tests is likely due to "prejudgments" that one of A or B (which they might accept as interventions in the absence of other options) is a better treatment than the other. This entirely reasonable feature of the responses explains every result of the paper -- including the case the authors use to control for it.

That case is the one you highlight in your post: a doctor prescribes all his patients "drug A" or "drug B," or decides to prescribe them to the patients randomly. Meyer et al. think (as do you, it seems) that in this case, "there is no reason for participants jointly evaluating the two policies in the A/B condition to think that A or B is better." But this confuses uncertainty in the distribution of drugs with uncertainty about the quality of the drugs: just because participants don't have information about the drug they're receiving does not mean the drugs are equally good. Individuals like the researchers conducting the trial may very well have antecedent reason to think one of the drugs is better than the other, e.g. if one was developed by a notoriously unsuccessful company, or approved by the FDA after a sketchy lobbying effort -- which would make the ethics of the trial significantly more complex. [To truly control for this issue of comparison, the authors would have had to stipulate in the drug case, perhaps ridiculously, that each group was actually prescribed exactly the same thing].

You abandoned your initial diagnosis of the survey results too quickly, in other words. There's nothing "depressing" about the data, either: randomized trials really can raise moral issues of this comparative sort that other interventions do not. I may have responded to the survey much as the other "ignorant" respondents did.

Somewhat related addition: It's also a mistake to assume that individuals are reliable assessors of their own moral judgments, as the paper often appears to. The "moral dumbfounding" literature begun by Haidt, among others, shows quite convincingly that many people can't reliably explain what makes them uneasy about actions they find immoral or inappropriate.

An experiment will be something you have to think about, and argue about, and will be misused by fools to argue for the wrong thing. That's a mental burden that a lot of people would rather not bear. "Let someone else do the experiment, and tell me when it's done", they think, "or better, don't tell me when it's done, just tell me if it found anything surprising -- or wait, better still, tell me only once there's a consensus that the surprising thing is actually correct and useful." It's a narrow point of view, but not a crazy one. Indeed, it bears a strong resemblance to the FDA's approval criteria for drugs. Not everybody is cut out for life at the leading edge.

Besides, ordinary people judge the word "experiment" by its fruits, and these days there are a lot of ill-conceived experiments whose fruits are poisonous. This badge versus poster experiment sounds like one of them; it sounds like a contest as to which will be easier for the staff to ignore -- and both will be very easy to ignore. With such a petty decision, the attitude "just choose one, dammit, and let us get to work ignoring it" is quite reasonable. Hospitals are serious places with serious problems, and interlopers who come in with badges and posters get about the level of respect they deserve, which is very little. If you're actually serious about infection safety precautions, you fire doctors who don't follow them.

In the case of Facebook, though, that organization has such a high reputation that the resistance to experiments is likely not from contempt for "experimenters" but from a fear that the organization will get to know them better than they know themselves. This, indeed, is part of Facebook's motivation; knowing someone's weaknesses is a great way to sell them advertising.

This article is deceiving. You implied that participants saw all 3 options and rated each for its level of inappropriateness. The study clearly says the participants were randomly assigned to read about ONE of the possible decisions and rate it.

Of course it’s obvious to you which is the best out of the 3. It’s not the same thing as just rating one in a vacuum. I would bet the results would look a little different had the experiment been conducted the way you think it was. Misrepresentation plain and simple.

This warrants an edit.
