What to make of Robert Aumann’s “agreement theorem”?

Here is my 2004 paper with Robin, if you need the background, or try this recent Scott Aaronson post.  Recently I was sent a question somewhat along these lines:

If someone blithely continues to disagree with their (apparent) epistemic peers, how much should we downgrade the rationality and/or intelligence and/or integrity of that person?  My answer was:

We can take a dimmer view of them, and should, but also have to take a dimmer view of ourselves, I think. I don’t think the “they” get downgraded relative to “us.”

…let’s say we agree with it [Aumann’s construction] completely. Then it would be true and non-operationalizable, keeping in mind that the smartest people I know — by whatever metric — do not themselves internalize the argument. There is some kind of semi-rational back and forth estimation process, where in part you judge [peer] expertise by the actual views people hold, and iterate accordingly. There is probably no fixed point theorem here, and no single obviously best way to proceed. Maybe we should downgrade those who do not know that. But I don’t know by how much. Maybe not by a lot, since knowing all those complications doesn’t improve one’s own rationality by a whole lot, as far as I can tell.

With a bit more thought, I have come up with a further and more specific answer.

Let’s say you are staying at a hotel, and everyone agrees that the hotel offers room service.  There is also a very good restaurant one hour away, but people strongly disagree on how to find the place.  Half of the people think the restaurant is to the West, and you strongly agree with this group; the other half strongly believe the restaurant is to the East.  If you choose the wrong direction, you will have wasted two hours driving and will have to settle for the room service in any case.

If you buy into Aumann, you should be more likely to start with the room service, even though you strongly believe the restaurant is to the West.
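The trade-off can be made concrete with a toy expected-utility calculation. All payoffs below are invented for illustration (nothing here is from Aumann's paper): the point is simply that updating toward the evenly split peer group can flip the decision from driving to staying in.

```python
# Toy model: Aumann-style updating against an evenly split peer group pulls
# your belief that the restaurant is West down toward 0.5, and the gamble of
# driving stops beating the sure thing of room service.
# All utilities and probabilities are made up for illustration.

U_ROOM_SERVICE = 5    # sure payoff of staying in
U_RESTAURANT = 10     # payoff if you drive the right way
U_WRONG_WAY = -1      # two wasted hours, then room service anyway

def value_of_driving(p_west: float) -> float:
    """Expected utility of driving West, given belief p_west."""
    return p_west * U_RESTAURANT + (1 - p_west) * U_WRONG_WAY

print(value_of_driving(0.9))   # private belief: driving West looks good (8.9 > 5)
print(value_of_driving(0.5))   # post-Aumann belief: room service wins (4.5 < 5)
```

With the private belief of 0.9, driving dominates; after deferring to the split peer group, the sure room service does.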

You will note that this is a purely self-regarding choice.  For choices in that category, accepting Aumann means you should be more willing to focus on what everyone agrees is good, possible, beneficial, etc. — you might call this common sense morality.

Alternatively, let’s say it is a choice for all of society, and many other people are pitching in their efforts to some kind of common enterprise — let’s call it politics.

You then have to ask what kind of stupidity you most expect from the contributing others.  If the relevant bias is excess conformism, I see no special reason to take greater care to converge upon what others think is best.  In fact, there might be external benefits from doubling down on your own stubbornness.  You might be wrong a lot of the time, but it will still be truly rare that lots of people are really quite right, and it is important that your voice shine through and be heard in those cases.

So in a nutshell, the implications of Aumann are “common sense morality for yourself, but political orneriness remains on the table.”

Comments

I must be young because I had to stop and think how it would be possible to disagree on which direction a restaurant is in. Is this that singularity thing people used to talk about?

It is deeper than that. The restaurant has an actual location, and your agreeing with one group or another does not change that location in the least.

This suggests, at a minimum, at least trying to find someone who has been to that restaurant before bothering with room service. Because 'think' and 'believe' are just signs that people don't actually know the location.

"at least trying to find someone who has been to that restaurant before"
This is a pretty good answer. As far as the political analogy goes: History repeats itself, some of us have been here (or there) before. So we know where the restaurant is, or where it definitely isn't.

Nah. I'm a lot older than TC and I had to realize that his 'world' required no telephone service, no internet, and an unexplained preference for very good but remote restaurant fare over local. TC apparently isn't green, since just the cost in CO2 to get there and back would dissuade most people who took GCC seriously. I really don't understand his analogy: if he believes "strongly" that it's West and assuming the issue is time sensitive (why start in a hotel? why's that preferable over starting at home? to suggest that the locale isn't familiar? but if it isn't familiar, then a person possessing a "strong" belief isn't MY definition of rational), then what purpose is served in debating E vs. W?

Sorry, meant to respond to Crikey. I do not 'get' these lines. New look for site sucks, imho. Very poorly thought out. A graphic designer who came up with using an unmoored line to designate dependency ought to find a profession better suited to his/her talents.

We can posit a solar powered electric car. Acceleration might be a bit jerky though, what with the apparent lack of electronics in this world.

Li has a good point about the "lines".

• The lines are static, dead, and take up too much screen space.
• Perhaps clicking on a line could cause something useful to happen, e.g., scrolling up or down to the first or last comment at that level.
• A comment could have a small icon; clicking on it could do something useful, perhaps collapsing all sub-comments.
• Neither news.slashdot.org nor english.stackexchange.com is perfect, but they illustrate some good ideas.

• Furthermore, this comment is so close to the right margin that I must reformat (inserting line breaks) to be able to read what I'm typing. (Worse than the glass teletypes of yore.)

Also on a cell phone, too many replies results in a post that is a row of single characters down the right hand side of the screen.

The way I read the analogy is "East vs. West" is "Left vs. Right", or maybe even just "what is the best way to get from where we are now, to where we all agree we want to be, politically", the hotel is the current state of affairs and room service is the status quo. It is presumed that room service is dissatisfying for varying reasons, and we want to change that state of affairs to get to a certain destination.
In this analogy, we all agree on the destination, but not on how to get to it, although one could possibly argue that maybe we don't all agree on the destination.
Maybe people are just confusing two entirely different restaurants with the same place, or maybe some of them have a different restaurant in mind and are hoping the group will settle for their preferred restaurant when they find out they went the wrong direction.

Aaronson's blogpost is an interesting commentary on it. There is a problem with the agreement theorem, but it is indirect. If I'm not mistaken, Aumann's agreement theorem postulates rational thinking. And the problem with this is that most thinking is not rational. Aaronson's blog post gets to this when he discusses Simler's book, "The Elephant in the Brain". Quote: "The book's basic thesis is that a huge fraction of human behavior, possibly the majority of it, is less about its ostensible purpose than about signalling what kind of people we are". Which sums it up pretty well. It's not that people agree or disagree about a rational conclusion - it is that they make assumptions along the way, and insert those into the rational chain.

According to my complexity-theoretic refinement of Aumann’s agreement theorem, which I later published in STOC’2005, two Bayesian agents with a common prior can ensure that they agree to within ±ε about the value of a [0,1]-valued random variable, with probability at least 1-δ over their shared prior, by exchanging only O(1/(δε²)) bits of information—completely independent of how much knowledge the agents have.
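The convergence process underlying results like this can be simulated. What follows is not Aaronson's protocol but a toy implementation of the classic Geanakoplos–Polemarchakis "dialogue": two agents with a common uniform prior and different private partitions alternately announce their conditional expectations, each refining their information by what the other's announcement reveals, until the announcements coincide. The worlds, partitions, and payoff function in the example are invented for illustration.

```python
from fractions import Fraction

def expectation(f, cell):
    """Conditional expectation of f given the worlds in `cell` (uniform prior)."""
    return Fraction(sum(f[w] for w in cell), len(cell))

def refine(listener, announcer, f):
    """Split each of the listener's cells by what the announcer would have announced."""
    refined = []
    for cell in listener:
        groups = {}
        for w in cell:
            a_cell = next(c for c in announcer if w in c)
            groups.setdefault(expectation(f, a_cell), []).append(w)
        refined.extend(groups.values())
    return refined

def dialogue(P1, P2, f, true_world, max_rounds=20):
    """Alternate announcements until the two conditional expectations agree."""
    for _ in range(max_rounds):
        e1 = expectation(f, next(c for c in P1 if true_world in c))
        P2 = refine(P2, P1, f)   # agent 2 learns from agent 1's announcement
        e2 = expectation(f, next(c for c in P2 if true_world in c))
        P1 = refine(P1, P2, f)   # agent 1 learns from agent 2's announcement
        if e1 == e2:
            return e1
    return None

# Example: nine equally likely worlds, f(w) = w, two different private partitions.
f = list(range(9))
P1 = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
P2 = [[0, 1, 2, 3], [4, 5, 6, 7], [8]]
print(dialogue(P1, P2, f, true_world=3))   # prints 3
```

In the example the agents start with estimates 4 and 3, but agent 2's announcement reveals that the true world must be 3, and both converge there after two rounds, as the theorem requires.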

I am not sure whether this enormous effort at proving what anyone's grandmother could have told them is funny, tragic or praiseworthy.

But I did love Hanson's claim that the discounted value of the future environment is zero.

Apart from that, he considers the hysteria of the internet too much. The redistribution of sex issue is perfectly reasonable - and as he points out, no one has problems with talking about the problems of the fat, disabled, wheelchair-bound ethnic minority who can't find love. Or even the fat Gay Asian who isn't as popular on Grindr as he would like. Even acknowledging their outrage is to give them power. The only response Hanson should have offered involves the middle digit of the right hand.

The Bayesian refinement is an important one. In plain English: one's belief regarding the location of the restaurant (East vs. West) is not an all-or-nothing affair. Instead, one might believe there is a .6 or .3 or whatever probability (depending on how strong or weak one's subjective belief is) that the restaurant is located west or east ...
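The commenter's graded-belief point can be sketched numerically. In this toy model (the reliability parameter q and all the counts are invented, not taken from any of the cited work), each peer's stated opinion is treated as an independent signal that is correct with probability q, and a prior belief that the restaurant is to the West is updated by Bayes' rule:

```python
# Toy Bayes sketch (all numbers invented): each peer's opinion is an
# independent signal, correct with probability q, about East vs. West.
def posterior_west(prior, n_west, n_east, q=0.7):
    like_west = q ** n_west * (1 - q) ** n_east   # P(observed opinions | West)
    like_east = q ** n_east * (1 - q) ** n_west   # P(observed opinions | East)
    num = prior * like_west
    return num / (num + (1 - prior) * like_east)

print(posterior_west(0.6, 5, 5))   # evenly split peers leave the prior unchanged
print(posterior_west(0.6, 3, 7))   # a lopsided split overwhelms the prior
```

Under these assumptions an evenly split peer group cancels itself out and returns the prior, while a lopsided split drags even a confident prior below 0.5.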

Corollary of Bayesian Probability: “… any probability is necessarily an expression of a personal judgment and is therefore necessarily *subjective* in the sense that two reasonable men may assign different probabilities to the same event.” (Schlaifer (1959), p. 15.) In other words, the subjective nature of probability explains why disagreement persists, even among epistemic peers.

" I think. I don’t think .... With a bit more thought, ..."

Wow -- what a muddled post & disjointed thought process. Clear as mud. Epistemology is difficult, apparently.

Here's an interesting essay on the Intellectual Dark Web: https://www.nytimes.com/2018/05/08/opinion/intellectual-dark-web.html The author of the essay is sympathetic with the IDW, but faults them for not being more selective in who is allowed in as a member. I've been sympathetic to contrarians since Michael Kinsley; indeed, Kinsley made a career of being a contrarian. But Kinsley's audience was small and anything but influential. Today, contrarians have huge followings, mostly due to the web, but also due to the times. I sometimes chide the right for hating everyone, but that's a problem on both ends of the spectrum. And that's the problem, whether members of the IDW or social justice warriors on campus. Cowen is a contrarian, although not on the level of Hanson, who often seems to relish the role of provocateur. Cowen the contrarian, on the other hand, challenges his readers and listeners to think critically and not just accept conventional wisdom or the prevailing view. There's an enormous (intellectual) difference between Cowen and a provocateur. I have no answer to the current tribalism and the hostility it provokes. But whatever the cure it won't be found in bands of like-minded ideologues. [I will point out that sectarianism and intolerance of the other played a huge part in the development of Christianity, when the penalty for heresy was far greater than losing a few friends. I mention this just to put today's sectarianism and intolerance in perspective.]

Here is a review of a new biography of Netanyahu: https://www.nytimes.com/2018/05/07/books/review/bibi-benjamin-netanyahu-anshel-pfeffer.html Of course, Bibi reflects the times, tribalism, and social and political division, that has spread throughout the West, which, ironically, makes the West look more like the chaotic middle east.

Disagreeables thrive on disagreement. That explains why disagreeables seldom if ever propose solutions to disagreements: if the disagreements were resolved what would the disagreeables do? In Bibi's case, if there were a solution to the Palestinian issue, Bibi's political career would be over. Consider the social provocateurs, right and left, who, absent the tribalism on which they thrive, would return to obscurity. The web is the platform for disagreeables to attain status otherwise way beyond their reach. The success of the disagreeables actually promotes more tribalism.

Why are you talking to yourself? No one else would?

Each of us is a collection of experiences: identity, childhood, schooling, relationships, and area of expertise. Our interactions are clouded with memory and we rely on intuition to guide us. If I submerge myself in a particular ideology, my perspective is shaped by that experience (e.g. Fox News or CNN). I become a more vehement cheerleader for a team: the wisdom of a crowd lacking diversity of thought. We are conditioned to dehumanize the other. When I step out of that bubble I recognize that my orthodoxy blinds me to the value of compassion and generosity. Not the condescension of "bless their heart" but the wisdom of sympathy.

I once asked my mother "What is wrong with that woman? Is she crazy?" (Said woman was driving 30mph on I-75 through Knoxville while everyone else was zipping around her at 80mph.) And my mother replied, "We never know what someone is going through. Maybe she is grieving or maybe she is just having a bad day." Wise words.

"Does anyone bathe in a mighty little time? Don't say that he does it ill, but in a mighty little time. Does anyone drink a great quantity of wine? Don't say that he does ill, but that he drinks a great quantity. For, unless you perfectly understand the principle from which anyone acts, how should you know if he acts ill? Thus you will not run the hazard of assenting to any appearances but such as you fully comprehend." - Epictetus

"Maybe she is grieving or maybe she is just having a bad day." Maybe she is going to get somebody killed.

Or maybe just look the place up on Google Maps? :)

Do you have Google Maps in America, too? After reading the post, I thought you hadn't. Maybe they don't have it in Virginia.

Economics and science command that one buy an assault rifle (12 ga. pump shotgun, heavy caliber pistol, claymore mine) and seriously know how to use it.

I liked the Bible quote. People choose death and evil. Act accordingly.

Maybe we should just agree to disagree.

Another great example of TC's utter inability (apparently, although perhaps his disregard is a ploy to increase blog traffic) to define terms. If A & B have a disagreement over the facts, it is one thing. If A & B have a disagreement over the interpretation of the facts, or their implications, then it is something quite different. It is quite rare that two rational agents will (a) agree on what the basic relevant facts are and (b) strongly disagree about their implications. I am not aware of many (any?) examples of it, are you? (Point being, invariably the set of relevant facts differs between two rational agents who disagree.)

I'd say that there are way too many moving parts for that theorem to hold, as the commenters above point out. Present-day people arguing about morality, say, could have many unexamined assumptions about what human welfare is like. Today that would be Aristotelian virtue-based ethics versus Epicurean utilitarian-based ethics, and since they differ on this without realizing it, they get radically different answers to some moral questions, and there is no real point in getting too exercised about it.

There is one exception to this, and that is math. That theorem isn't a theorem in math; it's an axiom, usually unstated. Euclid doesn't include it in his list of axioms, but one has to take it as given that everyone reading his stated axioms reads them in exactly the same way.

I’d say that it’s pretty boring and uninteresting as to why it doesn’t work in everything except math, it’s much more interesting as to why it does work with regards to math.

Just to be clearer, I'd say that theorem is "why shouldn't we think about things that aren't math like we think about math". The obvious answer is that it's practically impossible, and probably theoretically impossible too. So how can we think about math the way we think about math?

Comments for this post are closed