Truth be told, physicists are terrified of quantum mechanics. Really. The rules of quantum calculation seem so strange that anyone afraid of losing his or her mind should be scared. (Those who love to lose their minds, on the other hand, adore it.)
Struggling to make the quantum rules square with a reality "out there," many physicists take the position "shut up and calculate." Others have abandoned standard logic, probability, or decision theory for "quantum" versions of these things, or have decided that consciousness must play a fundamental role. (There is even a quantum game theory.)
In eleven days I give my first talk at a physics department, on my conservative research program that tries to have it all: the quantum rules, a reality out there with no special role for consciousness, and keeping standard logic, probability, and decision theory. I’m not quite there yet, and I may be too close to my work to be objective, but I feel I’m very close.
Of course we can’t make all the quantum strangeness go away. For example, reality seems to be intrinsically non-local, and it seems to be far larger than we ever imagined. But the universe we are all familiar with now is far larger than our ancestors ever imagined, and even Newton gave up on locality.
Fear not the quantum night – it really will all make sense someday.
In the latest issue of Pacific-Basin Finance Journal, Jay Ritter looks at long-term changes in sixteen countries; he finds that
the cross-country correlation of real stock returns and per capita GDP growth over 1900–2002 is negative,
specifically -0.37, with a p-value of 0.16. For 19 nations from 1970 to 2002 the correlation is -0.08, and for 13 other nations from 1988 to 2002 the correlation is 0.02. These confirm previous similar results. He explains these results, saying,
If increases in capital and labor inputs go into new corporations, these do not boost the present value of dividends on existing corporations. Technological change does not increase profits unless firms have lasting monopolies, a condition that rarely occurs. Countries with high growth potential do not offer good equity investment opportunities unless valuations are low.
During the dotcom boom, I had doubts about the relation between the clear (if modest) long-term economic benefits of the web and the less clear profits to be gained by web first movers. (So I was mostly divested of stocks then.)
I’m teaching urban economics now for the first time. There is still a lot I don’t know, but it seems clear to me that there is one big overall market failure in urban economics, one that the textbooks don’t seem to make clear.
Land in populated areas is valuable mostly because other people live nearby: people with whom one can have social, job, and shopping relationships. While our neighbors often hurt us, their net (and marginal) effect is on average positive, and huge.
This externality, however, mainly comes from the people on nearby land, and not from their gardens. So when we consider how much land to use for our homes or offices, we do not consider the gains to others from our using less land, and so allowing more people to be nearby. We also neglect the benefits we provide others when choosing to live at the edge of the populated area, versus living in an unpopulated area.
These neglected benefits suggest a big market failure, wherein housing and office density, and the size of the populated areas, are too small. It might perhaps be countered by a status externality, wherein people gain status by living closer to an urban center. But I doubt status effects fully compensate.
Local governments are in a position to reduce this externality, but they seem to mostly make matters worse. Minimum lot sizes, maximum building heights, maximum densities, and barriers to development at the populated edge are far more common than their opposites.
Now why don’t the urban economics textbooks make this point clear?
A student and a colleague of mine, Colleen Berndt and Larry Iannaccone, have an interesting paper on the Oracle at Delphi. It turns out that the Oracle’s prophecies tended to be pretty accurate:
The long journey some made to reach Delphi, combined with the long waits for a consultation, indicate a greater opportunity for information to make its way to the priests and pythias. In addition to the information circulated at the local watering holes, the priestesses were able to aggregate information gleaned from petitioners.
On political subjects, it was especially important for Delphi to be independent of political influence:
When Delphi gained its independence from the Phocians, it began to benefit greatly from a perception of fairness. Lack of control by any one state meant that Delphi could operate free from any political pressure.
Since the Oracle charged by the question, it was important to think carefully about the questions one asked. This is also good advice for those creating prediction markets today. The cost of creating a market is largely independent of the topic, while the value varies greatly by topic. So for the best cost-benefit, ask the biggest questions.
Our ancestors thousands of years ago knew that if they really wanted to understand the heavens, they would have to sit down and carefully count some things. By a few centuries ago, such painstaking efforts had yielded an impressive understanding of dozens of other subjects. By the twentieth century, the virtues of counting to understand would seem to have long been established.
Ordinary people are far more interested in the social world around them than they are in most of the arcane topics to which counting was first applied. And yet, social science didn’t really start to count in earnest until the twentieth century. Why? Here are some possible theories:
1. We thought we already understood the social world as well as we needed.
2. Social science is just very hard – simple counting yields far fewer useful insights than in other fields. So social counting had to wait until we could do it on a massive scale.
3. The subject was taboo because we thought that a better social science would mainly just let some people take more advantage of others – there were few net benefits.
4. We held strong opinions on social topics, but at some level knew many of them to be false. Social science was taboo for fear of confronting our self-deceptions about the social world.
I lean toward #4. Comments are open.
A favorite (alas unpublished) theory paper of mine shows:
He who pays the piper calls the tune, but he can only successfully call for a tune that he will recognize upon hearing. … When experts must pay to acquire information, have no intrinsic interest in client topics, and can coordinate to acquire the same information, no expert ever pays to know more than any client will know when rewarding those experts.
So why should you ever believe what you read? Consider “Avoid bridge, construction delays.” The newspaper might fear that you will try the bridge, and think less of them if their forecast is bad. Or they might fear that your close friend will, and tell you.
How about “Michael Jackson arrested today”? Few readers would normally check this firsthand, but if a big story like this was wrong then a competing publication might make a stink, and then one of your friends might check it out. For the vast majority of media claims, however, there is little incentive to make a big stink, and few people who care would ever learn the truth given a stink. So if it takes the media much effort to learn the truth, why should they bother?
Media watchdog charities might claim to check for you, but why should you believe they share your interest in knowing the truth, instead of just wanting you to write them a check? Bloggers can also claim to help you check, but if those bloggers mainly care about attracting readers (an interest MR and others often admit), the same problem remains.
The only real solution I can see is to make better use of the people you have good reason to think really do share your strong interest in knowing the truth about some topic. So connected blogs by people who know each other in other ways may be the key. Perhaps such networks lower the cost of raising a stink to the people who care.
Of course it may also be that most of us do not really care about the truth; we might just want interesting new things to talk about with each other. Which might explain the otherwise-surprising lack of interest in this problem (which applies to academia at least as well as to the media).
We spend endless hours arguing who is right in current controversies, but minutes or less remembering who was right before. Oh we sometimes brag about selected cases, but we rarely collect systematic statistics. (Rare exceptions include weathermen, business analysts, and sports punters.)
Yet such track records are just what we need to figure out who is right today. You might think it enough to know which side is smarter or better informed. But a janitor can consistently beat his arrogant CEO, if the janitor is careful to only disagree on topics where he clearly knows more. When disputants are aware of each other’s opinions, it is those who better know when to defer and when to stand their ground that should be right more often.
Yes it would be hard to track and score everything everyone says, but we could do a lot more than we now do. Widespread idea futures or David Brin’s prediction registries could help us estimate which individuals tend to be right more often. And it should be even easier to evaluate standard demographic categories.
When a husband and wife disagree, who tends to be right? How about a parent and child, a student and teacher, a boss and employee, a liberal and conservative? For a few thousand dollars, we could bring dozens of such pairs into the lab, ask them various questions together, and see who is right when they disagree. Perhaps lab disputes differ from field disputes in unknown systematic ways, but it would be a great first step.
Perhaps even more useful, we could take a sample of real media disputes and see both who tends to take which side, and which side seemed more right in the end. I have just finished one such analysis, on the dispute over the policy analysis market (PAM), a.k.a. terrorism futures. Four readers rated 555 media articles on whether they gave favorable or unfavorable impressions of PAM, and these ratings were regressed on sixteen features of the articles, publications, and authors.
The result? Since five strong indicators of more informed articles agreed on a more favorable rating, the favorable position looks like the “right” one here. In the case of PAM, these groups were right more often: men, conservatives, web or broadcast media over print and books, and those who talked to people with firsthand knowledge, wrote longer articles, wrote news as opposed to editorials, and wrote for specialty publications with larger circulations and more awards.
Of course we need to look at more disputes to see which of these indicators holds more generally. But a few tens of thousands of dollars should pay for that. And with good indicators in hand, we could in real time predict which sides are probably right in current disputes. Wouldn’t that be something?
Intuitions are our least introspective belief components. We know the least about their origins, or how they would change if our other beliefs changed. Of course this does not make them wrong; since we are only consciously aware of a tiny fraction of what goes on in our minds, in a sense most belief is intuitive.
Alex reminds Tyler that initial moral intuitions are often contradictory, and therefore in error. We should thus “curve fit” around our initial intuitions to create a better estimate of moral truth. And the higher our error rate, the less influential each specific intuition should be. In this post, let me highlight a huge error source: cultural and genetic heritage.
Put yourself into the frame of mind of a reasonable creature of some indeterminate species and culture, before your culture or species arose. Did this creature have a reason to expect the moral intuitions arising in your culture or species to be closer to moral truth than intuitions in other random cultures or species? If not, then any such correspondence would be random luck.
We do not want to just hope that we happen to believe truth; we want to see that the process that produces our beliefs produces a correlation between our beliefs and the truth. So random influences on our beliefs are bad, inducing more error. Unless you can see a reason to have expected to be born into a culture or species with more accurate than average intuitions, you must expect your cultural or species specific intuitions to be random, and so not worth endorsing.
A similar argument suggests you reject ways that your intuitions differ from the average in your culture or species. If a neutral observer would have no good reason to think you special, then neither do you.
Once upon a time one’s social status was clearly signaled by so many things: fragile expensive clothes, skin not worn from work, accent, vocabulary, and so on. As many of these signals have weakened, one remains strong: tantrums.
CEOs throw more tantrums than mailboys. Similarly movie stars, sports stars, and politicians throw more tantrums than ordinary people in those industries. Also famous for their tantrums: spoiled young wives, bigshot patriarchs, elite travelers, and toddlers.
These patterns make sense: after all, beautiful young women and successful older men are at their peak of desirability to the opposite sex. If you are surprised that toddlers make the list, perhaps you should pay closer attention to the toddler-parent relation. Parents mostly serve toddlers, not the other way around.
Of course, like a swagger, the signal is not so much the tantrum itself as the fact that someone can get away with it.
Addendum: Todd Kendall has a data paper on this for NBA players.
A July 30 New Scientist article (sub. req.) on lying reports:
A succession of studies using tests like this have shown that most of us are not very good at spotting if someone is lying. Even people whose job it is to detect deception – police officers, FBI agents, therapists, judges, customs officers, and so on – perform, on average, little better than if they had taken a guess. … But a few people seem to be the exceptions that prove the rule. … In a range of studies that totalled about 14,000 people, … The researchers identified 29 “wizards” of deception detection, who are now the subject of intensive study … One of the studies, published last year, investigated women’s skills at detecting men who were pretending to have appealing attributes … a man claiming he owned the Ferrari outside, rather than admitting he had borrowed it from a friend for the night. … single women seemed to be better at detecting men who were faking good than those who were in a committed relationship. “Women have a kind of radar for deception in men, which they switch on or off, depending on the context.”
So sometimes we are bad at detecting lies because that serves our interests. Tyler taught me the centrality of self-deception in human affairs, and so I wonder: could our need to be good at believing lies explain why we are surprisingly bad at detecting lies? Are those wizards of lie detection the vanguard of a future humanity, or do they pay a high price in their relationships, finding it hard to support the lies that fill daily life?
In Wired, Kevin Kelly describes the colorful web pioneer Ted Nelson:
Computing pioneer Vannevar Bush outlined the Web’s core idea –
hyperlinked pages – in 1945, but the first person to try to build out
the concept was a freethinker named Ted Nelson who envisioned his own
scheme in 1965. However, he had little success connecting digital bits
on a useful scale, and his efforts were known only to an isolated group
of disciples. Few of the hackers writing code for the emerging Web in
the 1990s knew about Nelson or his hyperlinked dream machine.
In 1984 I quit U. Chicago physics grad school to join the unpaid fringe of Nelson’s group. (Also to pursue A.I., but that’s another story.) I met Nelson a few times, but mostly spent untold hours talking with the brilliant crowd hanging around his Xanadu project.
During those years (through 1993) I learned that with some effort one can discern a substantially clearer outline of the future than is found in Sunday supplement punditry or even conservative academic commentary. And one can even have substantial influences on key changes. We were way ahead of the curve on the web, nanotech, and much more.
But I also learned why this is possible – such insight doesn’t produce much compensation or recognition. Those who made money and fame on the web were at very specific places and times with just the right skills and resources; foreseeing the general outlines of the web meant rather little. Let this be both an encouragement and a warning to those misspending their youth today. 🙂
Of course if we had enough prediction markets about such things, such insight might both be rewarded and better guide the actions of others.
Thanks to Chris F. Masse for the pointer.
The Washington Post reports a record pork feast today:
President Bush signed into law a massive $286.4 billion transportation
bill Wednesday that includes more than 6,000 pet projects of lawmakers
across the country that range from a crucial parkway linking two
interstates in Illinois to a snowmobile trail in Vermont. … Keith Ashdown … said the distribution of the money “is based far more on political clout than on transportation need.”
Such waste may seem inevitable; how else can congressfolk claim credit for “bringing home the bacon” to their district? But consider this alternative:
Allow federal tax rates to vary by congressional district. Given this, taxes would suddenly become a concentrated benefit. Incumbents could brag about how much lower taxes were in their district, and challengers could complain how high they were. Incumbents would have clear incentives to trade votes to get taxes lowered in their district.
With “diet pork” on the menu, politicians would have a healthier way to feed their need for concentrated benefits. (I’m leaving comments on for a change.)
My colleague Bryan Caplan has emphasized for years that people treat politics differently from other topics. This has seemed to me a deep insight, and I’ve long puzzled over it. Wouldn’t you know it, Plato noticed the same thing (Protagoras, translated by Benjamin Jowett):
Now I observe that when we are met together in the assembly, and the matter in hand relates to building, the builders are summoned as advisers; … And if some person offers to give them advice who is not supposed by them to have any skill in the art, even though he be good-looking, and rich, and noble, they will not listen to him, but laugh … But when the question is an affair of state, then everybody is free to have a say–carpenter, tinker, … and no one reproaches him, as in the former case, with not having learned, and having no teacher, and yet giving advice; evidently because they are under the impression that this sort of knowledge cannot be taught….
Our human willingness to have confident opinions on topics where we are poorly informed seems to me a key problem in politics.
Thermodynamics lets us make engines, refrigerators and much more. But why does it work? The usual answer is that physical changes are deterministic (i.e., one-to-one), and the early universe was highly ordered (i.e., flat). But why was the early universe so ordered? Various new fundamental principles have been proposed to explain early order, but so far these have not been fruitful.
A century ago Boltzmann suggested that the order we see (billions of light years of flat space) is a rare random fluctuation in a much larger universe. One might hope that observer selection could explain why we see such a rare event; only if there is a big fluctuation can there be observers to see it. But observer selection predicts a fluctuation just big enough to make one observer. This is the “Boltzmann’s brain paradox”: the order we see is much larger than is needed to explain just your brain.
Andreas Albrecht explains that while technical problems remain, it now seems hopeful that inflation is the missing key here (along with assuming the universe is large). Since early order is required to create inflation, inflation cannot by itself explain the order we see. But inflation can eliminate the difference between brain-sized and visible-universe-sized fluctuations. A fluctuation that creates inflation is more likely than one that just makes a brain, and any fluctuation big enough to make inflation creates order on the scale we see.
Perhaps we now need only ask: why is the universe so big?
Returning home victorious from Gibraltar after skirmishes with the French … the English fleet … discovered to their horror that they had misgauged their longitude … the Scillies became the unmarked tombstones for two thousand of Sir Clowdisley’s troops. [Admiral Sir Clowdisley] had been approached by a sailor, … who claimed to have kept his own reckoning of the fleet’s location during the whole cloudy passage. Such subversive navigation by an inferior was forbidden in the Royal Navy, as the unnamed seaman well knew. However, the danger appeared so enormous, by his calculations, that he risked his neck to make his concerns known to the officers. Admiral Shovell had the man hanged for mutiny on the spot. … In literally hundreds of instances, a vessel’s ignorance of her longitude led swiftly to her destruction.
Even though shipmates had a strong common interest in knowing their longitude, other social incentives apparently prevented them from sharing their information. As a consultant on the use of prediction markets within organizations, I’ve also noticed that managers are often surprisingly uninterested in the prospect of more accurate forecasts and more informed decisions. Could these phenomena have similar explanations?