by Tyler Cowen
on June 26, 2012 at 7:34 am
in Economics, History, Philosophy
What would we do if it turned out there were more black swans than we had thought? What should we do?
You can view the talk here, given at the Legatum Institute in London not too long ago.
I told you. .22 short.
Thanks Tyler. Great post here. This is MR at its best – original, eclectic and very thought provoking.
I agree with the conclusion that we ought to study history more but I fear the only good reason might be that it is so interesting. Studying history to avoid repeating it is a lot like studying weather to avoid repeating it.
I dunno, it seems like human beings in general are making a lot fewer bad decisions today than they did 300 years ago. China is a good example.
This is a good analogy, I think. Golf tournaments will look at yearly weather patterns before deciding when to hold a tournament. It's no guarantee, but you increase the chance of hitting good weather.
Does studying history offer us at least a guide as to what to expect based on current events?
“What would we do if it turned out there were more black swans than we had thought?”
Find a different color swan to pick on?
Would a single sentence explaining what a black swan is have been too much to ask?
Black swan: “When some guy makes a statement like ‘there are no black swans’ and then we blame the swan for being.”
Black swan: a highly improbable and difficult to predict event, e.g. the Lehman crash.
With the internet in front of me I can find an explanation of pretty much any topic. If I’m sitting in a luncheon hearing Tyler speak, that is more difficult.
I watched for 5 minutes but couldn’t figure out the definition from context so I searched Google. I’m sure Tyler’s audience either understood what he meant because they’re all really smart or they just pulled out their iPhones.
Okay, I finished the video and my complaint is addressed at about 10:25.
Not to be pedantic, but the Lehman crash was a black swan only to the turkeys. There are a number of people who predicted the collapse.
It is more an event that challenges the common assumptions of how things are.
I think the real question is, when do you know it’s a black swan? if a number of people predicted it, how far in advance did they do so? If the Fed had stepped in, which they contemplated doing right up until the last minute, would it not have been a black swan?
On any given day, there are tens of people predicting the end of the world – and that’s in Times Square alone. Eventually they’ll be correct.
The Lehman crash was a highly improbable event – for many reasons.
If we assume that the color of swans is normally distributed statistically, then the mean color is “white” (or basically white) and the standard deviation is very low. We would then assume that there are no black swans, because that would be a large number of standard deviations from the mean. Certainly if your business depended on the color of random swans you encountered, you would calculate the probability of encountering a black swan as being microscopic, ignore it, and organize your business accordingly. Maybe you would prepare for a grey swan now and again, but not black.
But the color of swans is not normally distributed. There are in fact far more black swans than the normal distribution would predict, so you will encounter them far more often than you assume, so your business is taking a much bigger risk than you had thought.
Replace “color of swans” with “liquidity in the financial markets” or “returns on mortgage-backed securities” or the like and you have the mindset in many financial institutions leading up to the crisis.
“But the color of swans is not normally distributed. There are in fact far more black swans than the normal distribution would predict..”
Yup. Cauchy distribution seems a more accurate fit: smaller central hump and thicker tails (meaning more black swans).
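The contrast above can be illustrated numerically. Here is a minimal, hypothetical sketch (not from the original discussion) comparing how often "black swan" draws beyond a 5-sigma-style threshold occur under a normal distribution versus a fat-tailed Cauchy distribution:

```python
# Compare the frequency of extreme draws under a normal distribution
# versus a Cauchy distribution with the same number of samples.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

normal_draws = rng.standard_normal(n)
cauchy_draws = rng.standard_cauchy(n)

threshold = 5.0  # a "5-sigma" event under the normal model

# Count draws whose absolute value exceeds the threshold.
normal_extremes = int(np.sum(np.abs(normal_draws) > threshold))
cauchy_extremes = int(np.sum(np.abs(cauchy_draws) > threshold))

print(f"Normal draws beyond +/-{threshold}: {normal_extremes}")
print(f"Cauchy draws beyond +/-{threshold}: {cauchy_extremes}")
```

Under the normal model such events should appear roughly once per two million draws, while under the Cauchy model they appear in over a tenth of all draws, which is the "thicker tails" point in the comment above.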
Ah yes, the reason that there is a black swan when no one had seen them before is not because they are so rare, but because once you venture far enough, there’s a whole species of them.
Yes, as an NZer I found the book very confusing. I kept thinking “But there’s heaps of black swans”. It took me a while to realise that they are pretty uncommon in the Northern Hemisphere.
Correct, there are no black swans endemic to Europe, and so the Europeans had a saying that there were no black swans, period. Then they traveled south, and lo and behold, black swans!
There are two lessons in my opinion. First, just because it’s true here, doesn’t mean it’s true when you get far enough away from the conditions under which your model was developed–whether we’re talking physical distance or economic structures. Second, a black swan event doesn’t normally indicate that there is a single fluke–it means that it’s likely quite common under these different conditions, just as travel to NZ means that you see black swans all the time.
Moral of the story: Never lose sight of your initial conditions, and know when you’re straying from them and try to understand what that means for what you can expect.
Which is why Australia never went into recession following the Global Financial Crisis.
@ rkw – from the youtube notes under the video: “. . . Professor Tyler Cowen presented his take on the theory of black swan events, the notion of seemingly surprising and unforeseeable major impact events that are then rationalized with the benefit of hindsight. . . .” However, the audience in this case could be reasonably assumed to know what a black swan is, and Tyler did make that assumption.
Personally, I find this particular phrasing, as a definition of black swans, lacking, even obnoxious. You see, a black swan is supposed to be something unforeseeable, not something unforeseen. Using it to cover something like our recent economic crash is excusing those who SHOULD have foreseen the consequences of their actions and their prognostications. A bit like the weatherman who has access to the weather radar and developing patterns in the Atlantic, but fails to notice the hurricane.
Spot on: the recent great financial crash was foreseen (by a few) and therefore wasn’t unforeseeable.
What next: a greater deflation followed, at some point, by inflation as government actions over-egg the pudding?
One class of black swans consists of largely predictable events for which society was generally unprepared, either because theory was poor or because there was a lack of appetite for dealing with the issue. The collapse of the Euro Zone might be such an event; the housing bubble was another.
Unforeseen events are black swans by their nature but, from what I remember of Taleb’s book, any probabilistic event that imposes costs that cannot have a probability distribution associated with them is a black swan.
An economic crash of some sort was not unforeseeable. However, the costs associated with that crash could not have been predicted in advance. Some predicted huge costs associated with losses in subprime, others predicted less, but these predictions all depended on messy things like expectations and how politicians and central bankers would react and on highly complex interactions in financial markets.
Things could have been closer to the Great Depression than they were. Does anyone have a scientific way of calculating the ex ante probability of another Great Depression-like slump? Probably not.
Appoint a presidential commission on swan color, to report back after the election.
An interesting historical question is whether systemic blow-ups (i.e. Black Swans) are inherently "bad" considering the alternatives. Other things equal, blowing up is worse than not blowing up, particularly for the current citizens. However, could it be that occasional black swans that destroy the system, or parts of the system, are good? Imagine a world with tight governors on risk-taking that limit upside and downside to an optimum interval. Is that world better in any sense (better returns – unlikely; a better society; etc.) than one with black swans? The assumption that black swans are undesirable should itself be questioned. Is there an element of systemic creative destruction?
I wonder what Taleb would think of this, given his personal animosity toward Tyler.
No. Tyler is talking about actual black swans. The bird. Wings. Feathers. The whole bit.
Nah, he’s talking about Taleb’s swans. Taleb has beef with Cowen. They have not squashed the beef.
Learn more history? Yes, human nature remains pretty constant – but each specific crisis is unique and could only have happened when it did. It is far more difficult to apply a dispassionate theory of human behavior to ever-changing and always-new environments, especially when you happen to live in that environment. This is the difficulty of the social sciences, and especially of economics.
Commenting on the pessimism about the ability of laws to eliminate tail risk using options, in light of Arrow-Hahn and the replication of derivatives using dynamic trading:
I’m not so sure. You can actually do this rather well, if there are rules about trading that make dynamic trading more transparently impossible. Right now, dynamic trading requires liquidity, and when things change abruptly (tail risk) then these dynamic option-like securities stop working. Without getting into what that means about risk, it can be stopped easily; most of the proposals for eliminating HFT would do it by default, by imposing some type of limitations on dynamic transactions.
Does this make sense? Am I missing something?
Your talk is great and it addresses a specific dimension of Black Swans, but if I may take a step back and write something about Black Swans more generally.
What troubles me about the recent interest in this notion is that epistemologists have known, thought, and dissected the Black Swan issue many times in many different contexts over the past 2 centuries, both implicitly and explicitly. In fact, Black Swans are a logical component of many epistemological theories, rather than a central idea. Familiarity with these theories therefore aids in understanding the issue holistically rather than idiosyncratically, merely in the context of economics or in politics or in management, etc.
Also, epistemologists have developed the Black Swan concept into something much more complex and rigorously defined than anything we’ve hitherto seen in management literature, economics, etc. In the writings of Imre Lakatos, Karl Popper, Hilary Putnam, WVO Quine, and even moral philosophers like Ronald Dworkin you can see that Black Swans have figured as an implicit element of the theory of human knowledge for a long, long time.
But because philosophy and epistemology in particular have been so marginalized in recent decades, society had to wait until Nassim Taleb and a massive financial crisis brought the issue up.
In “Proofs and Refutations”, for example, Imre Lakatos implicitly shows that Black Swans exist even in mathematical theory, in the form of definitions and objects that do not follow directly from existing theory but can appear suddenly and completely ruin some seemingly strong conjecture when some small but crucial axiom is altered.
Karl Popper’s theory of verisimilitude very clearly states (and rigorously shows) that if human knowledge is an expanding filtration of theoretical statements, then you can never say that your knowledge of the world captures all statements needed to describe it. The Black Swan concept is very obviously derived from this theory and looks like a very basic, elementary fact rather than something that requires a financial crisis for people to finally see.
The real tragedy of the Black Swan is that people looked at theorists like Popper, Putnam, Lakatos, etc. as marginal thinkers that were irrelevant in our modern computational age. But even basic familiarity with their work leads to obvious conclusions about the inevitable existence of Black Swans. Perhaps if we had paid attention we would have been more aware.
PS: This is also why I find Nassim Taleb’s work very offensive. He acts like he discovered something new and fascinating. In reality, his book butchers the concept, he presents it unclearly, and he completely ignores all the work before his that encapsulates the idea.
Black Swan theory, though, will not be considered by the majority of the populace unless physical evidence is apparent. Besides, knowing of the existence of Black Swans in advance did not and could not possibly have caused any difference in the unfolding of the crisis.
In short, listening to Popper would not have prevented or ameliorated the financial crisis.
Thought has power through its predictive qualities. If denying the existence of Black Swans allows the elite to predict and develop methods to amass large amounts of wealth, they will do so.
Finally, as discussed above, the crisis was not truly a Black Swan, because it honestly could be predicted. It was simply inconvenient to believe, and so it was ignored in order to increase wealth until it could no longer be ignored. This observation, more than anything, has predictive power: tipping points will be reached as long as the imbalance favors a group with the power to increase the imbalance.
What does the literature on epistemology recommend humanity do in this case?
I’d argue that knowing about the existence and dangers of Black Swans would have made a difference in the magnitude of the crisis rather than in its inevitability. You might be right, but this is what my intuition says.
Black Swans present problems when you lack “buffers” against them. So if you suspect that a theory describes reality with great accuracy, you may make decisions that do not leave much room for error: if reality moves against you, you will lose everything or almost everything. If, however, you remember that you cannot know how well a theory actually describes reality and that any theory is always only local rather than global, you may prefer to act with caution. Intuitively, I suspect that more (real) awareness of Black Swans may have led to more diversification both financially and in other areas.
Epistemology matters because it can guide our confidence in various circumstances towards specific theories and beliefs. It humbles us by suggesting and specifying blind spots in our understanding of how ANY system (social, economic, physical) works. It is not a panacea and epistemologists can’t resolve the issues that Tyler describes in his speech. But they knew about many of the holes in science and human knowledge that we are just now re-discovering a long, long time ago.
So I claim only that an understanding of the weaknesses in human reasoning and in science (i.e. the arguments Hilary Putnam and Ronald Dworkin have made that scientific statements depend on value judgments) reduces our propensity to trust experience too much. It makes it impossible to believe that any local set of experience can possibly describe a global system, which induces greater caution, at least in those that take it seriously.
I also claim that it can't hurt for people to re-discover this branch of philosophy at least a little bit, starting with Imre Lakatos, who blew perhaps the biggest holes in the confidence we can have in scientific statements, and in any statement whose truth depends on the extent to which a given community accepts it as true based on subjective local criteria.
Except that it’s not a weakness in human reasoning that was a problem, but a lack of incentive to recognize the potential for weakness. We weren’t overdrawn because we couldn’t imagine the trouble we were getting ourselves into–many people did imagine the trouble and protected against it. It was that a number of movers and shakers found it best to act without the safeties, even if it meant that it would blow up in everyone’s face eventually and unexpectedly.
So it’s not that we were unprepared for the trouble, but that we found it expedient to be unprepared for the trouble. Humility and caution won’t help in this case.
” This is also why I find Nassim Taleb’s work very offensive. He acts like he discovered something new and fascinating. In reality, his book butchers the concept, he presents it unclearly, and he completely ignores all the work before his that encapsulates the idea.”
I disagree with the first statement. His book is full of references to previous thinkers and I think any fair, careful reading of the book would give the reader the impression that Taleb is a popularizer. He certainly devotes a lot of space to discussing Karl Popper’s ideas.
As to whether the book “butchers the concept”, I don’t know enough to say one way or the other. Any popular audience book seeking to popularize ideas has to do a bit of simplification and I found him sometimes unclear myself but I thought the presentation was good overall by the standards of the popular audience genre.
I don’t think the charge that he ignores all the work before his is correct at all, though.
Studying history is too depressing. Why be realistic? Better to be aspirational, follow your dreams and ride the wave until it crashes upon the shore.
I like Tyler’s point about Black Swan identification… was the creation of the Euro or the Euro Crisis the black swan (or either/both, for that matter)? To me, a common thread between Tyler and Taleb is an appreciation of the power of story. Both authors grasp that humans are run by stories that can blind them from what might seem obvious to an objective, dispassionate observer.
I agree with the commenter who said that the distinction between unforeseeable and unforeseen is key here. Taleb, citing the limitations/design of the human mind, argues that certain major impact events will be unforeseeable and there is little we can do about it (other than make our best, if rather imperfect, attempts to design a life that weathers/benefits from volatility).
But if Black Swans are truly unforeseeable, how exactly are we supposed to prepare for them? Maybe you can’t do much about them anyway, so why bother? Alternatively, if you can do something about them… are they really black swans? As Tyler seems to be suggesting, many events we might judge to be unforeseeable Black Swans may in fact be foreseeable. For example, perhaps studying history would help us spot material impact events before they happen. In that case, it makes a lot of sense to try to imagine and prepare for Black Swans.
Perhaps it's useful to imagine a continuum… on one end you’ve got unexpected events that are totally unforeseeable. On the other end, you’ve got events that are currently unforeseen but might possibly be completely discerned through study of history, innovative imagination, etc. In between, you’ve got events that could potentially be understood or foreseen, in part. The idea here is that perhaps some events exhibit more black-swan-ness than others.
It's black swans all the way down?
In an efficient market, any big change is a black swan?
There we were, just minding our business making upside-down bets with Other People’s Money, relying on models premised on perpetual price increases, a totally rigged secondary market, moral hazard, perverse incentive, central planning, externalities all over the place, when all of a sudden … BLACK SWAN! Who knew?
This is sort of Taleb’s point in Antifragile.
Great video. I quibble with the comments about the preferable size of a country in a short blog post over at nanocivics.com. I think that question is off-base and I suggest a replacement question instead.