The Buying Slow but Selling Fast Bias

In this clip professional money manager Ben Griffiths approvingly quotes fellow trader Larry Williams: “If you get one thing right in your career it is to learn to be a slow buyer and a fast seller.” “If you can master that,” Griffiths continues, “you will be well down the way to being a successful manager of money.” Using a huge database of 783 portfolios averaging $573 million in size and covering 4.4 million trades over 16 years, Akepanidtaworn, Di Mascio, Imas, and Schmidt show that professional money managers follow exactly this advice and it is exactly wrong.

Professional money managers do well on their “slow” buy decisions–somewhat surprisingly, well enough to beat benchmark portfolios. It’s on their “fast” sell decisions that money managers significantly underperform the market. Remarkably, the authors show that on average professional money managers would have done better had they chosen what to sell randomly. Why? On their buy decisions money managers put in effort–you can tell they are putting in effort because their buy decisions cannot be explained by simple heuristics based on past returns (such as buy past winners or buy past losers). On their sell decisions, however, managers do appear to follow a heuristic of selling their big past winners or past losers. See the graph, where the blue buy decisions are independent of past returns while the red sell decisions show a clear preference for selling positive or negative return outliers. The authors show that this bias reduces returns (just as you would expect). When you sell fast you sell what comes to mind quickest, an availability bias, and that’s often a past winner or a past loser even if greater thought would convince you that these are not the best stocks to sell. The sell-fast bias, however, is pretty easy to fix. I expect that institutional investors will induce money managers to take a second look at sell decisions, much as computer systems now ask physicians to check branded prescriptions when generics are available.
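The paper's counterfactual is easy to sketch. Here is a minimal toy version (the tickers, returns, and scoring rule are all invented for illustration, not the authors' actual methodology): compare the future return forgone by selling the most extreme past performer against the expected forgone return of selling a random holding.

```python
import random

random.seed(0)

def heuristic_sell(portfolio):
    """Availability-bias heuristic: sell the position with the most extreme past return."""
    return max(portfolio, key=lambda p: abs(p["past_return"]))

def random_sell(portfolio):
    """Counterfactual benchmark: sell a uniformly random holding."""
    return random.choice(portfolio)

# Toy portfolio: past returns drive the sell choice, future returns score it.
portfolio = [
    {"ticker": "A", "past_return": 0.40, "future_return": 0.08},   # big winner, still strong
    {"ticker": "B", "past_return": -0.35, "future_return": -0.02}, # big loser
    {"ticker": "C", "past_return": 0.03, "future_return": 0.01},
    {"ticker": "D", "past_return": -0.05, "future_return": 0.04},
]

# Cost of a sell decision = the future return you forgo by selling that stock.
sold = heuristic_sell(portfolio)
forgone_heuristic = sold["future_return"]
# Expected forgone return of a random sell = average future return of holdings.
forgone_random = sum(p["future_return"] for p in portfolio) / len(portfolio)

print(sold["ticker"], forgone_heuristic, round(forgone_random, 4))
```

In this toy, the outlier-selling heuristic forgoes more future return than a random sell would on average, which is the shape of the paper's finding.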

Addendum: In related news, DeepMind’s AlphaStar trounced human players of StarCraft II, a game of imperfect information that is much more complicated than chess. Amazingly, AlphaStar made fewer actions per minute than the human players. As with Go, the AI developed new long-range strategies never before seen.


The result is pretty intuitive and confirmed by Buffett's Berkshire Hathaway approach of buying and holding "forever."

@AG - never selling by definition is not a trading strategy. And Buffett himself does not follow it.

As for AlexT's paper, by definition if traders are *taught* the rule of "buy slow and sell fast" then can it really be a 'behavioral' action, or just a conscious convention?

And this howler by AlexT: "Deep Mind’s Alpha Star trounced human players of StarCraft II, a game of imperfect information that is much more complicated than chess" lol! Cite please? A teenager's game of shoot-em-up and hand-eye coordination cannot surpass the Royal Game with over a thousand years of pedigree.

The difficulty and richness of chess is commonly alluded to by way of stating the number of possible moves at a given moment in the game. By that metric, chess gets crushed by many video games, including, I imagine, StarCraft.

You may argue that SC is less sophisticated than Chess, but it is certainly more complicated: Many maps with different layouts, 3 factions, each with a dozen+ units which can be independently evolved via tech tree and unit upgrades. All with limited visibility into the tactics of the opponent when not in close proximity.


Well the proof is in the pudding. Starcraft was much more difficult to master than chess

@Anonymous - thanks for the laughs! It takes a minimum of ten years to master chess, and for some people a lifetime. How long have you played at the child's game called "Starcraft"? Does it have complex strategy like the arcade game "Asteroids"? :)

I was just getting good at Space Invaders when our local bar switched to Asteroids instead. I suppose there's some analogy that could be made to portfolio management.

Starcraft has been played on a professional level for 20 years. The game is far more popular and lucrative than chess. More importantly, it was much more challenging for Deepmind to solve starcraft than chess because it is so computationally complex. Indeed Starcraft has not yet been fully solved, whereas chess was solved long ago. Chess by comparison is child's play.

Ray, it's obvious from your descriptions that you have no idea of what StarCraft is. It is more complex than Chess, and I love Chess. But I think, for the sake of our amusement, keep giving us your opinion on the matter without looking up anything about StarCraft.

Ray, you might know more about the backside of young, taut Filipino boys but you clearly don't know a damn thing about Starcraft.

Bonus Trivia: More things happen in one minute of Starcraft than an hour of chess. Also, no draws.

You are either trolling, or being painfully ignorant & dismissive here.

Starcraft is a real-time strategy game first published in 1998. I'm going to describe the Brood War iteration of that era. You control around 200 units over a large map, managing resources and building defensive and offensive assets, playing against a human opponent you are trying to destroy. There is a far greater variety of unit types than in chess, and there are three entirely different playable races with capabilities that do not overlap. The map is divided into around 4096x4096 playable positions rather than 8x8, with resources and meaningful barriers spread around it. Gameplay is divided up into time steps ("turns" if you will) of around 1/15th of a second, and the highest-level players are issuing some order to their units every three steps (300 actions per minute) on average, with intense sequences going up to 800. You can only see a small fraction of the map at any one time (the map is much larger than your porthole on it), and you cannot see anything too far distant from your units (a "fog of war" approach is used).
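A quick sanity check on the APM arithmetic above (a sketch assuming the roughly 15-steps-per-second figure quoted; the exact step length depends on the game-speed setting):

```python
STEPS_PER_SECOND = 15   # approximate logical frame rate cited above
STEPS_PER_ACTION = 3    # top players issue an order roughly every 3 steps

actions_per_second = STEPS_PER_SECOND / STEPS_PER_ACTION
apm = actions_per_second * 60
print(apm)  # one order every three steps works out to 300 actions per minute
```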

Starcraft has much less professional competition than chess - a few tens of millions of dollars in prize money over the life of the franchise - but it does sustain a number of careers, and there is a strong regional interest in it in South Korea.

The comparative difficulty is at least somewhat objectively comparable here, because it's taken 25 years of progress in AI (featuring dramatically more attention & research than in the previous century) between the time that an AI could compete on a professional level in chess, versus the time that an AI could compete on a professional level in a version of Starcraft, even absent inherently human physical limitations on things like persistence of vision and coordination.

Larry Williams's greatest success was in trading commodities, which is not the same market as the market in trading stocks. In any case, I would think that a lot depends on the investment horizon and approach of the trader. A hedge fund trader may act fast while a mutual fund trader would not. And more and more trading is done not by analysts but by robots (algorithms). As for Mr. Buffett, he takes large positions in his stock picks (often the entire company or at least a dominant position). So it depends. But even in the case of the single typical trader, selling fast may well reduce her overall returns but selling slow can end her career. Better to reduce overall returns than risk a career.

"the single typical trader, selling fast may well reduce her overall returns but selling slow can end her career. Better to reduce overall returns than risk a career." Another reason to use trackers.

Exactly correct at the end there.

That's kind of Taleb's point. As a trader, you only need to get ruined once.

If you have done your research and found new opportunities, where do you get the funds to invest? By cutting your losses on your biggest losers or selling your biggest winners. The buy decisions are based on some, perhaps, superior insight or information. Sell decisions might be based on a need to get funds, without any real insight or information. So random selling from your portfolio could be better.
Unless you might be missing something in data, such as moving in or out of sectors of the market and selling losers or winners helps balance overall portfolio.
Perhaps the ability to gain insight into an individual firm is greater than an ability to predict overall trends or movements in the market or market segments. Market movements are more random than individual firms. On a larger scale, the markets have more information on the overall market than individual firms. But that implies that it is easier to find positive information on firms than negative information.

Astute observation, but selling your biggest losers doesn't raise much in cash for new deals. The greater utility is clearing your watch list and responding to client expectations. Holding on to a dog is tough to explain to a watchful and scared client, especially if you previously urged the buy. Selling appears to be taking responsibility for your mistakes. Holding appears to be vainglorious hope, stubbornness, or stupidity, even when it isn't.

Baron de Rothschild said he never bought at the bottom and always sold too soon.

Reportedly, only 13% of money managers beat the averages.

Presently, I am "down" on mutual funds. I don't like ones that hold FAANG stocks. IMO all are overbought. I can't figure how they are valued; is it direct capitalization of gross revenues?

That being said, hedge fund strategies come in many forms. One I once read about would keep most funds in AAA, short-term debt instruments and, using algorithms(?) or IT programs(?), move in and out of equities to profit from dips and recoveries. I did one such trade late November 2018 and sold at approx. 6% gain December 3 (?). The Christmas Eve crash dive scared me so much that I missed that dip - the markets are up 13%+ since.


Every so often Babe Ruth struck out. And, "Don't cry over spilt milk." I apologize for this last bit of "toxic masculinity."

I don't think the markets can work if too many participants operate like me.

I've never understood FAANGs and I've paid dearly for it. Being bearish on Amazon nearly destroyed me. Early on, I didn't understand their business model. After that became apparent, I expected replication of that model. Twice the loser. Now it is valued too high. Or perhaps not.

I've beaten the market in the long run, and one of my key principles is to not invest in what I don't understand. If I violated that principle, I would be throwing darts with client money or following the herd. I can't do that.

Hedge funds succeed like vampires, sucking their clients' blood. Occasionally they suck them dry.

The corporation that employed me leased offices near Wall Street from October 1999 to December 2009. I remember worrying about people jumping (the president of a bank in our building jumped during the Depression) out of tall buildings on days when the market sharply dropped. That only happened on 9/11 - not market-related.

This wasn't/isn't my employment. I gamble with my (really the children's) money and don't employ leverage.

One son told me the market's (what you or I may think to be) irrationality can easily outlast one's capital.

My gravest fear with the dip-buying is to have grabbed the (proverbial) falling knife.

I think your son stole that line...

The addendum regarding actions per minute (APM) in Starcraft II holds true for human players as well. Even among grandmaster players, those with the highest APM are rarely the best. This is likely due in part to wasteful and inefficient actions.

I think the best players are known for great tactics and strategy, as well as using scouting information wisely. In short, quick reaction time rarely overcomes poor strategy implementation.

Of course, this is mostly true for Starcraft II. In Starcraft I, you essentially needed 300+ APM in order to perform on the highest levels.

Mostly remember things from Starcraft I, but in that game even "slow" APM at the professional level would be mind boggling to normal human beings.

If you watch the actual gameplay you learn that AlphaStar is actually a composite of agents. Each agent can be defeated by each other agent, rock, paper, scissors style.

In addition, actions per minute are not AlphaStar's advantage. The advantage is being able to simultaneously manage guns and butter without taking any time to context switch. In the very last game, which the human won, the agent was constrained by what the camera could see, much more similar to the constraints that humans face.

Interestingly, if you listen to the post game interview, the human credits his ability to gather more information from invisible scouting units, which he usually doesn't invest as much in against humans because there are only a few strategies humans would even try at elite levels and they are revealed relatively early on due to the path dependence.


"The very last game, in which the human won, the agent is constrained by what the camera can see, much more similar to the constraints that humans face."

If the other games didn't include that constraint, it's hard to call them "AI victories." The whole challenge of SC is imperfect information and limited attention.

"The advantage is to be able to simultaneously manage guns and butter without taking any time to context switch." - That's one big advantage, but another one - related, and I suspect no smaller - is no mis-clicks. AlphaStar was able to micromanage engagements better than humans could even with its lower APM. It could rotate units with lowered shields (especially important as Protoss) to the back and let recharged ones "tank".

This is all interesting, but if the AI could win even with these advantages, is there any real doubt that it could be further developed to beat humans on a more level playing field?

The basic ability to master the game at a high level has been demonstrated. Further refinement would undoubtedly improve this a lot (I doubt there has been all that much effort put into the "Beat Starcraft" AI problem. Compare to chess, which has been a whole field of study for decades).

I'll not quibble about the findings that quick sells lead to adverse results. I do dispute, though, that sell decisions have less "effort" in them than buy decisions. First, compiling information for a buy does take a lot more time and effort. If it is a good buy, then by definition it is expected to outperform the current public information reflected in price. That is, the information is available but not yet incorporated. This is analogous to military intelligence, which combines information and analysis. Gathering info takes time and analysis takes time.

The sell decision is quicker, but that is because most of the information is already available. The metrics that caused the buy decision may have turned, signalling a sell. But as the paper shows, this may be a false signal or a false read. Alternatively there may be additional returns to be had even knowing of a certain future dip.

The underperformance on the sell decision is likely related to increased risk of losses. It could also involve loss aversion that becomes a self-fulfilling prophecy. There is also an element of client perception. Failing to dump a stock that turns can give you lots of splainin to do.

My point is that the "second look" requires a completely different perspective and the situation can unfold faster than you can reconnoiter, like an enemy counterattack.

Tyler reported on this same study January 9th.

Buys are evaluated against a random choice of something already in the portfolio. I know why they do this, but it makes no sense.

Sells are properly evaluated against something in the portfolio (assuming short sales are not allowed, as they generally are not) but it seems to me that the central result (that managers are too quick to sell both large losers and large gainers) is easily explained by simple doubt. All securities were purchased with the idea they'd go up. Things that actually went up a lot induce doubt that they'll go up even more and things that have gone down a lot induce doubt that you were right to buy it in the first place. Things in between are simply assets for which the jury is still out.

That's good intuition, but only one of the motives.

Analysts famously make many more buy decisions than sell decisions. Part of it is the psychology of optimism. Those decisions are easier to sell. But I suspect there is also asymmetry in the metrics for buy and sell decisions. If you think of a stock as a durable asset, like a bridge, as it ages it depreciates and has increasing chances of collapse. By this I don't mean the firm failing, but having diminishing returns from trading. Your initial estimate of longevity has a degree of error, and downside errors are catastrophic, i.e. your bridge falling into the bay. So you minimize maximum regret rather than maximize returns. The bridge falls uncontrollably, people die, cars and ships are destroyed, traffic is disrupted for years. Planner armageddon! This often leads to loss of value, i.e. you're replacing a bridge at salvage value that could have had more returns for years.

It would be interesting to see if financial advisors exhibit the same downside bias on their own portfolios as client portfolios.

I agree the CYA criterion of minimizing maximal regret is also a factor, but that should have equal force in buy decisions, right? But it won't in this study because the high downside-risk but expected value positive buy decisions won't appear in the comparison set as long as the comparison set is just assets already in the portfolio. This then can directly bias towards their results that buy decisions are better than sell decisions, since the foregone expected-value maximizing buy decisions aren't in the comparison set, which instead consists entirely of investments made under the minimizing regret filter.

I think that loss aversion amplifies the CYA on the sell side. But if I understand you correctly, yes I think the experimental design biases the outcome.

"Analysts famously make many more buy decisions than sell decisions. Part of it is the psychology of optimism." - You're solving for a spherical analyst in a vacuum. Even if you're a brilliant analyst, companies that you give a sell rating to stop talking to you, and then the quality of your ratings will go down, ceteris paribus. Also, even if your sell recommendations are spot on, you'll probably have really unpleasant meetings with your co-workers from the investment banking division, who will tell you in no uncertain terms that to a first approximation you generate no revenue and you're costing them business - because companies tend not to give investment banking business to banks that give them sell ratings.

It's all about incentives, and analysts generally have strong ones not to give sells. Their research, while technically available for sale, is mostly given away for free to clients in hope of generating future business for other divisions, whether that be M&A, trade flow, or whatever else.

As someone who plays Age of Empires and casually watches StarCraft online, the AlphaStar thing is... complicated.

On the one hand, gaming AIs have traditionally been a large file full of if-then statements, while this AI seems to have taught itself (by iteratively playing against forks of itself in an internal "league"), and the end result is truly impressive, especially considering how bad it was a year or so ago.

On the other hand... a factor in RTS games is things like speed (actions per minute), precision, sight, etc. These things matter in addition to choice of strategy. And while DeepMind tried to make it as even as possible:

1. The AI was still capable of insane spikes in its APM, which went as high as 1500 at points. Most famously, there was a match in which AlphaStar deployed a mass of ranged units capable of teleporting, and would teleport damaged units from the front of the mass to the back.

2. The AI was capable of extremely precise actions, while human players have a lot of wasted clicks, e.g. selecting the wrong unit, having to select a mass of them, or using the wrong action.

3. The AI was capable of seeing and executing actions across the entire map at once. This is huge, because a human player has to first select a place to look at and then execute actions in that place. This allowed the AI to e.g. command three separate armies simultaneously, flanking their opponent in a way a human player couldn't.

So the AI still had some "physical" advantages over a human player, rather than strictly being capable of out-strategizing them. When DeepMind unveiled an AI with a built in camera restriction, it lost.

That said the AI they unveiled was likely quite young, and I'm sure that, in the future, we will see AI that's capable of beating the top players even with "physical" restrictions. It was a good day for both gaming and AI
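The self-play "league" training mentioned above can be sketched in miniature. This is an illustrative toy only (the Agent class, the scalar "skill" stand-in for network weights, and the update rule are all invented, not DeepMind's actual setup): a pool of frozen past snapshots serves as the opponent population for the agent currently being trained.

```python
import copy
import random

random.seed(1)

class Agent:
    """Stand-in for a policy; 'skill' is a toy proxy for network weights."""
    def __init__(self, skill=0.0):
        self.skill = skill

    def train_against(self, opponent):
        # Placeholder update: real training would be RL against the opponent.
        self.skill += max(0.0, opponent.skill - self.skill) * 0.5 + 0.1

# The league starts with one naive agent; the current agent trains against
# opponents sampled from the league, then freezes a copy of itself into it.
league = [Agent(0.0)]
current = Agent(0.0)

for generation in range(10):
    opponent = random.choice(league)        # sample a past version to play
    current.train_against(opponent)
    league.append(copy.deepcopy(current))   # snapshot joins the opponent pool

print(round(current.skill, 2))
```

Keeping old snapshots around is what prevents the "rock, paper, scissors" cycling problem: a new strategy only survives if it still beats the whole historical population, not just the latest version of itself.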

Yeah, that about sums it up. AlphaStar was playing worse strategically, but overcoming this deficit with better mechanics. For the first 7 minutes of the game it looked like the best PvP player in the world, but then after that often threw away massive advantages with some poor plays, but was able to win because early game advantages snowball in Starcraft. Mana and TLO were often 30 supply behind, and still trading out evenly, which is rare to see in human games. I do think they've gotten past the major hurdles of developing an AI, and it is mostly a matter of simple fixes to get things working to a point where they can beat the world champion with any race.

Also, with superhuman micro (being able to make accurate clicks per minute at rates much faster than humans), some of the units become broken. That teleporting unit cannot be physically played the way it was played by the AI, changing the entire dynamic of the game and making it much less about strategy and more about exploiting broken units. The "APM average is below that of a human" stat is bogus. What matters is spiking effective APM (over a thousand in the case of this AI, triple what the best human is capable of).

The Google team is impressive but still has a long way to go on besting SC2.

@Navin Kumar - you do realize Deep Mind's Alpha Star was merely uncovering, via trial and error, what the programmers programmed the StarCraft game to do? For example, the StarCraft programmers disfavored the "quick rushes by Photon Cannons" (see below) and favored more subtle techniques, and AlphaStar discovered these techniques. That does not sound like advanced AI as much as trial-and-error. How is AlphaStar any smarter than IBM's Watson?

From the site: While some new competitors execute a strategy that is merely a refinement of a previous strategy, others discover drastically new strategies consisting of entirely new build orders, unit compositions, and micro-management plans. For example, early on in the AlphaStar league, “cheesy” strategies such as very quick rushes with Photon Cannons or Dark Templars were favored. These risky strategies were discarded as training progressed, leading to other strategies: for example, gaining economic strength by over-extending a base with more workers, or sacrificing two Oracles to disrupt an opponent's workers and economy. This process is similar to the way in which players have discovered new strategies, and were able to defeat previously favoured approaches, over the years since StarCraft was released.

It's not StarCraft programmers that disfavored them, it's the players

The players need to learn what weights the programmers put on various strategies. By playing the game for years, they do so. The difference with AI is that it takes a few days or weeks, not years, to learn the optimal strategies. If players could talk to programmers and learn how the programmers optimized the program then it would cut down on the amount of time necessary to play to learn the strategies. Kind of like paper, rock, scissors (once you learn it's random, you can quickly just randomly flash one of the three signs rather than stopping to 'think about it', which loses time).

I am prepared to take on AlphaStar in rock, paper, scissors.

The programmers didn't put any weight on various strategies so it would be useless to ask them.

Ray, I'm afraid you don't understand much about AI. Learning the optimal approach to a complex task through trial and error is exactly what "advanced AI" is.

It's a matter of opinion, but I find this more impressive than Watson (which was some fairly standard technology that IBM sold through the genius marketing gimmick of having it play Jeopardy).

The key thing to remember about sell decisions is that they are often made with tax implications in mind. Selling a security at a loss in order to offset taxable income, or selling at a large long-term gain rather than a smaller short-term gain, may be strategic depending on the situation. So analyzing the pre-tax return on sold positions relative to a benchmark can never give you the full picture.

If you read the article, you'll see this isn't the reason, since the vast majority of the portfolios were tax-free.

Y axis label is confusing. Looks like it is returns %, but looking at the paper actually it is the % bought or % sold. I guess that's why the buy percent is lower - there are more un-owned stocks than owned ones. Or am I missing something?

"Amazingly, Alpha Star made fewer actions per minute than the human players."

Commentary on the matches (for example, and I have seen people saying the same in other places) indicates that this is not really true, and that the DeepMind people presented some statistics in a misleading fashion to make it appear true.

Sample para:

While arguably the fastest human player is able to sustain an impressive 500 APM, AlphaStar had bursts going up to 1500+. These inhuman 1000+ APM bursts sometimes lasted for 5 second stretches and were full of meaningful actions. 1500 actions in a minute translates to 25 actions a second. This is physically impossible for a human to do. I also want you to take into account that in a game of Starcraft 5 seconds is a long time, especially at the very beginning of a big battle. If the superhuman execution during the first 5 seconds gives the AI an upper hand it will win the engagement by a large margin because of the snowball effect.

> Alpha Star made fewer actions per minute

This is not true. AlphaStar had much higher click rate.

AlphaStar also had the benefit of seeing the whole map in the zoomed out form. Human players had to move their viewpoint around the game.

After AlphaStar won 5-0, they played another game with a new version of AlphaStar that used similar camera movement to human players. AlphaStar lost that game.


Chess is turn-based, Starcraft is real time. It's no surprise that even the simplest AI could beat a human, just on reaction time alone and on being able to surveil the battlefield and integrate the information much faster than a human could. In terms of the size of the space of game states, Starcraft is likely more "complex," since there are multiple maps, many units and factions, etc., so there is more variability. Personally, I find Starcraft too simple an RTS compared to what the genre is capable of; there are much more impressive examples that allow thousands of units, command queues, more available strategies, and better interfaces to manage it all.
