The author’s purported cure is far worse than the disease. Positional externalities from shaving latency are indeed real, but they are small relative to the size of the market. A good way to estimate their magnitude is by how much money has been spent cutting messaging latency between Chicago and New York, the two most liquid and hence most profitable trading centers. Recent spending on this infrastructure (largely microwave relay networks) is about $500 million. Assume the infrastructure depreciates in about a year, and generously assume that spending on intra-market latency is about the same.
That’s a total of about $1 billion/yr in market costs imposed by latency-based positional externalities. American equity markets trade about $24 trillion in value a year (and that’s counting only shares, not derivatives). That means the cost to the typical investor of the latency externalities comes to an upper bound of $4.5e-05 per dollar traded, or about $0.0016 to trade one share of MSFT. That’s the upper bound of the cost savings from perfectly eliminating latency externalities. The cost certainly isn’t trivial, but it is much lower than the forced imposition of $0.005 in bid-ask trading costs because the SEC refuses to decrease the minimum $0.01 tick size. With an economical tick size the average bid-ask spread could easily be cut in half. (It would also reduce the latency externalities, since market makers could improve on price rather than rushing to be first in the order-book queue.) My point is that if we’re that worried about reducing costs to investors, there’s an alternative we’re already ignoring that both has a larger impact and poses much less risk than completely tearing up the foundations of the market structure.
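A quick back-of-the-envelope check of the arithmetic above. The spending and volume figures come from the text; the MSFT share price of roughly $36 is my assumption, chosen to match the per-share figure quoted:

```python
# Back-of-the-envelope check of the latency-externality bound above.
latency_spend = 1e9        # $/yr spent on latency infrastructure (from the text)
annual_volume = 24e12      # $ traded per year in US equities (from the text)

cost_per_dollar = latency_spend / annual_volume
msft_price = 36.0                       # assumed share price at the time
cost_per_msft_share = cost_per_dollar * msft_price

print(f"{cost_per_dollar:.1e}")         # ~4.2e-05 per dollar traded
print(f"{cost_per_msft_share:.4f}")     # ~$0.0015 per share, well under one tick
```

Either way, the per-share cost lands an order of magnitude below the $0.01 minimum tick.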
Finally, the authors assume that batch auctions come with no structural costs of their own. Not only do they have substantial defects themselves, they don’t even eliminate the latency externalities. The market already uses batch auctions at the open and close, and as any trader will tell you, these are far more manipulated than continuous trading. During a batch auction an indicative price is published prior to the cross, based on the currently resting buy and sell orders. A trader can easily move this indicative price or imbalance by entering a large order and canceling it before the auction. Analogous strategies aren’t impossible in continuous trading, but they are much harder, because a resting order can be crossed at any time and hence poses real economic risk to the trader. To paraphrase Alex, continuous trading acts as a tax on bullshit.
The flip side of a pre-cross indicative price is that traders will wait as long as possible before the cross to enter their orders. No trader using proprietary signals wants other market participants to see his order for any longer than is absolutely necessary. The counter-strategy is to shave your latency down even further so you see others’ orders first, then modify yours accordingly by trading even closer to the cross time. So what frequently happens in opening and closing batch auctions is that the order book and indicative price are pretty much garbage until a few milliseconds before the cross, at which point the real price formation occurs. When I worked at a much larger HFT firm I was a continuous guy, but sat next to the batch auction guys. We certainly cared about our latency, but generally we focused much more on our signals and execution algorithms. The auction guys, in contrast, were always obsessed with their latency.
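The cancel-before-cross manipulation described above can be sketched in a few lines. This is a toy model of my own construction, not any exchange’s actual algorithm: the indicative price is taken to be the candidate price that maximizes matched volume, a common textbook rule for call auctions.

```python
# Toy sketch of how a large cancel-before-cross order moves the indicative price.
def indicative_price(buys, sells):
    """Pick the candidate price maximizing matched volume.
    buys/sells: lists of (price, qty)."""
    prices = sorted({p for p, _ in buys} | {p for p, _ in sells})
    best_price, best_vol = None, -1
    for p in prices:
        demand = sum(q for bp, q in buys if bp >= p)   # buyers willing at p
        supply = sum(q for sp, q in sells if sp <= p)  # sellers willing at p
        vol = min(demand, supply)
        if vol > best_vol:
            best_price, best_vol = p, vol
    return best_price

buys = [(10.00, 500)]
sells = [(10.00, 200), (10.05, 300)]
print(indicative_price(buys, sells))                    # 10.0

# A manipulator enters a large buy at 10.05, shifting the indicative price...
print(indicative_price(buys + [(10.05, 1000)], sells))  # 10.05
# ...then cancels before the cross, having paid nothing for the distortion.
```

In continuous trading the fake order could be hit at any moment; in a batch auction the cancel window makes the distortion nearly free.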
Switching to batch auctions will not reduce the cost of latency positional externalities, and is pretty likely to increase them. On top of all that, it will give us a much lower-quality and less efficient market structure. There are certainly better ways to tackle the latency externality costs. But it’s important to recognize that the perfect is the enemy of the good here; I doubt we can ever fully eliminate these costs under any sane structure. It’s better to think of moderate improvements that work on the margin, rather than centrally planned, sweeping redesigns of the entire market structure.
At the first link, in the comments, he has several follow-up explanations, all recommended.
Tyler is more optimistic about financial innovation than I am. Strange, but true. I recommend Andrew Haldane’s speech, The Race to Zero, on high frequency trading (HFT). Haldane is Executive Director for Financial Stability at the Bank of England and his speech is eminently quotable. First, some background from Haldane:
- As recently as 2005, HFT accounted for less than a fifth of US equity market turnover by volume. Today, it accounts for between two-thirds and three-quarters.
- HFT algorithms have to be highly adaptive, not least to keep pace with the evolution of new algorithms. The half-life of an HFT algorithm can often be measured in weeks.
- As recently as a few years ago, trade execution times reached “blink speed” – as fast as the blink of an eye….As of today, the lower limit for trade execution appears to be around 10 micro-seconds. This means it would in principle be possible to execute around 40,000 back-to-back trades in the blink of an eye. If supermarkets ran HFT programmes, the average household could complete its shopping for a lifetime in under a second.
- HFT has had three key effects on markets. First, it has meant ever-larger volumes of trading have been compressed into ever-smaller chunks of time. Second, it has meant strategic behaviour among traders is occurring at ever-higher frequencies. Third, it is not just that the speed of strategic interaction has changed but also its nature. Yesterday, interaction was human-to-human. Today, it is machine-to-machine, algorithm-to-algorithm. For algorithms with the lifespan of a ladybird, this makes for rapid evolutionary adaptation.
Consistent with the research cited by Tyler, Haldane notes that bid-ask spreads have fallen dramatically.
Bid-ask spreads have fallen by an order of magnitude since 2004, from around 0.023 to 0.002 percentage points. On this metric, market liquidity and efficiency appear to have improved. HFT has greased the wheels of modern finance.
But at the same time that bid-ask spreads have decreased on average, volatility has sharply increased, as illustrated most clearly by the flash crash:
Taken together, this evidence suggests something important. Far from solving the liquidity problem in situations of stress, HFT firms appear to have added to it. And far from mitigating market stress, HFT appears to have amplified it. HFT liquidity, evident in sharply lower peacetime bid-ask spreads, may be illusory. In wartime, it disappears.
In particular, what has happened is that stock prices have become less normal (Gaussian), more fat-tailed, over shorter periods of time.
Cramming ever-larger volumes of strategic, adaptive trading into ever-smaller time intervals would, following Mandelbrot, tend to increase abnormalities in prices when measured in clock time. It will make for fatter, more persistent tails at ever-higher frequencies. That is what we appear, increasingly, to find in financial market prices in practice, whether in volatility and correlation or in fat tails and persistence.
HFT strategies work across markets (e.g. derivatives), exchanges, and stocks and can have negative externality effects on low frequency traders. As a result, micro fat-tails can become macro fat-tails.
Taken together, these contagion channels suggest that fat-tailed persistence in individual stocks could quickly be magnified to wider classes of asset, exchange and market. The micro would transmute to the macro. This is very much in the spirit of Mandelbrot’s fractal story. Structures exhibiting self-similarity magnify micro behaviour to the macro level. Micro-level abnormalities manifest as system-wide instabilities.
For these reasons I am not enthusiastic about innovations in HFT. Earlier I compared high-tech swimming suits and high-frequency trading:
High-tech swimming suits and trading systems are primarily about distribution not efficiency. A small increase in speed over one’s rivals has a large effect on who wins the race but no effect on whether the race is won and only a small effect on how quickly the race is won. We get too much investment in innovations with big influences on distribution and small, or even negative, improvements in efficiency and not enough investment in innovations that improve efficiency without much influencing distribution, i.e. innovations in goods with big positive externalities.
He/she writes to me:
Since there is a lot of confusion in the media regarding HFT, I wanted to clear some of it up. Disclaimer: this is my personal view, I am not representing any firm or group and I wish to remain anonymous.
Colocation is a practice whereby any market participant can pay the exchange a fee to locate their trading computers in the same building as the exchange itself (the matching engine).
Colocation is actually good for investors. Why? Suppose a mutual fund from Kansas City wants to execute orders to buy stocks on an exchange which is physically located in New Jersey. If this mutual fund is looking at the market data feed directly from Kansas City, then it is at a disadvantage relative to investors who happen to be in New Jersey. So what are the options? The fund can either rent space in a New Jersey data center or execute through a bank/broker which is doing exactly that. However, all New Jersey data centers would still have to somehow connect to the exchange, and their location within New Jersey would matter: some data centers would be better located than others.
With co-location, all investors are given the opportunity to trade from the right data center – the same one that houses the exchange.
If exchanges operated on some sort of discrete auctions rather than continuous matching, some of the issues above would be mitigated. But other issues would arise, for example a lack of synchronization of auctions between different exchanges, as well as increased incentives to trade more prices within a given amount of time, thus generating increased volatility.
Furthermore, exchanges, as private for-profit businesses, are able to grow their profits and bottom line by selling co-location services to speed-sensitive traders, thus mitigating the need to grow revenues in other ways, such as raising commissions and trading fees on all investors.
Direct Data Feeds and Accusations of Insider Trading
Exchanges sell access to direct data feeds to all investors. When high frequency traders subscribe to a real time direct feed in the colocated facility and they observe the order book as well as trades, they have no idea who is trading – a customer, a big bank or another HFT firm. They see the same exact trades in this feed as all other market participants. Many, if not most, HFT firms do not deal in any way with customers whatsoever. The ones that do are supposed to have a clear separation (a Chinese Wall) between customers and proprietary trading, so no customer information can flow through to the prop desk – the same thing is true of big banks and other broker/dealers.
Consider, for example, the trade described in Flash Boys: an HFT places 100 shares on the offer at 100.01 on BATS, and when that trades, it goes and buys the same 100.01 offer at Nasdaq, thereby running the price up to 100.02 and selling liquidity back at 100.02 to the non-HFT customer.
This doesn’t actually work. Why? Because in order to sell successfully at 100.02, the HFT algorithm has to have priority in the order-book queue, and the only way to get that is to quote that price continuously, without knowing whether anyone is going to buy at 100.01 on BATS at all, and while facing the market risk of the market running through 100.02.
Secondly, just because the offer price on BATS traded doesn’t mean the market cannot go down right after that, and the HFT has no way of knowing who bought the price they bought or why. It could easily have been a fund or a retail trader with no alpha (no predictive power).
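The queue-priority point above is worth making concrete. Most lit venues match by price-time (FIFO) priority, so fills at a price level go to the earliest-resting order, not to whoever reacts fastest after the BATS print. This is a minimal sketch of my own, with hypothetical order IDs:

```python
from collections import deque

# Minimal price-time (FIFO) queue at one price level (say, the 100.02 offers).
level = deque()                      # resting offers, oldest first

def rest(order_id, qty):
    """Add a resting offer to the back of the queue."""
    level.append([order_id, qty])

def fill(qty):
    """Match an incoming buy against resting offers in time priority."""
    filled = []
    while qty > 0 and level:
        oid, rest_qty = level[0]
        take = min(qty, rest_qty)
        filled.append((oid, take))
        qty -= take
        level[0][1] -= take
        if level[0][1] == 0:
            level.popleft()
    return filled

rest("hft_resting_since_open", 100)  # has carried market risk all along
rest("latecomer", 100)               # joined only after seeing the BATS trade
print(fill(100))                     # [('hft_resting_since_open', 100)]
```

The latecomer gets nothing: to be first in line at 100.02 you must have been quoting there continuously, bearing the risk the whole time.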
Contrast this with the main protagonist of Flash Boys, Brad Katsuyama, who was paid multiple millions at RBC for executing customer order flow.
Suppose a customer called Brad and asked to buy 3 million shares of IBM right now and 2 million potentially later in the day.
The current market on public (“lit”) exchanges is 188.00 bid, 188.01 offered.
Brad says, OK, I will do the deal for 188.15 (14 ticks above the current market), and then proceeds to work the 3 million shares in the market in order to buy back what he sold at a price lower than 188.15.
Now Brad is trading in the market while holding material non-public information about his customer’s order flow. This trading has to be immensely profitable for the bank to pay Brad millions a year. So to the extent that there is any insider trading going on, rather than market making, it is being done by Brad and not by the HFT algorithm, because Brad really is using truly non-public information, while the HFT is reading a direct data feed available to anyone who cares to sign up for it.
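The economics of that block trade are easy to work out from the prices quoted above. The average buy-back price is my assumption (working a 3-million-share order will push the market up somewhat); the sale price and size come from the text:

```python
# Rough economics of the RBC-style block trade: sell to the customer at
# 188.15, then buy the shares back near the lit quote.
shares = 3_000_000
sold_at = 188.15
avg_buyback = 188.05          # assumed: working the order moves the market up
gross_profit = shares * (sold_at - avg_buyback)
print(round(gross_profit))    # 300000 -- on a single customer order
```

Even a dime of slippage between the marked-up sale and the buy-back is a six-figure profit per order, which is how the desk pays for itself.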
Bank Equity Trading Revenues
Indeed, as a result of HFT, which quotes much tighter spreads than the banks, bank equity trading revenues have gone down dramatically, by much more than the profits generated by HFTs in equities. Where did the difference go? It accrued to investors in the form of lower trading fees.
Ok. Is There Anything Wrong With US Equity Markets?
Yes. Dark pools are destructive to all investors, including HFTs. An HFT firm that derives its edge from mining statistical patterns in the data cannot do this very well for dark pool data, because it is simply not available in a clean format. But neither can any other investor.
In order to promote transparency and reduce conflicts of interest between broker/dealers and their customers, our regulatory agencies should force all equity trading to happen on lit exchanges.
By the way, this is a problem not just with equity dark pools. Consider trading in off-the-run Treasuries (off the run meaning not the latest issue), interest rate swaps, or many other securities that Wall Street banks trade for their customers.
Since none of these trade on exchanges, and there is no transparent data, the effective spreads paid by customers are wide, lining the banks’ pockets. If regulators were able to force trading in these instruments onto exchanges, it would reduce the fragility of the financial system and create pricing transparency.
What About Reg NMS?
Reg NMS was designed to make sure that if a better price is available on any exchange, that price has to be filled before worse prices on other exchanges are taken out.
In practice, it has created a lot of complexity in the market and forced market participants to each implement their own software solutions to comply with it and monitor it.
All of this compliance burden should be shifted to the exchanges and other trading venues themselves, where it is much easier for regulatory bodies to verify compliance.
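The order-protection constraint described above amounts to a price-priority sweep across venues. This is a deliberately simplified sketch of my own (real smart order routers also handle fees, latency, and quote updates), with hypothetical quotes:

```python
# Minimal sketch of the Reg NMS-style routing constraint: a marketable buy
# must take out better-priced offers on other exchanges before trading
# through them at a worse price.
def route_buy(quotes, qty):
    """quotes: {exchange: (ask_price, ask_qty)}. Fill best prices first."""
    fills = []
    for venue, (price, avail) in sorted(quotes.items(), key=lambda kv: kv[1][0]):
        if qty <= 0:
            break
        take = min(qty, avail)
        fills.append((venue, price, take))
        qty -= take
    return fills

quotes = {"NYSE": (100.02, 300), "BATS": (100.01, 200), "Nasdaq": (100.03, 500)}
print(route_buy(quotes, 400))
# [('BATS', 100.01, 200), ('NYSE', 100.02, 200)]
```

Every participant routing orders must implement and monitor some version of this logic, which is the compliance burden the author would rather centralize at the exchanges.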
Matthew Philips explains it clearly:
The idea that retail investors are losing out to sophisticated speed traders is an old claim in the debate over HFT, and it’s pretty much been discredited. Speed traders aren’t competing against the ETrade guy, they’re competing with each other to fill the ETrade guy’s order. While Lewis does an admirable job in the book of burrowing into the ridiculously complicated system of how orders get routed, he misses badly by making this assumption.
The majority of retail orders never see the light of a public exchange. Instead, they’re mostly filled internally by large wholesalers; among the biggest are UBS (UBS), Citadel, KCG (KCG) (formerly Knight Capital Group), and Citigroup (C). These firms’ algorithms compete with each other to capture those orders and match them internally. That way, they don’t have to pay fees for sending them to one of the public exchanges, which in turn saves money for the retail investor.
There is also this:
…according to estimates from Rosenblatt Securities, the entire speed-trading industry made about $1 billion, down from its peak of around $5 billion in 2009. That’s nothing to sneeze at, but it isn’t impressive once you put it into context: JPMorgan Chase (JPM) made more than $5 billion in profit in just the last quarter.
If that doesn’t convince you, just listen to all those Keynesians who are proudly calling this a form of useful economic stimulus, akin to pyramid-building, or an invasion from outer space…oh wait…
London, 7th December 2012, PRNewswire – Reality TV is an old concept which we’re all familiar with. However, Alexis Kirke, a leading composer at the Interdisciplinary Centre for Computer Music Research at Plymouth University, and Dr Greg B Davies, Head of Behavioural and Quantitative Finance at Barclays, have taken reality performances to another level by partnering with Barclays to produce a reality opera. The performance, which was held on the 15th November 2012 at Egyptian Hall, Mansion House, focused on the drama unfolding on an “open outcry” stock trading floor, the type of trading floor on which traditionally traders shout and use hand-signals.
To view the Multimedia News Release, please click:
Thanks to melodies carefully crafted with evolutionary computer algorithms, during the performance singers sang what they wanted, when they wanted, within certain rules, just like the freedom people have in reality TV programmes. However, like a traditional opera, music was used alongside singers to demonstrate the emotion of the story.
The concept was conceived by Alexis and the opera brought to life in collaboration with
Dr Greg B Davies, Head of Behavioural and Quantitative Finance at Barclays. Alexis created the music while Greg applied his behavioural finance expertise to create the market within which the singer-traders were responding, generating the ebb and flow of emotion and money on a trading floor. Furthermore, the performance was directed by Alessandro Talevi, former winner of the European Opera Directing Prize.
To enable the music to express the emotional activity on the trading floor, Kirke composed what he calls “trading phrases,” such that when most singers were buying, the harmonies between them were pleasant, and when most were selling, the harmonies clashed. When two performers sang ‘buy’ and ‘sell’ melodies for the same asset, the two sounded in time and harmonious.
‘Open Outcry’ featured 12 singers and a cellist, Joseph Spooner. The audience sat at tables among the “traders” (in the trading pit), and the conductor rang a bell to signify the market opening. As the cellist played, large screens displayed stock information and the conductor guided some aspects of the permissible actions of the singers. The performers sang one musical phrase to buy each asset and another to sell. The prices were largely driven by random market movements generated by a computer model, though the conductor did have some power to influence stock prices, as did the effect of the “trading” between the singers themselves.
I believe that is the only press release I have ever covered on this blog. In general you press release people, unless you have something as important as Markets in Everything High-Frequency Trading Opera Edition, I don’t give a damn about what you send me! You all bore me! Really.
I’ve yet to see a good argument that they are high. HFT is taken to mean many things, but let’s (for now) focus on high-speed arbitrage and near-arbitrage.
Let’s say the market for coconuts in Thailand reacts somewhat slowly, and the market for coconut derivatives in Singapore allows for quicker trading. A storm comes to Thailand, the two coconut prices split, and a number of traders rush in to take advantage of the price discrepancy. (Of course since the Thai market is slow and less liquid, this won’t be perfect arbitrage.)
If ten traders have more or less the same speed (and quality) of trading technology, the returns to rushing would appear to be pretty small. At most, the $$ invested in speed will rise to equal the size of the available p x q discrepancy. That’s basically the same result you get with slower trading technologies. Call it waste, or not, but I don’t see that any new problem has arisen here. There is some waste, bounded by the p x q discrepancy, whether people compete over speed at higher speeds or lower speeds.
If one trader has dominant speed, that seems to also limit the costs of running after the arbitrage profits. Rent exhaustion will be far from complete.
Alternatively, imagine a leapfrog model. The quickest firm gets to be clear leader for a year, but by the time that year is up they are leapfrogged by a new and speedier technology, and then there is a new leader. It still seems to me that the investments in the new speed technologies are bounded by the p x q discrepancies, as they were in slower times.
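The bound running through the last three paragraphs can be stated as a one-line accounting identity. The numbers here are mine, purely for illustration:

```python
# Toy accounting for the bound argued above: however fast the race runs,
# total spending on speed is capped by the p x q discrepancy it captures.
def max_speed_spend(price_gap, quantity):
    """Rational firms collectively spend at most the prize they chase."""
    return price_gap * quantity

prize = max_speed_spend(price_gap=0.50, quantity=100_000)  # $50,000 episode

# The cap is identical whether rivals compete in minutes or microseconds;
# only the technology changes, not the size of the rent being dissipated.
for latency_regime in ("minutes", "milliseconds", "microseconds"):
    assert max_speed_spend(0.50, 100_000) == prize

print(prize)   # 50000.0
```

That is the sense in which "at higher speed" adds nothing new to the rent-exhaustion problem: the prize, and hence the waste, is the same at any clock rate.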
Keep in mind, if HFT yields profits, there are also incentives to improve the trading technologies in the slower of the two coconut markets. Those incentives will limit the profits from HFT and thus the resources invested in HFT.
I understand full well that this discussion considers only a few relevant factors. Nonetheless I don’t see that the critics are imposing even this much structure on the problem. I don’t see why “at higher speed” makes the rent exhaustion problem from price arbitrage more costly in social terms. I don’t see a good theoretical or empirical argument on the table, much less a verified argument.
You also might think that more volatile intra-day asset prices are a cost of HFT. Hold off on that for now, I’ll consider it in another post.
High-speed trading tools pioneered in the stock market are increasingly driving price movements on Amazon’s website as independent sellers use them to undercut and outwit each other in a cut-throat online market place.
Product prices now change as often as every 15 minutes as some of the 2m sellers on Amazon’s site join the online retailer in using computerised tools – often developed by former data miners at investment banks – to lure shoppers with the best deals.
…Amazon sellers – using third-party software – can set rules to ensure that their prices are always, for example, $1 lower than their main rival’s.
…Some sellers have even created dummy accounts with ultra-low prices to deliberately pull down those of rivals so they can corner a market by buying their goods, say pricing experts. That practice violates Amazon’s rules of conduct.
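The repricing rule quoted above, and the dummy-account attack on it, fit in a few lines. This is a hypothetical sketch of my own, not any vendor’s actual software:

```python
# Sketch of the "$1 below the lowest rival" repricing rule, and how a dummy
# listing at an ultra-low price drags a repricer down with it.
def reprice(rival_prices, floor, undercut=1.00):
    """Price $1 under the cheapest rival, but never below our own floor."""
    return max(min(rival_prices) - undercut, floor)

print(reprice([24.99, 22.50], floor=10.00))   # 21.5 -- normal undercutting
print(reprice([24.99, 0.99], floor=10.00))    # 10.0 -- a $0.99 dummy listing
                                              # pins us to our floor price
```

A rival who posts the $0.99 dummy can then buy out the repricer’s stock at the floor and corner the listing, which is exactly the practice Amazon’s rules of conduct prohibit.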
Here is more, “Amazon robo-pricing sparks fears.”
The author is Jonathan Brogaard of Northwestern and here is the abstract:
This paper examines the impact of high frequency traders (HFTs) on equities markets. I analyze a unique data set to study the strategies utilized by HFTs, their profitability, and their relationship with characteristics of the overall market, including liquidity, price efficiency, and volatility. I find that in my sample HFTs participate in 77% of all trades and that they tend to engage in a price-reversal strategy. I find no evidence suggesting HFTs withdraw from markets in bad times or that they engage in abnormal front-running of large non-HFTs trades. The 26 high frequency trading (HFT) firms in the sample earn approximately $3 billion in profits annually. HFTs demand liquidity for 50.4% of all trades and supply liquidity for 51.4% of all trades. HFTs tend to demand liquidity in smaller amounts, and trades before and after a HFT demanded trade occur more quickly than other trades. HFTs provide the inside quotes approximately 50% of the time. In addition if HFTs were not part of the market, the average trade of 100 shares would result in a price movement of $.013 more than it currently does, while a trade of 1000 shares would cause the price to move an additional $.056. HFTs are an integral part of the price discovery process and price efficiency. Utilizing a variety of measures introduced by Hasbrouck (1991a, 1991b, 1995), I show that HFTs trades and quotes contribute more to price discovery than do non-HFTs activity. Finally, HFT reduces volatility. By constructing a hypothetical alternative price path that removes HFTs from the market, I show that the volatility of stocks is roughly unchanged when HFT initiated trades are eliminated and significantly higher when all types of HFT trades are removed.
The paper you can find here, and I thank a loyal MR reader for the pointer.
The participants were myself, Felix Salmon, and Mike Konczal (of the excellent Rortybomb) and the link is here. I haven't heard the final editing but at the time I thought the questioners did a very good job. They started off with the question of which financial innovations of the last twenty-five years — if any — have been of real value.
1. On one hand, critics wish to charge that there is little or no advantage to having prices move more quickly to reflect new information. On the other hand, some of these same critics charge that short-run volatility of prices — assuming this is in fact the result of HFT (and that is not proven) — creates social costs. That's not quite a contradiction but it is an odd mix of views about the relevance of the short run.
2. I haven't seen a good estimate, or for that matter a bad estimate, of the social loss involved from investing resources in HFT. Even if the practice has no gain, I suspect the loss is small. It's the symbolic nature of the issue which excites people — bailed-out elites doing fancy things with powerful computers in a non-egalitarian manner — rather than the belief that it is a policy priority. Even if you think HFT is bad, on an actual list of bad policies or practices in our world, would it be in the top million? Mostly it's a canvas on which to paint complaints about the continuing political and economic power of finance, but we shouldn't let that skew our judgment of the practice itself.
3. There is no argument to date, and probably no argument period, that HFT can lead to financial insolvency or collapse on a major scale. The cost, if there is one, is that the associated trading strategies bring a temporary collapse of asset prices for some period of time (how long?) or perhaps greater ongoing price volatility, or uncertainty about order execution, in the short run.
When I read that HFT may give markets "a new, currently unknown set of emergent properties" I think buying opportunity.
4. Research by Hans Stoll indicated that program trading was not in fact an instrumental culprit behind Black Monday in 1987, yet media coverage of HFT seems to be indicating that it was. Many of the HFT debates echo themes from the earlier program trading debates from the late 1980s but in fact program trading did not turn out to be a major problem. We have been down this path before and it turned out there was much less there than the critics thought at the time.
5. The more I read these debates, the more nervous I get about the idea of a financial products safety commission. Essentially on innovation we're seeing a flipping of the burden of proof and I don't think it is possible to easily fine-tune that flipping in a way to capture good innovations and rule out bad ones. We should still follow the rule of regulating practices shown to be harmful or likely to be harmful.
A few MR readers ask about high-frequency trading. Senators are calling it unfair because some traders have access to more powerful computers, and better quants, than do others. The traders with the most powerful aids get there first and make more money. Here is a typical critique. Felix Salmon is also skeptical.
I do not worry about high-frequency trading. Telegraphs and telephones also brought their own, earlier versions of high-frequency trading. As did stock index futures. There are second-best arguments relating to hockey helmets and the like but that is the case with most forms of progress and greater economic speed. You don't have to think that the current profits measure the current social value of high-frequency trading to argue that the overall trend should be allowed. The correct judgment of efficiency occurs at the system-wide level, not at the level of the individual trading strategy. The short-run story is that private profits exceed social returns but in the longer run the trading activity and liquidity brings increasing social returns and better communication of information.
I'm not a believer in the strong versions of efficient markets hypotheses, so I do admit that high-frequency trading, like just about every other trading strategy, can bring short-run "whiplash" effects on market prices. But if you don't like it, you can trade yourself at much lower frequencies, which is probably what you should be doing anyway. At the same time high-frequency trading smooths out or shortens many other cases of price whiplash. High-frequency trading brings more liquidity into the market. Call it "low quality liquidity" if you wish, but it still looks like net liquidity to me.
The complaint is that this liquidity sometimes vanishes. Maybe high-frequency trading can scare other traders out of the market; that charge has been leveled against every method of informed trading. In the short run it is sometimes true, but markets respond by upping the general requirements for quality trading, and many market participants rise to meet the new standard or else switch to longer time horizons.
On the critical side there is lots of talk of "unfairness" and "manipulation," combined with snide references to the financial crisis. I'd like to see a serious efficiency argument against high-frequency trading outlined and defended, without the polemics. That would include a case that regulation will prove workable and catch only the "bad liquidity," while at the same time avoiding capture by envious and inferior competitors.
If high-frequency trading is used to trick other traders into revealing their demand schedules, and then canceling orders, I can see a case for regulating that particular practice. On that issue, here is background, from a critic, but note that these charges seem to be unverified.
The philosophical question is why it might possibly be beneficial to have market prices adjust within five seconds rather than within fifteen. One second rather than five? 0.25 rather than one? If you had been writing in the year 1800, what comparisons would you have chosen?
Remember that old comic book where they had Superman race against The Flash? The Flash won. Someone had to, just keep that in mind.
American adults had sex about nine fewer times per year in the early 2010s compared to the late 1990s in data from the nationally representative General Social Survey, N = 26,620, 1989–2014. This was partially due to the higher percentage of unpartnered individuals, who have sex less frequently on average. Sexual frequency declined among the partnered (married or living together) but stayed steady among the unpartnered, reducing the marital/partnered advantage for sexual frequency. Declines in sexual frequency were similar across gender, race, region, educational level, and work status and were largest among those in their 50s, those with school-age children, and those who did not watch pornography. In analyses separating the effects of age, time period, and cohort, the decline was primarily due to birth cohort (year of birth, also known as generation). With age and time period controlled, those born in the 1930s (Silent generation) had sex the most often, whereas those born in the 1990s (Millennials and iGen) had sex the least often. The decline was not linked to longer working hours or increased pornography use. Age had a strong effect on sexual frequency: Americans in their 20s had sex an average of about 80 times per year, compared to about 20 times per year for those in their 60s. The results suggest that Americans are having sex less frequently due to two primary factors: An increasing number of individuals without a steady or marital partner and a decline in sexual frequency among those with partners.
Next year the innovative swimming suits that are causing world records to fall at a rapid pace will be banned. Michael Mandel wonders if this is the beginning of the counterrevolution against technological progress, and Tyler argues “essentially on innovation we’re seeing a flipping of the burden of proof and I don’t think it is possible to easily fine-tune that flipping in a way to capture good innovations and rule out bad ones.” Believe it or not, Mandel really was talking about swimsuits. Tyler, however, was talking about high-speed trading, but is there much difference between the two? I don’t think so.
High-tech swimming suits and trading systems are primarily about distribution not efficiency. A small increase in speed over one’s rivals has a large effect on who wins the race but no effect on whether the race is won and only a small effect on how quickly the race is won. We get too much investment in innovations with big influences on distribution and small (or even negative) improvements in efficiency and not enough investment in innovations that improve efficiency without much influencing distribution (i.e. innovations in goods with big positive externalities).
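The distribution-versus-efficiency point can be made concrete with a toy model (my own illustration; the numbers are hypothetical, not from the post): a small latency edge decides which firm captures a fixed arbitrage profit, but the total surplus captured is identical either way, so spending to shave milliseconds is purely redistributive.

```python
# Toy model of a positional race: whoever is faster takes the whole
# (fixed) arbitrage profit. Speed changes the winner, not the total.

def race(latency_a_ms, latency_b_ms, profit=100.0):
    """Return (winner, winner_profit, total_profit) for one arb opportunity."""
    winner = "A" if latency_a_ms < latency_b_ms else "B"
    return winner, profit, profit  # winner takes all; total is unchanged

# Before the arms race: A wins at 5.0 ms vs 5.1 ms.
print(race(5.0, 5.1))

# B invests heavily to shave 0.2 ms and now wins -- but the total
# profit captured (the "efficiency" side) is exactly the same.
print(race(5.0, 4.9))
```

The spending that moves B from 5.1 ms to 4.9 ms is privately rational but socially wasted, which is the sense in which these innovations are "primarily about distribution."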
One difference between swimsuits and trading systems is that the former are regulated by FINA, the federation that administers international competition in aquatic sports. We have some hope that a group like FINA can internalize the major externalities, both because it encompasses the primary players in the market and because externalities outside of the market are likely to be small (swimming rules are unlikely to cause non-swimmers many problems). Thus, FINA’s rules on swimsuits have some claim to efficiency. Note that we see similar “anti-innovation” rules in many other sports, such as car racing. NASCAR, for example, does not allow stock cars to use fuel injectors even though this innovation is now standard on production cars.
NASDAQ (and the other exchanges) are the logical equivalents of NASCAR and FINA in that they can internalize the externalities among the primary players. Thus, if the exchanges were to regulate various high-speed trading strategies, I wouldn’t have any problems with that.
But would exchange regulation go far enough? Unfortunately we have learned that the exchanges don’t internalize systemic risk. Trading rules can cause non-traders many problems. As a result, I think there is a case to be made for greater regulation than the exchanges would provide. There is good reason to be skeptical about regulation in general but since this product, “financial innovation,” is primarily about distribution I’m less worried about regulation in finance than in fields where innovation is more closely tied to efficiency.
I do, it seems. Don’t tell my suppliers, but I am a big fan of zero price search. Mark Aguiar and Erik Hurst write:
Using scanner data and time diaries, we document how households substitute time for money through shopping and home production. We find evidence that there is substantial heterogeneity in prices paid across households for identical consumption goods in the same metro area at any given point in time. For identical goods, prices paid are highest for middle-aged, rich, and large households, consistent with the hypothesis that shopping intensity is low when the cost of time is high. The data suggest that a doubling of shopping frequency lowers the price paid for a given good by approximately 10 percent. [TC: is that all????] From this elasticity and observed shopping intensity, we impute the shopper’s opportunity cost of time, which peaks in middle age at a level roughly 40 percent higher than that of retirees [emphasis added]. Using this measure of the price of time and observed time spent in home production, we estimate the parameters of a home production function. We find an elasticity of substitution between time and market goods in home production of close to 2. Finally, we use the estimated elasticities for shopping and home production to calibrate an augmented lifecycle consumption model. The augmented model predicts the observed empirical patterns quite well. Taken together, our results highlight the danger of interpreting lifecycle expenditure without acknowledging the changing demands on time and the available margins of substituting time for money.
Here is the paper, and thanks to Bruce Bartlett for the pointer. We also learn that people with children pay higher prices (presumably they have less time to search) and people in their forties with children pay the highest prices of all, six to eight percent more than people in their twenties or sixties.
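To put a number on the “10 percent per doubling” finding: under a constant-elasticity specification, the implied elasticity of price paid with respect to shopping frequency is ln(0.9)/ln(2) ≈ −0.152. A quick back-of-the-envelope sketch (my own illustration, not the authors’ code):

```python
import math

# Implied constant elasticity: doubling shopping frequency lowers the
# price paid by ~10%, so relative price = f**e with 2**e = 0.9.
e = math.log(0.9) / math.log(2)   # approximately -0.152

def relative_price(freq_ratio):
    """Price paid relative to baseline when shopping freq_ratio times as often."""
    return freq_ratio ** e

print(round(relative_price(2), 3))   # 0.9  -- doubling saves 10%
print(round(relative_price(4), 3))   # 0.81 -- quadrupling saves ~19%
```

The compounding is worth noticing: shopping four times as often does not save 20 percent but roughly 19, since the 10 percent discount applies to an already-reduced price.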
I also take these results to imply that poor households, which shop more frequently and pay lower prices, are better off in material terms than CPI-based measures of real income will imply. That being said, they also have less time. Fans of the “happiness literature,” which suggests more money above a certain level doesn’t make you better off, should favor less search. After all, we are told that people enjoy time spent with friends more than either money or sex. So does this view (not mine) suggest that we shut down discount outlets and induce more consumption of time? Are single price monopolies better than price discrimination? Is Marshall’s the true enemy of the middle class?