What I’ve been reading

June 21, 2016 at 1:03 am in Books

1. Andrej Svorencik and Harro Maas, editors, The Making of Experimental Economics: Witness Seminar on the Emergence of a Field.  Transcribed dialogue on the origins and history of the field, featuring many of the key players, including Vernon Smith and Charles Plott.  There should be a book like this — or better yet a web site — for every movement, major debate, new method, and school of thought.

2. Adam Kucharski, The Perfect Bet: How Science and Math Are Taking the Luck Out of Gambling.  The subtitle is an exaggeration, but this is an interesting topic and book.  There is invariably a frustrating element to such an investigation, because the best schemes are hard to uncover or verify.  Nonetheless have you not thought — as I have — that a determined, Big Data-crunching, super smart entity could in fact beat the basketball odds just ever so slightly?

3. Svetlana Alexievich, Secondhand Time: The Last of the Soviets.  A good book, and a good introduction to her writing.  I have to say though, I did not find this incredibly profound or original.  Chernobyl is deeper and more philosophical.

4. Srinath Raghavan, India’s War: World War II and the Making of Modern South Asia.  Consistently well-written and interesting, the title says it all.

Three useful country/topic books on Latin America are:

Lee J. Alston, Marcus Andre Melo, Bernardo Mueller, and Carlos Pereira, Brazil in Transition: Beliefs, Leadership, and Institutional Change.

Richard E. Feinberg, Open for Business: Building the New Cuban Economy.

Dickie Davis, David Kilcullen, Greg Mills, and David Spencer, A Great Perhaps?: Colombia: Conflict and Convergence.  After Uruguay, is Colombia not the longest-standing democracy in South America?

1 too hot for MR June 21, 2016 at 1:32 am

2. Naive on Tyler’s part. Sports gambling is already a battle of quants vs quants, with PhDs and big data in full effect on both sides. The punters in the middle have no idea what they’re up against, similar to their brethren in retail investing.

2 Ray Lopez June 21, 2016 at 2:47 am

Pleeeze. The house has always won against the ‘punters in the middle’. But no matter how smart a punter you are, even if your name is Edward Thorp (http://blackjacklife.com/blackjack-legends-edward-thorp/), you will, long term, lose.

As for quants in the financial field, they consistently underperform passive index funds; hedge funds come out ahead only through hype and excessive fees. It is inside information, rarely math, that gives you a permanent edge on Wall Street.

3 Michael Josem June 21, 2016 at 5:09 am

You’re obviously not going to win against the house at games like roulette or blackjack.

But there certainly are opportunities to win against the house at contests like betting on real world events – if your information is better than the house’s, you’re going to fundamentally be able to make better decisions.

4 Unanimous June 21, 2016 at 5:23 am

People are regularly ejected from casino blackjack because they can win consistently. Casinos win overall only by refusing to accept bets from good players.

5 Unanimous June 21, 2016 at 5:19 am

There are quants that consistently outperform in both gambling and finance. Many firms that market themselves as quants don’t, and those publicly tracked and studied may not on average, but that isn’t the same as no quants outperforming.

6 phil June 21, 2016 at 7:50 am

agree

—————-

Billy Walters, a core member of the Computer Group, has, however, stayed in the game; he now has a staff of consulting mathematicians who have built advanced predictive models to project scores. Walters, Kent and their syndicates stood basically alone until the late 1990s, when PCs became powerful enough to do the computation work required by predictive models, and more data became available to feed them.

Voulgaris was well aware of these predecessors. Analytics and predictive modeling had “always fascinated me,” he says. “I’d always wanted to have a model of sorts.” Throughout his career, Voulgaris had been what is known as a subjective bettor, albeit one so astute that he became a whale. Two huge bets — both for the Lakers to win the title in 1999 and 2000 — had turned about $80,000 in savings into more than $1 million, his first fat bankroll. As a purely subjective bettor, Voulgaris had been placing perhaps 350 individual wagers each season. But after the disastrous end to the 2004 season, with his edge gone, he decided that he should increase his betting frequency by an order of magnitude but decrease the sums he was putting at risk on each wager. It only made probabilistic sense. If his return on investment (ROI) fell from 20 percent to, say, 5 percent, that was okay. Five percent of $50 million is better than 20 percent of $5 million (all figures are hypothetical; Voulgaris is as cagey as any gambler about the true size of his bankroll). This new approach would require an enormous amount of research and analysis. It would require projecting a score for each and every game in an NBA regular season — all 1,230. A single human mind would be overwhelmed by the workload; only a computer program could handle it.
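A minimal sketch of that volume-versus-edge arithmetic, using only the hypothetical figures quoted above (the bankroll totals and ROIs are the article’s illustrative numbers, not real data):

```python
# Volume vs. edge, with the article's hypothetical figures.
few_big = 5_000_000 * 0.20       # ~350 bets/season at a 20% ROI -> $1.0M
many_small = 50_000_000 * 0.05   # ~3,500 bets/season at a 5% ROI -> $2.5M
print(f"${few_big:,.0f} vs ${many_small:,.0f}")  # the thinner edge wins on volume
```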

“If you think about it,” he says, “you’d be a slave to the game of basketball otherwise.”

Voulgaris chose the right moment to start building a predictive model for NBA games. Four years earlier, in the 2002-03 season, the league had for the first time made play-by-play information available to the public, whereas before only box scores were published. This trove of fresh information had no immediate practical value, except perhaps to assuage fan curiosity. But by 2006, a large enough sample of data had accumulated to employ it with scientific rigor.

To help him build his model, Voulgaris required a specialist in the field, a mind trained in the codes of statistics, mathematics and computer science. He started the search in 2005. It took him two years and six individual tryouts — most of those interviewees were found online, Voulgaris says, and two of them landed in NBA front offices — to find the right person. The right person was a literal math prodigy. As a preteen, he had won national math contests; he had been the subject of awestruck articles in major newspapers. He had scored a perfect 800 on the math portion of the SAT when he was in seventh grade. At the time of his interview with Voulgaris, he had just quit a high-paying job designing algorithms for an East Coast hedge fund with a roster of Nobel-grade quant talent. Voulgaris does not wish to have the name of this math whiz appear in print, presumably out of fear that some rival will attempt to find the whiz — let’s call him the Whiz — and poach him. When I visit Voulgaris at his rental in the Hollywood Hills, he tells me that he’s recently made the Whiz his partner. “50-50?” I ask.

“No.”

The relationship got off to a rocky start. In 2007 the Whiz basically spun his wheels striving to build a model on his own during his first offseason in Voulgaris’ employ. “He was optimistic that he’d be able to come up with something by the time basketball season started,” Voulgaris says, “and he just flailed away.” Voulgaris decided to shorten the leash, and together the two determined that what they needed was a program that could simulate a game of basketball between any two teams at any point in a season and spit out a projected score. To do so, they would have to break the game down into its basic unit, the possession. Each simulation would therefore be a series of mini-simulations. First, the program would have to predict the number of possessions each matchup would likely produce. Then it would need to judge the likeliest outcome of each possession: Score or no score; one point, two points or three; micro-forecasts ascertained from historical performance data. It would also have to take into account a vast number of potential occurrences, each missed shot or successful rebound creating the possibility of still other occurrences — a garden of explosively forking paths, as if in parallel universes. The program would run tens of thousands of simulations for each matchup, discarding the most outlandish or improbable results. It would be a black box — prophecy as output.
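As a rough sketch of that structure (the per-possession probabilities, possession count, and function name below are all invented for illustration; the real Ewing is vastly more granular, matchup-aware and lineup-aware), the simulation loop might look like this:

```python
import random

def simulate_matchup(possessions, dist_a, dist_b, trials=10_000):
    """Repeated possession-level simulations of one game.
    dist_a / dist_b map per-possession point outcomes to
    probabilities; all numbers here are illustrative."""
    totals_a, totals_b = [], []
    for _ in range(trials):
        # Each simulated game is a series of mini-simulations, one per possession.
        totals_a.append(sum(random.choices(list(dist_a), list(dist_a.values()), k=possessions)))
        totals_b.append(sum(random.choices(list(dist_b), list(dist_b.values()), k=possessions)))
    # Discard the most outlandish results (trim 1% from each tail), then average.
    trim = trials // 100
    totals_a = sorted(totals_a)[trim:-trim]
    totals_b = sorted(totals_b)[trim:-trim]
    return sum(totals_a) / len(totals_a), sum(totals_b) / len(totals_b)

# Hypothetical per-possession scoring distributions for two teams.
print(simulate_matchup(96,
                       {0: 0.50, 1: 0.05, 2: 0.33, 3: 0.12},
                       {0: 0.52, 1: 0.05, 2: 0.32, 3: 0.11}))
```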

Between the statistical analysis, the algorithms and the programming, it took two years to create their first model, version 1.0. Voulgaris continued to bet subjectively, marking time until the model was ready. When they finished, they called it Ewing. (It wasn’t named after Patrick, per se, but after the “Ewing Theory,” a purported phenomenon famously described by Bill Simmons under which a team improves whenever its overrated superstar leaves the franchise.) At some point in the process of breaking the game down into its component parts, they realized that Ewing would also require a kind of feeder model, one that could forecast the lineups a team would most likely use each game and the minutes each player was likely to see on the court. They called that model Van Gundy. Van Gundy, in turn, required its own feeder tool, one that would track the overall roster patterns for each team, the trades, the draft picks, the midseason player-acquisition tendencies. That database, less intricate than the other two, they at times jokingly referred to as Morey, as in Daryl Morey, the quant-minded GM of the Rockets. Ewing, Van Gundy, Morey. Player, coach, GM. The names of each corresponding, of course, to the job of each tool.

In the summer of 2007, Voulgaris and the Whiz took Ewing on a dry run, testing the simulator against games from the previous season to see how accurately it could retroactively “predict.” But something funky was happening. Every score the model spit out was higher than the average lines produced by the bookmakers — the standard by which they would be judging themselves. The model, in other words, was recommending that Voulgaris bet the over in every single game. After weeks spent poring through code, Voulgaris finally caught the flaw. When assigning variables in the model, the Whiz had somehow assumed that the league-average free throw percentage was 88 percent, when in fact it’s around 75 percent — an absurd mistake on the part of the Whiz, whose basketball knowledge at the time was practically nil. In more advanced versions of Ewing, they would jettison this primitive free throw method. Now, says Voulgaris, they’ve adjusted Ewing so that it predicts the player most likely to be fouled on any given individual possession, then uses that player’s specific free throw percentage to run its simulation.
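The size of that bug is easy to see with back-of-the-envelope numbers (the attempt count below is made up for illustration, not taken from the article):

```python
FT_ATTEMPTS = 25             # hypothetical free throw attempts per team per game
wrong = FT_ATTEMPTS * 0.88   # the mistaken league-average percentage
right = FT_ATTEMPTS * 0.75   # roughly the actual league average
print(wrong - right)         # ~3.25 extra points per team, ~6.5 per game,
                             # enough to push every projected total over the line
```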

If Ewing has a secret sauce, it’s just this sort of thing: Finding scraps of information, sliced and diced ever more finely, that reveal something about how a system — in this case, a game of pro basketball — will operate in the future. The key is to find those scraps that are more predictive than others. Case in point: One of Ewing’s most important functions is to assign values to players. Each player has two values — on offense and as a defender — and those values are constantly changing. Ewing will also automatically adjust the value depending on who’s guarding whom. Oklahoma City’s Kendrick Perkins “is more valuable guarding Dwight Howard than he is guarding Shane Battier,” Voulgaris says. Why? “Because Howard is a unique player, and you need a big to defend him.” Likewise, according to Voulgaris, Celtics seven-footer Jason Collins is “useless every game, except when he’s guarding Howard, which he does really, really well.” Player values also change across a season and a career. So Voulgaris and the Whiz created, for Ewing, an aging component. Further number-crunching revealed that different types of players, based on position and size, will reach their zeniths at different ages and on trajectories that are possible to predict. Ewing now grasps the curve of the lifespan of the point guard, the shooting guard, the forwards, the center — and predicts the downslope and expiration date of every NBA career.
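A hedged sketch of what such an aging component might look like: fit a simple quadratic to player value by age for one position group and read off the peak. The data points below are fabricated; the real component and its inputs are proprietary.

```python
import numpy as np

# Fabricated (age, value) points for one hypothetical position group.
ages = np.array([21, 23, 25, 27, 29, 31, 33, 35])
vals = np.array([1.0, 2.1, 2.9, 3.2, 3.1, 2.6, 1.8, 0.9])

a, b, c = np.polyfit(ages, vals, deg=2)  # value ~ a*age**2 + b*age + c
peak_age = -b / (2 * a)                  # vertex of the fitted parabola
print(round(peak_age, 1))                # projected zenith for this group
```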

http://espn.go.com/blog/playbook/dollars/post/_/id/2935/undefined

——————

Tell me what those dudes can do if they have access to this data

http://grantland.com/features/the-toronto-raptors-sportvu-cameras-nba-analytical-revolution/

—————

unless you have at least a six-figure research budget (and I wonder if that might not be low), I would stay out of that market with any money you’re not extremely well positioned to lose

7 Jan June 21, 2016 at 8:32 am

Could you elaborate?

8 phil June 21, 2016 at 8:58 am

the best basketball bettors spend significantly on analytic talent

it’s possible that you might be analytically talented yourself; it’s possible that with a library card and a fair amount of Adderall, you might reasonably replicate the analytic methods that they’re employing

it’s also likely that they’re using proprietary data, which, in addition to the library card and Adderall, you’d better be ready to invest something on the order of six figures to replicate

no matter how analytically talented you are, you can’t analyze data you don’t have

someone who does have that data has an advantage over you, even if your analytical chops are similar, or even better

9 j June 21, 2016 at 11:31 am

there’s enough publicly available data to recreate haralabos’ models as they’re described in the article. heck, 538’s CARMELO does the exact same thing as EWING, and there are publicly available line-up projectors for fantasy sports, so why go to the trouble of producing models yourself? tweak theirs if you have some special insight.

fwiw, i’ve been aware of haralabos for 10+ years (online poker, twitter) and i am skeptical he’s doing what he claims he’s doing. if he made a fortune grinding out (not luckboxing) nba wagering at all (prior: 60-75%), i think he did it by betting on systemic referee biases, not by having a super sophisticated model of the game of basketball.

10 phil June 21, 2016 at 12:16 pm

J – your skepticism of haralabos might be well-founded, idk

I’m extremely skeptical that someone using only public information plus tweaks can compete with someone using private information

someone with access to the SportVU data specifically is blowing them out of the water just on the data

11 mkt42 June 21, 2016 at 3:56 pm

Re systemic referee biases: aren’t the identities of referees kept secret until just before tipoff, for exactly this reason? So unless they have access to insider information about upcoming referee assignments, they can’t use referee biases to place their bets?

As for the video information, there’s an active, above-board market for the video data: some of it is essentially motion-capture data, whereas Synergy has teams of human beings who watch the video and encode what they see on the screen. Some of these data are available for free; some you have to pay for.

That’s the readily available stuff that anyone can get (although I think a whole season of video data might cost in the 6-figure range). Some bettors or teams might have their own highly private data sources, or ways of capturing and encoding the video data.

12 phil June 21, 2016 at 9:01 am

in economics-talk

it’s probably a game with fairly significant barriers to entry

the barrier is being able to afford the data to analyze

13 Richard Besserer June 21, 2016 at 1:33 pm

The house always wins. If it doesn’t, its business model isn’t sustainable, so it quickly changes the rules to fix that.

That includes banning card-counting. In the future, that may involve banning bets from people who bet too much like robots.

14 too hot for MR June 21, 2016 at 4:05 pm

This is true except when it isn’t. Anybody smart enough to read and comment on this blog is smart enough to beat the house in Vegas. But most who are smart enough are also too lazy or defeatist or superstitious to understand this or actually act on it, so the business model still works just fine.

15 Unanimous June 21, 2016 at 6:08 pm

The house wins on average, not always. In sports betting, the house is an intermediary balancing the odds so that they make a reasonable profit. Individual gamblers can win by being better than most other gamblers. They are not beating the house. You do not have to be better than the best, just in the top 20%.

16 Todd Kreider June 21, 2016 at 1:46 am

Alexievich’s “Chernobyl” book was published half a year after the WHO put out a health assessment summary in advance of its 2006 full report. A 1-star reviewer pointed out that she didn’t understand nuclear science or radiation science at all. Alexievich 1) wrote that 3,600 workers died shortly after the accident, and 2) quotes someone saying that “The explosion [had Chernobyl gone ‘critical’] would have been between three and five megatons. This would have meant that not only Kiev and Minsk, but a large part of Europe would have been uninhabitable. Can you imagine it? A European catastrophe.” (Sergei Vasilyevich Sobolev, deputy head of the executive committee of the Shield of Chernobyl Association.) The 1-star reviewer continues: “A nuclear reactor is not an atomic bomb. There can be a meltdown, but not a nuclear explosion. Check any source.”

The 2006 WHO report stated that about 50 people had died due to the accident and that another 4,000 would likely eventually have their lives shortened. But the WHO was about 15 years behind radiation studies, and 25 years behind with its 2013 Fukushima assessment, because it assumes any trace of radiation can cause cancer, per the “linear no-threshold” (LNT) hypothesis. For many years prior to Fukushima (2011), radiation health scientists had overwhelmingly stated that this hypothesis is incorrect.

Total deaths from Chernobyl will likely remain around the 50 counted by 1991, plus maybe 100 shortened lives, depending on coming cancer treatments. Total deaths from radiation emitted by the Fukushima accident will be zero.

17 Ray Lopez June 21, 2016 at 2:50 am

Surely you jest. Only an industry hack writes what you do. In fact, the human body repairs itself very well (better than robots do) from radiation, but any amount of radiation is bad; that’s why dental assistants wear lead vests despite science saying a dental x-ray is less exposure than a cross-country flight (and some pilots wear lead vests too). As for journalists getting science wrong, that’s routine, and having worked with medical researchers, I can tell you they too get science wrong (basic stuff like units of measurement, for example).

18 Todd Kreider June 21, 2016 at 4:52 am

You might want to look it up first and *then* type a reply.

19 So Much For Subtlety June 21, 2016 at 5:44 am

There is no especially good reason to think that any amount of radiation is bad for you. The dentists stand behind their lead shields because they do it all day. You and I don’t get a lead apron.

It may be that a small amount of radiation is good for you.

20 Jan June 21, 2016 at 8:47 am

There is a lack of good data on the long-term effects of low-dose radiation (LDR) delivered intermittently, but the best evidence on long-term exposure to LDR says it causes a very small but real increase in leukemia risk.

http://www.nature.com/news/researchers-pin-down-risks-of-low-dose-radiation-1.17876

21 Axa June 21, 2016 at 7:49 am

@Todd, keywords: child thyroid cancer.

However, the latest WHO reports state that psychological effects seem to outweigh all the physical effects combined. The survivors/exposed are slowly killing themselves through habits shaped by fear and misinformation.

22 Todd Kreider June 21, 2016 at 2:04 pm

There was a spike in thyroid cancer among children around 1990/91. It was determined that the 4,000 affected children had drunk highly contaminated milk that should never have gone to market, but that was Ukraine in 1986. Thyroid cancer among children is around 99%+ curable, but a few of the cancers weren’t caught in time; those victims are among the 56 considered dead.

In Fukushima, there was no such exposure, so there will be no thyroid deaths.

@Jan Look at the comments section below the paper.

23 JWatts June 21, 2016 at 11:01 am

“Alexievich’s “Chernobyl” book was published half a year…”

“Alexievich 1) wrote that 3,600 workers died shortly after the accident, and 2) quotes someone saying that ‘The explosion [had Chernobyl gone “critical”] would have been between three and five megatons. This would have meant that not only Kiev and Minsk, but a large part of Europe would have been uninhabitable. Can you imagine it? A European catastrophe.’”

Wow, that’s some mindless drivel. There’s really nothing correct in that.

24 Harun June 21, 2016 at 12:30 pm

Mindless? Or sensationalism to sell books.

25 JWatts June 21, 2016 at 4:33 pm

Good point.

26 Ray Lopez June 21, 2016 at 2:53 am

“Richard E. Feinberg, Open for Business: Building the New Cuban Economy.” – a waste of time, unless you are a speed reader like TC is, to read such an ‘instant book’. Cuba has had a ‘free trade zone’ for foreigners since, I think, the early 1990s, and nobody sane has invested in that country unless they have connections with the Castro mafia. Having visited twice, I find Cuba an expensive, geriatric dead end.

27 mkt42 June 21, 2016 at 3:19 am

2: “Nonetheless have you not thought — as I have — that a determined, Big Data-crunching, super smart entity could in fact beat the basketball odds just ever so slightly?”

Isn’t that what Haralabos Voulgaris does? He presumably keeps his best stuff proprietary but he’s shared enough to show that he does know NBA stats.

http://www.sloansportsconference.com/people/haralabos-voulgaris/

28 Cameron June 21, 2016 at 6:44 am

“Nonetheless have you not thought — as I have — that a determined, Big Data-crunching, super smart entity could in fact beat the basketball odds just ever so slightly”

The problem is that beating the odds “just ever so slightly” still won’t win you any money. You need to win something like 53-55% of the time to break even because of the vig. Levitt has a good article about this: https://www.stat.berkeley.edu/~aldous/157/Papers/Levitt_Gambling_2004.pdf
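For what it’s worth, at the standard 11-10 line the break-even rate works out to about 52.4%; a quick sanity check of the vig arithmetic (not a figure from the Levitt paper):

```python
# At -110 you risk 11 units to win 10, so break-even requires
# p * 10 >= (1 - p) * 11, i.e. p >= 11/21.
p_breakeven = 11 / 21
print(f"{p_breakeven:.4f}")  # 0.5238 -> win ~52.4% just to tread water
```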

29 rayward June 21, 2016 at 6:46 am

2. Isn’t this just an example of Hanson’s prediction markets? After all, most gamblers rely on bias. This is a bit off topic, but Krugman often writes about “insight”, and how complex and cumbersome models lead to a loss of “insight”. http://krugman.blogs.nytimes.com/2016/06/20/tldr-and-modern-macroeconomics-wonkish/ I suppose his critics might argue that Krugman is confusing “insight” with “intuition”, but I would argue that models are no more accurate at predicting the future than intuition. My insight about economics is that it has reached its lofty place in policy because of its claims of accurately predicting the future, and people are obsessed with the future. My suggestion for economists: stick to the past. Predictions about the future run the risk of being wrong, terribly wrong, while predictions about the past are never wrong (even if, or because, history is constantly being rewritten by scholars).

30 rayward June 21, 2016 at 7:10 am

My favorite story of an economist predicting the future is from the Tom Clancy book The Hunt for Red October, in the pivotal scene (in the movie) where Jack Ryan (yes, Ryan has degrees in economics as well as history) predicts that the Soviet submarine commander (played by Sean Connery) would go to starboard in his next “crazy Ivan” because it was the bottom half of the hour, a prediction which, if wrong, could have resulted in Soviet domination of the seas and the world (because of the undetectable Soviet submarine). Of course, Ryan’s prediction was correct and saved the world from Soviet domination. But Ryan acknowledged it was just a guess, as he had a 50/50 chance of being right (the commander could go either to starboard or to port). So it is with economists in predicting the future: they have a 50/50 chance of being right (economic growth is either going up or going down, unemployment is either going up or going down, etc.) but, unlike Jack Ryan, they (at least half anyway) are almost always wrong.

31 Ted Craig June 21, 2016 at 7:15 am

2. This sounds like the description of what De Niro’s character does for a living in the opening scene of “Casino.” Of course, some of his data wouldn’t be available to the average quant.

32 phil June 21, 2016 at 7:57 am

#2

A thought:

In the world of data crunchers

how quickly does the crunching become commoditized?

it seems like the real competitive advantage is having proprietary data that your competitors don’t have, or can’t afford to have

seems like data collection is the real frontier

33 Bernard Yomtov June 21, 2016 at 10:52 am

Nonetheless have you not thought — as I have — that a determined, Big Data-crunching, super smart entity could in fact beat the basketball odds just ever so slightly?

What if the odds are set by “a determined, Big Data-crunching, super smart entity?” Even the same one?

I’m far from an expert on basketball betting, but as I recall you give 11-10 odds. (Maybe it’s lower for big bettors, but they must give something.) So how does one dBD-csse get enough of an edge over another to overcome that?

34 middyfeek June 21, 2016 at 7:19 pm

Short answer: he doesn’t. That’s why the book exists. He’s trying to turn his knowledge (such as it is) into revenue. If he could do it by betting, the book wouldn’t exist.

35 John Thacker June 22, 2016 at 9:47 am

There is invariably a frustrating element to such an investigation, because the best schemes are hard to uncover or verify.

Also, there are a few papers in the literature that document various betting biases, with follow-up papers 5 or 10 years later showing that, unsurprisingly, published edges and schemes tend to go away. The very act of investigation and publication tends to destroy the best schemes, which is why people jealously guard them.

Nonetheless have you not thought — as I have — that a determined, Big Data-crunching, super smart entity could in fact beat the basketball odds just ever so slightly?

How different is this from claiming that someone could do this with the stock market?

People do this, and to some degree the odds adjust, but of course the most important thing is what the average bettor is doing. There have been occasions when the bookies produced Big Data analyses that found biases in their own betting lines, but it would have been unprofitable for them to set more accurate lines (they would have missed out on arbitrage) since the average betting public hadn’t caught on. In general, yes, “smart money” that spends a lot of effort does indeed beat “dumb money” on average. Then again, the smart money spends a lot of time, effort, and energy to do so, so being a market maker is not exactly free.

36 akarlin June 23, 2016 at 12:11 am

Alexievich is a talentless hack who penned odes to the (Polish) founder of the Soviet secret police before Russia-bashing became more profitable.

http://www.unz.com/akarlin/alexievich-likes-iron-felix/
