Monday assorted links

1. AEA vs. EJMR? (Bloomberg).

2. Chollet with some GPT skepticism.

3. Noma in Copenhagen is closing (NYT).  “…Mr. Redzepi admitted to bullying his staff verbally and physically, and has often acknowledged that his efforts to be a calmer, kinder leader have not been fully successful.”

4. “Using a 3-second sample of human speech, it can generate super-high-quality text-to-speech in the same voice. Even the emotional range and acoustic environment of the sample data can be reproduced. Here are some examples.”  Link here.

5. Joshua Kim comment on my higher education worries.  I think he is saying they don’t get enough money!?

6. It seems Mastodon is sinking?

The Extreme Shortage of High IQ Workers

At first glance it seems peculiar that semiconductors, a key item of national strategic interest, should be produced in only a few places in the world, most notably Taiwan, using devices produced only in Eindhoven in the Netherlands by one firm, ASML. Isn’t the United States big enough to be able to support all of these technologies domestically? Yes and no.

Semiconductor manufacturing is the most difficult and complicated manufacturing process ever attempted by human beings. A literal speck of dust can ruin an entire production run. How many people can run such a factory? Let’s look at the United States. The labor force is approximately 164 million people, which sounds like a lot, but half of the people in the labor force have IQs below 100. More specifically, although not everyone in semiconductor manufacturing requires a PhD, pretty much everyone has to be of above average intelligence and many will need to be in the top echelons of IQ.

In the entire US workforce there are approximately 3.7 million workers (2.3%) with an IQ more than two standard deviations above the mean (mean 100, sd 15, normal distribution). Two standard deviations above the mean is pretty good but we are talking professor, physician, attorney level. At the very top of semiconductor manufacturing you are going to need workers with IQs at the 1-in-1000 level or higher, and there are only 164 thousand of these workers in the United States.
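The tail counts above follow directly from the normal-distribution assumption; here is a back-of-envelope check (the 164 million labor force figure is taken from the text):

```python
# Back-of-envelope check of the tail counts above, assuming IQ ~ Normal(100, 15)
# and a US labor force of roughly 164 million.
import math

def tail_fraction(z):
    """Fraction of a standard normal distribution lying above z standard deviations."""
    return 0.5 * math.erfc(z / math.sqrt(2))

labor_force = 164_000_000

# IQ > 130, i.e. two standard deviations above the mean: about 2.3% of workers
frac_2sd = tail_fraction(2.0)
print(round(frac_2sd * labor_force / 1e6, 1))  # ~3.7 million workers

# The 1-in-1000 tail (roughly IQ 146 and above, about 3.1 standard deviations)
print(round(0.001 * labor_force))  # 164,000 workers
```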

164 thousand very high-IQ workers are enough to run the entire semiconductor industry, but you also want some of these workers doing fundamental research in mathematics, physics and computer science, running businesses, guiding the military and so forth. Moreover, we aren’t running a command economy. Many high-IQ workers won’t be interested in any of these fields but will want to study philosophy, music or English literature. Some of them will also be lazy! I’ve also assumed that we can identify all 164 thousand of these high-IQ workers, but discrimination, poverty, poor health, bad luck and other factors will mean that many of these workers end up in jobs far below their potential–the US might be able to place only, say, 100,000 high-IQ workers in high-IQ professions, if we are lucky.

It’s very difficult to run a high-IQ civilization of 330 million on just 100,000 high-IQ workers–the pyramid of ability extends only so far. To some extent, we can economize on high-IQ workers by giving lower-IQ workers smarter tools and drawing on non-human intelligence. But we also need to draw on high-IQ workers throughout the world–which explains why some of the linchpins of our civilization end up in places like Eindhoven or Taiwan–or we need many more Americans.

What is an optimum degree of LLM hallucination?

Ideally you could adjust a dial and set the degree of hallucination in advance.  For fact-checking you would choose zero hallucination; for poetry composition, life advice, and inspiration you might want more hallucination, to varying degrees of course.  After all, you don’t choose friends with zero hallucination, do you?  And you do read fiction, don’t you?

(Do note that you can ask the current version for references and follow-up — GPT is hardly as epistemically crippled as some people allege.)

In the meantime, I do not want an LLM with less hallucination.  The hallucinations are part of what I learn from.  I learn what the world would look like, if it were most in tune with the statistical model provided by text.  That to me is intrinsically interesting.  Does the matrix algebra version of the world not interest you as well?

The hallucinations also give me ideas and show me alternative pathways.  “What if…?”  They are a form of creativity.  Many of these hallucinations are simple factual errors, but many others have embedded in them alternative models of the world.  Interesting models of the world.  Ideas and inspirations.  I feel I know what question to ask or which task to initiate.

Oddly enough, for many queries what ChatGPT most resembles is…don’t laugh — blog comments.  Every time I pose a query it is like putting a blog post out there, or a bleg, and getting a splat of responses right away, and without having to clog up MR with all of my dozens of wonderings every day.  Many of those blog comment responses are hallucinations.  But I learn from the responses collectively, and furthermore some of them are very good and also very accurate.  I follow up on them on my own, as it should be.

LLMs are like giving everyone their own comments-open blog, with hallucinating super-infovores as the readers and immediate response and follow-up when desired.  Obviously, the people with some background in that sector, if I may put it that way, will be better at using ChatGPT than others.

(Not everyone is good at riding a horse either.)

Playing around with GPT has in fact caused me to upgrade significantly my opinion of MR blog comments — construed collectively — relative to other forms of writing.

Please do keep in mind my very special position.  The above may not apply to you.  I have an RA to fact-check my books, and this process is excellent and scrupulous.  Varied and very smart eyes look over my Bloomberg submissions.  MR readers themselves fact-check my MR posts, and so on.  Having blogged for more than twenty years, I am good at using Google and other methods of investigating reality.  At the margin, pre-LLM, I already was awash in fact-checking.  If GPT doesn’t provide me with that, I can cope.

And I don’t take psychedelics.  R-squared is never equal to one anyway, not in the actual world.  And yet models are useful.  Models too are hallucinations.

So if GPT is doing some hallucinating while at work, I say bring it on.

Sunday assorted links

1. Nathan Labenz on Gary Marcus and AI.  Here is Gary Marcus, responding and critical of GPT.

2. And top AI conference bans the use of AI to write papers for the conference.  And GPT in your email, and more, coming soon?  And a new open source LLM — how good is it?  And Stanford course on LLMs.

3. Classical music markets are pretty efficient! (the top-performed composers).

4. Ezra Klein on flying cars and the fear of energy (NYT).

5. Scott Aaronson skeptical about the latest quantum reports.

Do pay transparency laws raise wages?

It seems not:

Labour advocates champion pay-transparency laws on the grounds that they will narrow pay disparities. But research suggests that this is achieved not by boosting the wages of lower-paid workers but by curbing the wages of higher-paid ones. A forthcoming paper by economists at the University of Toronto and Princeton University estimates that Canadian salary-disclosure laws implemented between 1996 and 2016 narrowed the gender pay gap of university professors by 20-30%. But there is also evidence that they lower salaries, on average. Another paper by professors at Chapel Hill, Cornell and Columbia University found that a Danish pay-transparency law adopted in 2006 shrank the gender pay gap by 13%, but only because it curbed the wages of male employees. Studies of Britain’s gender-pay-gap law, which was implemented in 2018, have reached similar conclusions.

Another misconception about pay-transparency laws is that they strengthen the bargaining power of workers. A recent paper by Zoe Cullen of Harvard Business School and Bobby Pakzad-Hurson of Brown University analysed the effects of 13 state laws passed between 2004 and 2016 that were designed to protect the right of workers to ask about the salaries of their co-workers. The authors found that the laws were associated with a 2% drop in wages, an outcome which the authors attribute to reduced bargaining power. “Although the idea of pay transparency is to give workers the ability to renegotiate away pay discrepancies, it actually shifts the bargaining power from the workers to the employer,” says Mr Pakzad-Hurson. “So wages are more equal,” explains Ms Cullen, “but they’re also lower.”

Here is more from The Economist.

Nathan Labenz on AI pricing

I won’t double indent, these are all his words:

“I agree with your general take on pricing and expect prices to continue to fall, ultimately approaching marginal costs for common use cases over the next couple years.

A few recent data points to establish the trend, and why we should expect it to continue for at least a couple years…

  • StabilityAI has recently reduced prices on Stable Diffusion down to a base of $0.002 / image – now you get 500 images / dollar.  This is a >90% reduction from OpenAI’s original DALLE2 pricing.

Looking ahead…

  • the CarperAI “Open Instruct” project – also affiliated with (part of?) StabilityAI, aims to match OpenAI’s current production models with an open source model, expected in 2023
  • 8-bit and maybe even 4-bit inference – simply by rounding weights off to fewer significant digits, you save memory requirements and inference compute costs with minimal performance loss
  • mixture of experts techniques – another take on sparsity, allows you to compute only certain dedicated sub-blocks of the overall network, improving speed and cost
  • distillation – a technique by which larger, more capable models can be used to train smaller models to similar performance within certain domains – Replit has a great writeup on how they created their first release codegen model in just a few weeks this way!

And this is all assuming that the weights from a leading model never leak – that would be another way things could quickly get much cheaper… ”

TC again: All worth a ponder, I do not have personal views on these specific issues, of course we will see.  And here is Nathan on Twitter.
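One of Nathan’s bullets above — saving memory “simply by rounding weights off to fewer significant digits” — can be sketched in a few lines. This is a toy illustration of symmetric int8 weight quantization, not any particular lab’s or library’s implementation:

```python
# Toy illustration of 8-bit weight quantization: store weights as int8 plus a
# per-tensor scale, then dequantize at inference time. A sketch only -- real
# systems use refinements such as per-channel scales and outlier handling.
import numpy as np

def quantize_int8(weights):
    """Map float weights to int8 with a single symmetric scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)

# Memory drops 4x (float32 -> int8) while reconstruction error stays small.
print(q.nbytes, w.nbytes)                                      # 1000 vs 4000 bytes
print(float(np.abs(dequantize(q, scale) - w).max()) < scale)   # True
```

The rounding error per weight is at most half the scale factor, which is why performance loss is minimal for well-behaved weight distributions.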

“Unveiling the Price of Obscenity”

Does legitimating sinful activities have a cost? This paper examines the relationship between housing demand and overt prostitution in Amsterdam. In our empirical design, we exploit the spatial discontinuity in the location of brothel windows created by canals, combined with a policy that forcibly closed some of the windows near these canals. To pin down their effect on housing prices, we apply a difference-in-discontinuity (DiD) estimator, which controls for the precise location of brothel windows and the effect of other policies and local developments. Our results show that the housing prices are discontinuous at the bordering canals, and this discontinuity nearly disappears after closures. The discontinuity is also found to decrease with the distance to brothels, disappearing after 300 yards. Our estimates indicate that homes right next to sex workers were 30 percent cheaper before the closures. This result seems unrelated to the presence of other businesses, such as bars and cannabis shops. Instead, the price discount is partly explained by petty crimes. However, 73 percent of the effect remains unexplained after controlling for many forms of crime and risk perception. Our findings suggest that households tend to be against the visible presence of sex workers and related nuisances, reaffirming their marginalization.

That is from a new paper by Erasmo Giambona and Rafael P. Ribas, via a highly reputable man.
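The difference-in-discontinuity logic in the abstract reduces, in its simplest form, to comparing the price jump at the canal boundary before and after the closures. The numbers below are made up for illustration; they are not the paper’s data:

```python
# Minimal difference-in-discontinuity: the jump in mean log home prices at the
# canal boundary after the window closures, minus the jump before them.
# All numbers are hypothetical, chosen only to illustrate the estimator.
def jump(inside, outside):
    """Discontinuity at the boundary: mean just inside minus mean just outside."""
    return sum(inside) / len(inside) - sum(outside) / len(outside)

# Hypothetical mean log prices near the canal, before and after the closures
before_inside, before_outside = [5.0, 5.1, 4.9], [5.4, 5.35, 5.45]
after_inside,  after_outside  = [5.3, 5.35, 5.25], [5.4, 5.38, 5.42]

did = jump(after_inside, after_outside) - jump(before_inside, before_outside)
print(round(did, 3))  # how much of the boundary discontinuity the closures erased
```

Controlling for other policies and local developments is what distinguishes the paper’s estimator from this bare two-by-two comparison.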

Saturday assorted links

1. Books on Xi’s shelf.

2. Chat with historical figures, 20,000 of them.  When will they do economists?  And using GPT for therapy, how do you think it did?  People preferred the GPT, until they found out they were speaking with a machine.

3. What some top chess players won in prize money.

4. Claims about quantum computing.

5. Rasheed Griffith on where to eat in Panama.

6. “The use of a longitudinal database of Famine immigrants who initially settled in New York and Brooklyn indicates that the Famine Irish had far more occupational mobility than previously recognized. Only 25 percent of men ended their working careers in low-wage, unskilled labor; 44 percent ended up in white-collar occupations of one kind or another—primarily running saloons, groceries, and other small businesses.”  Link here.

7. AEA meeting update.

GPT and my own career trajectory

For any given output, I suspect fewer people will read my work.  You don’t have to think the GPTs can copy me, but at the very least lots of potential readers will be playing around with GPT in lieu of doing other things, including reading me.  After all, I already would prefer to “read GPT” than to read most of you.  I also can give it orders more easily.  At some point, GPT may substitute directly for some of my writings as well, but that conclusion is not required for what follows.

I expect I will invest more in personal talks, face to face, and also “charisma.”  Why not?

Well-known, established writers will be able to “ride it out” for long enough, if they so choose.  There are enough other older people who still care what they think, as named individuals, and that will not change until an entire generational turnover has taken place.

I expect the entire calculus here is very different for someone who is twenty years old, and I hope to write more on that soon.

Today, those who learn how to use GPT and related products will be significantly more productive.  They will lead integrated small teams to produce the next influential “big thing” in learning and also in media.  Most current contributors will miss that train almost entirely, just as so many people missed the importance of the internet for learning and also for media.  But we still don’t know how important this “next big thing” will be, for instance, compared to YouTube.

In the short run, using GPT for ideas and inspiration will be more important than using it for copy.  As with blogging, I am happy when people attack it, because that raises the moat surrounding it.

Overall the trajectory of change is very difficult to predict, as are the forthcoming technological developments themselves.

How long does a Roman emperor last for?

Of the 69 rulers of the unified Roman Empire, from Augustus (d. 14 CE) to Theodosius (d. 395 CE), 62% suffered violent death. This has been known for a while, if not quantitatively at least qualitatively. What is not known, however, and has never been examined is the time-to-violent-death of Roman emperors. This work adopts the statistical tools of survival data analysis to an unlikely population, Roman emperors, and it examines a particular event in their rule, not unlike the focus of reliability engineering, but instead of their time-to-failure, their time-to-violent-death. We investigate the temporal signature of this seemingly haphazardous stochastic process that is the violent death of a Roman emperor, and we examine whether there is some structure underlying the randomness in this process or not. Nonparametric and parametric results show that: (i) emperors faced a significantly high risk of violent death in the first year of their rule, which is reminiscent of infant mortality in reliability engineering; (ii) their risk of violent death further increased after 12 years, which is reminiscent of wear-out period in reliability engineering; (iii) their failure rate displayed a bathtub-like curve, similar to that of a host of mechanical engineering items and electronic components. Results also showed that the stochastic process underlying the violent deaths of emperors is remarkably well captured by a (mixture) Weibull distribution.

That is from a new paper by Joseph Homer Saleh.  Via Patrick Moloney.  And here are new results on why Roman concrete was so much more durable than the emperors.
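The bathtub-shaped failure rate the abstract describes can be reproduced with a two-component Weibull mixture: one shape parameter below 1 gives the early “infant mortality” risk, one above 1 gives the late “wear-out” risk. The parameters below are illustrative only, not Saleh’s estimates:

```python
# Hazard rate of a two-component Weibull mixture. With one shape below 1
# (early risk) and one above 1 (wear-out), the combined hazard is
# bathtub-shaped. Parameters are illustrative, not fitted to the paper's data.
import math

def weibull_pdf(t, k, lam):
    return (k / lam) * (t / lam) ** (k - 1) * math.exp(-((t / lam) ** k))

def weibull_sf(t, k, lam):
    """Survival function: probability of lasting beyond time t."""
    return math.exp(-((t / lam) ** k))

def mixture_hazard(t, w=0.5, k1=0.6, lam1=5.0, k2=3.0, lam2=15.0):
    pdf = w * weibull_pdf(t, k1, lam1) + (1 - w) * weibull_pdf(t, k2, lam2)
    sf = w * weibull_sf(t, k1, lam1) + (1 - w) * weibull_sf(t, k2, lam2)
    return pdf / sf

# High risk in year 1, a dip mid-reign, rising again after year 12 or so:
for year in (1, 7, 14):
    print(year, round(mixture_hazard(year), 3))
```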

What should I ask Noam Dworman?

I will be doing a Conversation with him.  Noam is the owner of Comedy Cellar, considered by many to be the world’s best comedy club, located in Greenwich Village, NYC.  There is a branch in Las Vegas too.  The Cellar also has its own TV show.

Here is Noam’s LinkedIn page.  Noam also makes music in a band, usually playing guitar.

So what should I ask him?

Friday assorted links

1. More Scott Sumner movie reviews.

2. “Why do they hate the children?”  Hat tip @pmarca.

3. Apple unveils AI-voiced audiobooks.  And some insights into how ChatGPT models work.  And can ChatGPT do analogical reasoning without explicit training?

4. Is Garett Jones channeling the Lord of the Vineyard?

5. Self-perceived attractiveness reduces face mask-wearing intention.

6. 41% of NYC school students were chronically absent last year.

7. Was Vermeer a Jesuit?  And it seems he may have used a camera obscura.

ChatGPT and the revenge of history

I have been posing it many questions about Jonathan Swift, Adam Smith, and the Bible.  Chat does very well in all those areas, and rarely hallucinates.  Is it because those are settled, well-established texts, with none of the drama “still in action”?

I suspect Chat is a boon for the historian and the historian of ideas.  You can ask Chat about obscure Swift pamphlets and it knows more about them than Google does, or Wikipedia does, by a long mile.  Presumably it “reads” them for you?

When I ask about current economists or public intellectuals, however, more errors creep in.  Hallucinations become common rather than rare.  The most common hallucination I find is that Chat invents co-authorships and conference co-sponsorships like crazy.  If you ask it about two living people, and whether they have worked together, the fantasy life version will be rather active, maybe fifty percent of the time?

Presumably that bug will be fixed, but still it seems that for the time being Chat has shifted some real intellectual heft back in antiquarian directions.  Perhaps it is harder for statistical estimation to predict words about events that are still going on?

Here are some tips for using ChatGPT.

Of course Chat is already a part of my regular research and learning routine.  Woe be unto those who cannot or do not use it effectively!  I feel sorry for them; get with the program, people…