*The Kural*

Or Tiruvalluvar’s Tirukkural, translated with a preface by Thomas Hitoshi Pruiksma, just published by Beacon Press.  This poem, a staple (the staple?) of Tamil literature, is believed to have been composed between the third and fifth centuries AD.  The poem covers issues of wisdom, ethics, virtue, wealth, and love.  I cannot compare this version to the Tamil original, but it reads nicely and is attractively placed on the page.

Excerpt:

Is she a siren a rare peacock a woman in jewels —

My heart quakes

As if on attack with an army of sirens — the look

She gives when I look

I did not know death but now I do —

Fierce feminine eyes

They don’t fit this young woman — these eyes

That kill those looking

Is it death an eye a deer — her gaze

Contains all three

There should be some extra indentation after each two lines.  And does anyone have a link to the whole thing recited in Tamil?  I cannot readily find it on YouTube.

Why I think graduate programs should keep macro in the first-year sequence

I have heard that MIT is pushing macro out of its required first-year sequence, though I should note I am not sure what the ex post regime will look like.  But in general I am macro-sympathetic, for the following reasons:

1. Many economics graduate students are from emerging (or retrogressing) economies, and macro issues are truly important for them.

2. Many graduate students are from “developed” economies (with apologies to Peter Thiel), and macro issues are truly important for them.  In America we had a major financial crisis in 2008-2009 and rampant inflation more recently.  It is hardly the case that all the problems have been solved.

3. Macro is the main vehicle for teaching people about economic growth, which is probably the most important topic in economics.

4. The Fed has a very good economics staff, and probably that tradition will be harder to continue if macro is taken out of first-year sequences.

5. You might argue that standards in macro are looser, by the nature of the field.  I would suggest it is easier to advance a new idea in macro, perhaps for the same reason?  Along related lines, macro still has more singly-authored papers, a sign that the field requires less conformity of ideas.

6. If you write down a short list of candidates for “Greatest Economist ever,” did they not all do macro?  Doesn’t that tell us something?

That all said, I would make macro sequences “more practical,” more about economic growth, more about economic history, and less about dynamic programming than is often currently the case.

Saturday assorted links

1. When is impulsive behavior adaptive?

2. Robert Colville on the morality of economic growth.

3. Do “fairy circles” come from plant competition and self-organization?

4. “YouTubers said they destroyed over 100 VHS tapes of an obscure 1987 movie to increase the value of their final copy. They sold it on eBay for $80,600.”  Link here.

5. Transcript of my St. Andrews talk on EA.  Recommended.  And Nietzsche on EA, as done by GPT.

6. Who self-identifies as Native American?

7. Alas Melvyn Krauss has passed away, RIP.  He was generous enough to let me sit in on his NYU class way back when.

8. Ross D. on euthanasia, more than just his prior column (NYT).

Friday assorted links

1. Orthodox Jews 18 percent of U.S. kidney donors.

2. So much Adderall!  Has to be screwy.  And presumably many people get it without prescriptions.

3. Redux: My 2021 talk at OpenAI — recommended.  And Arnold Kling on ChatGPT.

4. The Economist on whether economics has run out of big new ideas.  And should MIT be abolishing first-year macro in the graduate sequence?

5. “Contactless Real-Time Heart Rate Predicts the Performance of Elite Athletes.”

6. “Norway is a wealthy and egalitarian country with a homogeneous educational system, yet achievement gaps between students at the 90th and 10th percentiles of parental income and between students whose parents have at least a master and at most a high school degree are found to be large (0.55–0.93 and 0.70–0.99 SD), equivalent to about 2 to 2.5 years of schooling, and increasing by grade level. ”  Link here.  vtekl.

7. “A pair of Indian men broke a Guinness World Record by visiting all seven continents in 3 days, 1 hour, 5 minutes and 4 seconds.”  Like wise men, they started in Antarctica.

8. Paul Johnson, RIP (NYT).

The economics of non-competes

That is the topic of my latest Bloomberg column, here is one excerpt:

The value of noncompete clauses is easy to illustrate. Say you run a hedge fund. Many members of your trading team will have partial access to your firm’s trading secrets, and if they leave they can take those secrets with them. In the absence of noncompete agreements, firms would be more likely to “silo” information — becoming less efficient and less able to pay higher wages.

Nondisclosure agreements for workers in such positions are common, but they are difficult to enforce — making noncompete agreements more relevant. How exactly might you find out if some other newly created hedge fund is using your trading algorithms?

Or say you are a sales company with a customer list, or a nonprofit with a donor list. An employee who sees those lists could use that information to start a competing business, or take that information to competitors. It seems reasonable in such cases to restrict the ability of employees to “jump ship.”

If noncompetes are banned outright, to repeat, the effect will be less information-sharing within the company. New workers in particular, who have not demonstrated their long-term loyalty, will have a hard time getting access to information and getting ahead. More senior employees will have the advantage, hardly a formula for boosting economic opportunity.

I call for federalism and piecemeal regulation of the practice, not the all-out federal ban proposed by Lina Khan’s once again overreaching FTC.

Thursday assorted links

1. Could we trade with ants?

2. Five trends that will shape urban Africa.

3. Is the Fed losing money on QE?

4. Do Asian-Americans have the world’s highest life expectancy?

5. On immigration (and authority) Daniel Klein is correct.

6. Can ChatGPT pass medical licensing exams?  Speculative GPT thoughts.  And “I taught ChatGPT a language.”  And “The return of the Socratic method, at scale and on demand.”  Get ready, people!  “Rewrite this article so it sounds like a human.”  And ChatGPT recommends the best books on AI.  Get with the program!  Yes, I mean you.  Only yesterday I found the beast very helpful in prepping questions for Glenn Loury (thanks Glenn, that was a great episode!).

7. Roman vs. modern concrete, updated and revised.

My excellent Conversation with Katherine Rundell

One of my favorite CWTs, here is the audio, video, and transcript.  Here is part of the summary:

She joined Tyler to discuss how she became obsessed with John Donne, the power of memorizing poetry, the political implications of suicide in the 17th century, the new evidence of Donne’s faith, the contagious intensity of thought in 17th century British life, the effect of the plague on national consciousness, the brutality of boys’ schooling, the thrills and dangers of rooftop walking, why children should be more mischievous, why she’d like to lower the voting age to 16, her favorite UK bookshop, the wonderful weirdness of Diana Wynne Jones, why she has at least one joke about Belgium in every book, what T.S. Eliot missed about John Donne, what it’s like to eat tarantula, the Kafka book she gives to toddlers, why The Book of Common Prayer is underrated, and more.

Here is one excerpt:

COWEN: Now, you have two books, Rooftoppers and Skysteppers, about rooftop walking. Some might call them children’s books. I’m not sure that’s exactly the right description, but what is the greatest danger with rooftop walking?

RUNDELL: Oh, it’s falling off.

COWEN: What leads you to fall off? If you’re rooftop walking, if you were to fall off, what would be the proximate cause of that event?

RUNDELL: Philippe Petit, who is, of course, one of the great roof walkers of the world and the man who strung the wire between the Twin Towers in 1977, talks about vertigo as a beast that has to be tamed piece by piece, that can never be overcome all at once.

Vertigo, he says, is not the fear that you will fall. It is the fear that you will jump. That, of course, is the thing that, when you are roof walking, you are taming. You are trying to unmoor your sense of danger and of not being able to trust yourself not to jump from your sense of beauty and the vision of a city that you get up high.

I roof-walk for very practical reasons: to see views that would otherwise be not really available to me in an increasingly privatized City of London.

And:

COWEN: For you, what is most interesting in Donne’s sermons?

RUNDELL: The thing I find most interesting would be the radical honesty that he has — that you will find in so few other sermons of the time — about the difficulty of finding God. He is a man who writes often with certainty about the idea of reaching the infinite, the divine. But he also writes this famous passage where he says, “I summon God and my angels, and when God and the angels are there, I neglect them for . . .” I forget what it is. “The sound of a carriage, a straw under my knee, a thought, a chimera, and nothing and everything.”

That sense that, even though he had a brain that could control incredibly rigorous poetry, he did not have a brain that would control itself in prayer. He offered that to his congregation as a vulnerability and a piece of honesty that so few sermoners of the time — who thought of themselves more as a regulatory ideal that should never admit vulnerability — would offer.

Definitely recommended.  And Katherine’s recent book Super-Infinite: The Transformations of John Donne was perhaps my favorite book of last year.

Wednesday assorted links

1. Arnold Kling on the Great Re-Evaluation.

2. Lynne Kiesling now has the Knowledge Problem Substack.  And Matt Yglesias profile.

3. Some new nasal entry results from Fast Grants-funded research.

4. “A Chinese state-owned bank in Hong Kong is offering customers one shot of an mRNA vaccine if they make a deposit of HK$4mn ($512,000)…”  (FT link)

5. An Indian view on who benefits and loses from GPT.

6. Good thread on recent growth miracles.

7. The Americans at Stanford pull the “harmful language” list.

8. Blake Hounshell tribute, RIP, he was great to work with.

My ChinaTalk podcast with Jordan Schneider

Here is the transcript, here is the podcast.  Excerpt:

Jordan Schneider: You mentioned growing up reading classic novels and scholarship. What do you think will be relevant and not relevant about that sort of stuff in our new AI world?

Tyler Cowen: I suspect the classic texts will re-emerge in value. Reading Plato, Kant, or Adam Smith gives you a sense of a vision and big-picture thinking that the AIs won’t be able to give us for some while — maybe never. [If] simply scanning the internet for facts, the AI might give you a very good digest — which you’ll consume in less time — and you’ll then seek out the thing the AI can’t give you at all.

That will, again, be radically original big-picture thinking.

Recommended, interesting throughout.  We also talk about education, therapy, China, the person I envy most, the demand for pets, working for the Aztec empire, my own secret book project, and much more.

Do markets expect unaligned AGI risk?

Here is a new essay by Trevor Chow, Basil Halperin, and J. Zachary Mazlish, all favorite thinkers of mine; an excerpt follows, and these are their words, which I will not double indent:

“In this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:

  1. Long(er) timelines. Financial markets are often highly effective information aggregators (the “efficient market hypothesis”), and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next 30-50 years.
  2. Market inefficiency. Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are therefore too low. There is thus an opportunity for philanthropists to borrow while real rates are low to cheaply do good today; and/or an opportunity for anyone to earn excess returns by betting that real rates will rise.

In the rest of this post we flesh out this argument.

  1. Both intuitively and under every mainstream economic model, the “explosive growth” caused by aligned AI would cause high real interest rates.
  2. Both intuitively and under every mainstream economic model, the existential risk caused by unaligned AI would cause high real interest rates.
  3. We show that in the historical data, indeed, real interest rates have been correlated with future growth.
  4. Plugging the Cotra probabilities for AI timelines into the baseline workhorse model of economic growth implies substantially higher real interest rates today.
  5. In particular, we argue that markets are decisively rejecting the shortest possible timelines of 0-10 years.
  6. We argue that the efficient market hypothesis (EMH) is a reasonable prior, and therefore one reasonable interpretation of low real rates is that since markets are simply not forecasting short timelines, neither should we be forecasting short timelines.
  7. Alternatively, if you believe that financial markets are wrong, then you have the opportunity to (1) borrow cheaply today and use that money to e.g. fund AI safety work; and/or (2) earn alpha by betting that real rates will rise.

An order-of-magnitude estimate is that, if markets are getting this wrong, then there is easily $1 trillion lying on the table in the US treasury bond market alone – setting aside the enormous implications for every other asset class.”

TC again: I am pleased that they wrote a separate companion piece on Cowen’s Third Law.
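
For readers who want the mechanics behind their points 1 and 2, here is the textbook consumption-Euler logic in back-of-the-envelope form.  The notation is the standard one, not necessarily theirs: ρ is pure time preference, γ is relative risk aversion, g is expected consumption growth, and δ is a perceived annual hazard of unaligned-AI catastrophe.

```latex
% Textbook Ramsey/Euler sketch (standard notation, not necessarily the authors'):
%   rho   = pure rate of time preference
%   gamma = coefficient of relative risk aversion (inverse of the EIS)
%   g     = expected growth rate of consumption
%   delta = perceived annual hazard of unaligned-AI catastrophe
\begin{align*}
  r &\approx \rho + \gamma g
      && \text{aligned AI: explosive } g \text{ pushes } r \text{ up} \\
  r &\approx \rho + \delta + \gamma g
      && \text{unaligned AI: the hazard } \delta \text{ pushes } r \text{ up}
\end{align*}
```

Either way, short timelines should show up as high long real rates, which is why the low observed 30- to 50-year rates are doing the work in their argument.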

Tuesday assorted links

1. There are no elected officials left in Haiti.

2. Was the T. Rex actually pretty smart?

3. The Microsoft deal with OpenAI will be big.

4. Non-drinking is on the rise amongst the English.

5. Current problems in Laos.

6. Anti-war Russians, living in Latin America.

7. How the human walk evolved for endurance, not speed (Wired).

8. “A Dutch supermarket chain introduced slow checkouts for people who enjoy chatting, helping many people, especially the elderly, deal with loneliness. The move has proven so successful that they installed the slow checkouts in 200 stores.”  Link here.

Why did the gender wage gap stop narrowing?

During the 1980s, the wage gap between white women and white men in the US declined by approximately 1 percentage point per year. In the decades since, the rate of gender wage convergence has stalled to less than one-third of its previous value. An outstanding puzzle in economics is “why did gender wage convergence in the US stall?” Using an event study design that exploits the timing of state and federal family-leave policies, we show that the introduction of the policies can explain 94% of the reduction in the rate of gender wage convergence that is unaccounted for after controlling for changes in observable characteristics of workers. If gender wage convergence had continued at the pre-family leave rate, wage parity between white women and white men would have been achieved as early as 2017.

That is from a new NBER working paper by Peter Q. Blair and Benjamin Posmanick.  Might the gender wage gap be one economics topic where a naive, mood-affiliated view on it best predicts a bunch of other bad views on totally separate topics?
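
For readers who have not seen one, here is a schematic of the kind of event-study specification the abstract describes — state and year fixed effects plus dummies for years relative to a state’s family-leave adoption.  The data, variable names, and event window below are hypothetical placeholders, not the authors’ actual design.

```python
# Schematic event-study regression around staggered policy adoption.
# Hypothetical panel and variable names -- not the authors' data or specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical state-year panel: 20 states observed 1980-2009.
df = pd.DataFrame({
    "state": np.repeat([f"s{i}" for i in range(20)], 30),
    "year": np.tile(np.arange(1980, 2010), 20),
})
# Each state adopts a family-leave policy in some year between 1985 and 1994.
df["adopt_year"] = df["state"].map({f"s{i}": 1985 + (i % 10) for i in range(20)})
# Event time relative to adoption, binned at +/- 5 years.
df["event_time"] = (df["year"] - df["adopt_year"]).clip(-5, 5)
df["wage_gap"] = rng.normal(size=len(df))  # placeholder outcome

# Event-time dummies (t = -1 omitted as the reference period) plus state and
# year fixed effects, with standard errors clustered by state.
model = smf.ols(
    "wage_gap ~ C(event_time, Treatment(reference=-1)) + C(state) + C(year)",
    data=df,
).fit(cov_type="cluster",
      cov_kwds={"groups": df["state"].astype("category").cat.codes})

# Coefficients on post-adoption event times trace out the policy's estimated effect.
print(model.params.filter(like="event_time"))
```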

How much did pre-ACA Medicaid expansions matter?

This paper examines the impact of Medicaid expansions to parents and childless adults on adult mortality. Specifically, we evaluate the long-run effects of eight state Medicaid expansions from 1994 through 2005 on all-cause, healthcare-amenable, non-healthcare-amenable, and HIV-related mortality rates using state-level data. We utilize the synthetic control method to estimate effects for each treated state separately and the generalized synthetic control method to estimate average effects across all treated states. Using a 5% significance level, we find no evidence that Medicaid expansions affect any of the outcomes in any of the treated states or all of them combined. Moreover, there is no clear pattern in the signs of the estimated treatment effects. These findings imply that evidence that pre-ACA Medicaid expansions to adults saved lives is not as clear as previously suggested.

That is a new NBER working paper from Charles J. Courtemanche, Jordan W. Jones, Antonios M. Koumpias, and Daniela Zapata.

Here are some relevant pictures.  Now, would you expect subsequent Medicaid expansions to have higher, lower, or the same marginal value?
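
For the synthetic control method mentioned in the abstract, here is a bare-bones sketch of the core idea: choose nonnegative donor-state weights summing to one that reproduce the treated state’s pre-expansion mortality path, then read the effect off the post-expansion gap.  The data below are simulated placeholders, and real applications add covariates and permutation inference that this omits.

```python
# Bare-bones synthetic control sketch with simulated data -- not the authors' implementation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_donors, n_pre, n_post = 30, 15, 10

# Simulated mortality rates: donor states and one treated state, pre- and post-expansion.
donors_pre = rng.normal(8.0, 1.0, size=(n_pre, n_donors))
donors_post = rng.normal(8.0, 1.0, size=(n_post, n_donors))
treated_pre = donors_pre[:, :5].mean(axis=1) + rng.normal(0, 0.1, n_pre)
treated_post = donors_post[:, :5].mean(axis=1) + rng.normal(0, 0.1, n_post)

# Choose donor weights (nonnegative, summing to one) that best match the
# treated state's pre-treatment path.
def pre_fit_loss(w):
    return np.sum((treated_pre - donors_pre @ w) ** 2)

w0 = np.full(n_donors, 1.0 / n_donors)
res = minimize(
    pre_fit_loss, w0, method="SLSQP",
    bounds=[(0.0, 1.0)] * n_donors,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
)
weights = res.x

# Estimated effect: post-period gap between the treated state and its synthetic control.
synthetic_post = donors_post @ weights
print(f"Average post-treatment gap: {(treated_post - synthetic_post).mean():.3f}")
```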

Monday assorted links

1. AEA vs. EJMR? (Bloomberg).

2. Chollet with some GPT skepticism.

3. Noma in Copenhagen is closing (NYT).  “…Mr. Redzepi admitted to bullying his staff verbally and physically, and has often acknowledged that his efforts to be a calmer, kinder leader have not been fully successful.”

4. “Using a 3-second sample of human speech, it can generate super-high-quality text-to-speech from the same voice. Even emotional range and acoustic environment of the sample data can be reproduced. Here are some examples.”  Link here.

5. Joshua Kim comment on my higher education worries.  I think he is saying they don’t get enough money!?

6. It seems Mastodon is sinking?

What is an optimum degree of LLM hallucination?

Ideally you could adjust a dial and set the degree of hallucination in advance.  For fact-checking you would choose zero hallucination; for poetry composition, life advice, and inspiration you might want more hallucination, to varying degrees of course.  After all, you don’t choose friends with zero hallucination, do you?  And you do read fiction, don’t you?
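
The closest thing to that dial in current systems is the sampling temperature the LLM APIs already expose — not literally a hallucination setting, but it does trade off sticking to the likeliest tokens against wandering.  A minimal sketch, assuming the openai Python package of that era; the model name, key, and prompts are placeholders:

```python
# Sampling temperature as a rough "hallucination dial".
# Assumes the openai Python package (0.x-era completion API) and a valid API key;
# the model name and prompts below are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask(prompt: str, temperature: float) -> str:
    """Low temperature -> conservative, repeatable answers; high -> looser, more fanciful ones."""
    response = openai.Completion.create(
        model="text-davinci-003",  # whichever completion model is available
        prompt=prompt,
        max_tokens=200,
        temperature=temperature,   # 0.0 sticks to the likeliest tokens; >1.0 wanders
    )
    return response.choices[0].text.strip()

# Fact-checking mode: dial near zero.
print(ask("When was the Tirukkural composed?", temperature=0.0))

# Inspiration mode: dial turned up.
print(ask("Write a kural-style couplet about markets and love.", temperature=1.2))
```

In practice one might keep two presets, one near zero for look-ups and one well above one for brainstorming.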

(Do note that you can ask the current version for references and follow-up — GPT is hardly as epistemically crippled as some people allege.)

In the meantime, I do not want an LLM with less hallucination.  The hallucinations are part of what I learn from.  I learn what the world would look like, if it were most in tune with the statistical model provided by text.  That to me is intrinsically interesting.  Does the matrix algebra version of the world not interest you as well?

The hallucinations also give me ideas and show me alternative pathways.  “What if…?”  They are a form of creativity.  Many of these hallucinations are simple factual errors, but many others have embedded in them alternative models of the world.  Interesting models of the world.  Ideas and inspirations.  I feel I know what question to ask or which task to initiate.

Oddly enough, for many queries what ChatGPT most resembles is…don’t laugh — blog comments.  Every time I pose a query it is like putting a blog post out there, or a bleg, and getting a splat of responses right away, and without having to clog up MR with all of my dozens of wonderings every day.  Many of those blog comment responses are hallucinations.  But I learn from the responses collectively, and furthermore some of them are very good and also very accurate.  I follow up on them on my own, as it should be.

LLMs are like giving everyone their own comments-open blog, with hallucinating super-infovores as the readers and immediate response and follow-up when desired.  Obviously, the people with some background in that sector, if I may put it that way, will be better at using ChatGPT than others.

(Not everyone is good at riding a horse either.)

Playing around with GPT has in fact caused me to upgrade significantly my opinion of MR blog comments — construed collectively — relative to other forms of writing.

Please do keep in mind my very special position.  The above may not apply to you.  I have an RA to fact-check my books, and this process is excellent and scrupulous.  Varied and very smart eyes look over my Bloomberg submissions.  MR readers themselves fact-check my MR posts, and so on.  Having blogged for more than twenty years, I am good at using Google and other methods of investigating reality.  At the margin, pre-LLM, I already was awash in fact-checking.  If GPT doesn’t provide me with that, I can cope.

And I don’t take psychedelics.  R-squared is never equal to one anyway, not in the actual world.  And yet models are useful.  Models too are hallucinations.

So if GPT is doing some hallucinating while at work, I say bring it on.