Category: Science

My Conversation with Ada Palmer

Here is the audio, video, and transcript.  Here is the episode summary:

Ada Palmer is a Renaissance historian at the University of Chicago who studies radical free thought and censorship, composes music, consults on anime and manga, and is the author of the acclaimed Terra Ignota sci-fi series, among many other things.

Tyler sat down with Ada to discuss why living in the Renaissance was worse than living during the Middle Ages, how art protected Florence, why she’s reluctant to travel back in time, which method of doing history is currently the most underrated, whose biography she’ll write, how we know what old Norse music was like, why women scholars helped us understand Viking metaphysics, why Diderot’s Jacques the Fatalist is an interesting work, what people misunderstand about the inquisition(s), why science fiction doesn’t have higher social and literary status, which hive she would belong to in Terra Ignota, what the new novel she’s writing is about, and more.

Here is one excerpt:

COWEN: De Sade — where does that come from? What are the influences on de Sade as a writer?

PALMER: Thomas Aquinas. No, lots and lots of things, but he’s very interested in the large philosophical milieu in the period. Remember that the 18th century is a moment when the clandestine bookshop is a major, major thing. And if anyone enjoys and is interested in the history of censorship and clandestine publishing, I can’t recommend enough the work of Robert Darnton, a brilliant, brilliant historian of clandestine literature.

But the same underground bookshops sell all underground materials, which means an underground bookshop sells pornography, and it also sells Voltaire and Rousseau, and it also sells diatribes criticizing the king, and it also sells radical Jansenist theological pamphlets about whether the Holy Spirit derives from the Father and Son equally or from the Father alone.

The same kinds of people frequent these shops, and the same kinds of people buy things. So, think about how, when you go into a Barnes & Noble, the science fiction and fantasy section is one section, even though science fiction and fantasy are different things. But they have a lot of overlap, both in the overlap of readership and in overlap in books that have both science fiction and fantasy elements. It was perfectly natural, in the same way, for clandestine bookshops to generate these works that are pornography and radical philosophy at the same time. They’re printed by the same printers, sold to the same audiences, and circulate in the same places.

De Sade uses his extreme pornography to get at questions of morality, ethics, and artificiality. What are the ethics of hurting each other? Why do we feel that way about hurting each other? What are so-called natural impulses, as John Locke and Hobbes were very dominant at the time, or Descartes, who is differently dominant at the time in rivalry with them? They make claims about the natural human impulses or the natural character of a human being. What does extreme sexuality show us about how that character might be broader than it is?

I mean it when I say Thomas Aquinas, right? One of Thomas Aquinas’s traditional proofs of the existence of God is that everything he sees around him in nature — this also is one that Aristotle uses, but Aquinas articulates it in the most famous way for de Sade’s period — that when we look around us, it’s clear that everything is designed to work.

Interesting throughout.

Sentences to ponder, scientific fraud edition

In 2015, a big team of researchers tried to redo 100 psychology studies, and about 60% failed to replicate.

This finding made big waves and headlines, and it’s already been cited nearly 8,000 times.

But the next time someone brings it up, ask them to name as many of the 100 studies as they can. My bet is they top out at zero. I’m basically at zero myself, and I’ve written about that study at length. (I asked a few of my colleagues in case I’m just uniquely stupid, and their answers were: 0, 0, 0, 0, 1, and 3.)

This is really weird. Imagine if someone told you that 60% of your loved ones had died in a plane crash. Your first reaction might be disbelief and horror—“Why were 60% of my loved ones on the same plane? Were they all hanging out without me?”—but then you would want to know who died. Because that really matters! The people you love are not interchangeable! Was it your mom, your best friend, or what? It would be insane to only remember the 60% statistic and then, whenever someone asked you who died in that horrible plane crash, respond, “Hmm, you know, I never really looked into it. Maybe, um, Uncle Fred? Or my friend Clarissa? It was definitely 60% of my loved ones, though, whoever it was.”

So if you hear that 60% of papers in your field don’t replicate, shouldn’t you care a lot about which ones? Why didn’t my colleagues and I immediately open up that paper’s supplement, click on the 100 links, and check whether any of our most beloved findings died? The answer has to be, “We just didn’t think it was an important thing to do.” We heard about the plane crash and we didn’t even bother to check the list of casualties. What a damning indictment of our field!

Here is more from Adam Mastroianni.

A new estimate of costs from global warming

The paper, by David J. Winter and Manuela Kiehl, is titled “Long-term Macroeconomic Effects of Shifting Temperature Anomaly Distributions.”  I’ve posted a few papers showing results like “5 to 10 percent of global gdp by 2100” (try here and here), and I promised I would pass along further and different estimates.  Here is the abstract:

This paper uses panel data on 201 countries from 1960 to 2019 to estimate the long-term macroeconomic effects of shifting temperature anomaly distributions. We find that rising average temperature anomalies from historical norms caused by global warming have negative, non-linear impacts on GDP growth. By additionally accounting for volatility and tail composition of the temperature anomaly distribution across a geospatial grid and across time, our approach is a methodological step towards quantifying the macroeconomic impacts of broader climate change. Projected damages are far greater than estimated in previous studies that have focussed on quantifying the macroeconomic impacts of average temperature levels only. Furthermore, in contrast to these studies which suggest that cooler countries would benefit from global warming, our damage forecasts see all countries face significant losses in productivity growth beyond optimum global warming levels of 0.3°C. Against a counterfactual scenario in which temperatures are held flat at today’s levels, 2 to 2.6°C of warming versus pre-industrial levels by 2050 has the potential to reduce projected global output by 30 to 50%. Warming in the range of 4-5°C by 2100 would lead to economic annihilation, consistent with scientific research on mass extinction thresholds and tipping points.

Now I am not sure I understand this paper correctly, but the authors don’t seem to take mitigation or adjustment into account, which would be far greater for sustained global warming than for periodic, earlier temperature anomalies (Lucas critique!).  And I don’t see that they have any real empirical argument, from existing data, that “economic annihilation” would occur in some of their scenarios.

So I am skeptical.  Nonetheless I promised you all further reports, and here is one of them.  At the very least you can see what “moves” are needed to get the projected costs of global warming to go higher than are currently estimated.  I would gladly consider more papers in this vein, and this is an important and underdiscussed question, at least from a rational point of view.

Via tekl.
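For readers who want to see the basic mechanics behind estimates like this, here is a minimal sketch of a pooled panel regression of growth on a quadratic in temperature anomalies. This is the generic specification common in this literature, not the authors’ actual model, and every number below is synthetic, made up purely for illustration:

```python
import numpy as np

# Synthetic panel: growth depends on the temperature anomaly through an
# inverted-U (quadratic) damage function whose "true" peak is at 1.0 °C.
rng = np.random.default_rng(0)
n_countries, n_years = 50, 60
temp = rng.normal(1.0, 0.5, size=(n_countries, n_years))   # anomalies, °C
growth = 0.02 * temp - 0.01 * temp**2 + rng.normal(0, 0.005, size=temp.shape)

# Pooled OLS of growth on [1, T, T^2]
X = np.column_stack([np.ones(temp.size), temp.ravel(), temp.ravel() ** 2])
b0, b1, b2 = np.linalg.lstsq(X, growth.ravel(), rcond=None)[0]

# The estimated "optimal" warming level is where the fitted quadratic peaks.
optimum = -b1 / (2 * b2)
print(f"estimated optimum anomaly: {optimum:.2f} °C")  # should land near 1.0
```

The point of the exercise is only to show where an “optimum warming level” (like the paper’s 0.3°C) comes from mechanically: it is the turning point of a fitted quadratic, so everything rides on how well that functional form extrapolates.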

The importance of mentorship

Einstein believed that mentors are especially influential in a protégé’s intellectual development, yet the link between mentorship and protégé success remains a mystery. We marshaled genealogical data on nearly 40,000 scientists who published 1,167,518 papers in biomedicine, chemistry, math, or physics between 1960 and 2017 to investigate the relationship between mentorship and protégé achievement. In our data, we find groupings of mentors with similar records and reputations who attracted protégés of similar talents and expected levels of professional success. However, each grouping has an exception: One mentor has an additional hidden capability that can be mentored to their protégés. They display skill in creating and communicating prizewinning research. Because the mentor’s ability for creating and communicating celebrated research existed before the prize’s conferment, protégés of future prizewinning mentors can be uniquely exposed to mentorship for conducting celebrated research. Our models explain 34–44% of the variance in protégé success and reveal three main findings. First, mentorship strongly predicts protégé success across diverse disciplines. Mentorship is associated with a 2×-to-4× rise in a protégé’s likelihood of prizewinning, National Academy of Science (NAS) induction, or superstardom relative to matched protégés. Second, mentorship is significantly associated with an increase in the probability of protégés pioneering their own research topics and being midcareer late bloomers. Third, contrary to conventional thought, protégés do not succeed most by following their mentors’ research topics but by studying original topics and coauthoring no more than a small fraction of papers with their mentors.

That is from a new paper by Yifang Ma, Satyam Mukherjee, and Brian Uzzi.  How much of that is mentor value-added, how much that good mentors are amazing talent scouts/magnets, and how much is it that scientists on the rise are very good at mobilizing the highest-value mentors to help them?  Via PC, who pulls out some key pictures.

How Credible is the Credibility Revolution?

When economists analyze a well-conducted RCT or natural experiment and find a statistically significant effect, they conclude the null of no effect is unlikely to be true. But how frequently is this conclusion warranted? The answer depends on the proportion of tested nulls that are true and the power of the tests. I model the distribution of t-statistics in leading economics journals. Using my preferred model, 65% of narrowly rejected null hypotheses and 41% of all rejected null hypotheses with |t|<10 are likely to be false rejections. For the null to have only a .05 probability of being true requires a t of 5.48.

That is from a new NBER working paper by Kevin Lang.
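Lang fits a full model of published t-statistics; the underlying intuition, though, is the textbook Bayes calculation relating the share of tested nulls that are true, the power of the tests, and the probability that any given rejection is false. A quick sketch with illustrative numbers (these are not Lang’s estimates):

```python
# Probability that a rejected null is actually true (a false rejection),
# given the share of tested nulls that are true (pi0), the significance
# level (alpha), and the power of the tests.
def false_rejection_rate(pi0, alpha=0.05, power=0.8):
    false_pos = pi0 * alpha          # true nulls that get rejected anyway
    true_pos = (1 - pi0) * power     # false nulls correctly rejected
    return false_pos / (false_pos + true_pos)

# With textbook alpha and power, the answer hinges on pi0: if most tested
# nulls are true, a large share of rejections are false.
print(round(false_rejection_rate(pi0=0.5), 3))   # ~0.059
print(round(false_rejection_rate(pi0=0.9), 3))   # ~0.36
```

This is why the headline numbers depend, as the abstract says, on “the proportion of tested nulls that are true and the power of the tests,” and why p &lt; .05 alone can leave a rejected null quite likely to be true.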

Ames, Iowa, underrated mecca of science

Ames is only the 9th largest city in Iowa, and yet:

1. The city (and university) has supported three Nobel Laureates in economics, namely Schultz, Hurwicz, and Stigler.

2. The plutonium for the first atomic bomb was synthesized there.

3. The first electronic digital computer was built there.

4. George Washington Carver worked and taught there.

5. Neal Stephenson is from there.

6. Russ Roberts spent 1957 there.

7. The library of Iowa State University has some large (and very good) pro-science murals by American regionalist painter Grant Wood.

Hail Ames, Iowa!

Who Runs the AEA?

That is a new JEL publication (gated) by Kevin D. Hoover and Andrej Svorenčík, here is the abstract:

The leadership structure of the American Economic Association is documented using a biographical database covering every officer and losing candidate for AEA offices from 1950 to 2019. The analysis focuses on institutional affiliations by education and employment. The structure is strongly hierarchical. A few institutions dominate the leadership, and their dominance has become markedly stronger over time. Broadly two types of explanations are explored: that institutional dominance is based on academic merit or that it is based on self-perpetuating privilege. Network effects that might explain the dynamic of increasing concentration are also investigated.

And this:

The current paper is based on an extensive prosopographical database covering the entire leadership of the AEA over the 1950–2019 period, including all Presidents, Presidents-elect, Vice Presidents, ordinary members of the Executive Committee, as well as the losing candidates for all elective offices, and members of the Nominating Committee.

The results?:

The 14 institutions in the table account for more than 80 percent of the positions for the whole 1950–2019 period. Even within this select group, the distribution is highly skewed, with Harvard, the top supplying institution over the period, accounting for more than a fifth of the total, and the last five universities accounting for around 2 percent each. The top five institutions, Harvard, MIT, Chicago, Columbia, and Stanford, which we designate as the first tier, account for over half (57.1 percent) of the positions over the whole period…

The authors summarize their findings:

The most obvious lessons are, perhaps, hardly surprising: the AEA leadership is overwhelmingly drawn from a small group of elite, private research universities—in the sense that its leaders were educated at these universities and, to a lesser degree, employed by them. What is less well-known is that for much of the past 70 years, the AEA leadership has been drawn predominantly from just three universities—Harvard, MIT, and Chicago.

By the way, institutional concentration has become more pronounced over time, not less.  But since about eighty percent of U.S. students go to state schools, most of them large state schools, I guess we can reconfigure all these panels to have eighty percent state school representation, rather than 80 percent elite school representation.  Right?  Right?

You may or may not like these facts (I for one am willing to admit to more elitism than are many people); for the time being I will say only this: “Do not listen to what they say, watch what they do!”

Driverless Cars May Already Be Safer Than Human Drivers

Tim Lee runs the numbers:

Waymo and Cruise have driven a combined total of 8 million driverless miles, including more than 4 million in San Francisco since the start of 2023.

And because California law requires self-driving companies to report every significant crash, we know a lot about how they’ve performed.

For this story, I read through every crash report Waymo and Cruise filed in California this year, as well as reports each company filed about the performance of their driverless vehicles (with no safety drivers) prior to 2023. In total, the two companies reported 102 crashes involving driverless vehicles. That may sound like a lot, but they happened over roughly 6 million miles of driving. That works out to one crash for every 60,000 miles, which is about five years of driving for a typical human motorist.

These were overwhelmingly low-speed collisions that did not pose a serious safety risk. A large majority appeared to be the fault of the other driver. This was particularly true for Waymo, whose biggest driving errors included side-swiping an abandoned shopping cart and clipping a parked car’s bumper while pulling over to the curb.

Cruise’s record is not as impressive as Waymo’s, but there’s still reason to think its technology is on par with—and perhaps better than—a human driver.

Human beings drive close to 100 million miles between fatal crashes, so it’s going to take hundreds of millions of driverless miles for 100 percent certainty on this question. But the evidence for better-than-human performance is starting to pile up, especially for Waymo. And so it’s important for policymakers to allow this experiment to continue. Because at scale, safer-than-human driving technology would save a lot of lives.
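Lee’s back-of-the-envelope numbers are easy to check. The sketch below uses his mileage and crash counts, plus an assumed 13,000 miles per year for a typical U.S. motorist (a round figure, not one from his piece):

```python
# Checking Tim Lee's arithmetic. The miles and crash counts come from his
# article; the annual-mileage figure is an assumed typical-motorist value.
driverless_miles = 6_000_000
crashes = 102

miles_per_crash = driverless_miles / crashes
print(round(miles_per_crash))            # ≈ 58,824, i.e. roughly 60,000

typical_annual_miles = 13_000            # assumption
years_per_crash = miles_per_crash / typical_annual_miles
print(round(years_per_crash, 1))         # ≈ 4.5, i.e. "about five years"
```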

Driverless cars never break the speed limit, the driver is never drunk, nor distracted by their cell phone or the fight they had with their spouse. Another advantage that people might not think of is that these cars are far better for cyclists, as Parker Conrad notes:

It’s so, so obvious to anyone riding a bike in SF that autonomous vehicles are WAY safer for bicyclists than human drivers. They see me every time; human drivers constantly turn right into the bike lane without thinking.

Why? Because driverless cars literally have eyes in the back of their heads.

Driverless cars are in general less good at edge cases, but the advantages add up.

I would qualify this only slightly by noting that some locations are more difficult than others: San Francisco is quite difficult terrain, while Phoenix, Arizona was chosen for its flat terrain and sunny weather. Still, the bottom line is absolutely correct. Driverless cars are safer and more capable than many people think, and we should measure their defects relative to realistic alternatives, not to some idealized notion of perfection.

The Real UFO Story

Erik Hoel writes that “the UFO craze was created by government nepotism and incompetent journalism,” which makes a lot more sense to me than the other explanation. Here’s a key bit:

To sum up the story as far as I understand its convoluted depths: diehard paranormal believers scored $22 million in Defense spending via what looks like nepotism from Harry Reid by submitting a grant to do bland general “aerospace research” and being the “sole bidder” for the contract. They then reportedly used that grant, according to Lacatski himself, the head of the program, to study a myriad of paranormal phenomena at Skinwalker Ranch including—you may have guessed it by now—dino-beavers. Voilà! That’s how there was a “government-funded program to study UFOs.”

Our current journalistic class, unwilling or unable to do the research I can do in my boxers in about five hours, instead did a big media oopsie in The New York Times, running the story and lending credibility to the idea the Pentagon did create a real serious task force to investigate UFO claims. The fervor in response to these “revelations” memed into existence a real agency at the DoD that now does actually study UFOs, simply because everyone “demanded answers”—which is totally understandable, given the journalistic coverage. However, the current UFO task force is staffed by, well, the people willing to be on a UFO task force. According to the Post:

And who was in charge, during the Trump administration, when the Pentagon created a UFO Task Force to investigate incursions of unknown objects over America?

Stratton—who believes the ghosts and creatures of Skinwalker Ranch are real—officially headed up these Pentagon investigations for years.

The “chief scientist” of this Pentagon task force was Travis Taylor, who is and was a co-star of “Ancient Aliens” on the History Channel. He currently stars on “The Secret of Skinwalker Ranch” on the same network.

This official embedding makes it difficult to break the veneer of legitimacy unless you know the whole story, simply because there’s likely a lot of coordination by professional UFO enthusiasts behind the scenes, which is why you’ll occasionally read stuff about how anonymous sources from other insiders confirm the accounts.

See also my previous post on Uri Geller and the government’s Stargate Project.

Speeding up Science

Writing in the Washington Post, Heidi Williams has good suggestions for making the NIH and NSF move faster. Namely:

  • Give the NIH the option to bypass peer review, as can the NSF.
  • Give the NSF the option to “desk-reject”, as can the NIH.
  • Give the NIH and the NSF more authority to fund scientists and not just projects.

Straightforward, actionable reforms that have a good chance of being implemented.

Read the whole thing for justification, details and background.

Impact of major awards on the subsequent work of their recipients

To characterize the impact of major research awards on recipients’ subsequent work, we studied Nobel Prize winners in Chemistry, Physiology or Medicine, and Physics and MacArthur Fellows working in scientific fields. Using a case-crossover design, we compared scientists’ citations, publications and citations-per-publication from work published in a 3-year pre-award period to their work published in a 3-year post-award period. Nobel Laureates and MacArthur Fellows received fewer citations for post- than for pre-award work. This was driven mostly by Nobel Laureates. Median decrease was 80.5 citations among Nobel Laureates (p = 0.004) and 2 among MacArthur Fellows (p = 0.857). Mid-career (42–57 years) and senior (greater than 57 years) researchers tended to earn fewer citations for post-award work. Early career researchers (less than 42 years, typically MacArthur Fellows) tended to earn more, but the difference was non-significant. MacArthur Fellows (p = 0.001) but not Nobel Laureates (p = 0.180) had significantly more post-award publications. Both populations had significantly fewer post-award citations per paper (p = 0.043 for Nobel Laureates, 0.005 for MacArthur Fellows, and 0.0004 for combined population). If major research awards indeed fail to increase (and even decrease) recipients’ impact, one may need to reassess the purposes, criteria, and impacts of awards to improve the scientific enterprise.

That is from a newly published paper by Andrew Nepomuceno, Hilary Bayer, and John P.A. Ioannidis, via Michelle Dawson.

Unintended Geoengineering

In my post SuperFreakonomics on Geoengineering, Revisited, I noted that regulations requiring ships to reduce sulfur have increased global warming. Science has a new piece on the phenomenon and the implications for intended geoengineering:

Regulations imposed in 2020 by the United Nations’s International Maritime Organization (IMO) have cut ships’ sulfur pollution by more than 80% and improved air quality worldwide. The reduction has also lessened the effect of sulfate particles in seeding and brightening the distinctive low-lying, reflective clouds that follow in the wake of ships and help cool the planet. The 2020 IMO rule “is a big natural experiment,” says Duncan Watson-Parris, an atmospheric physicist at the Scripps Institution of Oceanography. “We’re changing the clouds.”

By dramatically reducing the number of ship tracks, the planet has warmed up faster, several new studies have found. That trend is magnified in the Atlantic, where maritime traffic is particularly dense. In the shipping corridors, the increased light represents a 50% boost to the warming effect of human carbon emissions. It’s as if the world suddenly lost the cooling effect from a fairly large volcanic eruption each year, says Michael Diamond, an atmospheric scientist at Florida State University.

The natural experiment created by the IMO rules is providing a rare opportunity for climate scientists to study a geoengineering scheme in action—although it is one that is working in the wrong direction. Indeed, one such strategy to slow global warming, called marine cloud brightening, would see ships inject salt particles back into the air, to make clouds more reflective. In Diamond’s view, the dramatic decline in ship tracks is clear evidence that humanity could cool off the planet significantly by brightening the clouds. “It suggests pretty strongly that if you wanted to do it on purpose, you could,” he says.

Transcript of taped conversations among German nuclear physicists (1945)

Here is one excerpt:

> HEISENBERG: […] I believe this uranium business will give the Anglo–Saxons such tremendous power that EUROPE will become a bloc under Anglo–Saxon domination. If that is the case it will be a very good thing. I wonder whether STALIN will be able to stand up to the others as he has done in the past.
> […]
> WIRTZ: It seems to me that the political situation for STALIN has changed completely now.
> WEIZSÄCKER: I hope so. STALIN certainly has not got it yet. If the Americans and the British were good Imperialists they would attack STALIN with the thing tomorrow, but they won’t do that, they will use it as a political weapon. Of course that is good, but the result will be a peace which will last until the Russians have it, and then there is bound to be war.[…]

> KORSHING: That shows at any rate that the Americans are capable of real cooperation on a tremendous scale. That would have been impossible in Germany. Each one said that the other was unimportant.

Here is the link, via Fernand Pajot.

Clean Hands, Clear Conclusions: Ignaz Semmelweis as a Pioneer of Causal Inference

My next series is a few things at once. It’s a reminder that sometimes people will do everything in their power to present evidence supporting scientific facts, be unpersuasive and be sent to a mental hospital by their best friend where they spend the next two weeks being beaten by guards mercilessly and then die. But it’s also a discussion of difference-in-differences, and perhaps the challenges of estimation if you’re not entirely clear what the treatment is. And the last thing is just a puzzle I wanted to share in the context of trying to make a broader point about precisely what is implied by the identification elements of a traditional difference-in-differences design.

An excellent introduction to Ignaz Semmelweis, a pioneer in causal inference and medicine, by Scott Cunningham.
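The difference-in-differences logic at the heart of Cunningham’s series can be sketched in a few lines: two maternity clinics, handwashing introduced in one. The mortality rates below are purely illustrative placeholders, not Semmelweis’s actual records:

```python
# Minimal difference-in-differences sketch of the Semmelweis setting.
# Mortality rates are illustrative, not historical data.
rates = {
    ("first_clinic", "pre"): 0.10,    # treated clinic, before handwashing
    ("first_clinic", "post"): 0.02,   # treated clinic, after handwashing
    ("second_clinic", "pre"): 0.04,   # comparison clinic, before
    ("second_clinic", "post"): 0.03,  # comparison clinic, after
}

def did(rates, treated, control):
    # (treated post - treated pre) - (control post - control pre)
    return (rates[(treated, "post")] - rates[(treated, "pre")]) - (
        rates[(control, "post")] - rates[(control, "pre")]
    )

# Under parallel trends, this difference is the causal effect of handwashing.
effect = did(rates, "first_clinic", "second_clinic")
print(round(effect, 2))  # -0.07: a 7-point drop in mortality
```

The estimator itself is two subtractions; the hard part, as Cunningham emphasizes, is the parallel-trends assumption and being clear about what exactly the “treatment” was.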