Category: Science
The new Roger Penrose biography
The author is Patchen Barss, and the title is The Impossible Man: Roger Penrose and the Cost of Genius. I liked this book very much, and feel there should be more works like this. It was made with the full cooperation of Penrose himself, though he had no veto over the final work. Here is one bit:
Many relativists had a powerful feel for formal math: errors in calculations leapt out at them the way off-key notes rankle a musician’s ear. Not Roger. Equations required too much mental labour and restricted his creativity. His “magic” came from the shape of things. He preferred to run his fingers along the curves and twists of space and time and find in those graceful lines the story of how every particle, force, and phenomenon acquired its properties.
The book covers Penrose’s personal life as well:
Judith encouraged him to sort out his relationship with Joan independently of his feelings for her. He wasn’t sure that made sense. In a deterministic universe, could he really take ownership of his unhappy marriage? The idea felt strange to him.
Returning to physics:
Roger’s curiosity about consciousness came from many places — the extreme mental feats of his father and brothers, the speed of decision making in racquet sports, the human ability to transcend Kurt Gödel’s incompleteness theorem, and his own capacity to discover new mathematical insights.
Then again, one might return to matters of his personal life:
…he [Penrose] offhandedly observed how many scientists — not just him — seem to have “troubled marriages.” He implied that a solitary life might be an inevitable consequence, a necessary price, for his kind of success. True, Wolfgang and Ted [friends] had long, happy marriages. Then again, neither of them had won a Nobel Prize.
His tone was one of justification rather than regret. He didn’t see how it could be any other way.
Recommended. Here is a good NYT review.
My Conversation with the excellent Neal Stephenson
Here is the audio, video, and transcript. Here is part of the episode summary:
In Neal’s second appearance, Tyler asks him why he sometimes shifts from envisioning the future to illustrating the past, the rise of history autodidacts, the implications of leaked secrets from the atomic age to today’s AI, the logistics of faking one’s death, why he still drafts novels in longhand, Soviet idealism among Western intellectuals, which Soviet achievements he admires, the lag in AR development, how LLMs might boost AR, whether social media is increasingly giving way to private group chats, his continuing influence on technologists, why AI-generated art might struggle to connect with readers, the primer from The Diamond Age in light of today’s LLMs, the prospect of AGI becoming an unnoticed background tool, what Neal believes the world really needs more of, what lies ahead in Polostan and the broader “Bomb Light” series, and more.
Excerpt:
COWEN: How effectively could you stage your own death? You. Say you really want to do it, and you’re willing to do it.
STEPHENSON: To fake it or to actually —
COWEN: Fake it, but everyone thinks it’s real. I read about it in the papers. “Neal is gone.” I nod my head, I weep, and then I forget about it. I don’t mean I forget about you, but you understand what I’m saying.
STEPHENSON: Wait, there’s not that many circumstances under which all physical traces of someone can be obliterated. That’s a fairly hard thing to do. It would have been easier a hundred years ago, but now we’ve got cameras everywhere, and we’ve got DNA testing and other ways to prove or to disprove that somebody’s actually dead. I guess it would have to be something like a plane crash into the ocean.
COWEN: But then how do you survive it?
STEPHENSON: Oh, yes. Okay.
COWEN: To kill yourself is one thing, but to pretend you’ve killed yourself and stay alive seems harder.
STEPHENSON: You could parachute out if it was a small plane, not a jet airliner full of people, but a single-seater. I guess that might work.
COWEN: So, hire a private plane, have it crash, parachute out into somewhere where you —
STEPHENSON: You’re witnessed getting into the plane and taking off, but then there’s no way to recover the evidence for some reason. It’s pretty hard to do. If someone really wanted to, if they were just determined to go and find the . . . You see the efforts that people have gone to, to go down to the Titanic. Well, if you can go find that thing and check it out with a submarine, then it’s pretty hard to really find a place that can’t be accessed in that way.
I very much enjoyed Neal’s new book Polostan. And here is my first Conversation with Neal Stephenson.
Difficult to pronounce names
We test for labor market discrimination based on an understudied characteristic: name fluency. Analysis of recent economics PhD job candidates indicates that name difficulty is negatively related to the probability of landing an academic or tenure-track position and to the research productivity of the initial institutional placement. Discrimination due to name fluency is also found using experimental data from prior audit studies. Within samples of African Americans (Bertrand and Mullainathan 2004) and ethnic immigrants (Oreopoulos 2011), job applicants with less fluent names experience lower callback rates, and name complexity explains roughly between 10 and 50 percent of ethnic name penalties. The results are primarily driven by candidates with weaker résumés, suggesting that cognitive biases may contribute to the penalty of having a difficult-to-pronounce name.
That is from a new AEJ piece by Qi Ge and Stephen Wu.
Causal claims in economics
From a new and very interesting web site on that topic:
- Significant Increase in Causal Claims: The average proportion of causal claims in papers rose significantly from approximately 5% in 1990 to around 28% in 2020, reflecting the impact of the credibility revolution in economics.
- Growth in Causal Inference Methods and decline in Theoretical and Simulation Methods…
- Intricate Causal Narratives Enhance Publication and Citation Impact: Papers featuring intricate and interconnected causal narratives are more likely to be published in top-tier journals, particularly the top 5 journals, and receive more citations, especially within those journals.
- Key Measures of Causal Narrative Complexity: Increases in the number of unique paths and the longest path length in causal knowledge graphs are positively associated with both publication in leading economics journals and higher citation counts. This highlights the value placed on depth and complexity in causal narratives.
- Depth Over Quantity in Causal Claims: While the overall number of claims made is positively correlated with top journal publications, the number of causal edges alone does not show the same positive association with publication outcomes or citation counts. This suggests that depth over breadth in causal claims is valued...
- Novel Causal Relationships Enhance Publication but Not Citation Impact: Papers introducing novel causal relationships that have not been previously documented are more likely to be published in top 5 journals, indicating a premium on originality for publication success. However, this does not necessarily translate into higher citation counts once published.
Here is an associated tweet storm.
Obviously it can be argued either way, but I see these results as more negative for the causality revolution than positive? It seems there is too much emphasis on generating a defensible result from a hitherto unused data set. I understand the signaling value here, but the social value is not always obvious. I am struck by how often I meet economics graduate students who can reason about programming more effectively than they can reason about the real world.
Specialization trends in economics
This article conducts a comprehensive analysis of specialization trends within and across fields of economics research. We collect data on 24,273 articles published between 1970 and 2016 in general research economics outlets and employ machine learning techniques to enrich the collected data. Results indicate that theory and econometric methods papers are becoming increasingly specialized, with a narrowing scope and steady or declining citations from outside economics and from other fields of economics research. Conversely, applied papers are covering a broader range of topics, receiving more extramural citations from fields like medicine and psychology. Trends in applied theory articles are unclear.
That is from a new paper by Sebastian Galiani, Ramiro H. Gálvez, and Ian Nachman, via Robin Hanson.
The evolution of nepotism in academia, 1088-1800
We have constructed a comprehensive database that traces the publications of father–son pairs in the premodern academic realm and examined the contribution of inherited human capital versus nepotism to occupational persistence. We find that human capital was strongly transmitted from parents to children and that nepotism declined when the misallocation of talent across professions incurred greater social costs. Specifically, nepotism was less common in fields experiencing rapid changes in the knowledge frontier, such as the sciences and within Protestant institutions. Most notably, nepotism sharply declined during the Scientific Revolution and the Enlightenment, when departures from meritocracy arguably became both increasingly inefficient and socially intolerable.
That is from a new paper by David de la Croix & Marc Goñi. Via the excellent Kevin Lewis.
My Conversation with the excellent Christopher Kirchhoff
Here is the audio, video, and transcript. Here is the intro:
Christopher Kirchhoff is an expert in emerging technology who founded the Pentagon’s Silicon Valley office. He’s led teams for President Obama, the Chairman of the Joint Chiefs of Staff, and the CEO of Google. He’s worked in worlds as far apart as weapons development and philanthropy. His pioneering efforts to link Silicon Valley technology and startups to Washington have made him responsible for $70 billion in technology acquisition by the Department of Defense. He’s penned many landmark reports, and he is the author of Unit X: How the Pentagon and Silicon Valley are Transforming the Future of War.
Tyler and Christopher cover the ascendancy of drone warfare and how it will affect tactics both off and on the battlefield, the sobering prospect of hypersonic weapons and how they will shift the balance of power, EMP attacks, AI as the new arms race (and who’s winning), the completely different technology ecosystem of an iPhone vs. an F-35, why we shouldn’t nationalize AI labs, the problem with security clearances, why the major defense contractors lost their dynamism, how to overcome the “Valley of Death” in defense acquisition, the lack of executive authority in government, how Unit X began, the most effective type of government commission, what he’ll learn next, and more.
Excerpt:
COWEN: Now, I never understand what I read about hypersonic missiles. I see in the media, “China has launched the world’s first nuclear-capable hypersonic, and it goes 10x the speed of sound.” And people are worried. If mutual assured destruction is already in place, what exactly is the nature of the worry? Is it just we don’t have enough response time?
KIRCHHOFF: It’s a number of things, and when you add them up, they really are quite frightening. Hypersonic weapons, because of the way they maneuver, don’t necessarily have to follow a ballistic trajectory. We have very sophisticated space-based systems that can detect the launch of a missile, particularly a nuclear missile, but right then you’re immediately calculating where it’s going to go based on its ballistic trajectory. Well, a hypersonic weapon can steer. It can turn left, it can turn right, it can dive up, it can dive down.
COWEN: But that’s distinct from hypersonic, right?
KIRCHHOFF: Well, ICBMs don’t have the same maneuverability. That’s one factor that makes hypersonic weapons different. Second is just speed. With an ICBM launch, you have 20 to 25 minutes or so. This is why the rule for a presidential nuclear decision conference is, you have to be able to get the president online with his national security advisers in, I think, five or seven minutes. The whole system is timed to defeat adversary threats. The whole continuity-of-government system is upended by the timeline of hypersonic weapons.
Oh, by the way, there’s no way to defend against them, so forget the fact that they’re nuclear capable — if you want to take out an aircraft carrier or a surface combatant, or assassinate a world leader, a hypersonic weapon is a fantastic way to do it. Watch them very carefully because more than anything else, they will shift the balance of military power in the next five years.
COWEN: Do you think they shift the power to China in particular, or to larger nations, or nations willing to take big chances? At the conceptual level, what’s the nature of the shift, above and beyond whoever has them?
KIRCHHOFF: Well, right now, they’re incredibly hard to produce. Right now, they’re essentially in a research and development phase. The first nation that figures out how to make titanium just a little bit more heat resistant, to make the guidance systems just a little bit better, and enables manufacturing at scale — not just five or seven weapons that are test-fired every year, but 25 or 50 or 75 or 100 — that really would change the balance of power in a remarkable number of military scenarios.
COWEN: How many does China have now? Are you at liberty to address that? They just have one or two that are not really that useful, or they’re on the verge of having 300?
KIRCHHOFF: What’s in the media and what’s been discussed quite a bit publicly is that China has more successful R&D tests of hypersonic weapons. Hypersonic weapons are very difficult to make fly for long periods. They tend to self-destruct at some point during flight. China has demonstrated a much fuller flight cycle of what looks to be an almost operational weapon.
COWEN: Where is Russia in this space?
KIRCHHOFF: Russia is also trying. Russia is developing a panoply of Dr. Evil weapons. The latest one to emerge in public is this idea of putting a nuclear payload on a satellite that would effectively stop modern life as we know it by ending GPS and satellite communications. That’s really somebody sitting in a Dr. Evil lair, stroking their cat, coming up with ideas that are game-changing. They’ve come up with a number of other weapons that are quite striking — supercavitating torpedoes that could take out an entire aircraft carrier group. Advanced states are now coming up with incredibly potent weapons.
Intelligent and interesting throughout. Again, I am happy to recommend Christopher’s recent book Unit X: How the Pentagon and Silicon Valley are Transforming the Future of War, co-authored with Raj M. Shah.
Holy Frak
Dialogue between an economist and a physicist
Interesting, but I think highly flawed on both sides. Here is one excerpt from the physicist:
Physicist: True enough. So we would likely agree that energy growth will not continue indefinitely. But two points before we continue: First, I’ll just mention that energy growth has far outstripped population growth, so that per-capita energy use has surged dramatically over time—our energy lives today are far richer than those of our great-great-grandparents a century ago [economist nods]. So even if population stabilizes, we are accustomed to per-capita energy growth: total energy would have to continue growing to maintain such a trend [another nod].
Second, thermodynamic limits impose a cap to energy growth lest we cook ourselves. I’m not talking about global warming, CO2 build-up, etc. I’m talking about radiating the spent energy into space. I assume you’re happy to confine our conversation to Earth, foregoing the spectre of an exodus to space, colonizing planets, living the Star Trek life, etc…
At that 2.3% growth rate, we would be using energy at a rate corresponding to the total solar input striking Earth in a little over 400 years. We would consume something comparable to the entire sun in 1400 years from now. By 2500 years, we would use energy at the rate of the entire Milky Way galaxy—100 billion stars! I think you can see the absurdity of continued energy growth.
I think it is easy enough for the economist to argue that energy, at some margin, has diminishing returns for creating utility. So we then have dematerialized economic growth, not an ever-growing population (oscillation back and forth?), and thus we do not fry the planet, or for that matter the galaxy. A general lesson of national income statistics is that if you play out exponentials for long enough, over centuries you are simply talking about very different things, rather than a simple exponential growth of present conditions.
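The physicist's figures are simple compound-growth arithmetic, and it is worth seeing how fast the exponential bites. A minimal sketch, assuming a baseline of roughly 12 terawatts of world power use and standard astronomical values (these round numbers are my assumptions, not figures from the dialogue):

```python
import math

GROWTH = 0.023              # the dialogue's 2.3% annual energy growth rate
CURRENT_W = 12e12           # rough world power consumption, watts (assumed)
SOLAR_ON_EARTH = 1.2e17     # total solar power striking Earth, watts
SUN_OUTPUT = 3.8e26         # total luminosity of the sun, watts
GALAXY = SUN_OUTPUT * 1e11  # ~100 billion sun-like stars in the Milky Way

def years_until(target_watts, current=CURRENT_W, rate=GROWTH):
    """Years of compound growth until usage reaches target_watts."""
    return math.log(target_watts / current) / math.log(1 + rate)

print(round(years_until(SOLAR_ON_EARTH)))  # ≈405 years: all sunlight hitting Earth
print(round(years_until(SUN_OUTPUT)))      # ≈1367 years: the entire sun's output
print(round(years_until(GALAXY)))          # ≈2481 years: the whole galaxy
```

The outputs line up with the dialogue's "a little over 400," "1400," and "2500" years, which is the physicist's point: at a steady 2.3%, the doubling time is about 30 years, so each factor of a trillion costs only a millennium or so.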
Science and politics podcast
From the Institute for Progress, here is the link, the participants were Caleb Watney, Dylan Matthews, Alexander Berger, and myself. Excerpt:
Tyler Cowen: I would stress just how decentralized science funding is in the United States. The public universities are run at the state level. We have tax incentives for donations where you have to give to a nonprofit, but there’s otherwise very little control over what counts as a viable nonprofit.
One specific issue that I think has become quite large is how much we run our universities through an overhead system. On federal grants and many other kinds of grants, an overhead is charged. The overhead rates are very high, well above the actual marginal overhead costs.
You might think that’s a crazy system, and in some ways it is crazy. It means there’s intense pressure on professors to bring in contracts, regardless of the quality of the work. That’s clearly a major negative. Everyone complains about this.
But the hidden upside is that when universities fund themselves through overhead, there’s a kind of indirect free speech privilege because they can spend the overhead how they want. Now, I actually think they are violating the implicit social contract right now by spending the overhead poorly. But for a long while, this was why our system worked well. You had very indirect federal appropriations: some parts of which went to science, other parts of which went to education. It was done on a free speech basis.
But like many good systems, it doesn’t last forever. It gets abused. If we try to clean up the mess — which now in my view clearly is a mess — well, I’m afraid we’ll get a system where Congress or someone else is trying to dictate all the time how the funds actually should be allocated.
That’s a question I’ve thought through a good amount: how or whether we should fix the overhead system? I feel we’ve somehow painted ourselves into a corner where there is no good political way out in any direction. But I think you’ll find case by case that the specifics are really going to matter.
Dylan Matthews: Let’s get into some of the specifics. Do you have an example of the overhead system breaking down that is motivating for you here?
Tyler Cowen: Well, universities are spending more and more of their surplus on staff and facilities — on ends that even if you think they’re defensible in some deep sense like “Oh, we need this building,” it’s about the university. It’s about what leads to long run donations, but it’s seen as a violation of public trust.
The money is neither being spent on possibly useful research, nor educating students. The backlash against universities is huge, most of all in Florida, Texas, and North Carolina. It seems to me that where we are at isn’t stable. How we fund science through universities is, in some ways, collapsing in bad ways. The complaints are often justified, but odds are that we’ll end up with something worse.
Recommended, interesting throughout.
Scott Alexander on the Progress Studies conference
Here is one excerpt:
Over-regulation was the enemy at many presentations, but this wasn’t a libertarian conference. Everyone agreed that safety, quality, the environment, etc, were important and should be regulated for. They just thought existing regulations were colossally stupid, so much so that they made everything worse including safety, the environment, etc. With enough political will, it would be easy to draft regulations that improved innovation, price, safety, the environment, and everything else.
For example, consider supersonic flight. Supersonic aircraft create “sonic booms”, minor explosions that rattle windows and disturb people underneath their path. Annoyed with these booms, Congress banned supersonic flight over land in 1973. Now we’ve invented better aircraft whose booms are barely noticeable, or not noticeable at all. But because Congress banned supersonic flight – rather than sonic booms themselves – we’re stuck with normal boring 6-hour coast-to-coast flights. If aircraft progress had continued at the same rate it was going before the supersonic ban, we’d be up to 2,500 mph now (coast-to-coast in ~2 hours). Can Congress change the regulation so it bans booms and not speed? Yes, but Congress is busy, and doing it through the FAA and other agencies would take 10-15 years of environmental impact reports.
Or consider solar power. The average large solar project is delayed 5-10 years by bureaucracy. Part of the problem is NEPA, the infamous environmental protection law saying that anyone can sue any project for any reason if they object on environmental grounds. If a fossil fuel company worries about competition from solar, they can sue upcoming solar plants on the grounds that some ants might get crushed beneath the solar panels; even in the best-case where the solar company fights and wins, they’ve suffered years of delay and lost millions of dollars. Meanwhile, fossil fuel companies have it easier; they’ve had good lobbyists for decades, and accrued a nice collection of formal and informal NEPA exemptions.
Even if a solar project survives court challenges, it has to get connected to the grid. This poses its own layer of bureaucracy and potential pitfalls.
Do read the whole thing. And congratulations to Jason Crawford and Heike Larson for pulling off this event.
Metascience podcast on science and safety
From the Institute for Progress. There are four of us, namely Dylan Matthews, Matt Clancy, and Jacob Trefethen as well. There is a transcript, and here is one very brief excerpt:
Tyler Cowen: I see the longer run risks of economic growth as primarily centered around warfare. There is lots of literature on the Industrial Revolution. People were displaced. Some parts of the country did worse. Those are a bit overstated.
But the more productive power you have, you can quite easily – and almost always do – have more destructive power. The next time there’s a major war, which could be many decades later, more people will be killed, there’ll be higher risks, more political disorder. That’s the other end of the balance sheet. Now, you always hope that the next time we go through this we’ll do a better job. We all hope that, but I don’t know.
And:
Tyler Cowen: But the puzzle is why we don’t have more terror attacks than we do, right? You could imagine people dumping basic poisons into the reservoir or showing up at suburban shopping malls with submachine guns, but it really doesn’t happen much. I’m not sure what the binding constraint is, but since I don’t think it’s science, that’s one factor that makes me more optimistic than many other people in this area.
Dylan Matthews: I’m curious what people’s theories are, since I often think of things that seem like they would have a lot of potential for terrorist attacks. I don’t Google them because after Edward Snowden, that doesn’t seem safe.
I live in DC, and I keep seeing large groups of very powerful people. I ask myself, “Why does everyone feel so safe? Why, given the current state of things, do we not see much more of this?” Tyler, you said you didn’t know what the binding constraint was. Jacob, do you have a theory about what the binding constraint is?
Jacob Trefethen: I don’t think I have a theory that explains the basis.
Tyler Cowen: Management would be mine. For instance, it’d be weird if the greatest risk of GPT models was that they helped terrorists have better management, just giving them basic management tips like those you would get out of a very cheap best-selling management book. That’s my best guess.
I would note that this was recorded some while ago, and on some of the AI safety issues I would put things differently now. Maybe some of that is having changed my mind, but most of all I simply would present the points in a very different context.
What predicts success in science?
How does a person’s childhood socioeconomic status (SES) influence their chances to participate and succeed in science? To investigate this question, we use machine-learning methods to link scientists in a comprehensive biographical dictionary, the American Men of Science (1921), with their childhood home in the US Census and with publications. First, we show that children from low-SES homes were already severely underrepresented in the early 1900s. Second, we find that SES influences peer recognition, even conditional on participation: Scientists from high-SES families have 38% higher odds of becoming stars, controlling for age, publications, and disciplines. Using live-in servants as an alternative measure for SES confirms the strong link between childhood SES and becoming a star. Applying text analysis to assign scientists to disciplines, we find that mathematics is the only discipline in which SES influences stardom through the number and the quality of a scientist’s publications. Using detailed data on job titles to distinguish academic from industry scientists, we find that industry scientists have lower odds of being stars. Controlling for industry employment further strengthens the link between childhood SES and stardom. Elite undergraduate degrees explain more of the correlation between SES and stardom than any other control. At the same time, controls for birth order, family size, foreign-born parents, maternal education, patents, and connections with existing stars leave estimates unchanged, highlighting the importance of SES.
That is from a new NBER working paper by Anna Airoldi and Petra Moser.
The early history of peer review
By the 1950s, the Royal Society was asking reviewers to respond to standardized questions, including whether a study contained “contributions to knowledge of sufficient scientific interest” and simply whether the society should publish it.
These questions could prompt brief responses even to significant pieces of work. Chemist Dorothy Hodgkin wrote barely 50 words when asked to review the full manuscript of the structure of DNA by Francis Crick and James Watson in 1953, which was published in Proceedings of the Royal Society in April 1954. (A shorter paper announcing the discovery had already appeared in Nature.)
In her sole comment, beyond a series of yes and no answers, Hodgkin suggests the duo should “touch up” photographs to eliminate distracting reflections of “chairs in the perspex rod” — a technical fix that modern cameras perform routinely. Crick and Watson seemed to follow the advice.
The archive is also littered with long reports, many in handwritten scrawl. In 1877, reviewer Robert Clifton finished a 24-page report on two related papers on optics, with an apology: “How you will hate me for bothering you with this tremendously long letter, but I hope before we meet time will have softened your anger.”
Ferlier says that the introduction of the standardized referee questions significantly reduced the amount of time and effort put in by reviewers. “There’s really this understanding in the nineteenth century and very early twentieth century that the peer review is a real discussion,” she says. “After that, it becomes a way of managing the influx of papers for the journal.”
The article, by David Adam in Nature, is interesting throughout. Via Mike Rosenwald.
Letters of recommendation
We analyze 6,400 letters of recommendation for more than 2,200 economics and finance Ph.D. graduates from 2018 to 2021. Letter text varies significantly by field of interest, with significantly less positive and shorter letters for Macroeconomics and Finance candidates. Letters for female and Black or Hispanic job candidates are weaker in some dimensions, while letters for Asian candidates are notably less positive overall. We introduce a new measure of letter quality capturing candidates that are recommended to “top” departments. Female, Asian, and Black or Hispanic candidates are all less likely to be recommended to top academic departments, even after controlling for other letter characteristics. Finally, we examine early career outcomes and find that letter characteristics, especially a “top” recommendation, have meaningful effects on initial job placements and journal publications.
That is from a new paper by Beverly Hirtle and Anna Kovner. Via the excellent Kevin Lewis.