Category: Science

Ho-hum

After nearly two decades of hardcore drug addiction — after overdoses and rehabs and relapses, homelessness and dead friends and ruined lives — Gerod Buckhalter had one choice left, and he knew it.

He could go on the same way and die young in someone’s home or a parking lot, another casualty in a drug epidemic that has claimed nearly 850,000 people like him.

Or he could let a surgeon cut two nickel-size holes in his skull and plunge metal-tipped electrodes into his brain.

More than 600 days after he underwent the experimental surgery, Buckhalter has not touched drugs again — an outcome so outlandishly successful that neither he nor his doctors dared hope it could happen. He is the only person in the United States to ever have substance use disorder relieved by deep brain stimulation. The procedure has been used to treat Parkinson’s disease, epilepsy and a few other intractable conditions, but had never been attempted for drug addiction here.

The device, known as a deep brain stimulator, also is recording the electrical activity in Buckhalter’s brain — another innovation that researchers hope will help locate a biomarker for addiction and allow earlier intervention with other people.

Here is the full story.

Long-term gene–culture coevolution and the human evolutionary transition

It has been suggested that the human species may be undergoing an evolutionary transition in individuality (ETI). But there is disagreement about how to apply the ETI framework to our species, and whether culture is implicated as either cause or consequence. Long-term gene–culture coevolution (GCC) is also poorly understood. Some have argued that culture steers human evolution, while others proposed that genes hold culture on a leash. We review the literature and evidence on long-term GCC in humans and find a set of common themes. First, culture appears to hold greater adaptive potential than genetic inheritance and is probably driving human evolution. The evolutionary impact of culture occurs mainly through culturally organized groups, which have come to dominate human affairs in recent millennia. Second, the role of culture appears to be growing, increasingly bypassing genetic evolution and weakening genetic adaptive potential. Taken together, these findings suggest that human long-term GCC is characterized by an evolutionary transition in inheritance (from genes to culture) which entails a transition in individuality (from genetic individual to cultural group). Thus, research on GCC should focus on the possibility of an ongoing transition in the human inheritance system.

That is by Timothy M. Waring and Zachary T. Wood, via a loyal MR reader.

*Unsettled*, by Steven E. Koonin

A few of you have asked me to review this book, sometimes presented as a clinching case for climate contrarianism.  I thought it was fine, but not a great revelation, and ultimately disappointing on one very major point of contention.  On the latter angle, on p.2 Koonin writes:

The net economic impact of human-induced climate change will be minimal through at least the end of this century.

That is presented as a big deal, and yes it would be.  But “minimal”?  The economist wishes to ask “how much.”  The more concrete discussion comes on pp.178-179, which looks at twenty studies (all or most of them bad) and reports that they estimate global GDP in 2100 will be about three percent lower due to climate change, or perhaps the damages are smaller yet.  Those estimates are then graphed, and there is a bit of numerical analysis of what that means for growth rates working backwards.  There is not much more than that on the question, and no attempt to provide an independent estimate of the economic costs of global warming, or to tell us which might be the best study or what it might be missing.  Koonin seems more interested in discrediting the hypocritical or innumerate climate change researchers than in finding out the best answer to the question of cost.

To be sure, this is all a useful corrective to those who think global warming will destroy the earth or create major existential risk.  I am happy to praise the book for that and for all of its other corrections of hysteria.

But I just don’t find the Koonin discussion of economic costs to be useful.  The best estimate I know puts global welfare costs at six percent, with some poorer countries suffering losses of up to fifteen percent, and some of the colder regions gaining.  There is high uncertainty about average effects, so you can also debate what kind of risk premium should be considered.  (I have myself written about how climate change may induce stupid policy responses, thus perhaps boosting the costs further yet.)  You may or may not agree with those numbers, but the above-linked paper provides plenty of structure for considering the problem further, such as modeling migration and adjustment effects across different parts of the world.  The Koonin brief meta-survey does not; it simply tells you that the junky papers don’t have the numbers to justify the panic.

So in what sense is the Koonin book useful for furthering my understanding of my number one question of concern?  Of course not every book has to be written for me, but at the end of the day it didn’t cause me to update my views much at all.

What we learned doing Fast Grants

Here is my new piece with Patrick Collison and Patrick Hsu.  The title says it all, here is one excerpt:

…we recently ran a survey of Fast Grants recipients, asking how much their Fast Grant accelerated their work. 32% said that Fast Grants accelerated their work by “a few months”, which is roughly what we were hoping for at the outset given that the disease was killing thousands of Americans every single day.

In addition to that, however, 64% of respondents told us that the work in question wouldn’t have happened without receiving a Fast Grant.

For example, SalivaDirect, the highly successful spit test from Yale University, was not able to get timely funding from its own School of Public Health, even though Yale has an endowment of over $30 billion. Fast Grants also made numerous grants to UC Berkeley researchers, and the UC Berkeley press office itself reported in May 2020: “One notably absent funder, however, is the federal government. While federal agencies have announced that researchers can apply to repurpose existing funds toward Covid-19 research and have promised new emergency funds to projects focused on the pandemic, disbursement has been painfully slow. …Despite many UC Berkeley proposals submitted to the National Institutes of Health since the pandemic began, none have been granted.” [Emphasis ours.]

And:

57% of respondents told us that they spend more than one quarter of their time on grant applications. This seems crazy. We spend enormous effort training scientists who are then forced to spend a significant fraction of their time seeking alms instead of focusing on the research they’ve been hired to pursue.

The adverse consequences of our funding apparatus appear to be more insidious than the mere imposition of bureaucratic overhead, however.

In our survey of the scientists who received Fast Grants, 78% said that they would change their research program “a lot” if their existing funding could be spent in an unconstrained fashion. We find this number to be far too high: the current grant funding apparatus does not allow some of the best scientists in the world to pursue the research agendas that they themselves think are best.

And:

Some of the other Fast Grants investments were speculative, and may (or may not) pay dividends in the future, or for the next pandemic. Examples include:

  • Work on a possible pan-coronavirus vaccine at Caltech.
  • Work on a possible pan-enterovirus (another class of RNA virus) drug at Stanford University that has now raised subsequent funding.
  • Multiple grants going to different labs working on CRISPR-based COVID-19 at-home testing. One example is smartphone-based COVID-19 detection, being worked on at UC Berkeley and Gladstone Institutes.

Self-recommending…

A Half Dose of Moderna is More Effective Than a Full Dose of AstraZeneca

Today we are releasing a new paper on dose-stretching, co-authored by Witold Wiecek, Amrita Ahuja, Michael Kremer, Alexandre Simoes Gomes, Christopher M. Snyder, Brandon Joel Tan and myself.

The paper makes three big points. First, Khoury et al. (2021) just published a paper in Nature which shows that “Neutralizing antibody levels are highly predictive of immune protection from symptomatic SARS-CoV-2 infection.” What that means is that there is a strong relationship between immunogenicity data that we can easily measure with a blood test and the efficacy rate that it takes hundreds of millions of dollars and many months of time to measure in a clinical trial. Thus, future vaccines may not have to go through lengthy clinical trials (which may even be made impossible as infection rates decline) but can instead rely on these correlates of immunity.

Here is where fractional dosing comes in. We supplement the key figure from Khoury et al.’s paper to show that fractional doses of the Moderna and Pfizer vaccines have neutralizing antibody levels (as measured in the early phase I and phase II trials) that look to be on par with those of many approved vaccines. Indeed, a one-half or one-quarter dose of the Moderna or Pfizer vaccine is predicted to be more effective than the standard dose of some of the other vaccines like the AstraZeneca, J&J or Sinopharm vaccines, assuming the same relationship as in Khoury et al. holds. The point is not that these other vaccines aren’t good–they are great! The point is that by using fractional dosing we could rapidly and safely expand the number of effective doses of the Moderna and Pfizer vaccines.
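
Because the Khoury et al. framework maps neutralizing antibody levels into predicted efficacy through a fitted logistic curve, the underlying calculation is simple to illustrate. Below is a minimal Python sketch of that kind of calculation; the functional form follows the paper’s logistic-in-log-neutralization setup, but the parameter values (n50, k) and the neutralization numbers are illustrative placeholders, not the paper’s fitted estimates or the figures from our analysis.

```python
# Illustrative sketch only: efficacy as a logistic function of the log10
# neutralizing antibody level, expressed as a fold of the mean convalescent
# titer (the Khoury et al. framework). Parameter values are placeholders.
import math

def predicted_efficacy(neut_fold_of_convalescent,
                       n50=0.2,   # assumed 50%-protection level (fold of convalescent)
                       k=3.0):    # assumed logistic slope on the log10 scale
    """Predicted protective efficacy given a neutralization level."""
    x = math.log10(neut_fold_of_convalescent)
    return 1.0 / (1.0 + math.exp(-k * (x - math.log10(n50))))

# Hypothetical example: if a half dose produced half the full-dose
# neutralization level, predicted efficacy falls only modestly because the
# curve is nearly flat at high neutralization levels.
full_dose_neut = 4.0                 # assumed fold of convalescent, full dose
half_dose_neut = full_dose_neut / 2  # assumed fold of convalescent, half dose
print(f"full dose: {predicted_efficacy(full_dose_neut):.2f}")
print(f"half dose: {predicted_efficacy(half_dose_neut):.2f}")
```

The qualitative point the sketch makes is the one in the figure: when neutralization levels sit well above the protective threshold, cutting the dose moves the prediction only a short distance down the flat part of the curve.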

Second, we embed fractional doses and other policies such as first doses first in an SEIR model and we show that even if efficacy rates for fractional doses are considerably lower, dose-stretching policies are still likely to reduce infections and deaths (assuming we can expand vaccinations fast enough to take advantage of the greater supply, which is well within the vaccination frontier). For example, a half-dose strategy reduces infections and deaths under a variety of different epidemic scenarios as long as the efficacy rate is 70% or greater.
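
For readers who want to see the mechanism, here is a stripped-down sketch of that kind of comparison in a basic SEIR framework with vaccination. It is not the model from the paper: the function name (run_seir), the parameter values, and the assumed efficacy numbers are all placeholders chosen only to illustrate why vaccinating twice as many people per day at a somewhat lower efficacy can still reduce cumulative infections.

```python
# Minimal illustrative SEIR sketch (not the paper's model or parameters):
# compare a full-dose strategy with a half-dose strategy that vaccinates
# twice as many people per day at an assumed lower efficacy.

def run_seir(days=360, N=1_000_000, beta=0.25, sigma=1/5, gamma=1/7,
             doses_per_day=2_000, dose_fraction=1.0, efficacy=0.95):
    """Discrete-time SEIR with all-or-nothing vaccination of susceptibles."""
    S, E, I, R = N - 100.0, 0.0, 100.0, 0.0
    cumulative_infections = 0.0
    for _ in range(days):
        new_E = beta * S * I / N                       # new exposures
        people_vaccinated = doses_per_day / dose_fraction
        protected = min(max(S - new_E, 0.0), people_vaccinated * efficacy)
        new_I = sigma * E                              # exposed become infectious
        new_R = gamma * I                              # recoveries
        S -= new_E + protected
        E += new_E - new_I
        I += new_I - new_R
        R += new_R + protected
        cumulative_infections += new_E
    return cumulative_infections

full = run_seir(dose_fraction=1.0, efficacy=0.95)  # assumed full-dose efficacy
half = run_seir(dose_fraction=0.5, efficacy=0.80)  # assumed half-dose efficacy
print(f"cumulative infections, full doses: {full:,.0f}")
print(f"cumulative infections, half doses: {half:,.0f}")
```

Under these made-up numbers the half-dose run pushes the susceptible pool below the epidemic threshold sooner, which is the intuition behind the result; the paper’s actual conclusions rest on its own calibrated model, not on this toy.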

Third, we show that under plausible scenarios it is better to start vaccination with a less efficacious vaccine than to wait for a more efficacious vaccine. Thus, Great Britain and Canada’s policies of starting with first doses of the AstraZeneca vaccine and then moving to second doses, perhaps with the Moderna or Pfizer vaccines, are a good strategy.

It is possible that new variants will reduce the efficacy rate of all vaccines (indeed, that is almost inevitable), but that doesn’t mean that fractional dosing isn’t optimal or that we shouldn’t adopt these policies now. What it means is that we should be testing and then adapting our strategy in light of new events, like a battlefield commander. We might, for example, use fractional dosing in the young or for the second shot and reserve full doses for the elderly.

One more point worth mentioning. Dose stretching policies everywhere are especially beneficial for less-developed countries, many of which are at the back of the vaccine queue. If dose-stretching cuts the time to be vaccinated in half, for example, then that may mean cutting the time to be vaccinated from two months to one month in a developed country but cutting it from two years to one year in a country that is currently at the back of the queue.

Read the whole thing.

The Becker-Friedman center also has a video discussion featuring my co-authors, Nobel prize winner Michael Kremer and the very excellent Witold Wiecek.

Why are the less educated in democracies especially anti-science?

This is not exactly what I was hoping to hear, but at this point along the road you know none of the stories are going to be pretty:

…less educated citizens in democracies are considerably less trustful of science than their counterparts in non-democracies. Further analyses suggest that, instead of being the result of stronger religiosity or lower science literacy, the increase in skepticism in democracies is mainly driven by a shift in the mode of legitimation, which reduces states’ ability and willingness to act as key public advocates for science. These findings help shed light on the institutional sources of “science-bashing” behaviors in many long-standing democracies.

In particular:

…democracies are significantly less likely to make references to science in their constitutions, and award a smaller share of high state honors to scientists.

Lower trust in government, as is found in democracies, also translates into lower trust in science, at least among less educated citizens.  An autocratic regime is more likely to invoke modernization and science as a form of attempted legitimization.

For poorly educated individuals, the countries where trust in science is highest are Kuwait, China, Kazakhstan, Spain, Tanzania, Gambia, Tajikistan, Myanmar, UAE, and Uzbekistan, three of them former Soviet republics.  For those with a college degree and above, the countries where trust in science is highest are the Philippines, India, Belgium, Denmark, Norway, Ireland, Finland, Spain, Tajikistan, and the Czech Republic.

Sad!

Here is the full paper by Jiang and Wan, via the excellent Kevin Lewis.

Chris Anderson of TED interviews me

Here is the full podcast series and also other ways to listen.

My Conversation with the very very smart David Deutsch

I think this episode came off as “weird and testy,” as I described it to one friend, but I like weird and testy!  Here is the audio, video, and transcript.  Here is one excerpt:

COWEN: How do you think the many-worlds interpretation of quantum mechanics relates to the view that, just in terms of space, the size of our current universe is infinite, and therefore everything possible is happening in it?

DEUTSCH: It complicates the discussion of probability, but there’s no overlap between that notion of infinity and the Everettian notion of infinity, if we are infinite there, because the differentiation (as I prefer to call what used to be called splitting) — when I perform an experiment which can go one of two ways, the influence of that spreads out. First, I see it. I may write it down; I may write a scientific paper. When I write a paper about it and report the results, that will cause the journal to split or to differentiate into two journals, and so on. This influence cannot spread out faster than the speed of light.

So an Everett universe is really a misnomer because what we see in real life is an Everett bubble within the universe. Everything outside the bubble is as it was; it’s undifferentiated, or, to be exact, it’s exactly as differentiated as it was before. Then, as the bubble spreads out, the universe becomes or the multiverse becomes more differentiated, but the bubble is always finite.

COWEN: How do your views relate to the philosophical modal realism of David Lewis?

DEUTSCH: There are interesting parallels. As a physicist, I’m interested in what the laws of physics tell us is so, rather than in philosophical reasoning about things, unless they impinge on a problem that I have. So yes, I’m interested in, for example, the continuity of the self — whether, if there’s another version of me a very large number of light-years away in an infinite universe, and it’s identical, is that really me? Are there two of me, one of me? I don’t entirely know the answer to that. It’s why I don’t entirely know the answer to whether I would go in a Star Trek transporter.

The modal realism certainly involves a lot of things that I don’t think exist — at least, not physically. I’m open to the idea that nonphysical things do exist: like the natural numbers, I think, exist. There’s a difference between the second even prime, which doesn’t exist, and the infinite number of prime numbers, which I think do exist. I think that there is more than one mode of existence, but the theory that all modes of existence are equally real — I see no point in that. The overlap between Everett and David Lewis is, I think, more coincidental than illuminating.

COWEN: If the universe is infinite and if David Lewis is correct, should I feel closer to the David Lewis copies of me? The copies or near copies of me in this universe? Or the near copies of me in the multiverse? It seems very crowded all of a sudden. Something whose purpose was to be economical doesn’t feel that way to me by the end of the metaphysics.

DEUTSCH: It doesn’t feel like that to you. . . . Well, as Wittgenstein is supposed to have said (I don’t know whether he really did), if it were true, what would it feel like? It would feel just like this.

Much more at the link.  And:

COWEN: Are we living in a simulation?

DEUTSCH: No, because living in a simulation is precisely a case of there being a barrier beyond which we cannot understand. If we’re living in a simulation that’s running on some computer, we can’t tell whether that computer is made of silicon or iron, or whether it obeys the same laws of computation, like Turing computability and quantum computability and so on, as ours. We can’t know anything about the physics there.

Well, we can know that it is at least a superset of our physics, but that’s not saying very much; it’s not telling us very much. It’s a typical example of a theory that can be rejected out of hand for the same reason that the supernatural ones — if somebody says, “Zeus did it,” then I’m going to say, “How should I respond? If I take that on board, how should I respond to the next person that comes along and tells me that Odin did it?”

COWEN: But it seems you’re rejecting an empirical claim on methodological grounds, and I get very suspicious. Philosophers typically reject transcendental arguments like, “Oh, we must be able to perceive reality, because if we couldn’t, how could we know that we couldn’t perceive reality?” It doesn’t prove you can perceive reality, right?

And this:

COWEN: A few very practical questions to close. Given the way British elections seem to have been running, that the Tories win every time, does that mean the error-correction mechanism of the British system of government now is weaker?

DEUTSCH: No. Unfortunately, the — so, as you probably know, I favor the first-past-the-post system in the purest possible form, as it is implemented in Britain. I think that is the most error-correcting possible electoral system, although I must add that the electoral system is only a tiny facet of the institutions of criticism and consent. In general, it’s just a tiny thing, but it is the best one.

It’s not perfect. It has some of the defects of, for example, proportional representation. Proportional representation has the defect that it causes coalitions all the time. Coalitions are bad.

COWEN: You have a delegated monitor with the coalition, right? With a coalition, say in the Netherlands (which is richer than the United Kingdom), you typically have coalition governments. Some parties in the coalition are delegated monitors of the other parties. Parties are better informed than voters. Isn’t that a better Popperian mechanism for error correction?

I also tried to sum up what I think he is all about, and he reacted with scorn.  That was an excellent part of the conversation.  And here is a good Twitter thread from Michael Nielsen about the Conversation.

The Ford F-150: An Electric Vehicle for Red America

The Ford F-150 truck has been America’s best-selling vehicle for forty years! (Bubble test: Do you own one or know someone who does?) The new version, the F-150 Lightning, goes into production in 2022 and it’s electric. Even today there is still the whiff of “liberal America” around electric vehicles, but what’s impressive about the Lightning isn’t that it’s electric; it’s that it’s a better truck. The Lightning, for example, can power a home and work appliances from its 11 outlets, including a 240-volt outlet! Look at this brilliant ad campaign:

Security and peace of mind are invaluable during severe weather and unpredictable events. That’s why Ford helps ensure you never have to worry about being left in the dark…

Security, peace of mind, don’t be left alone in the dark…all great conservative selling points. Note the truck in the picture is powering the house and the chain saw. The husband and wife, their home and their truck, project independence, success and confidence–a power couple–even with a nod to diversity.

The Lightning is also fast, with 0-60 mph times in line with those of a Porsche 911 circa 2005; it has more carrying capacity (thanks to the smaller electric motors) than a similar gas vehicle; and it can tow a respectable maximum of 10,000 pounds with all the options.

The Lightning might succeed or it might fail, but it won’t fail on politics: this is a vehicle a red-blooded, meat-eating skeptic of global warming could love.

Why the lab leak theory matters

Here is Ross Douthat at the NYT:

…there’s a pretty big difference between a world where the Chinese regime can say, We weren’t responsible for Covid but we crushed the virus and the West did not, because we’re strong and they’re decadent, and a world where this was basically their Chernobyl except their incompetence and cover-up sickened not just one of their own cities but also the entire globe.

The latter scenario would also open a debate about how the United States should try to enforce international scientific research safeguards, or how we should operate in a world where they can’t be reasonably enforced.

I agree, and would add one point about why this matters so much.  “Our wet market was low quality and poorly governed” is a story consistent with the Chinese elites not being entirely at fault.  Wet markets, after all, are a kind of atavism, and China knows the country is going to evolve away from them over time.  They represent the old order.  You can think of the CCP as both building infrastructure and moving the country’s food markets into modernity (that’s infrastructure too, isn’t it?), albeit with lags.  “We waited too long to get rid of the wet markets” is bad, but if anything suggests the CCP should have done all the more to revolutionize and modernize China.  In contrast, the story of “our government-run research labs are low quality and poorly governed”…that seems to place the blame entirely on the shoulders of the CCP and also on its technocratic, modernizing tendencies.  Under that account, the CCP spread something that “the earlier China” did not, and that strikes strongly at the heart of CCP legitimacy.  Keep in mind how much the Chinese apply a historical perspective to everything.

A number of you have asked me what I think of the lab leak hypothesis.  A few months ago I placed the chance of it at 20-30%, as a number of private correspondents can attest.  Currently I am up to 50-60%.

“Why economics is failing us”

That is the topic of my latest Bloomberg column, here is one excerpt:

Here’s the dirty little secret that few of my fellow economics professors will admit: As those “perfect” research papers have grown longer, they have also become less relevant. Fewer people — including academics — read them carefully or are influenced by them when it comes to policy.

Actual views on politics are more influenced by debates on social media, especially on such hot topics as the minimum wage or monetary and fiscal policy. The growing role of Twitter doesn’t have to be a bad thing. Social media is egalitarian, spurs spirited debate and enables research cooperation across great distances.

Still, an earlier culture of “debate through books” has been replaced by a new culture of “debate through tweets.” This is not necessarily progress.

To use a bit of economic terminology, economists haven’t fully internalized the lessons of the Laffer Curve. By demanding so much rigor in academic research, they’ve created an environment in which most of the economics people actually see is less rigorous.

There is also a political effect. Twitter is a relatively left-wing social medium, and so the tenor of popular economic discourse has moved to the left.

Recommended, and it is one of those pieces where the reaction to the piece itself confirms the thesis of the piece…

Emergent Ventures winners, 14th cohort

Center for Indonesian Policy Studies, Jakarta, to hire a new director.

Zach Mazlish, recent Brown graduate in philosophy, for travel and career development.

Upsolve.org, headed by Rohan Pavuluri, to support their work on legal reform and deregulation of legal services for the poor.

Madison Breshears, GMU law student, to study the proper regulation of cryptocurrencies.

Quest for Justice, to help Californians better navigate small claims court without a lawyer.

Cameron Wiese, Progress Studies fellow, to create a new World’s Fair.

Jimmy Alfonso Licon, philosopher, visiting position at George Mason University, general career development.

Tony Morley, Progress Studies fellow, from Ngunnawal, Australia, to write the first optimistic children’s book on progress.

Michelle Wang, sophomore at the University of Toronto, Canada, to study the causes and cures of depression, for general career development, and to help her intern at MIT.

Here are previous cohorts of winners.

It feels like we are living in a science fiction novel

That is the theme of my latest Bloomberg column, here is one excerpt:

Now, for the first time in my life, I feel like I am living in a science fiction serial.

The break point was China’s landing of an exploratory vehicle on Mars. It’s not just the mere fact of it, as China was one of the world’s poorest countries until relatively recently. It’s that the vehicle contains a remarkable assemblage of software and artificial intelligence devices, not to mention lasers and ground-penetrating radar.

There is a series of science fiction novels about China in which it colonizes Mars. Published between 1988 and 1999, David Wingrove’s Chung Kuo series is set 200 years in the future. It describes a corrupt and repressive China that rules the world and enforces rigid racial hierarchies.

It is striking to read the review of the book published in the New York Times in 1990. It notes that in the book “the Chinese somehow regained their sense of purpose in the latter half of the 21st century” — which hardly sounds like science fiction, the only question at this point being why it might have taken them so long. The book is judged unrealistic and objectionable because its “vision of a Chinese-dominated future seems arbitrary, ungrounded in historical process.” The Chung Kuo books don’t reflect my predictions either, but it does seem that reality has exceeded the vision of at least one book critic.

I also consider Asimov, Dogecoin, and Stephenson at the link.

Ezra Klein on UFOs

What if they turn out to be “a thing”?  Here is one excerpt; to be clear, this is not the only view or possibility he is putting forward:

One immediate effect, I suspect, would be a collapse in public trust. Decades of U.F.O. reports and conspiracies would take on a different cast. Governments would be seen as having withheld a profound truth from the public, whether or not they actually did. We already live in an age of conspiracy theories. Now the guardrails would truly shatter, because if U.F.O.s were real, despite decades of dismissals, who would remain trusted to say anything else was false? Certainly not the academics who’d laughed them off as nonsense, or the governments who would now be seen as liars.

And this:

One lesson of the pandemic is that humanity’s desire for normalcy is an underrated force, and there is no single mistake as common to political analysis as the constant belief that this or that event will finally change everything. If so many can deny or downplay a disease that’s killed millions, dismissing some unusual debris would be trivial. “An awful lot of people would basically shrug and it’d be in the news for three days,” Adrian Tchaikovsky, the science fiction writer, told me. “You can’t just say, ‘still no understanding of alien thing!’ every day. An awful lot of people would be very keen on continuing with their lives and routines no matter what.”

Excellent column, do read the whole thing (NYT).

Four “dark horse” stories for 2021

From my Bloomberg column, here is one of them:

A possible Chinese move against Taiwan has received a lot of attention, but a Russian union with Belarus could be a greater danger. Belarus might even agree to such a proposition, so it would be hard for NATO or the U.S. to decry it as a coercive invasion. Yet such a Russian expansion could upend political stability in Europe.

If Russia and Belarus became a single political unit, there would be only a thin band of land, called the Suwalki Gap, connecting the Baltics to the rest of the European Union. Unfortunately, that same piece of territory would stand in the way of the new, larger Russia connecting with the now cut-off Russian region of Kaliningrad. Over the long term, could the Baltics maintain their independence? If not, the European Union would show it is entirely a toothless entity, unable to guarantee the sovereignty of its members.

Even if there were no formal political union between Russia and Belarus, the territorial continuity and integrity of the EU could soon be up for grabs. The EU has more at stake in an independent Belarus than it likes to admit.

You will find three more undervalued possible news stories at the link.