Month: April 2023

The pro-immigration argument that everyone hates

Fortunately people hate it because it is wrong; otherwise they would have to hate it for less intellectually honest reasons.  The basic context, of course, is that native rates of fertility are in irreversible decline.  Here goes:

Immigration is not going so well today in terms of assimilation.  Yet in the future it will go worse still, because the native-born will be smaller in number and also older and less energetic.  Nonetheless we need to take in a lot of immigrants today, as a kind of practice, so we can get used to the much greater number of immigrants we will need to take in a generation or two from now.  It is better to be a crummy country than a country of 33,000 people.  And so we must become crummier now, so that later on our rise in crumminess is modestly tempered, though it still will happen.  Open the gates!

See?

Using AI in politics

Could AI be used to generate strategic advantage in politics and elections?

Without doubt. We used it to improve prediction of the true critical voters in 2016 (but not to improve the execution of digital marketing, per the Cadwalladr conspiracy) and the true critical voters and true marginal seats in 2019. Competent campaigns everywhere could already, pre-GPT, use AI tools to improve performance.

We did some simple experiments last year to see if you could run ‘synthetic’ focus groups and ‘synthetic’ polls inside an LLM. Yes you can. We interrogated synthetic swing voters and synthetic MAGA fans on, for example, Trump running again. The responses are indistinguishable from those of real people, as you might expect. And polling experiments similarly produced results very close to actual polls. Some academic papers have been published showing similar ideas to what we experimented with. There is no doubt that a competent team could use these emerging tools to improve tools for politics and perform existing tasks faster and cheaper. And one can already see this starting (look at who David Shor is hiring).

It’s a sign of how fast AI is moving that this idea was new last summer (I first heard it discussed among top people roughly in July), we and others tested it, and focus has already moved on to new ideas, while ~100% of those in mainstream politics today have no idea these possibilities exist.

That is from Dominic Cummings (paid) Substack.
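For concreteness, here is a minimal sketch of what a “synthetic poll” inside an LLM might look like. The personas, question, and model choice are hypothetical illustrations, not Cummings’s actual setup, and the snippet assumes the OpenAI Python library as it existed in early 2023:

```python
# A toy "synthetic poll": ask an LLM to answer a survey question in the
# voice of a sampled persona, then tally the answers. Personas and the
# question are made-up illustrations, not anyone's real methodology.
import collections

import openai  # the 0.x-era OpenAI library (pip install openai)

openai.api_key = "YOUR_API_KEY"

PERSONAS = [
    "a 54-year-old swing voter in a Midwestern suburb who rarely follows politics",
    "a 38-year-old MAGA enthusiast who attends rallies",
    "a 45-year-old suburban moderate who split her ticket in recent elections",
]

QUESTION = "Should Trump run for president again? Answer only YES or NO."

def ask_persona(persona: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Answer survey questions in character as {persona}."},
            {"role": "user", "content": QUESTION},
        ],
        temperature=1.0,  # sampling noise stands in for drawing different respondents
    )
    return response.choices[0].message["content"].strip().upper()

# "Interview" each persona repeatedly and tabulate, as a crude synthetic sample.
tally = collections.Counter(ask_persona(p) for p in PERSONAS for _ in range(20))
print(tally)
```

Whether such synthetic samples actually track real polls, as Cummings reports, is exactly what the academic papers he mentions try to validate.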

“Almost space” markets in everything

The space race just got a new entrant. France’s Zephalto is offering passengers the chance to travel to the stratosphere in a balloon, starting at €120,000 ($132,000) per person in 2025.

“I partnered with the French space agency, and we worked on the concept of the balloon together,” says Zephalto founder and aerospace engineer Vincent Farret d’Astiès.

He tells Bloomberg that he’s planning on 60 flights a year, with just six passengers on board each flight. The company aims to provide an experience that brings the best bits of French hospitality—fine food, wine and design—to the edges of space for those who can afford the six-figure ticket.

Balloons filled with helium or hydrogen will depart from France with two pilots on board and rise 25 kilometers (15.5 miles) into the stratosphere over the course of 1 1/2 hours. Once at peak altitude, which is about three times higher than a commercial airliner’s cruising altitude, the balloon will stay for three hours, giving guests a chance to take in views previously seen only by astronauts. The descent will take a further hour and a half, for a six-hour round trip.

Here is more from Sarah Rappaport at Bloomberg.  Via Daniel Lippman.

Thursday assorted links

1. How much are people spending on dates.

2. Why not buy an abandoned Japanese house?  The price is right (NYT).  Soon they may be cheaper than repeated dating.

3. At the local level, employment concentration is falling.

4. “Who is crazier? Me or them?” (Ukraine issues)

5. “We’re Not Going to Die,” Robin Hanson video.  And ask the experts: good common sense from Tom Tugendhat, UK security minister, on AI safety.  “China, with its vast datasets and fierce determination, is a strong rival.”  Keep in mind we need to stay ahead of them for a long while, not just a few years.  The fact that the Chinese might heavily regulate their private sector AI tells you nothing about what their government will do, or if anything it tells you they will emphasize developments in the military direction.

6. Data on female-to-female mentoring.

A Mosquito Factory?!

A “mosquito factory” might sound like the last thing you’d ever want, but Brazil is constructing a facility capable of producing five billion mosquitoes annually. The twist? The factory will breed mosquitoes carrying a bacterium that significantly reduces their ability to transmit viruses. As far as I can tell, however, the new mosquitoes still suck your blood.

Nature: The bacterium Wolbachia pipientis naturally infects about half of all insect species. Aedes aegypti mosquitoes, which transmit dengue, Zika, chikungunya and other viruses, don’t normally carry the bacterium, however. O’Neill and his colleagues developed the WMP mosquitoes after discovering that A. aegypti infected with Wolbachia are much less likely to spread disease. The bacterium outcompetes the viruses that the insect is carrying.

When the modified mosquitoes are released into areas infested with wild A. aegypti, they slowly spread the bacteria to the wild mosquito population.

Several studies have demonstrated the insects’ success. The most comprehensive one, a randomized, controlled trial in Yogyakarta, Indonesia, showed that the technology could reduce the incidence of dengue by 77%, and was met with enthusiasm by epidemiologists.

In Brazil, where the modified mosquitoes have so far been tested in five cities, results have been more modest. In Niterói, the intervention was associated with a 69% decrease in dengue cases. In Rio de Janeiro, the reduction was 38%.

Wolbachia-infected mosquitoes have already been approved by Brazilian regulatory agencies. But the technology has not yet been officially endorsed by the World Health Organization (WHO), which could be an obstacle to its use in other countries. The WHO’s Vector Control Advisory Group has been evaluating the modified mosquitoes, and a discussion about the technology is on the agenda for the group’s next meeting later this month.
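The population-level logic behind the releases can be shown with a deliberately simple toy model (my construction for illustration, not the World Mosquito Program’s actual dynamics). Assume perfect maternal transmission, complete cytoplasmic incompatibility (crosses between uninfected females and infected males yield no viable offspring), random mating, and no fitness cost; then the infected share of the population rises toward fixation from any starting frequency:

```python
# Toy model of Wolbachia spread via cytoplasmic incompatibility (CI).
# Simplifying assumptions (illustrative only):
#   - perfect maternal transmission: infected mothers -> infected offspring
#   - complete CI: uninfected female x infected male -> no viable offspring
#   - random mating, equal sex ratio, no fitness cost of infection
# Viable offspring per generation, with p = infected share:
#   infected   ~ p            (every infected female reproduces successfully)
#   uninfected ~ (1 - p)**2   (uninfected females succeed only with uninfected males)
# Hence p' = p / (p + (1 - p)**2), which increases toward 1 for any p > 0.

def next_generation(p: float) -> float:
    return p / (p + (1.0 - p) ** 2)

p = 0.10  # illustrative infected share just after a release
for gen in range(20):
    print(f"generation {gen:2d}: infected share = {p:.3f}")
    p = next_generation(p)
```

Adding a fitness cost to infection turns this into a threshold model: below a critical frequency the infection dies out, which is one reason the releases need to be large and sustained.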

Do older economists write differently?

The scholarly impact of academic research matters for academic promotions, influence, relevance to public policy, and others. Focusing on writing style in top-level professional journals, we examine how it changes with age, and how stylistic differences and age affect impact. As top-level scholars age, their writing style increasingly differs from others’. The impact (measured by citations) of each contribution decreases, due to the direct effect of age and the much smaller indirect effects through style. Non-native English-speakers write in different styles from others, in ways that reduce the impact of their research. Nobel laureates’ scholarly writing evinces less certainty about the conclusions of their research than that of other highly productive scholars.

Here is the full NBER paper by Lea-Rachel Kosnik and Daniel S. Hamermesh.

The dilemma of 2023 banking, in a nutshell

Motivated by the regional bank crisis of 2023, we model the impact of interest rates on the liquidity risk of banks. Prior work shows that banks hedge the interest rate risk of their assets with their deposit franchise: when interest rates rise, the value of the assets falls but the value of the deposit franchise rises. Yet the deposit franchise is only valuable if depositors remain in the bank. This creates run incentives for uninsured depositors. We show that a run equilibrium is absent at low interest rates but appears when rates rise because the deposit franchise comes to dominate the value of the bank. The liquidity risk of the bank thus increases with interest rates. We provide a formula for the bank’s optimal risk management policy. The bank should act as if its deposit rate is more sensitive to market rates than it really is, i.e., as if its “deposit beta” is higher. This leads the bank to shrink the duration of its assets. Shortening duration has a downside, however: it exposes the bank to insolvency if interest rates fall. The bank thus faces a dilemma: it cannot simultaneously hedge its interest rate risk and liquidity risk exposures. The dilemma disappears only if uninsured deposits do not contribute to the deposit franchise (if they have a deposit beta of one). The recent growth of low-beta uninsured checking and savings accounts thus poses stability risks to banks. The risks increase with interest rates and are amplified by other exposures such as credit risk. We show how they can be addressed with an optimal capital requirement that rises with interest rates.

That is from a new paper by Itamar Drechsler, Alexi Savov, Philipp Schnabl, and Olivier Wang.
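A back-of-the-envelope illustration of the mechanism (toy numbers of my own, not the paper’s calibration): with a low deposit beta, rising rates raise the spread the bank earns on deposits at the same time as they mark down long-duration assets, and shortening duration to blunt the run risk gives up the hedge against falling rates.

```python
# Toy illustration of the deposit-franchise hedging dilemma.
# All numbers are invented for illustration; this is not the paper's model.

def asset_value_change(duration: float, rate_rise: float) -> float:
    # First-order duration approximation: dV/V ~= -duration * dr.
    return -duration * rate_rise

def franchise_spread(market_rate: float, beta: float) -> float:
    # With deposit beta < 1, deposit rates move only beta-for-one with
    # market rates, so the bank earns (1 - beta) * r per dollar of
    # deposits -- but only while depositors actually stay put.
    return (1.0 - beta) * market_rate

BETA = 0.2  # a low "deposit beta": deposit rates barely respond to market rates
for r in (0.00, 0.02, 0.05):
    print(f"market rate {r:.0%}:")
    print(f"  franchise spread earned on deposits: {franchise_spread(r, BETA):.2%}/yr")
    print(f"  long-duration assets (D=6): {asset_value_change(6.0, r):+.1%} mark-to-market")
    print(f"  short-duration assets (D=1): {asset_value_change(1.0, r):+.1%} mark-to-market")

# At 5% rates the franchise earns 4%/yr while long assets are down ~30%:
# uninsured depositors gain by running, and a run destroys the franchise.
# Shortening duration caps that loss but forgoes the hedge if rates fall.
```

The paper’s point is that no duration choice fixes both problems at once, which is why the authors end up recommending a capital requirement that rises with interest rates.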

Wednesday assorted links

1. Scott Aaronson with some AGI risk sanity.

2. The revise and resubmit process in economics.

3. “A mayor in South Korea is under fire for dumping tons of garbage on an area beach so that volunteer clean-up crews had something to remove.”

4. Taylor Swift did due diligence on FTX.

5. Haiti fact of the day: “Ticket from Miami to Port-au-Prince: $124 Ticket from Port-au-Prince to Miami: $1,000-3,000”

6. Magnus playing poker (WSJ).

My Conversation with Anna Keay

A very good episode, here is the audio, video, and transcript.  Here is part of the episode summary:

Tyler sat down with Anna to discuss the most plausible scenario where England could’ve remained a republic in the 17th century, what Robert Boyle learned from Sir William Petty, why some monarchs build palaces and others don’t, how renting from the Landmark Trust compares to Airbnb, how her job changes her views on wealth taxes, why neighborhood architecture has declined, how she’d handle the UK’s housing shortage, why giving back the Koh-i-Noor would cause more problems than it solves, why British houses have so little storage, the hardest part about living in an 800-year-old house, her favorite John Fowles book, why we should do more to preserve the Scottish Enlightenment, and more.

And here is one excerpt:

COWEN: Which are the old buildings that we have too many of in Britain? There’s a lot of Christopher Wren churches. I think there’s over 20.

KEAY: Too many?

COWEN: What if they were 15? They’re not all fantastic.

KEAY: They’re not all fantastic? Tell me one that isn’t fantastic.

COWEN: The Victorians knocked down St. Mildred. I’ve seen pictures of it. I don’t miss it.

KEAY: Well, you don’t miss something that’s not there. I think it’d be pretty hard to convince me that any Christopher Wren church wasn’t worth hanging on to. But your point is right, which is to say that not everything that was ever built is worth retaining. There are things which are clearly of much less interest or were poorly built, which are not serving a purpose anymore in a way that they need to. To me, it’s all about assessing what matters, what we care about.

It’s incredibly important to remember how you have to try and take the long view because if you let things go, you cannot later retrieve them. We look at the decisions that were made in the past about things that we really care about that were demolished — wonderful country houses, as we’ve mentioned, or the fantastic Euston Station, one of the great stations of the world, built in the middle of the 19th century, demolished in the ’60s, regretted forever since.

So, one of the things you have to be really careful about is to make a distinction between the fashion of the moment and things which we are going to regret, or our children or our grandchildren are going to curse us for having not valued or not thought about, not considered.

Which is why, in this country, we have this thing called the listing system, where there’s a process of identifying buildings which are important, and what’s called listing them — putting them on a list — which means that if you own them, you can’t change them without getting permission. It is a way of ensuring that, for things which you as an owner or I as an owner might treat with scorn, the interests of generations to come are represented.

COWEN: Why were so many big mistakes made in the middle part of the 20th century? St. Pancras was almost knocked down, as I’m sure you know. That would have been a huge blunder. There was something about that time; people seem to have become more interested in ugliness. Or what’s your theory? How do you explain the insanity that took hold of all of Britain for, what, 30 years?

KEAY: Well, I think this is such a good question because this is, to me, what the study of history is all about, which is, you have to think about what it was like for that generation. You have to think of what it was like for people in the 1950s and ’60s, who had experienced, either firsthand or very close at hand, not just one but two catastrophic world wars in which enormous numbers had been killed and places had been destroyed. The whole human cost of that time was so colossal, and for that generation the idea took hold that something really fundamental had to change if we were going to be a society that wasn’t going to be killing one another all the time.

This has a real sort of mirror in the 17th century, during the Civil War in the 17th century. There’s a real feeling that something had to be done. Otherwise, God was going to strike down this nation, this errant nation. I think for that generation in the ’50s and ’60s, there was the sense that we simply have to do things differently because this pattern of life, this pattern of existence, this way we’ve operated as a society has been so destructive.

Although lots of things were done — when it comes to urban planning and so on — that we really regret now, I think you have to be really careful not to diminish the seriousness of intent of those people who were trying to conceive of what that world might be — more egalitarian, more democratic, involving more space, more air, more light, healthier — all these kinds of things.

Definitely recommended, with numerous interesting parts.  And I am very happy to recommend Anna’s latest book The Restless Republic: Britain Without a Crown.

*The Two-Parent Privilege*

A new and great book, authored by Melissa S. Kearney of the University of Maryland.  The subtitle is How Americans Stopped Getting Married and Started Falling Behind, and here is one excerpt of the summary points:

Two-parent families are beneficial for children.

The class divide in marriage and family structure has exacerbated inequality and class gaps.

Places that have more two-parent families have higher rates of upward mobility.

Not talking about these facts is counterproductive.

The marshaled evidence is convincing, and I will be blogging more about this book.  While some stiff competition is coming, this could be the most important economics and policy book of this year.  And yes it is remarkable that such a book is so needed, but yes it is.  And here is Melissa on Twitter.

Ideas for regulating AI safety

Note that these come from Luke Muehlhauser, and he is not speaking for Open Philanthropy in any official capacity:

  1. Software export controls. Control the export (to anyone) of “frontier AI models,” i.e. models with highly general capabilities over some threshold, or (more simply) models trained with a compute budget over some threshold (e.g. as much compute as $1 billion can buy today). This will help limit the proliferation of the models which probably pose the greatest risk. Also restrict API access in some ways, as API access can potentially be used to generate an optimized dataset sufficient to train a smaller model to reach performance similar to that of the larger model. [A toy version of this compute threshold is sketched after the list.]
  2. Require hardware security features on cutting-edge chips. Security features on chips can be leveraged for many useful compute governance purposes, e.g. to verify compliance with export controls and domestic regulations, monitor chip activity without leaking sensitive IP, limit usage (e.g. via interconnect limits), or even intervene in an emergency (e.g. remote shutdown). These functions can be achieved via firmware updates to already-deployed chips, though some features would be more tamper-resistant if implemented on the silicon itself in future chips.
  3. Track stocks and flows of cutting-edge chips, and license big clusters. Chips over a certain capability threshold (e.g. the one used for the October 2022 export controls) should be tracked, and a license should be required to bring together large masses of them (as required to cost-effectively train frontier models). This would improve government visibility into potentially dangerous clusters of compute. And without this, other aspects of an effective compute governance regime can be rendered moot via the use of undeclared compute.
  4. Track and require a license to develop frontier AI models. This would improve government visibility into potentially dangerous AI model development, and allow more control over their proliferation. Without this, other policies like the information security requirements below are hard to implement.
  5. Information security requirements. Require that frontier AI models be subject to extra-stringent information security protections (including cyber, physical, and personnel security), including during model training, to limit unintended proliferation of dangerous models.
  6. Testing and evaluation requirements. Require that frontier AI models be subject to extra-stringent safety testing and evaluation, including some evaluation by an independent auditor meeting certain criteria. [footnote in the original]
  7. Fund specific genres of alignment, interpretability, and model evaluation R&D. Note that if the genres are not specified well enough, such funding can effectively widen (rather than shrink) the gap between cutting-edge AI capabilities and available methods for alignment, interpretability, and evaluation. See e.g. here for one possible model.
  8. Fund defensive information security R&D, again to help limit unintended proliferation of dangerous models. Even the broadest funding strategy would help, but there are many ways to target this funding to the development and deployment pipeline for frontier AI models.
  9. Create a narrow antitrust safe harbor for AI safety & security collaboration. Frontier-model developers would be more likely to collaborate usefully on AI safety and security work if such collaboration were more clearly allowed under antitrust rules. Careful scoping of the policy would be needed to retain the basic goals of antitrust policy.
  10. Require certain kinds of AI incident reporting, similar to incident reporting requirements in other industries (e.g. aviation) or to data breach reporting requirements, and similar to some vulnerability disclosure regimes. Many incidents wouldn’t need to be reported publicly, but could be kept confidential within a regulatory body. The goal of this is to allow regulators and perhaps others to track certain kinds of harms and close-calls from AI systems, to keep track of where the dangers are and rapidly evolve mitigation mechanisms.
  11. Clarify the liability of AI developers for concrete AI harms, especially clear physical or financial harms, including those resulting from negligent security practices. A new framework for AI liability should in particular address the risks from frontier models carrying out actions. The goal of clear liability is to incentivize greater investment in safety, security, etc. by AI developers.
  12. Create means for rapid shutdown of large compute clusters and training runs. One kind of “off switch” that may be useful in an emergency is a non-networked power cutoff switch for large compute clusters. As far as I know, most datacenters don’t have this. [footnote in the original] Remote shutdown mechanisms on chips (mentioned above) could also help, though they are vulnerable to interruption by cyberattack. Various additional options could be required for compute clusters and training runs beyond particular thresholds.
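To make #1 concrete: a standard rule of thumb puts dense-transformer training compute at roughly 6 FLOPs per parameter per training token, and the dollar threshold then turns on an assumed price per FLOP. Here is a toy version of the test, with every number below an assumption chosen purely for illustration:

```python
# Sketch of the compute-threshold test in proposal #1 above.
# The 6 * params * tokens estimate is a common rule of thumb for dense
# transformer training; the price per FLOP and the example model are
# invented assumptions, not real figures.

def training_flops(n_params: float, n_tokens: float) -> float:
    # Rule of thumb: ~6 FLOPs per parameter per training token.
    return 6.0 * n_params * n_tokens

USD_BUDGET = 1e9        # "as much compute as $1 billion can buy today"
FLOPS_PER_USD = 1e17    # assumed effective training FLOPs per dollar (illustrative)
THRESHOLD_FLOPS = USD_BUDGET * FLOPS_PER_USD  # 1e26 FLOPs under these assumptions

def requires_license(n_params: float, n_tokens: float) -> bool:
    return training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 1.4T tokens uses about
# 6e23 FLOPs -- well under this particular threshold.
print(requires_license(70e9, 1.4e12))  # False
```

Where exactly the threshold sits, and who updates it as hardware prices fall, is doing most of the work in the proposal.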

I am OK with some of these, provided they are applied liberally — for instance, new editions of the iPhone require regulatory consent, but that hasn’t thwarted progress much.  That may or may not be the case for #3 through #6; I don’t know how strict a standard is intended or who exactly is to make the call.  Perhaps I do not understand #2, but it strikes me as a proposal for a complete surveillance society, at least as far as computers are concerned — I am opposed!  And furthermore it will drive a lot of activity underground, and in the meantime the proposal itself will hurt the EA brand.  I hope the country rises up against such ideas, or perhaps more likely that they die stillborn.  (And to think they are based on fears that have never even been modeled.  And I guess I can’t bring in a computer from Mexico to use?)  I am not sure what “restrict API access” means in practice (to whom? to everyone who might be a Chinese spy? and does Luke favor banning all open source? do we really want to drive all that underground?), but probably I am opposed to it.  I am opposed to placing liability for a General Purpose Technology on the technology supplier (#11), and I hope to write more on this soon.

Finally, is Luke a closet accelerationist?  The status quo does plenty to boost AI progress, often through the military and government R&D and public universities, but there is no talk of eliminating those programs.  Why so many regulations but the government subsidies get off scot-free!?  How about, while we are at it, banning additional Canadians from coming to the United States?  (Canadians are renowned for their AI contributions.)  After all, the security of our nation and indeed the world is at stake.  Canada is a very nice country, and since 1949 it even contains Newfoundland, so this seems like less of an imposition than monitoring all our computer activity, right?  It might be easier yet to shut down all high-skilled immigration.  Any takers for that one?

Women’s colleges and economics majors

Many observers argue that diversity in Economics and STEM fields is critical, not simply because of egalitarian goals, but because who is in a field may shape what is studied by it. If increasing the rate of majoring in mathematically-intensive fields among women is a worthy goal, then understanding whether women’s colleges causally affect that choice is important. Among all admitted applicants to Wellesley College, enrollees are 7.2 percentage points (94%) more likely to receive an Economics degree than non-enrollees (a plausible lower bound given negative selection into enrollment on math skills and major preferences). Overall, 3.2 percentage points—or 44% of the difference between enrollees and non-enrollees—is explained by college exposure to female instructors and students, consistent with a wider role for women’s colleges in increasing female participation in Economics.

Here is the full NBER paper by Kristin F. Butcher, Patrick McEwan, and Akila Weerapana.  And here is a new paper about the value of HBCUs.

Tuesday assorted links

1. More on Sudan (correct link).

2. Atlas Fellowship, for 19 and younger.  Many very good winners.

3. Improving GPT models with self-reflection.  And you can now interrogate your pdfs.

4. Ahem: “A proposal to strip Disney World of its ability to self-inspect its rides and monorails could also alter its participation in an agreement that allows major theme parks to self-report injuries on their attractions.

But the other big theme parks, including Universal and SeaWorld, would still retain those privileges, Gov. Ron DeSantis said Monday.”  Link here.  Not unrelated to a lot of other regulatory issues as well.

5. AI “photograph” wins prestigious photography award.

6. The Collinses.

7. Ernie Tedeschi now chief economist at the CEA.