Category: Science

Career advice in climate change

From a loyal MR reader:

Advice question for you and MR readers, riffing on one of your Conversations themes, if you would indulge me.

What advice would you give to someone wishing to build a career in climate change mitigation as a non-scientist? 

Two advice scenarios: 1) the person is 16; 2) the person is mid-career. Assume no constraints with respect to skill development or self-directed study. That is, what should these people teach themselves? To whom should they reach out for mentorship?

Your suggestions?

Free Will and the Brain

Tyler and I have been arguing about free will for decades. One of the strongest arguments against free will is an empirical argument due to physiologist Benjamin Libet. Libet famously found that the brain seems to signal a decision to act before the conscious mind forms an intention to act. Brain scans can see a finger tap coming 500 ms before the tap, but the conscious decision seems to be made only 150 ms before the tap. Libet’s results, however, are now being reinterpreted:

The Atlantic: To decide when to tap their fingers, the participants simply acted whenever the moment struck them. Those spontaneous moments, Schurger reasoned, must have coincided with the haphazard ebb and flow of the participants’ brain activity. They would have been more likely to tap their fingers when their motor system happened to be closer to a threshold for movement initiation.

This would not imply, as Libet had thought, that people’s brains “decide” to move their fingers before they know it. Hardly. Rather, it would mean that the noisy activity in people’s brains sometimes happens to tip the scale if there’s nothing else to base a choice on, saving us from endless indecision when faced with an arbitrary task. The Bereitschaftspotential would be the rising part of the brain fluctuations that tend to coincide with the decisions. This is a highly specific situation, not a general case for all, or even many, choices.

…In a new study under review for publication in the Proceedings of the National Academy of Sciences, Schurger and two Princeton researchers repeated a version of Libet’s experiment. To avoid unintentionally cherry-picking brain noise, they included a control condition in which people didn’t move at all. An artificial-intelligence classifier allowed them to find at what point brain activity in the two conditions diverged. If Libet was right, that should have happened at 500 milliseconds before the movement. But the algorithm couldn’t tell any difference until about only 150 milliseconds before the movement, the time people reported making decisions in Libet’s original experiment.

In other words, people’s subjective experience of a decision—what Libet’s study seemed to suggest was just an illusion—appeared to match the actual moment their brains showed them making a decision.
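For the technically curious, here is a minimal sketch of the kind of time-resolved classifier analysis described above, run on synthetic data rather than real EEG; this is not the authors’ actual pipeline, and every signal parameter below is invented for illustration.

```python
# Minimal sketch of a time-resolved "divergence point" analysis, in the
# spirit of comparing a movement condition against a no-movement control.
# All data here are synthetic; real EEG epochs would replace make_trials().
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
N_TRIALS, N_BINS = 200, 60          # ~600 ms before movement, ~10 ms per bin

def make_trials(move):
    """Noisy baseline drift; the 'move' condition adds a ramp only in the last ~150 ms."""
    t = np.linspace(-600, 0, N_BINS)             # ms relative to movement
    base = rng.normal(0, 1.0, (N_TRIALS, N_BINS)).cumsum(axis=1) * 0.1
    if move:
        base += np.clip((t + 150) / 150, 0, 1)   # rises only after -150 ms
    return base

X = np.vstack([make_trials(True), make_trials(False)])
y = np.array([1] * N_TRIALS + [0] * N_TRIALS)

# Train a classifier separately at each time bin and find the earliest bin
# where it beats chance, i.e. where the two conditions become separable.
for b in range(N_BINS):
    acc = cross_val_score(LogisticRegression(), X[:, [b]], y, cv=5).mean()
    if acc > 0.6:                                 # crude above-chance threshold
        print(f"conditions diverge ~{600 - 10 * b} ms before movement")
        break
```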

The Atlantic piece with more background is here. A scientific piece summarizing some of the new experiments is here. Of course, the philosophical puzzles remain. Tyler and I will continue to argue.

Special Emergent Ventures tranche to study the nature and causes of progress

I am pleased to announce the initiation of a new, special tranche of the Emergent Ventures fund, namely to study the nature and causes of progress, economic and scientific progress yes, but more broadly too, including social and cultural factors.  This has been labeled at times “Progress Studies.”

Simply apply at the normal Emergent Ventures site and follow the super-simple instructions.  Feel free to mention the concept of progress if appropriate to your idea and proposal.  Here is the underlying philosophy of Emergent Ventures.

And I am pleased to announce that an initial award from this tranche has been made to the excellent Pseudoerasmus, for blog writing on historical economic development and also for high-quality Twitter engagement and for general scholarly virtue and commitment to ideas.

Pseudoerasmus has decided to donate this award to the UK Economic History Society.  Hail Pseudoerasmus!

Has a more beautiful “Progress Studies” book introduction ever been written?

On January 5, 1845, the Prussian cultural minister Karl Friedrich von Eichhorn received a request from a group of six young men to form a new Physical Society in Berlin.  By the time their statutes were approved in March, they numbered forty-nine and were meeting biweekly to discuss the latest developments in the physical sciences and physiology.  They were preparing to write critical reviews for a new journal, Die Fortschritte der Physik (Advances in physics), and from the beginning they set out to define what constituted progress and what did not.  Their success in this rather aggressive endeavor has long fascinated historians of science.  In fields from thermodynamics, mechanics, and electromagnetism to animal electricity, ophthalmology, and psychophysics, members of this small group established leading positions in what only thirty years later had become a new landscape of physical science populated by large institutes and laboratories of experiment and precision measurement.

How was this possible?  How could a bunch of twenty-somethings, without position or recognition, and possessed of little more than their outsized confidence and ambition, succeed in seizing the future?  What were their resources?

That is the opening passage from M. Norton Wise, Aesthetics, Industry, and Science: Hermann von Helmholtz and the Berlin Physical Society.

Marty Weitzman’s Noah’s Ark Problem

Marty Weitzman passed away suddenly yesterday. He was on many people’s shortlist for the Nobel. His work is marked by high theory applied to practical problems. The theory is always worked out in great generality and is difficult even for most economists. Weitzman wanted to be understood by more than a handful of theorists, however, and so he also went to great lengths to look for special cases or revealing metaphors. Thus, the typical Weitzman paper has a dense middle section of math but an introduction and conclusion of sparkling prose that can be understood and appreciated by anyone for its insights.

The Noah’s Ark Problem illustrates the model and is my favorite Weitzman paper. It has great sentences like these:

Noah knows that a flood is coming. There are n existing species/libraries, indexed i = 1, 2,… , n. Using the same notation as before, the set of all n species/libraries is denoted S. An Ark is available to help save some species/libraries. In a world of unlimited resources, the entire set S might be saved. Unfortunately, Noah’s Ark has a limited capacity of B. In the Bible, B is given as 300 x 50 x 30 = 450,000 cubits. More generally, B stands for the total size of the budget available for biodiversity preservation.

…If species/library i is boarded on the Ark, and thereby afforded some protection, its survival probability is enhanced to P̄i. Essentially, boarding on the Ark is a metaphor for investing in a conservation project, like habitat protection, that improves survivability of a particular species/library. A particularly grim version of the Noah’s Ark Problem would make the choice a matter of life or death, meaning that Pi = 0 and P̄i = 1. This specification is perhaps closest to the Old Testament version, so I am taking literary license here by extending the metaphor to less stark alternatives.

Weitzman first shows that the solution to this problem has a surprising property:

The solution of the Noah’s Ark Problem is always “extreme” in the following sense…In an optimal policy, the entire budget is spent on a favored subset of species/libraries that is afforded maximal protection. The less favored complementary subset is sacrificed to a level of minimal protection in order to free up to the extreme all possible scarce budget dollars to go into protecting the favored few.

Weitzman offers a stark example. Suppose there are two species with probabilities of survival of .99 and .01. For the same cost, we can raise the probability of either surviving by .01. What should we do?

We should save the first species and let the other one take its chances. The intuition comes from thinking about the species or libraries as having some unique features but also sharing some genes or books. When you invest in the first species you are saving the unique genes associated with that species and you are also increasing the probability of saving the genes that are shared by the two species. But when you put your investment in the second species you are essentially only increasing the probability of saving the unique aspects of species 2 because the shared aspects are likely saved anyway. Thus, on the margin you get less by investing in species 2 than by investing in species 1 even though it seems like you are saving the species that is likely to be saved anyway.
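To make the intuition concrete, here is the example worked out in a few lines of code; the split of each species’ value into a unique part and a shared part (u and s) is my illustrative assumption, not a calibration from Weitzman’s paper.

```python
# Toy version of Weitzman's two-species example. Each species carries a
# unique value u plus a value s shared with the other species (the shared
# genes/books). Expected value saved = unique parts weighted by survival
# probability, plus s weighted by P(at least one species survives).
def expected_value(p1, p2, u=1.0, s=1.0):
    p_shared_survives = 1 - (1 - p1) * (1 - p2)
    return u * p1 + u * p2 + s * p_shared_survives

base = expected_value(0.99, 0.01)
gain_if_help_1 = expected_value(1.00, 0.01) - base   # raise p1 by .01
gain_if_help_2 = expected_value(0.99, 0.02) - base   # raise p2 by .01
print(gain_if_help_1, gain_if_help_2)
# ~0.0199 vs ~0.0101: helping the nearly-safe species yields almost
# twice the expected value, because it also secures the shared genes.
```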

The math establishing the result is complex and, of course, there are caveats, such as linearity assumptions, which might reverse the example in a particular case, but the thrust of the result is always operating: putting all your eggs in one basket is a good idea when it comes to saving species.

Weitzman gets the math details right, of course!, but he knows that Noah isn’t a math geek.

Noah is a practical outdoors man. He needs robustness and rugged performance “in the field.” As he stands at the door of the ark, Noah desires to use a simple priority ranking list from which he can check off one species at a time for boarding. Noah wishes to have a robust rule….Can we help Noah? Is the concept of an ordinal ranking system sensible? Can there exist such a simple myopic boarding rule, which correctly prioritizes each species independent of the budget size? And if so, what is the actual formula that determines Noah’s ranking list for achieving an optimal ark-full of species?

So working the problem further, Weitzman shows that there is a relatively simple rule which is optimal to second order, namely:

R = (D + U) × (ΔP / C)

Where R is an index of priority. Higher R gets you on the ark; lower R bars entrance. D is a measure of a species’ distinctiveness; this could be measured, for example, by the nearest-common-ancestor metric. U is a measure of the special utility of a species beyond its diversity (pandas are cute, goats are useful, etc.). C is the cost of a project to increase the probability of survival, and ΔP is the increase in the probability of survival, so ΔP/C is the gain in survival probability per dollar. Put simply, we should invest our dollars where they buy the most survival probability per dollar, multiplied by a factor taking into account distinctiveness and utility.
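Here is a sketch of how Noah’s checklist would work in practice, applying the rule to a few made-up species; all the numbers below are hypothetical.

```python
# Noah's priority list under Weitzman's rule R = (D + U) * (dP / C).
# All species data below are hypothetical, purely for illustration.
species = [
    # name,    D (distinctiveness), U (utility), dP (survival gain), C (cost)
    ("panda",  2.0,                 3.0,         0.05,               10.0),
    ("goat",   0.5,                 2.5,         0.20,                1.0),
    ("beetle", 4.0,                 0.1,         0.10,                0.5),
]

def priority(d, u, dp, c):
    return (d + u) * dp / c

# Sort by R, highest first; board from the top until the budget B runs out.
for name, d, u, dp, c in sorted(species, key=lambda s: priority(*s[1:]), reverse=True):
    print(f"{name:7s} R = {priority(d, u, dp, c):.3f}")
```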

The rule is simple and sensible, and it has been used occasionally. Much more could be done, however, to optimize dollars spent on conservation, and Weitzman’s rule gives us the necessary practical guidance. RIP.

Watch out for the weakies!: the O-Ring model in scientific research

Team impact is predicted more by the lower-citation rather than the higher-citation team members, typically centering near the harmonic average of the individual citation indices. Consistent with this finding, teams tend to assemble among individuals with similar citation impact in all fields of science and patenting. In assessing individuals, our index, which accounts for each coauthor, is shown to have substantial advantages over existing measures. First, it more accurately predicts out-of-sample paper and patent outcomes. Second, it more accurately characterizes which scholars are elected to the National Academy of Sciences. Overall, the methodology uncovers universal regularities that inform team organization while also providing a tool for individual evaluation in the team production era.

That is part of the abstract of a new paper by Mohammad Ahmadpoor and Benjamin F. Jones.
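For readers who want the arithmetic: the harmonic average is dominated by its smallest values, which is exactly what gives it an O-Ring flavor. A quick sketch with made-up citation indices:

```python
# Harmonic vs. arithmetic average of (hypothetical) citation indices.
# The harmonic mean is pulled toward the weakest member, which is why it
# behaves like an O-Ring production function for teams.
def harmonic_mean(xs):
    return len(xs) / sum(1 / x for x in xs)

team = [40, 35, 30, 2]          # three strong members, one weak (made-up numbers)
print(sum(team) / len(team))    # arithmetic mean: 26.75
print(harmonic_mean(team))      # harmonic mean: ~6.8, dominated by the weak member
```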

Lie to Me

From A Test of the Micro Expressions Training Tool:

The theory behind micro‐expressions posits that when people attempt to mask their true emotional state, expressions consistent with their actual state will appear briefly on their face. Thus, while people are generally good at hiding their emotions, some facial muscles are more difficult to control than others and automatic displays of emotion will produce briefly detectable emotional “leakage” or micro‐expressions (Ekman, 1985). When a person does not wish to display his or her true feelings s/he will quickly suppress these expressions. Yet, there will be an extremely short time between the automatic display of the emotion and the conscious attempt to conceal it, resulting in the micro‐expression(s) that can betray a true feeling and according to theory, aid in detecting deception.

…The METT Advanced programme, marketed by the Paul Ekman Group (2011), coined an “online training to increase emotional awareness and detect deception” and promoted with claims that it “… enables you to better spot lies” and “is meant for those whose work requires them to evaluate truthfulness and detect deception—such as police and security personnel” (Paul Ekman Group, METT Advanced‐Online only, para. 2). The idea that micro‐expression recognition improves lie detection has also been put forth in the scientific literature (Ekman, 2009; Ekman & Matsumoto, 2011; Kassin, Redlich, Alceste, & Luke, 2018) and promoted in the wider culture. One example of this is its use as a focal plot device in the crime drama television series Lie to Me, which ran for three seasons (Baum, 2009). Though a fictional show, Lie to Me was promoted as being based on the research of Ekman. Ekman himself had a blog for the show in which he discussed the science of each episode (Ekman, 2010). Micro‐expression recognition training is not only marketed for deception detection but, more problematically, is actually used for this purpose by the United States government. Training in recognising micro‐expressions is part of the behavioural screening programme, known as Screening Passengers by Observation Technique (SPOT) used in airport security (Higginbotham, 2013; Smith, 2011; Weinberger, 2010). The SPOT programme deploys so‐called behaviour detection officers who receive various training in detecting deception from nonverbal behaviour, including training using the METT (the specific content of this programme is classified, Higginbotham, 2013). Evidently, preventing terrorists from entering the country’s borders and airports is an important mission. However, to our knowledge, there is no research on the effectiveness of METT in improving lie detection accuracy or security screening efficacy.

…Our findings do not support the use of METT as a lie detection tool. The METT did not improve accuracy any more than a bogus training protocol or even no training at all. The METT also did not improve accuracy beyond the level associated with guessing. This is problematic to say the least given that training in the recognition of micro‐expressions comprises a large part of a screening system that has become ever more pervasive in our aviation security (Higginbotham, 2013; Weinberger, 2010).

Note that it was the online training that failed; micro-expressions are real, and better, more intensive training, or maybe an AI, could do better, though on that last point I wouldn’t accept the hype.

Hat tip to the excellent Rolf Degen on Twitter.

Alexey Guzey on progress in the life sciences

I already linked to this piece, but wanted to recommend it again.  I don’t agree with all of the points, but it has many excellent arguments; here is one excerpt from the opening section:

I think that the perception of stagnation in science – and in biology specifically – is basically fake news, driven by technological hedonic treadmill and nostalgia. We rapidly adapt to technological advances – however big they are – and we always idealize the past – however terrible it was.

I mean – we can just go to Wikipedia’s 2018 in science (a) and see how much progress we made last year:

  • first bionic hand with a sense of touch that can be worn outside a laboratory
  • development of a new 3D bioprinting technique, which allows the more accurate printing of soft tissue organs, such as lungs
  • a method through which the human innate immune system may possibly be trained to more efficiently respond to diseases and infections
  • a new form of biomaterial based delivery system for therapeutic drugs, which only release their cargo under certain physiological conditions, thereby potentially reducing drug side-effects in patients
  • an announcement of human clinical trials, that will encompass the use of CRISPR technology to modify the T cells of patients with multiple myeloma, sarcoma and melanoma cancers, to allow the cells to more effectively combat the cancers, the first of their kind trials in the US
  • a blood test (or liquid biopsy) that can detect eight common cancer tumors early. The new test, based on cancer-related DNA and proteins found in the blood, produced 70% positive results in the tumor-types studied in 1005 patients
  • a method of turning skin cells into stem cells, with the use of CRISPR
  • the creation of two monkey clones for the first time
  • a paper which presents possible evidence that naked mole-rats do not face increased mortality risk due to aging

Doesn’t seem like much? Here’s the kicker: this is not 2018. This is January 2018.

The Greening Earth

The earth is getting greener, in large part due to increased CO2 in the atmosphere. Surprisingly, however, other drivers are programs in China to increase and conserve forests and more intensive use of cropland in India. A greener China and India isn’t the usual story, and pollution continues to be a huge issue in India, but contrary to what many people think, urbanization increases forestation, as does increased agricultural productivity. Here’s the abstract from a recent paper in Nature Sustainability.

Satellite data show increasing leaf area of vegetation due to direct factors (human land-use management) and indirect factors (such as climate change, CO2 fertilization, nitrogen deposition and recovery from natural disturbances). Among these, climate change and CO2 fertilization effects seem to be the dominant drivers. However, recent satellite data (2000–2017) reveal a greening pattern that is strikingly prominent in China and India and overlaps with croplands world-wide. China alone accounts for 25% of the global net increase in leaf area with only 6.6% of global vegetated area. The greening in China is from forests (42%) and croplands (32%), but in India is mostly from croplands (82%) with minor contribution from forests (4.4%). China is engineering ambitious programmes to conserve and expand forests with the goal of mitigating land degradation, air pollution and climate change. Food production in China and India has increased by over 35% since 2000 mostly owing to an increase in harvested area through multiple cropping facilitated by fertilizer use and surface- and/or groundwater irrigation. Our results indicate that the direct factor is a key driver of the ‘Greening Earth’, accounting for over a third, and probably more, of the observed net increase in green leaf area. They highlight the need for a realistic representation of human land-use practices in Earth system models.

Pig semen protectionism

Two pig farmers in Western Australia will be jailed after being convicted of illegally importing Danish pig semen concealed in shampoo bottles.

Torben Soerensen has been sentenced to three years in prison, while Henning Laue faces a two-year sentence after pleading guilty to breaching quarantine and biosecurity laws.

The Perth district court was told boar semen had been illegally imported from Denmark multiple times between May 2009 and March 2017. The semen was used in GD Pork’s artificial breeding program and several breeding sows were direct offspring of Danish boars.

Federal agriculture minister Bridget McKenzie said breaches of biosecurity laws would not be tolerated.

“This case shows a disturbing disregard for the laws that protect the livelihoods of Australia’s 2,700 pork producers, and the quality of the pork that millions of Australians enjoy each year,” McKenzie said.

“GD Pork imported the semen illegally in an attempt to get an unfair advantage over its competitors, through new genetics.”

Western Australian Farmers Federation spokeswoman Jessica Wallace said the offences were “a selfish act” that could cripple an entire industry.

Here is more from Lisa Martin, via Art J.

Bosco Verticale

I’d like to see the cost-benefit analysis on this one before signing up, but an intriguing idea:

Vertical Forest is a model for a sustainable residential building, a project for metropolitan reforestation contributing to the regeneration of the environment and urban biodiversity without the implication of expanding the city upon the territory. It is a model of vertical densification of nature within the city that operates in relation to policies for reforestation and naturalization of large urban and metropolitan borders. The first example of the Vertical Forest consisting of two residential towers of 110 and 76 m height, was realized in the centre of Milan, on the edge of the Isola neighborhood, hosting 800 trees (each measuring 3, 6 or 9 meters), 4,500 shrubs and 15,000 plants from a wide range of shrubs and floral plants distributed according to the sun exposure of the facade. On flat land, each Vertical Forest equals, in amount of trees, an area of 20,000 square meters of forest. In terms of urban densification it is the equivalent of an area of a single family dwelling of nearly 75,000 sq.m. The vegetal system of the Vertical Forest contributes to the construction of a microclimate, produces humidity, absorbs CO2 and dust particles and produces oxygen.

Here is the link, here are other links.

Genetic Endowments and Wealth Inequality

That new paper by Daniel Barth, Nicholas W. Papageorge and Kevin Thom is attracting a great deal of attention and also some controversy.  Here is the first sentence of the abstract:

We show that genetic endowments linked to educational attainment strongly and robustly predict wealth at retirement.

But it’s not mainly about IQ.  I found this to be the most interesting part of the paper, noting that EA is a polygenic score:

Our use of the EA score as a measure of biological traits linked to human capital is related to previous attempts in the literature to measure ability through the use of test scores such as IQ or the AFQT…We note two important differences between the EA score and a measure like IQ that make it valuable to study polygenic scores. First, a polygenic score like the EA score can overcome some interpretational challenges related to IQ and other cognitive test scores. Environmental factors have been found to influence intelligence test results and to moderate genetic influences on IQ (Tucker-Drob and Bates, 2015). It is true that differences in the EA score may reflect differences in environments or investments because parents with high EA scores may also be more likely to invest in their children. However, the EA score is fixed at conception, which means that post-birth investments cannot causally change the value of the score. A measure like IQ suffers from both of these interpretational challenges. High IQ parents might have high IQ children because of the genes that they pass on, but also because of the positive investments that they make…Compared to a cognitive test score like IQ, the EA score may also measure a wider variety of relevant endowments. This is especially important given research, including relatively recent papers in economics, emphasizing the importance of both cognitive and non-cognitive skills in shaping life-cycle outcomes (Heckman and Rubinstein, 2001). Existing evidence suggests a correlation of approximately 0.20 between a cognitive test score available for HRS respondents and the EA score (Papageorge and Thom, 2016). This relatively modest correlation could arise if both variables measure the same underlying cognitive traits with error, or if they measure different traits. However, Papageorge and Thom (2016) find that the relationship between the EA score and income differs substantially from the relationship between later-life cognition scores and income, suggesting that the EA score contains unique information…

…we interpret the EA score as measuring a basket of genetic factors that influence traits relevant for human capital accumulation.

If I understand the paper correctly, the polygenic score is what predicts well from the genetic data set, it is not a “thing with a known nature.”  And I believe the results are drawn from the same 1.1 million person data set as is used in this Nature paper.
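For readers unfamiliar with the mechanics, a polygenic score is just a weighted sum of a person’s allele counts, with weights estimated in a separate GWAS. A minimal sketch with invented weights (not the actual EA score weights, which come from the 1.1 million person study):

```python
# Minimal sketch of how a polygenic score like the EA score is computed.
# Weights are invented for illustration; the real EA score uses GWAS
# effect sizes estimated on an independent discovery sample.
import numpy as np

gwas_weights = np.array([0.02, -0.01, 0.03, 0.005])  # hypothetical per-SNP effects
genotypes = np.array([                               # allele counts (0, 1, or 2)
    [2, 0, 1, 1],   # person A
    [0, 1, 2, 0],   # person B
])
ea_score = genotypes @ gwas_weights   # one score per person
print(ea_score)                       # [0.075, 0.05]
```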

A Toolkit of Policies to Promote Innovation

That is the new Journal of Economic Perspectives article by Nicholas Bloom, John Van Reenen, and Heidi Williams.  Most of all, such articles should be more frequent and receive greater attention and higher status, as Progress Studies would suggest.  Here is one excerpt:

…moonshots may be justified on the basis of political economy considerations. To generate significant extra resources for research, a politically sustainable vision needs to be created. For example, Gruber and Johnson (2019) argue that increasing federal funding of research as a share of GDP by half a percent—from 0.7 percent today to 1.2 percent, still lower than the almost 2 percent share observed in 1964 in Figure 1—would create a $100 billion fund that could jump-start new technology hubs in some of the more educated but less prosperous American cities (such as Rochester, New York, and Pittsburgh, Pennsylvania). They argue that such a fund could generate local spillovers and, by alleviating spatial inequality, be more politically sustainable than having research funds primarily flow to areas with highly concentrated research, such as Palo Alto, California, and Cambridge, Massachusetts.

In general I agree with their points, but would have liked to have seen more on freedom to build, and of course on culture, culture, culture.  At the very least, policy is endogenous to culture, and culture shapes many economic outcomes more directly as well.  I’m fine with tax credits for R&D, but I just don’t see them as in the driver’s seat.

Highly decentralized solar geoengineering

Nonstate actors appear to have increasing power, in part due to new technologies that alter actors’ capacities and incentives. Although solar geoengineering is typically conceived of as centralized and state-deployed, we explore highly decentralized solar geoengineering. Done perhaps through numerous small high-altitude balloons, it could be provided by nonstate actors such as environmentally motivated nongovernmental organizations or individuals. Conceivably tolerated or even covertly sponsored by states, highly decentralized solar geoengineering could move presumed action from the state arena to that of direct intervention by nonstate actors, which could, in turn, disrupt international politics and pose novel challenges for technology and environmental policy. We conclude that this method appears technically possible, economically feasible, and potentially politically disruptive. Decentralization could, in principle, make control by states difficult, perhaps even rendering such control prohibitively costly and complex.

That is from Jesse L. Reynolds & Gernot Wagner, and injecting fine aerosols into the air, as if to mimic some features of volcanic eruptions, seems to be one of the major possible approaches.  I am not able to judge the scientific merits of their claims, but it has long seemed to me evident that some version of this idea would prove possible.
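As a purely illustrative back-of-the-envelope on the “economically feasible” claim: every number below is a hypothetical placeholder, not a figure from the paper; the point is only how per-unit quantities aggregate.

```python
# Back-of-the-envelope for decentralized aerosol lofting by balloon.
# Every parameter is a hypothetical placeholder for illustration only;
# none of these figures come from Reynolds & Wagner.
TONNES_PER_YEAR = 100_000      # assumed aerosol mass target per year
PAYLOAD_KG = 5                 # assumed payload of one small balloon
COST_PER_LAUNCH = 50           # assumed dollars per balloon launch

launches = TONNES_PER_YEAR * 1000 / PAYLOAD_KG
print(f"{launches:,.0f} launches/yr, ~${launches * COST_PER_LAUNCH / 1e6:,.0f}M/yr")
# 20,000,000 launches/yr, ~$1,000M/yr -- small per-unit numbers aggregate
# into a program that a well-funded nonstate actor could plausibly attempt.
```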

Solve for the equilibrium!  What is it?  Too much enthusiasm for correction and thus disastrous climate cooling?  Preemptive government regulation?  It requires government subsidy?  It becomes controlled by concerned philanthropists?  It starts a climate war between America/Vietnam and Russia/Greenland?  Goldilocks?  I wonder if we will get to find out.

Via the excellent Kevin Lewis.