The United States hasn’t had a year of above-average growth since 2005.
Addendum: With a smoothed series, we haven’t had a year of above-average growth in the entire 21st century.
Here’s my podcast with David Beckworth on Macro Musings. One bit on China:
Tabarrok: The perspective which we’re getting today is that we’re in competition with China, but actually, when it comes to ideas, we’re in cooperation with China, because the more scientists and engineers that there are in China, then the better that is for us, actually. As you pointed out, if a Chinese researcher comes up with a cure for cancer, great! That’s fantastic! I mean, ideally I would come up with a cure for cancer, but the second best is my neighbor comes up with a cure for cancer, right?
Tabarrok: So, increasing the size of the Chinese market, with wealthier Chinese consumers, wealthier Indian consumers, that is going to increase the demand to do research and development, and that is going to have tremendous impacts not only in health, but in any field of endeavor which relies on these big, fixed costs. So, any time you have an idea-centered industry, which is a lot of industries today. All of high tech is idea centered. More R&D means more ideas. That comes from having bigger, richer markets.
Tyler and I have been arguing about free will for decades. One of the strongest arguments against free will is an empirical argument due to physiologist Benjamin Libet. Libet famously found that the brain seems to signal a decision to act before the conscious mind forms an intention to act. Brain scans can see a finger tap coming 500 ms before the tap, but the conscious decision seems to be made only 150 ms before the tap. Libet’s results, however, are now being reinterpreted:
The Atlantic: To decide when to tap their fingers, the participants simply acted whenever the moment struck them. Those spontaneous moments, Schurger reasoned, must have coincided with the haphazard ebb and flow of the participants’ brain activity. They would have been more likely to tap their fingers when their motor system happened to be closer to a threshold for movement initiation.
This would not imply, as Libet had thought, that people’s brains “decide” to move their fingers before they know it. Hardly. Rather, it would mean that the noisy activity in people’s brains sometimes happens to tip the scale if there’s nothing else to base a choice on, saving us from endless indecision when faced with an arbitrary task. The Bereitschaftspotential would be the rising part of the brain fluctuations that tend to coincide with the decisions. This is a highly specific situation, not a general case for all, or even many, choices.
…In a new study under review for publication in the Proceedings of the National Academy of Sciences, Schurger and two Princeton researchers repeated a version of Libet’s experiment. To avoid unintentionally cherry-picking brain noise, they included a control condition in which people didn’t move at all. An artificial-intelligence classifier allowed them to find at what point brain activity in the two conditions diverged. If Libet was right, that should have happened at 500 milliseconds before the movement. But the algorithm couldn’t tell any difference until only about 150 milliseconds before the movement, the time people reported making decisions in Libet’s original experiment.
In other words, people’s subjective experience of a decision—what Libet’s study seemed to suggest was just an illusion—appeared to match the actual moment their brains showed them making a decision.
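The classifier logic is easy to sketch with synthetic data. In this toy version (the signal shape, threshold, and all numbers are my assumptions, not the study's actual pipeline), activity in the "move" condition gets a ramp starting shortly before movement, and we ask when the two conditions become statistically separable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Time axis: -600 ms to 0 ms relative to movement, in 10 ms steps.
times = np.arange(-600, 0, 10)
n_trials = 200

# Synthetic "EEG": both conditions are noise; the move condition adds a ramp
# beginning ~150 ms before movement (the late-divergence hypothesis that
# Schurger's result supports, versus Libet's 500 ms).
ramp = np.clip((times + 150) / 150.0, 0, None)  # 0 until -150 ms, then rises
move = rng.normal(0, 1.0, (n_trials, times.size)) + ramp
rest = rng.normal(0, 1.0, (n_trials, times.size))

# Per-time-point separability: difference of condition means in units of the
# standard error -- a crude stand-in for a classifier's decoding accuracy.
se = np.sqrt(move.var(axis=0) / n_trials + rest.var(axis=0) / n_trials)
z = np.abs(move.mean(axis=0) - rest.mean(axis=0)) / se

# "Divergence point": first time z exceeds 4 (a conservative cutoff chosen
# to avoid false positives across 60 time points).
divergence_ms = times[np.argmax(z > 4)]
print(divergence_ms)
```

With these assumptions the detected divergence lands well after -500 ms, mirroring the finding that the two conditions are indistinguishable until shortly before movement.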
Suppose we had stable money. It’s then obvious that long-term savers would prefer long bonds to rolling over short bonds. If held to maturity, the long bond guarantees a known rate of return and payout at the time it is bought, while rolling short-term bonds exposes you to risk. Thus, in a regime of stable money, long-term savers should prefer long bonds and the yield curve should normally be inverted. That’s the essence of an excellent post by John Cochrane:
If inflation is steady, long-term bonds are a safer way to save money for the long run. If you roll over short-term bonds, then you do better when interest rates rise, and do worse when interest rates fall, adding risk to your eventual wealth. The long-term bond has more mark-to-market gains and losses, but you don’t care about that. You care about the long term payout, which is less risky. (Throw out the statements and stop worrying.) So, in an environment with varying real rates and steady inflation, we expect long rates to be less than short rates, because short rates have to compensate investors for extra risk.
If, by contrast, inflation is volatile and real rates are steady, then long-term bonds are riskier. When inflation goes up, the short term rate will go up too, and preserve the real value of the investment, and vice versa. The long-term bond just suffers the cumulative inflation uncertainty. In that environment we expect a rising yield curve, to compensate long bond holders for the risk of inflation.
So, another possible reason for the emergence of a downward sloping yield curve is that the 1970s and early 1980s were a period of large inflation volatility. Now we are in a period of much less inflation volatility, so most interest rate variation is variation in real rates. Markets are figuring that out.
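Cochrane's two scenarios can be checked with a toy Monte Carlo (my construction; the rate processes and all numbers are illustrative, not his). A ten-year zero-coupon bond locked in at purchase is compared with rolling one-year bonds whose rate each year equals that year's real rate plus inflation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, years = 10_000, 10
long_nominal = 1.04 ** years  # 10-year nominal payoff, locked in at purchase

def real_wealth_stds(real_rates, inflation):
    """Std dev of terminal REAL wealth for (a) rolling 1-year bonds, with each
    short rate = real rate + inflation (a simple Fisher relation), and (b) the
    long bond held to maturity."""
    deflator = np.prod(1 + inflation, axis=1)
    rolling = np.prod(1 + real_rates + inflation, axis=1) / deflator
    longbond = long_nominal / deflator
    return rolling.std(), longbond.std()

# Scenario A: steady 2% inflation, real rates wander around 2%.
a_roll, a_long = real_wealth_stds(
    0.02 + 0.01 * rng.standard_normal((n_paths, years)),
    np.full((n_paths, years), 0.02))

# Scenario B: steady 2% real rate, inflation wanders around 2%.
b_roll, b_long = real_wealth_stds(
    np.full((n_paths, years), 0.02),
    0.02 + 0.01 * rng.standard_normal((n_paths, years)))

print(f"A (stable inflation): rolling risk {a_roll:.4f}, long-bond risk {a_long:.4f}")
print(f"B (volatile inflation): rolling risk {b_roll:.4f}, long-bond risk {b_long:.4f}")
```

Under stable inflation (A) rolling is the risky strategy and the long bond is riskless in real terms; under volatile inflation (B) the ranking flips, because the rolled short rate tracks inflation while the long bond absorbs the cumulative inflation surprise.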
Most of the late 19th century had an inverted yield curve. UK perpetuities were the “safe asset,” and short-term lending was risky. The era also lived under the gold standard, which gave very long-run price stability.
Yesterday, in Active Learning Works But Students Don’t Like It, I pointed out that student evaluations do not correlate well with teacher effectiveness and may discourage teachers from using more effective but less student-preferred methods of teaching. Coincidentally, the American Sociological Association issued a statement yesterday discouraging the use of student evaluations in tenure and promotion decisions.
SETs are weakly related to other measures of teaching effectiveness and student learning (Boring, Ottoboni, and Stark 2016; Uttl, White, and Gonzalez 2017); they are used in statistically problematic ways (e.g., categorical measures are treated as interval, response rates are ignored, small differences are given undue weight, and distributions are not reported) (Boysen 2015; Stark and Freishtat 2014); and they can be influenced by course characteristics like time of day, subject, class size, and whether the course is required, all of which are unrelated to teaching effectiveness. In addition, in both observational studies and experiments, SETs have been found to be biased against women and people of color (for recent reviews of the literature, see Basow and Martin 2012 and Spooren, Brockx, and Mortelmans 2015).
Student evaluations mostly evaluate entertainment value but, as my colleague Bryan Caplan notes, given how boring most classes are, entertainment value is worth something! Thus, the ASA notes that student evaluations can be useful as a form of feedback.
Questions on SETs should focus on student experiences, and the instruments should be framed as an opportunity for student feedback, rather than an opportunity for formal ratings of teaching effectiveness.
Student evaluations are on their way out as a tool for tenure and promotion in Canada. I expect the same here, at least on paper. The ASA is big on “holistic” measures of teacher evaluation as a replacement, which strikes me as weaselly. I’d prefer more objective measures such as value-added scores.
A carefully done study that held students and teachers constant shows that students learn more in active learning classes but they dislike this style of class and think they learn less. It’s no big surprise–active learning is hard and makes the students feel stupid. It’s much easier to sit back and be entertained by a great lecturer who makes everything seem simple.
Despite active learning being recognized as a superior method of instruction in the classroom, a major recent survey found that most college STEM instructors still choose traditional teaching methods. This article addresses the long-standing question of why students and faculty remain resistant to active learning. Comparing passive lectures with active learning using a randomized experimental approach and identical course materials, we find that students in the active classroom learn more, but they feel like they learn less. We show that this negative correlation is caused in part by the increased cognitive effort required during active learning. Faculty who adopt active learning are encouraged to intervene and address this misperception, and we describe a successful example of such an intervention.
The authors say that it can help to tell students in advance that they should expect to feel flustered but it will all work out in the end.
The success of active learning will be greatly enhanced if students accept that it leads to deeper learning—and acknowledge that it may sometimes feel like exactly the opposite is true.
I am dubious that this will bring students around. An alternative that might help is to discount student evaluations so that teachers don’t feel that they must entertain in order to do well on evaluations. As Brennan and Magness point out in their excellent Cracks in the Ivory Tower:
Using student evaluations to hire, promote, tenure, or determine raises for faculty is roughly on a par with reading entrails or tea leaves to make such decisions. (Actually, reading tea leaves would be better; it’s equally bullshit but faster and cheaper.)… the most comprehensive research shows that whatever student evaluations (SETs) measure, it isn’t learning caused by the professor.
Indeed, the correlation between student evaluations and student learning is at best close to zero and at worst negative. Student evaluations measure how well liked the teacher is. Students like to be entertained. Thus, to the extent that they rely on student evaluations, universities are incentivizing teachers to teach in ways that the students like rather than in ways that promote learning.
It’s remarkable that student evaluations haven’t already been lawsuited into oblivion, given that they are both useless and biased.
It’s not Hayek v. Keynes, but both Hamilton and Satoshi lay down some hard lyrics in their rap battle, a project of Reid Hoffman.
In an earlier post, Do Boys Have a Comparative Advantage in Math and Science? I pointed to evidence showing that boys have a comparative advantage in math because they are much worse than girls at reading. (Boys do not have a large absolute advantage in math.) If people specialize in their personal comparative advantage this can easily lead to more boys than girls entering math training even if girls are equally or more talented. As I wrote earlier:
[C]onsider what happens when students are told: Do what you are good at! Loosely speaking the situation will be something like this: females will say I got A’s in history and English and B’s in Science and Math, therefore, I should follow my strengths and specialize in something drawing on the same skills as history and English. Boys will say I got B’s in Science and Math and C’s in history and English, therefore, I should follow my strengths and do something involving Science and Math.
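The mechanism is just comparative advantage from trade theory applied to subject choice. A toy illustration (the scores are invented): the girl here is absolutely better at both subjects, yet "do what you are good at" still sorts her into reading and the boy into math.

```python
# Made-up scores: girls outperform boys at BOTH subjects in absolute terms,
# yet specializing by one's own best subject still sorts more boys into math.
students = {
    "typical girl": {"math": 85, "reading": 95},
    "typical boy":  {"math": 84, "reading": 74},
}

for name, scores in students.items():
    # "Do what you are good at": pick the subject where your own score is highest.
    field = max(scores, key=scores.get)
    print(f"{name}: math {scores['math']}, reading {scores['reading']} -> {field}")
```

Note that the girl's math score (85) beats the boy's (84), so the sorting reflects within-person comparative advantage, not between-person ability.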
A new paper in PNAS by Breda and Napp finds more evidence for the comparative advantage hypothesis. Breda and Napp look at intention to study math in ~300,000 students worldwide taking the PISA.
PISA2012 includes questions related to intentions to pursue math-intensive studies and careers. These intentions are measured through a series of five questions that ask students if they are willing (i) to study harder in math versus English/reading courses, (ii) to take additional math versus English/reading courses after school finishes, (iii) to take a math major versus a science major in college, (iv) to take a maximum number of math versus science classes, and (v) to pursue a career that involves math versus science. Our main measure of math intentions is an index constructed from these five questions and available for more than 300,000 students. It captures the desire to do math versus both reading and other sciences.
What they find is that comparative advantage (math ability relative to reading ability) explains math intentions better than actual math or reading ability does. Comparative advantage is also a better predictor of math intentions than perceived math ability (women do perceive their math ability to be lower, relative to their true ability, than men do, but this effect is less important than comparative advantage). In another data set, the authors show that math intentions predict math education.
Thus, accumulating evidence shows that over-representation of males in STEM fields is perhaps better framed as under-representation of males in reading fields and the latter is driven by relatively low reading achievement among males.
As the gender gap in reading performance is much larger than that in math performance, policymakers may want to focus primarily on the reduction of the former. Systematic tutoring for low reading achievers, who are predominantly males, would be a way, for example, to improve boys’ performance in reading. A limitation of this approach, however, is that it will lower the gender gap in math-intensive fields mostly by pushing more boys into humanities, hence reducing the share of students choosing math.
The authors don’t put it quite so bluntly but another approach is to stop telling people to do what they are good at and instead tell them to do what pays! STEM fields pay more than the humanities so if people were to follow this advice, more women would enter STEM fields. I believe that education spillovers are largest in the STEM fields so this would also benefit society. It is less clear whether it would benefit the women.
Hat tip: Mary Clare Peate.
To show their devotion to Murugan, the Hindu God of War, devotees in South India and Sri Lanka (all males) are pierced with large hooks and then hung on a festival float, as if they were toys on a nightmarish baby mobile. It’s an amazing and horrifying display not unlike Christian devotees in the Philippines who are nailed to crosses.
But what are the effects of these practices on those who undergo them? Surprisingly, positive. In Xygalatas et al. (2019), Effects of Extreme Ritual Practices on Psychophysiological Well-Being, a group of anthropologists, biologists, and religious studies scholars compared measures of physiological, psychological, and social well-being in a small group of devotees and a matched sample. The group performing the ritual had no long-lasting health harms but did appear to benefit psychologically, through feelings of euphoria and greater self-regard, and socially, through higher status.
Despite their potential risks, extreme rituals in many contexts are paradoxically associated with health and healing (Jilek 1982; Ward 1984). Our findings suggest that within those contexts, such rituals may indeed convey certain psychological benefits to their performers. Our physiological measurements show that the kavadi is very stressful and high in energetic demands (fig. 2C, 2D). But the ostensibly dangerous ordeal had no detectable persistent harmful effects on participants, who in fact showed signs of improvement in their perceived health and quality of life. We suggest that the effects of ritual participation on psychological well-being occur through two distinct but mutually compatible pathways: a bottom-up process triggered by neurological responses to the ordeal and a top-down process that relies on communicative elements of ritual performance (Hobson et al. 2017).
Specifically, the bottom-up pathway involves physical aspects of ritual performance related to emotional regulation. Ritual is a common behavioral response to stress (Lang et al. 2015; Sosis 2007), and anthropological evidence shows that in many cultures dysphoric rituals involving intense and prolonged exertion and/or altered states of consciousness are considered as efficient ways of dealing with various illnesses (Jilek 1982). In our study, those who suffered from chronic illnesses engaged in more painful forms of participation by enduring more piercings. Notably, higher levels of pain during the ritual were associated with improvements in self-assessed health post-ritual. Although the pain was relatively short-lived, there is evidence that the social and individual effects of participation can be long-lasting (Tewari et al. 2012; Whitehouse and Lanman 2014).
The sensory, physiological, and emotional hyperarousal involved in strenuous ordeals can produce feelings of euphoria and alleviation from pain and anxiety (Fischer et al. 2014; Xygalatas 2008), and there is evidence of a neurochemical basis for these effects via endocrine alterations in neurotransmitters such as endorphins (Boecker et al. 2008; Lang et al. 2017) or endocannabinoids (Fuss et al. 2015). These endocrine effects are amplified when performed collectively, as shown by studies of communal chanting, dancing, and other common aspects of ritual (Tarr et al. 2015). While it is uncertain how long-lasting these effects are, such euphoric experiences may become self-referential for future well-being assessment.
At the same time, a top-down pathway involves social-symbolic aspects of ritual. Cultural expectations and beliefs in the healing power of the ritual may act as a placebo (McClenon 1997), buffering stress-induced pressures on the immune system (Rabin 1999). In addition, social factors can interact with and amplify the low-level effects of physiological arousal (Konvalinka et al. 2011). Performed collectively, these rituals can provide additional comfort through forging communal bonds, providing a sense of community and belonging, and building social networks of support (Dunbar and Shultz 2010; Xygalatas et al. 2013). The Thaipusam is the most important collective event in the life of this community, and higher investments in this ritual are ostensibly perceived by other members as signs of allegiance to the group, consequently enhancing participants’ reputation (Watson-Jones and Legare 2016) and elevating their social status (Bulbulia 2004; Power 2017a). Multiple lines of research suggest that individuals are strongly motivated to engage in status-seeking efforts (Cheng, Tracy, and Henrich 2010; Willard and Legare 2017) and that there is a strong positive relationship between social rank and subjective well-being (Anderson et al. 2012; Barkow et al. 1975). Indeed, we found that individuals of lower socioeconomic status were more motivated to invest in the painful activities that can function as costly signals of commitment. Recent evidence from a field study in India shows that those who partake in these rituals indeed reap the cooperative benefits that result from increased status (Power 2017b).
In addition, the cost of participation can have important self-signaling functions. On the one hand, it can boost performers’ perceived fitness and self-esteem, which positively affects mental health (Barkow et al. 1975). On the other hand, through a process of effort justification, such costs can strengthen one’s attachment to the group and sense of belonging (Festinger 1962; Sosis 2003). This role of costly rituals in generating positive subjective states (Bastian et al. 2014b; Fischer et al. 2014; Wood 2016) and facilitating social bonding (Bastian, Jetten, and Ferris 2014a; Whitehouse and Lanman 2014) may offer insights into the functions of painful religious practices.
The mind has an amazing ability to turn what would be torture under some scenarios into something else.
Hat tip: Kevin Lewis.
Cash bail and bounty hunters can be an important and useful part of the criminal justice system. The practice in New Orleans, however, of funding court and judicial benefits with a tax on bail is obnoxious. In recent years, the tax on bail has funded 20-25% of the Judicial Expense Fund, which is used to pay for staff, office supplies, travel, and other costs. The 5th U.S. Circuit Court of Appeals was right to affirm that this tax violates a defendant’s due process rights because it gives judges an incentive to require bail for their own benefit rather than to ensure the defendant’s court appearance.
“No man can be judge in his own case.” Edward Coke, INSTITUTES OF THE LAWS OF ENGLAND, § 212, 141 (1628). That centuries-old maxim comes from Lord Coke’s ruling that a judge could not be paid with the fines he imposed. Dr. Bonham’s Case, 8 Co. Rep. 107a, 118a, 77 Eng. Rep. 638, 652 (C.P. 1610). Almost a century ago, the Supreme Court recognized that principle as part of the due process requirement of an impartial tribunal. Tumey v. Ohio, 273 U.S. 510, 523 (1927).
This case does not involve a judge who receives money based on the decisions he makes. But the magistrate in the Orleans Parish Criminal District Court receives something almost as important: funding for various judicial expenses, most notably money to help pay for court reporters, judicial secretaries, and law clerks. What does this court funding depend on? The bail decisions the magistrate makes that determine whether a defendant obtains pretrial release. When a defendant has to buy a commercial surety bond, a portion of the bond’s value goes to a fund for judges’ expenses. So the more often the magistrate requires a secured money bond as a condition of release, the more money the court has to cover expenses. And the magistrate is a member of the committee that allocates those funds. Arrestees argue that the magistrate’s dual role—generator and administrator of court fees—creates a conflict of interest when the judge sets their bail. We [agree with the district court] that this dual role violates due process.
The plaintiffs also argued that judges must take into account a defendant’s ability to pay when setting bail. The appeals court didn’t rule on that issue, but ironically, judges who get a percentage of the proceeds from bail do have an incentive to take ability to pay into account, because only paid bail generates revenue. Eliminating the judge’s cut eliminates the incentive to think about ability to pay. Still, I support the decision. We should try for first best. The theory of second best leads only to madness and ruin.
Marty Weitzman passed away suddenly yesterday. He was on many people’s shortlist for the Nobel. His work is marked by high-theory applied to practical problems. The theory is always worked out in great generality and is difficult even for most economists. Weitzman wanted to be understood by more than a handful of theorists, however, and so he also went to great lengths to look for special cases or revealing metaphors. Thus, the typical Weitzman paper has a dense middle section of math but an introduction and conclusion of sparkling prose that can be understood and appreciated by anyone for its insights.
The Noah’s Ark Problem illustrates this approach and is my favorite Weitzman paper. It has great sentences like these:
Noah knows that a flood is coming. There are n existing species/libraries, indexed i = 1, 2,… , n. Using the same notation as before, the set of all n species/libraries is denoted S. An Ark is available to help save some species/libraries. In a world of unlimited resources, the entire set S might be saved. Unfortunately, Noah’s Ark has a limited capacity of B. In the Bible, B is given as 300 x 50 x 30 = 450,000 cubits. More generally, B stands for the total size of the budget available for biodiversity preservation.
…If species/library i is boarded on the Ark, and thereby afforded some protection, its survival probability is enhanced to Pi. Essentially, boarding on the Ark is a metaphor for investing in a conservation project, like habitat protection, that improves survivability of a particular species/library. A particularly grim version of the Noah’s Ark Problem would make the choice a matter of life or death, meaning that the survival probability is 0 for a species left behind and Pi = 1 for a species boarded. This specification is perhaps closest to the Old Testament version, so I am taking literary license here by extending the metaphor to less stark alternatives.
Weitzman first shows that the solution to this problem has a surprising property:
The solution of the Noah’s Ark Problem is always “extreme” in the following sense…In an optimal policy, the entire budget is spent on a favored subset of species/libraries that is afforded maximal protection. The less favored complementary subset is sacrificed to a level of minimal protection in order to free up to the extreme all possible scarce budget dollars to go into protecting the favored few.
Weitzman offers a stark example. Suppose there are two species with probabilities of survival of .99 and .01. For the same cost, we can raise the probability of either surviving by .01. What should we do?
We should save the first species and let the other one take its chances. The intuition comes from thinking about the species or libraries as having some unique features but also sharing some genes or books. When you invest in the first species you are saving the unique genes associated with that species and you are also increasing the probability of saving the genes that are shared by the two species. But when you put your investment in the second species you are essentially only increasing the probability of saving the unique aspects of species 2 because the shared aspects are likely saved anyway. Thus, on the margin you get less by investing in species 2 than by investing in species 1 even though it seems like you are saving the species that is likely to be saved anyway.
The math establishing the result is complex and, of course, there are caveats such as linearity assumptions which might reverse the example in a particular case but the thrust of the result is always operating: Putting all your eggs in one basket is a good idea when it comes to saving species.
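The two-species example can be checked with a small expected-value calculation (my sketch; the one-unit "unique" and "shared" quantities are invented for illustration, not from Weitzman's paper):

```python
# Expected distinctive material saved for two species with survival
# probabilities p1, p2. Each species has `unique` units of its own material;
# `shared` units are saved if EITHER species survives.
def expected_saved(p1, p2, unique=1.0, shared=1.0):
    return unique * (p1 + p2) + shared * (1 - (1 - p1) * (1 - p2))

base = expected_saved(0.99, 0.01)
# Spend the same budget to raise either survival probability by 0.01:
gain_if_help_sure_thing = expected_saved(1.00, 0.01) - base  # help species 1
gain_if_help_long_shot  = expected_saved(0.99, 0.02) - base  # help species 2
print(gain_if_help_sure_thing, gain_if_help_long_shot)
```

The marginal gain from topping up the near-certain species is about twice the gain from helping the long shot, exactly the intuition above: helping species 2 mostly re-insures shared material that species 1 was going to save anyway.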
Weitzman gets the math details right, of course, but he knows that Noah isn’t a math geek.
Noah is a practical outdoors man. He needs robustness and rugged performance “in the field.” As he stands at the door of the ark, Noah desires to use a simple priority ranking list from which he can check off one species at a time for boarding. Noah wishes to have a robust rule….Can we help Noah? Is the concept of an ordinal ranking system sensible? Can there exist such a simple myopic boarding rule, which correctly prioritizes each species independent of the budget size? And if so, what is the actual formula that determines Noah’s ranking list for achieving an optimal ark-full of species?
So working the problem further, Weitzman shows that there is a relatively simple rule which is optimal to second-order, namely:
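The formula itself appears to have been dropped from this version of the post (likely an image in the original). Reconstructing it from the verbal description that follows, with notation mine:

```latex
R_i = \left( D_i + U_i \right) \frac{\Delta P_i}{C_i}
```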
Where R is an index of priority: higher R gets you on the ark, lower R bars entrance. D is a measure of a species’ distinctiveness, which could be measured, for example, by the nearest-common-ancestor metric. U is a measure of the special utility of a species beyond its diversity (pandas are cute, goats are useful, etc.). C is the cost of a project to increase the species’ probability of survival and ΔP is the resulting increase in that probability, so ΔP/C is the gain in survival probability per dollar. Put simply, we should invest our dollars where they buy the most survival probability, multiplied by a factor taking into account distinctiveness and utility.
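Noah's checklist is straightforward to implement; here is a sketch with invented species data (all D, U, ΔP, and C values are mine, purely for illustration):

```python
# Noah's myopic boarding rule: rank species by R = (D + U) * (dP / C) and
# board in order until the budget runs out. All numbers are invented.
species = [
    # (name, distinctiveness D, utility U, survival gain dP, project cost C)
    ("panda",   2.0, 5.0, 0.30, 10.0),
    ("tuatara", 9.0, 1.0, 0.40,  8.0),
    ("goat",    1.0, 6.0, 0.10,  2.0),
    ("axolotl", 6.0, 2.0, 0.25, 12.0),
]

ranked = sorted(species, key=lambda s: (s[1] + s[2]) * s[3] / s[4], reverse=True)

budget, boarded = 20.0, []
for name, d, u, dp, c in ranked:
    if c <= budget:       # board the species if its project fits the budget
        budget -= c
        boarded.append(name)
print(boarded)
```

With these numbers the tuatara's high distinctiveness and cheap, effective project put it first; the axolotl is sacrificed when the budget runs out.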
The rule is simple and sensible, and it has been used occasionally. Much more could be done, however, to optimize dollars spent on conservation, and Weitzman’s rule gives us the necessary practical guidance. RIP.
Short sellers are often scapegoated for market crashes, but a rational market requires both rational buyers and rational sellers. When markets are dominated by irrational exuberance, only the short sellers are speaking sanity. Short sellers, therefore, should make prices more informative and reduce the Wile E. Coyote moment when it suddenly dawns on the irrational that gravity exists.
Deng, Gao and Kim test the theory and find that it holds up: lifting restrictions on short sales reduces price crashes.
We examine the relation between short-sale constraints and stock price crash risk. To establish causality, we take advantage of a regulatory change from the Securities and Exchange Commission (SEC)’s Regulation SHO pilot program, which temporarily lifted short-sale constraints for randomly designated stocks. Using Regulation SHO as a natural experiment setting in which to apply a difference-in-differences research design, we find that the lifting of short-sale constraints leads to a significant decrease in stock price crash risk. We further investigate the possible underlying mechanisms through which short-sale constraints affect stock price crash risk. We provide evidence suggesting that lifting of short-sale constraints reduces crash risk by constraining managerial bad news hoarding and improving corporate investment efficiency. The results of our study shed new light on the cause of stock price crash risk as well as the roles that short sellers play in monitoring managerial disclosure strategies and real investment decisions.
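The difference-in-differences design in the abstract boils down to comparing the change in crash risk for pilot stocks against the change for non-pilot stocks. A minimal sketch with simulated data (the effect size and all numbers are mine, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000  # stocks per group

# Simulated "crash risk" per stock. Both groups share a common time trend
# (-0.05); pilot stocks get an extra treatment effect (-0.10) once Regulation
# SHO lifts short-sale constraints. All numbers are illustrative.
pre_control  = 0.50 + 0.05 * rng.standard_normal(n)
post_control = 0.45 + 0.05 * rng.standard_normal(n)
pre_pilot    = 0.50 + 0.05 * rng.standard_normal(n)
post_pilot   = 0.35 + 0.05 * rng.standard_normal(n)

# Difference-in-differences: the pilots' change minus the controls' change
# nets out the common trend, isolating the treatment effect.
did = (post_pilot.mean() - pre_pilot.mean()) - (post_control.mean() - pre_control.mean())
print(f"Estimated effect of lifting short-sale constraints: {did:.3f}")
```

Subtracting the control group's change recovers the built-in -0.10 treatment effect; the random assignment of pilot stocks is what lets the paper interpret this as causal.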
Hat tip: Paul Kedrosky.
Governing: Like many other rural jurisdictions, towns in south Georgia have suffered decades of a slow economic decline that’s left them without much of a tax base. But they see a large amount of through-traffic from semi-trucks and Florida-bound tourists. And they’ve grown reliant on ticketing them to meet their expenses. “Georgia is a classic example of a place where you have these inextricable ties between the police, the town and the court,” says Lisa Foster, co-director of the Fines and Fees Justice Center. “Any city that’s short on revenue is going to be tempted to use the judicial system.”
… in hundreds of jurisdictions throughout the country, fines are used to fund a significant portion of the budget…In some extreme cases, local budgets are funded almost exclusively by fines. Georgetown, La., a village of fewer than 500 residents, was the most reliant on fines of all reviewed nationally. Its 2018 financial statement reported nearly $500,000 in fines, accounting for 92 percent of general revenues. Not far behind is Fenton, La., which reported more than $1.2 million in fines, or 91 percent of 2017 general fund revenues.
In To Serve and Collect (forthcoming, Journal of Legal Studies), Makowsky, Stratmann, and I find that the allure of fine and forfeiture revenue can distort policing by shifting arrests toward crimes and misdemeanors with greater potential for revenue rather than greater social harm.
There is, however, some good news. The U.S. Supreme Court ruled that the Constitution’s ban on excessive fines applies to states and localities, and that is putting pressure on them to reform. My co-author Mike Makowsky has a good suggestion:
The way governments allocate fine revenue also matters. The majority deposit it into their general fund, but many in Oklahoma, for example, route the money to separate police or public safety funds. That’s a mistake, says Michael Makowsky, an economics professor at Clemson University in South Carolina. “You want to separate officer incentives from the revenues they generate,” he says. One solution he proposes is to route fines and fees to state governments. States would then redistribute all the funds back as block grants based on population or other metrics, effectively removing incentives to issue tickets.
The Governing report is good and links to more data.
The theory behind micro‐expressions posits that when people attempt to mask their true emotional state, expressions consistent with their actual state will appear briefly on their face. Thus, while people are generally good at hiding their emotions, some facial muscles are more difficult to control than others and automatic displays of emotion will produce briefly detectable emotional “leakage” or micro‐expressions (Ekman, 1985). When a person does not wish to display his or her true feelings s/he will quickly suppress these expressions. Yet, there will be an extremely short time between the automatic display of the emotion and the conscious attempt to conceal it, resulting in the micro‐expression(s) that can betray a true feeling and according to theory, aid in detecting deception.
…The METT Advanced programme, marketed by the Paul Ekman Group (2011), coined an “online training to increase emotional awareness and detect deception” and promoted with claims that it “… enables you to better spot lies” and “is meant for those whose work requires them to evaluate truthfulness and detect deception—such as police and security personnel” (Paul Ekman Group, METT Advanced‐Online only, para. 2). The idea that micro‐expression recognition improves lie detection has also been put forth in the scientific literature (Ekman, 2009; Ekman & Matsumoto, 2011; Kassin, Redlich, Alceste, & Luke, 2018) and promoted in the wider culture. One example of this is its use as a focal plot device in the crime drama television series Lie to Me, which ran for three seasons (Baum, 2009). Though a fictional show, Lie to Me was promoted as being based on the research of Ekman. Ekman himself had a blog for the show in which he discussed the science of each episode (Ekman, 2010). Micro‐expression recognition training is not only marketed for deception detection but, more problematically, is actually used for this purpose by the United States government. Training in recognising micro‐expressions is part of the behavioural screening programme, known as Screening Passengers by Observation Technique (SPOT) used in airport security (Higginbotham, 2013; Smith, 2011; Weinberger, 2010). The SPOT programme deploys so‐called behaviour detection officers who receive various training in detecting deception from nonverbal behaviour, including training using the METT (the specific content of this programme is classified, Higginbotham, 2013). Evidently, preventing terrorists from entering the country’s borders and airports is an important mission. However, to our knowledge, there is no research on the effectiveness of METT in improving lie detection accuracy or security screening efficacy.
…Our findings do not support the use of METT as a lie detection tool. The METT did not improve accuracy any more than a bogus training protocol or even no training at all. The METT also did not improve accuracy beyond the level associated with guessing. This is problematic to say the least given that training in the recognition of micro‐expressions comprises a large part of a screening system that has become ever more pervasive in our aviation security (Higginbotham, 2013; Weinberger, 2010).
Hat tip: the excellent Rolf Degen on Twitter.
The earth is getting greener, in large part due to increased CO2 in the atmosphere. Surprisingly, however, another driver is programs in China to increase and conserve forests, along with more intensive use of cropland in India. A greener China and India isn't the usual story, and pollution continues to be a huge issue in India, but contrary to what many people think, urbanization increases forestation, as does increased agricultural productivity. Here's the abstract from a recent paper in Nature Sustainability.
Satellite data show increasing leaf area of vegetation due to direct factors (human land-use management) and indirect factors (such as climate change, CO2 fertilization, nitrogen deposition and recovery from natural disturbances). Among these, climate change and CO2 fertilization effects seem to be the dominant drivers. However, recent satellite data (2000–2017) reveal a greening pattern that is strikingly prominent in China and India and overlaps with croplands world-wide. China alone accounts for 25% of the global net increase in leaf area with only 6.6% of global vegetated area. The greening in China is from forests (42%) and croplands (32%), but in India is mostly from croplands (82%) with minor contribution from forests (4.4%). China is engineering ambitious programmes to conserve and expand forests with the goal of mitigating land degradation, air pollution and climate change. Food production in China and India has increased by over 35% since 2000 mostly owing to an increase in harvested area through multiple cropping facilitated by fertilizer use and surface- and/or groundwater irrigation. Our results indicate that the direct factor is a key driver of the ‘Greening Earth’, accounting for over a third, and probably more, of the observed net increase in green leaf area. They highlight the need for a realistic representation of human land-use practices in Earth system models.
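The abstract's headline figures imply a striking disproportion, which a quick back-of-the-envelope calculation makes concrete: China contributes 25% of the global net increase in leaf area while holding only 6.6% of the global vegetated area.

```python
# Figures from the Nature Sustainability abstract quoted above.
china_share_of_greening = 0.25   # fraction of global net leaf-area increase
china_share_of_area = 0.066      # fraction of global vegetated area

# Ratio of greening share to area share: how much faster China is greening
# than its land share alone would predict.
intensity = china_share_of_greening / china_share_of_area
print(f"China greens at {intensity:.1f}x its area share")
```

In other words, per unit of vegetated land, China is greening at nearly four times the global rate.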