A recent study of 180 academic curricula vitae found that, among those claiming at least one publication, 56 percent contained at least one unverifiable or inaccurate publication. This suggests that CV falsification could be much more common than scholars committed to professional integrity might hope. The study is small: the 56 percent reflects only 79 of the 141 CVs that claimed at least one publication. The researchers make no presumption as to whether the errors were intentional.
#7 Tyler Cowen (GMU) on less homework, Swiss science culture, and low university completion rates
In this episode with Tyler Cowen we talk about a broad range of topics, including why it’s important that students have less homework, Swiss science culture, and low university completion rates.
Moreover, hypersonic gliders are actually at a speed disadvantage compared with ballistic missiles of the same range. Ballistic missiles are also boosted to high speed by large rockets, before arcing through the vacuum of space. A glider, by contrast, spends most of its trajectory in the atmosphere, using aerodynamic lift to extend its range. The increased range comes at the cost of faster deceleration caused by atmospheric friction. One implication of this reduced speed is that hypersonic gliders may be more vulnerable to interception by U.S. “point” missile defenses (especially after such defenses have been optimized for that purpose). Like cornerbacks in football, point missile defenses are intended to protect small but important areas — such as U.S. military bases in the western Pacific.
Here is the full piece by James Acton.
Last year, some of the same research team reported finding complex organic macromolecules within the water vapor that were likely floating on the surface of Enceladus’ ocean. This year, they followed up with a more sophisticated analysis of what sorts of molecules were dissolved into the ocean water. The compounds found within Enceladus’ water vapor plumes, which are responsible for most of the content of Saturn’s E ring, are believed to be present in the liquid subsurface ocean that exists underneath the south pole rather than being the result of contamination as the water escapes from its subsurface prison. That’s significant because many of the nitrogen- and oxygen-based compounds the researchers detected are also essential to amino acids here on Earth…
“If the conditions are right, these molecules coming from the deep ocean of Enceladus could be on the same reaction pathway as we see here on Earth,” said Nozair Khawaja, who led the research team at the Free University of Berlin. “We don’t yet know if amino acids are needed for life beyond Earth, but finding the molecules that form amino acids is an important piece of the puzzle.” Khawaja’s findings were published Oct. 2 in the Monthly Notices of the Royal Astronomical Society.
Here is further information.
…blackouts are costing the Lebanese economy about $3.9 billion per year, or roughly 8.2 percent of the country’s GDP.
I asked why the Lebanese government can’t put the private generators out of business. He replied that EdL [the state-owned electricity company] is losing some $1.3 billion per year, while the private generators are taking in as much as $2 billion per annum. “It’s a huge business,” he said, “and it’s very dangerous to interfere with this business.”
…Nakhle, an official in the Energy Ministry, was admitting that the generator mafia bribes Lebanese politicians to make sure that EdL stays weak and blackouts persist…
Maya Ammar, a model and architect in Beirut…told me, “The one reason in Lebanon that we do not have electricity is corruption, plain and simple.”…The electric grid, she continued, is “a microcosmic example of how this country runs.”
That is from the forthcoming and excellent book by Robert Bryce, A Question of Power: Electricity and the Wealth of Nations.
This has been bothering me, so I’m putting it out there – The shift to 6 yrs for an Econ PhD is a TERRIBLE trend for female PhD students – & also some men, obviously – but especially for women. This issue warrants much more attention.
So says the wise Melissa S. Kearney.
Along those lines, I have a modest proposal. Eliminate the economics Ph.D., period. Offer everyone three years of graduate economics education, and no more (with a clock reset allowed for pregnancy). Did Smith, Keynes, or Hayek have an economics Ph.D.? This way, no one will assume you know what you are talking about, and the underlying message is that economics learning is lifelong.
After the three years is up, you would be free to look for a job, or alternatively you might find someone to support you to do additional research, such as in the newly structured “post doc without the doc.” The researchers who absolutely need additional training would try to glom on to a lab or major grant, but six years would not be the default.
Of course, in that setting, schools could take chances on more students, and more students could take a chance on trying economics as a profession. Furthermore, for most of the most accomplished students, it is already clear they deserve a top job by the time their third year rolls around, usually well before then. Women would hit their tenure clocks much earlier, also, easing childbearing constraints. A dissertation truly would become just a job market paper, which has already been the trend for a long time. Why obsess over the non-convexity of “finishing”? Finish everyone, and throw them into the maws of some mix of AI and human evaluators sooner rather than later.
Over time, I would expect that more people would take the first-year sequence in their senior year of undergraduate study, and more first-year jobs would have zero or very low teaching loads. All to the better.
And if you’re mainly going to teach Principles at a state university, three years of graduate study really is enough. You’ll learn more your first year teaching anyway.
Which other fields might benefit from such a reform?
People, you have nothing to lose but your chains.
That is the new, interesting, and engaging book by Sean Carroll. Some of it is exposition, the rest argues for a version of Many Worlds Theory, but with a finite number of universes. Here is one excerpt:
String theory, loop quantum gravity, and other ideas share a common pattern: they start with a set of classical variables, then quantize. From the perspective we’ve been following in this book, that’s a little backward. Nature is quantum from the start, described by a wave function evolving according to an appropriate version of the Schroedinger equation. Things like “space” and “fields” and “particles” are useful ways of talking about that wave function in an appropriate classical limit. We don’t want to start with space and fields and quantize them; we want to extract them from an intrinsically quantum wave function.
I very much liked the discussion on pp. 300-301 of how a finite number of quantum degrees of freedom implies a finite-dimensional Hilbert space for the system as a whole, which in turn constrains the number of worlds in an Everett-like model. If only I understood it properly…
You can buy the book here.
From a loyal MR reader:
Advice question for you and MR readers, riffing on one of your Conversations themes, if you would indulge me.
What advice would you give to someone wishing to build a career in climate change mitigation as a non-scientist?
Two advice scenarios: 1) the person is 16; 2) the person is mid-career. Assume no constraints with respect to skill development or self-directed study. That is, what should these people teach themselves? To whom should they reach out for mentorship?
Tyler and I have been arguing about free will for decades. One of the strongest arguments against free will is an empirical argument due to physiologist Benjamin Libet. Libet famously found that the brain seems to signal a decision to act before the conscious mind forms an intention to act. Brain scans can see a finger tap coming 500 ms before the tap, but the conscious decision seems to be made only 150 ms before the tap. Libet’s results, however, are now being reinterpreted:
The Atlantic: To decide when to tap their fingers, the participants simply acted whenever the moment struck them. Those spontaneous moments, Schurger reasoned, must have coincided with the haphazard ebb and flow of the participants’ brain activity. They would have been more likely to tap their fingers when their motor system happened to be closer to a threshold for movement initiation.
This would not imply, as Libet had thought, that people’s brains “decide” to move their fingers before they know it. Hardly. Rather, it would mean that the noisy activity in people’s brains sometimes happens to tip the scale if there’s nothing else to base a choice on, saving us from endless indecision when faced with an arbitrary task. The Bereitschaftspotential would be the rising part of the brain fluctuations that tend to coincide with the decisions. This is a highly specific situation, not a general case for all, or even many, choices.
…In a new study under review for publication in the Proceedings of the National Academy of Sciences, Schurger and two Princeton researchers repeated a version of Libet’s experiment. To avoid unintentionally cherry-picking brain noise, they included a control condition in which people didn’t move at all. An artificial-intelligence classifier allowed them to find the point at which brain activity in the two conditions diverged. If Libet was right, that should have happened at 500 milliseconds before the movement. But the algorithm couldn’t tell any difference until about 150 milliseconds before the movement, the time at which people reported making decisions in Libet’s original experiment.
In other words, people’s subjective experience of a decision—what Libet’s study seemed to suggest was just an illusion—appeared to match the actual moment their brains showed them making a decision.
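Schurger’s “noise tipping the scale” account is easy to see in a toy simulation. The sketch below is a minimal leaky stochastic accumulator in the spirit of his model, not the model from the paper; every parameter value (threshold, leak, drift, noise level) is an illustrative assumption.

```python
import random

random.seed(0)

def trial(threshold=1.0, leak=0.5, drift=0.1, noise=0.5, dt=0.01, max_steps=20000):
    """One trial of a leaky stochastic accumulator: activity drifts weakly
    upward, decays toward a low baseline, and is buffeted by noise.
    'Movement' occurs when the noise happens to push activity over the
    threshold; the function returns the full pre-movement trajectory."""
    x, path = 0.0, []
    for _ in range(max_steps):
        x += (drift - leak * x) * dt + noise * random.gauss(0.0, 1.0) * dt ** 0.5
        path.append(x)
        if x >= threshold:
            return path
    return None  # no crossing within the time limit

# Average the activity backward from the moment of crossing. The average
# ramps up before "movement", resembling a readiness potential, even though
# each individual crossing is driven by chance fluctuations rather than a
# decision made seconds in advance.
paths = [p for p in (trial() for _ in range(200)) if p is not None and len(p) >= 300]
early = sum(p[-300] for p in paths) / len(paths)  # well before the crossing
late = sum(p[-2] for p in paths) / len(paths)     # just before the crossing
```

Averaged across trials, `late` sits near the threshold while `early` sits near the noisy baseline, which is the slow ramp Libet interpreted as an early brain decision.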
I am pleased to announce the initiation of a new, special tranche of the Emergent Ventures fund, namely to study the nature and causes of progress: economic and scientific progress, yes, but more broadly too, including social and cultural factors. This has been labeled at times “Progress Studies.”
Simply apply at the normal Emergent Ventures site and follow the super-simple instructions. Feel free to mention the concept of progress if appropriate to your idea and proposal. Here is the underlying philosophy of Emergent Ventures.
And I am pleased to announce that an initial award from this tranche has been made to the excellent Pseudoerasmus, for blog writing on historical economic development and also for high-quality Twitter engagement and for general scholarly virtue and commitment to ideas.
Pseudoerasmus has decided to donate this award to the UK Economic History Society. Hail Pseudoerasmus!
On January 5, 1845, the Prussian cultural minister Karl Friedrich von Eichorn received a request from a group of six young men to form a new Physical Society in Berlin. By the time their statutes were approved in March, they numbered forty-nine and were meeting biweekly to discuss the latest developments in the physical sciences and physiology. They were preparing to write critical reviews for a new journal, Die Fortschritte der Physik (Advances in physics), and from the beginning they set out to define what constituted progress and what did not. Their success in this rather aggressive endeavor has long fascinated historians of science. In fields from thermodynamics, mechanics, and electromagnetism to animal electricity, ophthalmology, and psychophysics, members of this small group established leading positions in what only thirty years later had become a new landscape of physical science populated by large institutes and laboratories of experiment and precision measurement.
How was this possible? How could a bunch of twenty-somethings, without position or recognition, and possessed of little more than their outsized confidence and ambition, succeed in seizing the future? What were their resources?
That is the opening passage from M. Norton Wise, Aesthetics, Industry, and Science: Hermann von Helmholtz and the Berlin Physical Society.
Marty Weitzman passed away suddenly yesterday. He was on many people’s shortlist for the Nobel. His work is marked by high theory applied to practical problems. The theory is always worked out in great generality and is difficult even for most economists. Weitzman wanted to be understood by more than a handful of theorists, however, and so he also went to great lengths to look for special cases or revealing metaphors. Thus, the typical Weitzman paper has a dense middle section of math but an introduction and conclusion of sparkling prose that can be understood and appreciated by anyone for its insights.
The Noah’s Ark Problem illustrates the model and is my favorite Weitzman paper. It has great sentences like these:
Noah knows that a flood is coming. There are n existing species/libraries, indexed i = 1, 2,… , n. Using the same notation as before, the set of all n species/libraries is denoted S. An Ark is available to help save some species/libraries. In a world of unlimited resources, the entire set S might be saved. Unfortunately, Noah’s Ark has a limited capacity of B. In the Bible, B is given as 300 x 50 x 30 = 450,000 cubits. More generally, B stands for the total size of the budget available for biodiversity preservation.
…If species/library i is boarded on the Ark, and thereby afforded some protection, its survival probability is enhanced to Pi. Essentially, boarding on the Ark is a metaphor for investing in a conservation project, like habitat protection, that improves survivability of a particular species/library. A particularly grim version of the Noah’s Ark Problem would make the choice a matter of life or death, meaning that Pi = 0 without boarding and Pi = 1 with boarding. This specification is perhaps closest to the Old Testament version, so I am taking literary license here by extending the metaphor to less stark alternatives.
Weitzman first shows that the solution to this problem has a surprising property:
The solution of the Noah’s Ark Problem is always “extreme” in the following sense…In an optimal policy, the entire budget is spent on a favored subset of species/libraries that is afforded maximal protection. The less favored complementary subset is sacrificed to a level of minimal protection in order to free up to the extreme all possible scarce budget dollars to go into protecting the favored few.
Weitzman offers a stark example. Suppose there are two species with probabilities of survival of .99 and .01. For the same cost, we can raise the probability of either surviving by .01. What should we do?
We should save the first species and let the other one take its chances. The intuition comes from thinking about the species or libraries as having some unique features but also sharing some genes or books. When you invest in the first species you are saving the unique genes associated with that species and you are also increasing the probability of saving the genes that are shared by the two species. But when you put your investment in the second species you are essentially only increasing the probability of saving the unique aspects of species 2 because the shared aspects are likely saved anyway. Thus, on the margin you get less by investing in species 2 than by investing in species 1 even though it seems like you are saving the species that is likely to be saved anyway.
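The marginal logic can be checked with a few lines of arithmetic. This is a simplified expected-diversity calculation under illustrative assumptions (one unit of unique features per species and one unit of shared features), not Weitzman’s full model.

```python
def expected_diversity(p1, p2, unique1=1.0, unique2=1.0, shared=1.0):
    # Unique features survive only if their own species survives; shared
    # features survive if either species does.
    return p1 * unique1 + p2 * unique2 + (1 - (1 - p1) * (1 - p2)) * shared

base = expected_diversity(0.99, 0.01)
gain_safe = expected_diversity(1.00, 0.01) - base     # fund the 0.99 species
gain_fragile = expected_diversity(0.99, 0.02) - base  # fund the 0.01 species
# gain_safe ≈ 0.0199 vs gain_fragile ≈ 0.0101: funding the safe species also
# secures the shared features, which the fragile species alone would almost
# certainly fail to preserve.
```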
The math establishing the result is complex and, of course, there are caveats, such as linearity assumptions, which might reverse the example in a particular case, but the thrust of the result is always operating: putting all your eggs in one basket is a good idea when it comes to saving species.
Weitzman gets the math details right (of course!), but he knows that Noah isn’t a math geek.
Noah is a practical outdoors man. He needs robustness and rugged performance “in the field.” As he stands at the door of the ark, Noah desires to use a simple priority ranking list from which he can check off one species at a time for boarding. Noah wishes to have a robust rule….Can we help Noah? Is the concept of an ordinal ranking system sensible? Can there exist such a simple myopic boarding rule, which correctly prioritizes each species independent of the budget size? And if so, what is the actual formula that determines Noah’s ranking list for achieving an optimal ark-full of species?
So, working the problem further, Weitzman shows that there is a relatively simple rule which is optimal to second order, namely:

R = (D + U) × (ΔP / C)
Where R is an index of priority: higher R gets you on the Ark, lower R bars entrance. D is a measure of a species’ distinctiveness, which could be measured, for example, by the nearest-common-ancestor metric. U is a measure of the special utility of a species beyond its diversity (pandas are cute, goats are useful, etc.). C is the cost of a project to increase the species’ probability of survival, and ΔP is the resulting increase in that probability, so ΔP/C is the gain in survival probability per dollar. Put simply, we should invest our dollars where they buy the most survival probability per dollar, weighted by a factor that accounts for distinctiveness and utility.
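Noah’s check-off procedure can be sketched directly from the rule: compute R for each conservation project and fund in descending order of R until the budget runs out. The species names and numbers below are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    distinctiveness: float  # D: unique "library" the species carries
    utility: float          # U: direct usefulness or appeal
    cost: float             # C: dollars the conservation project needs
    delta_p: float          # gain in survival probability if funded

def priority(p: Project) -> float:
    # Weitzman's ranking index: R = (D + U) * (delta_P / C)
    return (p.distinctiveness + p.utility) * p.delta_p / p.cost

def board(projects, budget):
    # Myopic boarding rule: check species off the ranked list one at a
    # time, funding each affordable project until the budget is spent.
    chosen = []
    for p in sorted(projects, key=priority, reverse=True):
        if p.cost <= budget:
            chosen.append(p.name)
            budget -= p.cost
    return chosen

ark = board([Project("condor", 3.0, 1.0, 2.0, 0.5),
             Project("goat", 1.0, 1.0, 4.0, 0.8),
             Project("panda", 2.0, 2.0, 3.0, 0.45)],
            budget=5.0)  # -> ["condor", "panda"]
```

The point of the ranking-list form is exactly what the quoted passage asks for: the ordering does not depend on the budget, so Noah can stand at the door with one list, whatever the size of his Ark.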
The rule is simple and sensible, and it has been used occasionally. Much more could be done, however, to optimize dollars spent on conservation, and Weitzman’s rule gives us the necessary practical guidance. RIP.
Team impact is predicted more by the lower-citation rather than the higher-citation team members, typically centering near the harmonic average of the individual citation indices. Consistent with this finding, teams tend to assemble among individuals with similar citation impact in all fields of science and patenting. In assessing individuals, our index, which accounts for each coauthor, is shown to have substantial advantages over existing measures. First, it more accurately predicts out-of-sample paper and patent outcomes. Second, it more accurately characterizes which scholars are elected to the National Academy of Sciences. Overall, the methodology uncovers universal regularities that inform team organization while also providing a tool for individual evaluation in the team production era.
That is part of the abstract of a new paper by Mohammad Ahmadpoor and Benjamin F. Jones.
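To see why a harmonic average centers on the lower-citation members, here is the plain harmonic mean of individual citation indices. The paper’s actual index is more elaborate; treat this as a sketch of the averaging behavior only.

```python
def harmonic_mean(values):
    # The harmonic mean is dominated by the smallest entries: a low-citation
    # team member pulls it down far more than a star lifts it up.
    return len(values) / sum(1.0 / v for v in values)

# A team pairing a 10-citation and a 40-citation scholar averages about 16,
# well below the arithmetic mean of 25:
team_index = harmonic_mean([10, 40])
```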
The theory behind micro‐expressions posits that when people attempt to mask their true emotional state, expressions consistent with their actual state will appear briefly on their face. Thus, while people are generally good at hiding their emotions, some facial muscles are more difficult to control than others and automatic displays of emotion will produce briefly detectable emotional “leakage” or micro‐expressions (Ekman, 1985). When a person does not wish to display his or her true feelings s/he will quickly suppress these expressions. Yet, there will be an extremely short time between the automatic display of the emotion and the conscious attempt to conceal it, resulting in the micro‐expression(s) that can betray a true feeling and according to theory, aid in detecting deception.
…The METT Advanced programme, marketed by the Paul Ekman Group (2011), coined an “online training to increase emotional awareness and detect deception” and promoted with claims that it “… enables you to better spot lies” and “is meant for those whose work requires them to evaluate truthfulness and detect deception—such as police and security personnel” (Paul Ekman Group, METT Advanced‐Online only, para. 2). The idea that micro‐expression recognition improves lie detection has also been put forth in the scientific literature (Ekman, 2009; Ekman & Matsumoto, 2011; Kassin, Redlich, Alceste, & Luke, 2018) and promoted in the wider culture. One example of this is its use as a focal plot device in the crime drama television series Lie to Me, which ran for three seasons (Baum, 2009). Though a fictional show, Lie to Me was promoted as being based on the research of Ekman. Ekman himself had a blog for the show in which he discussed the science of each episode (Ekman, 2010). Micro‐expression recognition training is not only marketed for deception detection but, more problematically, is actually used for this purpose by the United States government. Training in recognising micro‐expressions is part of the behavioural screening programme, known as Screening Passengers by Observation Technique (SPOT) used in airport security (Higginbotham, 2013; Smith, 2011; Weinberger, 2010). The SPOT programme deploys so‐called behaviour detection officers who receive various training in detecting deception from nonverbal behaviour, including training using the METT (the specific content of this programme is classified, Higginbotham, 2013). Evidently, preventing terrorists from entering the country’s borders and airports is an important mission. However, to our knowledge, there is no research on the effectiveness of METT in improving lie detection accuracy or security screening efficacy.
…Our findings do not support the use of METT as a lie detection tool. The METT did not improve accuracy any more than a bogus training protocol or even no training at all. The METT also did not improve accuracy beyond the level associated with guessing. This is problematic to say the least given that training in the recognition of micro‐expressions comprises a large part of a screening system that has become ever more pervasive in our aviation security (Higginbotham, 2013; Weinberger, 2010).
Hat tip to the excellent Rolf Degen on Twitter.