Science

“Where are they?” cried out Enrico Fermi in anguish.  We have wondered ever since.  In spite of some subsequent refinements, I still find the Fermi paradox a…paradox.  Where are they?

Now, Oumuamua comes along…

And furthermore:

The object’s trajectory is so strange and its speed so blistering that it probably did not originate from within our solar system. Its discoverers concluded that the object is a rare interstellar traveler from beyond our solar system, the first object of its kind observed by humans.

So what do the academics say?

“The possibility that this object is, in fact, an artificial object — that it is a spaceship, essentially — is a remote possibility,” Andrew Siemion, director of the Berkeley Search for Extraterrestrial Intelligence Research Center, told The Washington Post on Monday.

Given the Fermi paradox, shouldn’t we assume a fairly high probability this is in fact some form of alien contact or display?  It’s like when you are expecting a package from UPS and then finally the doorbell rings…

So I’m excited, even though I don’t see much of a chance of a visit.  p = 0.3?  I need to crack open those old Arthur C. Clarke novels.

François Derrien, Ambrus Kecskes, and Phuong-Anh Nguyen have a new paper on this topic, and basically the answer is yes, because of labor supply effects:

We argue that a younger labor force produces more innovation. Using the native born labor force projected based on local historical births, we find that a younger age structure causes a significant increase in innovation. We use three levels of analysis in succession – commuting zones, firms, and inventors – to examine or eliminate various effects such as firm and inventor life cycles. We also find that innovation activities reflect the innovative characteristics of younger labor forces. Our results indicate that demographics increase innovation through the labor supply channel rather than through a financing supply or consumer demand channel.

Here is the SSRN link.

That is a new paper by Aghion, Akcigit, Hyytinen, and Toivanen, here is one brief excerpt:

In particular we see that IQ is by far the main characteristic for the probability of becoming an inventor in terms of the share of variation it explains, followed by parental education.  These two groups of variables account for 66% and 16% of the overall variation captured by our model.  In contrast, IQ plays a relatively speaking much more minor role for becoming a medical doctor or a lawyer.  Parental education is the main explanatory variable for the probability of becoming a medical doctor or a lawyer (40% and 53%), with base controls and parental income also playing clearly more important roles than for inventors.

The paper offers many other points of value, and you will note the data are from Finland.

Where? I hear you asking.  No, that is the title of a new book by Karl Sigmund and the subtitle is The Vienna Circle and the Epic Quest for the Foundations of Science.  I enjoyed this book very much, though I don’t recommend it as a balanced introduction to its chosen topic.  I liked it best for its whims and interstices:

1. The mathematician Richard von Mises (brother of economist Ludwig) was a patron of Rilke, and he established a foundation for the sole purpose of supporting Robert Musil.

2. Carl Menger was planning on writing a philosophical treatise, and one which would have had a “Vienna Circle” anti-metaphysical slant.

3. Arguably Karl Popper learned the most from a polymathic cabinetmaker he was apprenticed to in his youth.

4. Friedrich Wieser had supported Mussolini, but a young Oskar Morgenstern, in his diary, complained that Wieser was too liberal.

5. Morgenstern later became a confirmed liberal, and he also remarked a few times that game theory was, for the social sciences, the completion of the research program of Kurt Gödel.

6. Karl Popper complained that Wittgenstein threatened him, in a lecture, with a poker.  It is not obvious this was the case.

7. I came away from my read wanting to sample more Ernst Mach, more Moritz Schlick, and thinking Otto Neurath was perhaps badly underrated.

Note that most of the book is more serious than this, and less concerned with economists, much more with math and science and some psychoanalysis and positivism too.

There was the ever-present worry that aircraft would make war even more horrific.  Some called for the international control of aviation to prevent its misuse.  A few even advocated the complete destruction of all aircraft on the grounds that even civilian machines could be adapted for war.

…At the opposite end of the spectrum were the enthusiasts who expected that soon everyone would be able to fly their own personal aircraft…As early as 1928, Popular Mechanics predicted a car that could be turned into a helicopter, but most commentators thought the autogyro was a better bet — although it did need a short horizontal run before take-off…As late as 1971, Isaac Asimov was still expecting that VTOL [vertical take-off and landing system] machines would eventually take the place of automobiles.

That is from Peter J. Bowler, A History of the Future: Prophets of Progress from H.G. Wells to Isaac Asimov.

One thing I learned from this book is that “money crank” Frederick Soddy was an early prophet of nuclear power, before many others understood the potential.  I am reminded of how “socialist crank” [oceans of lemonade with ships pulled by dolphins] Charles Fourier once prophesied that all of Europe would be tied together by railways.

Are there genetic vulnerabilities for depression across cultures?

Genetic vulnerability differs substantially from country to country. East Asian contexts, for example, show a high prevalence of genes associated with depression. Yet, despite these vulnerabilities, they develop fewer cases of the disorder. One hypothesis is that genetic vulnerabilities have co-evolved with culture, creating extra protective factors (in this case, extra interdependence). However, when these people leave their cultural contexts, they have a higher risk of developing depression.

That is an interview with Yulia Chentsova-Dutton, associate professor of psychology at Georgetown, and a researcher in this area.  You can imagine further applications of this mechanism.  The interview has other interesting points, for instance:

What is the role of emotion regulation?

Emotion regulation is increasingly becoming understood as a core factor in all affective disorders. In western societies, we don’t see enough adaptive strategies like reappraisal: learning to tell yourself a different story that would eventually lead to different emotions. There is also not enough social regulation of emotion, which occurs by sharing our emotions with others. Research shows that cultures can facilitate functional regulation strategies. For example, Igor Grossmann’s work shows that Russians make rumination (generally considered a dysfunctional strategy) more functional by encouraging people to ruminate about the self from another person’s perspective, making rumination almost reappraisal-like in its quality.

Do read the whole thing.

The IQ gains of the 20th century have faltered. Losses in Nordic nations after 1995 average at 6.85 IQ points when projected over thirty years. On Piagetian tests, Britain shows decimation among high scorers on three tests and overall losses on one. The US sustained its historic gain (0.3 points per year) through 2014. The Netherlands shows no change in preschoolers, mild losses at high school, and possible gains by adults. Australia and France offer weak evidence of losses at school and by adults respectively. German speakers show verbal gains and spatial losses among adults. South Korea, a latecomer to industrialization, is gaining at twice the historic US rate.

When a later cohort is compared to an earlier cohort, IQ trends vary dramatically by age. Piagetian trends indicate that a decimation of top scores may be accompanied by gains in cognitive ability below the median. They also reveal the existence of factors that have an atypical impact at high levels of cognitive competence. Scandinavian data from conventional tests confirm the decimation of top scorers but not factors of atypical impact. Piagetian tests may be more sensitive to detecting this phenomenon.

That is newly published research from James R. Flynn and Michael Shayer, via Rolf Degen.

The authors are Alex Bell, Raj Chetty, Xavier Jaravel, Neviana Petkova, and John Van Reenen, here is the abstract:

We characterize the factors that determine who becomes an inventor in America by using deidentified data on 1.2 million inventors from patent records linked to tax records. We establish three sets of results. First, children from high-income (top 1%) families are ten times as likely to become inventors as those from below-median income families. There are similarly large gaps by race and gender. Differences in innate ability, as measured by test scores in early childhood, explain relatively little of these gaps. Second, exposure to innovation during childhood has significant causal effects on children’s propensities to become inventors. Growing up in a neighborhood or family with a high innovation rate in a specific technology class leads to a higher probability of patenting in exactly the same technology class. These exposure effects are gender-specific: girls are more likely to become inventors in a particular technology class if they grow up in an area with more female inventors in that technology class. Third, the financial returns to inventions are extremely skewed and highly correlated with their scientific impact, as measured by citations. Consistent with the importance of exposure effects and contrary to standard models of career selection, women and disadvantaged youth are as under-represented among high-impact inventors as they are among inventors as a whole. We develop a simple model of inventors’ careers that matches these empirical results. The model implies that increasing exposure to innovation in childhood may have larger impacts on innovation than increasing the financial incentives to innovate, for instance by cutting tax rates. In particular, there are many “lost Einsteins” – individuals who would have had highly impactful inventions had they been exposed to innovation.

Here is the paper, here are the slides (best place to start), here is a David Leonhardt column on it.

Florence!  Motown!  Kuna molas!  David Hume knew this!  The work looks very interesting, though I doubt if the main effect is actually channeled through absolute income, as evidenced by the immediately afore-mentioned examples.  Also, I don’t think their tax analysis quite holds up once you see intermediaries as needing to cover fixed costs for the innovators.  Taxing profits from innovation then lowers the number of potential innovators quite a bit, by discouraging investment from the intermediaries.

That is the new and excellent history by Leslie Berlin, substantive throughout, here is one good bit of many:

In March 1967, Roberts and Taylor, jointly leading a meeting of ARPA’s principal investigators in Ann Arbor, Michigan, told the researchers that ARPA was going to build a computer network and they were all expected to connect to it.  The principal investigators were not enthusiastic.  They were busy running their labs and doing their own work.  They saw no real reason to add this network to their responsibilities.  Researchers with more powerful computers worried that those with less computing power would use the network to commandeer precious computing cycles.  “If I could not get some ARPA-funded participants involved in a commitment to a purpose higher than ‘Who is going to steal the next ten percent of my memory cycles?’, there would be no network,” Taylor later wrote.  Roberts agreed: “They wanted to buy their own machines and hide in the corner.”

You can buy the book here, here is one good review from Wired, excerpt:

While piecing together a timeline of the Valley’s early history—picture end-to-end sheets of paper covered in black dots—Berlin was amazed to discover a period of rapid-fire innovation between 1969 and 1976 that included the first Arpanet transmission; the birth of videogames; and the launch of Apple, Atari, Genentech, and major venture firms such as Kleiner Perkins and Sequoia Capital. “I just thought, ‘What the heck was going on in those years?’ ” she says.

Here is praise from Patrick Collison on Twitter.

From a recent survey by Pennington, Heim, Levy, and Larkin:

This systematic literature review appraises critically the mediating variables of stereotype threat. A bibliographic search was conducted across electronic databases between 1995 and 2015. The search identified 45 experiments from 38 articles and 17 unique proposed mediators that were categorized into affective/subjective (n = 6), cognitive (n = 7) and motivational mechanisms (n = 4). Empirical support was accrued for mediators such as anxiety, negative thinking, and mind-wandering, which are suggested to co-opt working memory resources under stereotype threat. Other research points to the assertion that stereotype threatened individuals may be motivated to disconfirm negative stereotypes, which can have a paradoxical effect of hampering performance. However, stereotype threat appears to affect diverse social groups in different ways, with no one mediator providing unequivocal empirical support. Underpinned by the multi-threat framework, the discussion postulates that different forms of stereotype threat may be mediated by distinct mechanisms.

Or from Wikipedia:

Whether the effect occurs at all has also been questioned, with researchers failing to replicate the finding. Flore and Wicherts concluded the reported effect is small, but also that the field is inflated by publication bias. They argue that, correcting for this, the most likely true effect size is near zero (see meta-analytic plot, highlighting both the restriction of large effect to low-powered studies, and the plot asymmetry which occurs when publication bias is active).

Earlier meta-analyses reached similar conclusions. For instance, Ganley et al. (2013) examined stereotype threat on mathematics test performance. They report a series of 3 studies, with a total sample of 931 students. These included both childhood and adolescent subjects and three activation methods, ranging from implicit to explicit. While they found some evidence of gender differences in math, these occurred regardless of stereotype threat. Importantly, they found “no evidence that the mathematics performance of school-age girls was impacted by stereotype threat”. In addition, they report that evidence for stereotype threat in children appears to be subject to publication bias. The literature may reflect selective publication of false-positive effects in underpowered studies, where large, well-controlled studies find smaller or non-significant effects.

Personally, I find stereotype threat to be a very intuitive idea with a fair amount of anecdotal support.  So why aren’t these meta-results more convincing?

A remarkable new paper on logical induction by Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor dramatically extends Ramsey’s Dutch book arguments in support of Bayesian epistemology and in so doing demonstrates deep connections between logical thinking and efficient markets. The research was supported by the Machine Intelligence Research Institute.

We present a computable algorithm that assigns probabilities to every logical statement in a given formal language, and refines those probabilities over time. For instance, if the language is Peano arithmetic, it assigns probabilities to all arithmetical statements, including claims about the twin prime conjecture, the outputs of long-running computations, and its own probabilities. We show that our algorithm, an instance of what we call a logical inductor, satisfies a number of intuitive desiderata, including: (1) it learns to predict patterns of truth and falsehood in logical statements, often long before having the resources to evaluate the statements, so long as the patterns can be written down in polynomial time; (2) it learns to use appropriate statistical summaries to predict sequences of statements whose truth values appear pseudorandom; and (3) it learns to have accurate beliefs about its own current beliefs, in a manner that avoids the standard paradoxes of self-reference. For example, if a given computer program only ever produces outputs in a certain range, a logical inductor learns this fact in a timely manner; and if late digits in the decimal expansion of π are difficult to predict, then a logical inductor learns to assign ≈ 10% probability to “the nth digit of π is a 7” for large n. Logical inductors also learn to trust their future beliefs more than their current beliefs, and their beliefs are coherent in the limit (whenever φ → ψ, P∞(φ) ≤ P∞(ψ), and so on); and logical inductors strictly dominate the universal semimeasure in the limit.

These properties and many others all follow from a single logical induction criterion, which is motivated by a series of stock trading analogies. Roughly speaking, each logical sentence φ is associated with a stock that is worth $1 per share if φ is true and nothing otherwise, and we interpret the belief-state of a logically uncertain reasoner as a set of market prices, where Pn(φ) = 50% means that on day n, shares of φ may be bought or sold from the reasoner for 50¢. The logical induction criterion says (very roughly) that there should not be any polynomial-time computable trading strategy with finite risk tolerance that earns unbounded profits in that market over time. This criterion bears strong resemblance to the “no Dutch book” criteria that support both expected utility theory (von Neumann and Morgenstern 1944) and Bayesian probability theory (Ramsey 1931; de Finetti 1937).
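To see the Dutch book idea behind the criterion concretely, here is a toy sketch of my own (not the paper’s algorithm): if a reasoner prices φ above ψ even though φ implies ψ, a trader who sells φ-shares and buys ψ-shares locks in a profit in every possible world.

```python
# Toy Dutch-book illustration (a hypothetical sketch, not the paper's
# logical inductor).  A sentence phi is a share worth $1 if phi is true
# and $0 otherwise.  Coherence requires P(phi) <= P(psi) when phi -> psi;
# if that is violated, a trader can sell phi and buy psi for a sure gain.

def dutch_book_profit(price_phi, price_psi):
    """Trader sells one share of phi at price_phi and buys one share of
    psi at price_psi.  Returns the trader's net profit in each possible
    world, assuming phi -> psi (so the world phi=True, psi=False is
    impossible and excluded)."""
    profits = {}
    for phi in (True, False):
        for psi in (True, False):
            if phi and not psi:
                continue  # ruled out: phi implies psi
            payout_owed = 1.0 if phi else 0.0      # trader pays this on the sold phi-share
            payout_received = 1.0 if psi else 0.0  # trader collects this on the bought psi-share
            profits[(phi, psi)] = price_phi - payout_owed - price_psi + payout_received
    return profits

# Incoherent prices: P(phi) = 0.5 > P(psi) = 0.3 despite phi -> psi.
profits = dutch_book_profit(0.5, 0.3)
print(profits)  # the trader gains at least +0.2 in every possible world
```

The logical induction criterion generalizes this: no polynomial-time trading strategy with finite risk tolerance should be able to extract unbounded profit of this kind from the reasoner’s prices over time.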

The authors are quick to acknowledge that their algorithm holds only in the limit, which makes it impractical to implement. Nevertheless, the first fully rational beings on the planet will surely be artificial intelligences.

In 2013, the Post-Polio Health International (PPHI) organization estimated that there were six to eight iron lung users in the United States. Now, PPHI executive director Brian Tiburzi says he doesn’t know anyone alive still using the negative-pressure ventilators. This fall, I met three polio survivors who depend on iron lungs. They are among the last few, possibly the last three.

…In the 1940s and 1950s, hospitals across the country were filled with rows of iron lungs that kept victims alive. Lillard recalls being in rooms packed with metal tubes—especially when there were storms and all the men, women, adults, and children would be moved to the same room so nurses could manually operate the iron lungs if the power went out. “The period of time that it took the nurse to get out of the chair, it seemed like forever because you weren’t breathing,” Lillard said. “You just laid there and you could feel your heart beating and it was just terrifying. The only noise that you can make when you can’t breathe is clicking your tongue. And that whole dark room just sounded like a big room full of chickens just cluck-cluck-clucking. All the nurses were saying, ‘Just a second, you’ll be breathing in just a second.’”

…Mia Farrow only had to spend eight months in an iron lung when she was nine, before going on to become a famous actress and polio advocate.

Here is the full story, via the excellent Samir Varma.

*The Wizard and the Prophet*

November 9, 2017 at 1:34 pm in Books, History, Science

That is the new Charles C. Mann book, I pre-ordered long ago, here is the new Kirkus review:

A dual biography of two significant figures who “had little regard” for each other’s work but “were largely responsible for the creation of the basic intellectual blueprints that institutions around the world use today for understanding our environmental dilemmas.”

A thick book featuring two scientists unknown to most readers is a tough sell, but bestselling journalist and historian Mann (1493: Uncovering the New World Columbus Created, 2011, etc.), a correspondent for the Atlantic, Science, and Wired, turns in his usual masterful performance. Nobel Prize–winning agronomist Norman Borlaug (1914-2009) developed high-yield wheat varieties and championed agricultural techniques that led to the “Green Revolution,” vastly increasing world food production. Ornithologist William Vogt (1902-1968) studied the relationship between resources and population and wrote the 1948 bestseller Road to Survival, a founding document of modern environmentalism in which the author maintains that current trends will lead to overpopulation and mass hunger. Borlaug and Vogt represent two sides of a century-long dispute between what Mann calls “wizards,” who believe that science will allow humans to continue prospering, and “prophets,” who predict disaster unless we accept that our planet’s resources are limited. Beginning with admiring biographies, the author moves on to the environmental challenges the two men symbolize. Agriculture will require a second green revolution by 2050 to feed an estimated 10 billion inhabitants. Only 1 percent of Earth’s water is fresh and accessible; three-quarters goes to agriculture, and shortages are already alarming. More than 1.2 billion people still lack electricity; whether to produce more or use less energy bitterly divides both sides. Neither denies that human activities are wreaking havoc with Earth’s climate. Mann’s most spectacular accomplishment is to take no sides. Readers will thrill to the wizards’ astounding advances and believe the prophets’ gloomy forecasts, and they will also discover that technological miracles produce nasty side effects and that self-sacrifice, as prophets urge, has proven contrary to human nature.

An insightful, highly significant account that makes no predictions but lays out the critical environmental problems already upon us.

You can pre-order here.

There is a new NBER working paper on these topics, by Anna Chorniy, Janet Currie, and Lyudmyla Sonchak, here is the abstract:

In the U.S., nearly 11% of school-age children have been diagnosed with ADHD, and approximately 10% of children suffer from asthma. In the last decade, the number of children diagnosed with these conditions has inexplicably been on the rise. This paper proposes a novel explanation of this trend. First, the increase is concentrated in the Medicaid caseload nationwide. Second, nearly 80% of states transitioned their Medicaid programs from fee-for-service (FFS) reimbursement to managed care (MMC) by 2016. Using Medicaid claims from South Carolina, we show that this change contributed to the increase in asthma and ADHD caseloads. Empirically, we rely on exogenous variation in MMC enrollment due to a change in the “default” Medicaid plan from FFS or MMC, and an increase in the availability of MMC. We find that the transition from FFS to MMC explains most of the rise in the number of Medicaid children being treated for ADHD and asthma. These results can be explained by the incentives created by the risk adjustment and quality control systems in MMC.

The economics of medical diagnoses remain a drastically understudied area.

I will be having a Conversations with Tyler with Andy Weir, author of The Martian and assorted on-line works (many of which appear to be off-line at the moment).  He has a new book coming out, Artemis.  Here is Andy’s Wikipedia page.

I thank you all in advance for your ideas and assistance.