Formal training programs, which can be called education, enhance cognition in human and nonhuman animals alike. However, even informal exposure to human contact in human environments can enhance cognition. We review selected literature to compare animals’ behavior with objects among keas and great apes, the taxa that best allow systematic comparison of the behavior of wild animals with that of those in human environments such as homes, zoos, and rehabilitation centers. In all cases, we find that animals in human environments do much more with objects. Following and expanding on the explanations of several previous authors, we propose that living in human environments and the opportunities to observe and manipulate human-made objects help to develop motor skills, embodied cognition, and the use of objects to extend cognition in the animals. Living in a human world also furnishes the animals with more time for such activities, in that the time needed for foraging for food is reduced, and furnishes opportunities for social learning, including emulation, an attempt to achieve the goals of a model, and program-level imitation, in which the imitator reproduces the organizational structure of goal-directed actions without necessarily copying all the details. All these factors let these animals learn about the affordances of many objects and make them better able to come up with solutions to physical problems.
Obviously Vitalik Buterin's talents in crypto and programming are well-known, but he is also a first-rate thinker on both economics and what you broadly might call sociology. You could take away the crypto contributions altogether, and he still would be one of the very smartest people I have met. Here is the audio and transcript. The CWT team summarized it as follows:
Tyler sat down with Vitalik to discuss the many things he’s thinking about and working on, including the nascent field of cryptoeconomics, the best analogy for understanding the blockchain, his desire for more social science fiction, why belief in progress is our most useful delusion, best places to visit in time and space, how he picks up languages, why centralization’s not all bad, the best ways to value crypto assets, whether P = NP, and much more.
Here is one excerpt:
COWEN: If you could go back into the distant past for a year, a time and place of your choosing, you have the linguistic skills and immunity against disease to the extent you need it, maybe some money in your pocket, where would you pick to satisfy your own curiosity?
BUTERIN: Where would I pick? To do what? To spend a year there, or . . . ?
COWEN: Spend a year as a “tourist.” You could pick ancient Athens or preconquest Mexico or medieval Russia. It’s a kind of social science fiction, right?
BUTERIN: Yeah, totally. Let’s see. Possibly first year of World War II — obviously, one of those areas that’s close to it but still reasonably safe from it…
Basically, experience more of what human behavior and what collective human behavior would look like once you pushed humans further into extremes, and people aren’t as comfortable as they are today.
I started the whole dialogue with this:
I went back and I reread all of the papers on your home page. I found it quite striking that there were two very important economics results, one based on menu costs associated with the name of Greg Mankiw. Another is a paper on the indeterminacy of monetary equilibrium associated with Fischer Black.
These are famous papers. On your own, you appear to rediscover these results without knowing about the papers at all. So how would you describe how you teach yourself economics?
Highly recommended, whether or not you understand blockchain. Oh, and there is this:
COWEN: If you had to explain blockchain to a very smart person from 40 years ago, who knew computers but had no idea of crypto, what would be the best short explanation you could give them, basically, for what you do?
BUTERIN: Sure. One of the analogies I keep going back to is this idea of a “world computer.” The idea, basically, is that a blockchain, as a whole, functions like a computer. It has a hard drive, and on that hard drive, it stores what all the accounts are.
It stores what the code of all the smart contracts is, what the memory of all these smart contracts is. It accepts incoming instructions — and these incoming instructions are signed transactions sent by a bunch of different users — and processes them according to a set of rules.
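Buterin's "world computer" analogy can be sketched as a toy state machine: shared state (the "hard drive") plus a fixed rule for processing incoming instructions. This is an illustrative simplification with invented names, not Ethereum's actual implementation — real blockchains add cryptographic signatures, consensus, gas accounting, and contract execution.

```python
# Toy sketch of the "world computer" analogy: a blockchain as shared
# state plus a rule for applying user-submitted transactions.
# Illustrative only -- omits signatures, consensus, and smart contracts.

class WorldComputer:
    def __init__(self):
        self.accounts = {}          # address -> balance (the shared "hard drive")

    def credit(self, addr, amount):
        self.accounts[addr] = self.accounts.get(addr, 0) + amount

    def apply(self, tx):
        """Process one incoming instruction according to a fixed set of rules."""
        sender, recipient, amount = tx["from"], tx["to"], tx["amount"]
        if self.accounts.get(sender, 0) < amount:
            return False            # rule: no overdrafts; invalid txs are rejected
        self.accounts[sender] -= amount
        self.credit(recipient, amount)
        return True

wc = WorldComputer()
wc.credit("alice", 100)
wc.apply({"from": "alice", "to": "bob", "amount": 30})   # accepted
wc.apply({"from": "bob", "to": "carol", "amount": 50})   # rejected: insufficient funds
print(wc.accounts)   # {'alice': 70, 'bob': 30}
```

The point of the analogy is that every node replays the same instructions against the same state under the same rules, so all copies of the "hard drive" agree.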
Wealthier countries allocate a greater proportion of their workers to science and engineering, fields which produce ideas that often benefit everyone. This is one reason why we all gain when other countries become rich. It’s not just the number of scientists and engineers that matters, however. In a clever paper, Agarwal and Gaule demonstrate that equally talented people are more productive in wealthier countries.
Agarwal and Gaule collect the scores of thousands of teenagers who entered the International Math Olympiad between 1981 and 2000 and they follow their careers. Every additional point earned at the Olympiad increases the likelihood that a participant will later earn a math PhD, be heavily cited, even earn a Fields medal. But Olympians from poorer countries are less likely to contribute to the mathematical frontier than equally talented teens from richer countries. It could be that smart teens from poorer countries are less likely to pursue a math career–and that could well be optimal–but Agarwal and Gaule find that many of the talented kids from poorer countries simply disappear off the world’s radar. Their talent is wasted.
The post-Olympiad loss is not the largest loss. Most of the potentially great mathematicians from poorer countries are lost to the world long before the opportunity to participate in an Olympiad. But it is frustrating that even after talent has been identified, it does not always bloom. We are, however, starting to do better.
You can see from the graph that upper-middle income countries are as good at turning their talent into results as high-income countries. Agarwal and Gaule also find some evidence that the low-income penalty is diminishing over time.
As incomes increase around the world it’s as if the entire world’s processing power is coming online for the first time in human history. That, at least, is one reason for optimism.
Hat tip: Florian Ederer.
Scientific output is not a linear function of amounts of federal grant support to individual investigators. As funding per investigator increases beyond a certain point, productivity decreases. This study reports that such diminishing marginal returns also apply for National Institutes of Health (NIH) research project grant funding to institutions. Analyses of data (2006-2015) for a representative cross-section of institutions, whose amounts of funding ranged from $3 million to $440 million per year, revealed robust inverse correlations between funding (per institution, per award, per investigator) and scientific output (publication productivity and citation impact productivity). Interestingly, prestigious institutions had on average 65% higher grant application success rates and 50% larger award sizes, whereas less-prestigious institutions produced 65% more publications and had a 35% higher citation impact per dollar of funding. These findings suggest that implicit biases and social prestige mechanisms (e.g., the Matthew effect) have a powerful impact on where NIH grant dollars go and the net return on taxpayers' investments. They support evidence-based changes in funding policy geared towards a more equitable, more diverse and more productive distribution of federal support for scientific research. Success rate/productivity metrics developed for this study provide an impartial, empirically based mechanism to do so.
When people evaluate two or more goods separately versus jointly it’s common to see “preference reversals”. In a random survey, for example, people were asked to value the following dictionaries:
- Dictionary A: 20,000 entries, torn cover but otherwise like new
- Dictionary B: 10,000 entries, like new
When asked to value just one dictionary, either A or B, the average value was higher on Dictionary B. But when people were asked to evaluate both dictionaries together the average value was higher on Dictionary A.
What’s going on? Most people have no idea how many words a good dictionary has, so telling them that a dictionary has 10K or 20K entries just fades into the background–it’s a dictionary, of course it defines a lot of words. On the other hand, we all know that “like new” is better than “torn cover,” so Dictionary B gets the higher price. When confronted with the pair of dictionaries, however, we see that Dictionary A has twice as many entries as Dictionary B, and it’s obvious that more entries make for a better dictionary. In comparison to more entries, the sine qua non of a dictionary, the torn cover fades in importance.
- Baseball Card Package A: 10 valuable baseball cards, 3 not-so-valuable baseball cards
- Baseball Card Package B: 10 valuable baseball cards
- Congressional Candidate A: Would create 5000 jobs; has been convicted of a misdemeanor
- Congressional Candidate B: Would create 1000 jobs; has no criminal convictions
In each case B tends to have a higher value when evaluated separately but A tends to evaluate higher with joint evaluation. When is separate evaluation better? When is joint evaluation better?
There is a tendency to think that joint evaluation is always better since it is the “full information” condition. Sunstein pushes against this interpretation because he argues that full information doesn’t mean full rationality. Even with full information we may still be biased. The factor that becomes salient when the goods are evaluated jointly, for example, need not be especially relevant. Is a dictionary with 20k entries actually better than one with 10k entries? Maybe 95% of the time it’s worse because it takes longer to find the word you need and the dictionary is less portable. We might let the seemingly irrefutable numerical betterness of A overwhelm what might actually be more relevant, the torn cover.
Sellers could take advantage of the bias of joint evaluation by emphasizing information that consumers might think is important but actually isn’t–our computer screen has 1.073 billion color combinations while our competitor’s has only 16.7 million–while making less salient the difference between 6 hours of battery life and 8, which may in practice be more important.
Personally, I’d go for full information and trust myself to figure out what is truly important but maybe that is my bias. See the paper for more examples and thought-experiments.
Simplifiers give one a better overall picture of how the world works, whereas constructors are trying to build something. The balance seems to be shifting, for instance in physics:
Within the Physics label…we find the simplifiers dominated three quarters of the Nobel Prizes from 1952 to 1981, but more recently constructors have edged the balance with more than half of those from 1982 to 2011.
There is also a shift toward constructors in chemistry, though it is less abrupt. In the fields of physiology and medicine, however, simplifiers reign supreme and there has been no shift across time. Three-quarters of the prizes are still going to simplifiers.
Does that mean we should be relatively bullish about progress in those areas, based on forthcoming fundamental breakthroughs?
All these points are from Jeremy J. Baumberg’s new and interesting The Secret Life of Science: How It Really Works and Why It Matters.
Tens of thousands of studies correlate family socioeconomic status with later child outcomes like income, wealth and attainment and then claim the correlation is causal. Very few such studies control for genetics, although twin adoption studies suggest that genetics is important. Cheap genomic scanning, however, has made it possible to go beyond twin studies. A new paper, for example, looks at differences in education-associated genes between non-identical twins raised in the same family and finds that children with more education-associated genes tend to have greater educational attainment and higher income later in life. In other words, differences in child outcomes both across families and within the same family are in part driven by genetics.
Surprisingly, however, the authors also find evidence for “genetic nurture,” the idea that parental genes drive child environment, which drives outcomes. That’s surprising because it’s hard to find strong evidence for big environmental effects in adoption studies, but here the authors can rely on more precise data. Specifically, the authors look at maternal education-associated genes that are NOT passed on to the children, and yet they find that such genes are also correlated with important child outcomes (fyi, they only have maternal genes). So smart parents benefit children twice. First by passing on smart genes and second–even when they do not pass on smart genes–by passing on a smart environment. Previous studies missed the latter effect perhaps because they focused on rich parents rather than smart parents (the former being easier to measure). The authors suggest that by looking at how smart parents help kids without smart genes we may be able to figure out smart environments and generalize them to everyone. That strikes me as optimistic.
Here is the paper abstract:
A summary genetic measure, called a “polygenic score,” derived from a genome-wide association study (GWAS) of education can modestly predict a person’s educational and economic success. This prediction could signal a biological mechanism: Education-linked genetics could encode characteristics that help people get ahead in life. Alternatively, prediction could reflect social history: People from well-off families might stay well-off for social reasons, and these families might also look alike genetically. A key test to distinguish biological mechanism from social history is if people with higher education polygenic scores tend to climb the social ladder beyond their parents’ position. Upward mobility would indicate education-linked genetics encodes characteristics that foster success. We tested if education-linked polygenic scores predicted social mobility in >20,000 individuals in five longitudinal studies in the United States, Britain, and New Zealand. Participants with higher polygenic scores achieved more education and career success and accumulated more wealth. However, they also tended to come from better-off families. In the key test, participants with higher polygenic scores tended to be upwardly mobile compared with their parents. Moreover, in sibling-difference analysis, the sibling with the higher polygenic score was more upwardly mobile. Thus, education GWAS discoveries are not mere correlates of privilege; they influence social mobility within a life. Additional analyses revealed that a mother’s polygenic score predicted her child’s attainment over and above the child’s own polygenic score, suggesting parents’ genetics can also affect their children’s attainment through environmental pathways. Education GWAS discoveries affect socioeconomic attainment through influence on individuals’ family-of-origin environments and their social mobility.
You can find the appendix with the key results here. I find the lab style difficult to follow. The authors run regressions, for example, but you won’t find a regression equation followed by a table with all the results. Instead the regression is described in the appendix and then some coefficients, but by no means all, are presented later in the appendix.
Here is a kind of gravity equation for science:
We develop a simple theoretical framework for thinking about how geographic frictions, and in particular travel costs, shape scientists’ collaboration decisions and the types of projects that are developed locally versus over distance. We then take advantage of a quasi-experiment – the introduction of new routes by a low-cost airline – to test the predictions of the theory. Results show that travel costs constitute an important friction to collaboration: after a low-cost airline enters, the number of collaborations increases by 50%, a result that is robust to multiple falsification tests and causal in nature. The reduction in geographic frictions is particularly beneficial for high quality scientists that are otherwise embedded in worse local environments. Consistent with the theory, lower travel costs also endogenously change the types of projects scientists engage in at different levels of distance. After the shock, we observe an increase in higher quality and novel projects, as well as projects that take advantage of complementary knowledge and skills between sub-fields, and that rely on specialized equipment. We test the generalizability of our findings from chemistry to a broader dataset of scientific publications, and to a different field where specialized equipment is less likely to be relevant, mathematics. Last, we discuss implications for the formation of collaborative R&D teams over distance.
That is from a new paper by Christian Catalini, Christian Fons-Rosen, and Patrick Gaulé.
Theory and research indicates that individuals with more frequent positive emotions are better at attaining goals at work and in everyday life. In the current study we examined whether the expression of genuine positive emotions by scientists was positively correlated with work-related accomplishments, defined by bibliometric (e.g. number of citations) and sociometric (number of followers for scholarly updates) indices. Using a sample of 440 scientists from a social networking site for researchers, multiple raters coded smile intensity (full smile, partial smile, or no smile) in publicly available photographs. We found that scientists who presented a full smile had the same quantity of publications yet of higher quality (e.g. citations per paper) and attracted more followers to their updates compared to less positive emotionally expressive peers; results remained after controlling for age and sex. Thin-slicing approaches to the beneficial effects of positive emotionality offer an ecologically valid approach to complement experimental and longitudinal evidence. Evidence linking positive emotional expressions to scientific impact and social influence provides further support for broaden and build models of positive emotions.
I wonder for which fields this might not be true…?
That is the topic of a new paper by Diego Comin and Martí Mestieri, published in AEJ: Macroeconomics, here is the abstract:
We study the cross-country evolution of technology diffusion over the last two centuries. We document that adoption lags between poor and rich countries have converged, while the intensity of use of adopted technologies of poor countries relative to rich countries has diverged. The evolution of aggregate productivity implied by these trends in technology diffusion resembles the actual evolution of the world income distribution in the last two centuries. Cross-country differences in adoption lags account for a significant part of the cross-country income divergence in the nineteenth century. The divergence in intensity of use accounts for the divergence during the twentieth century.
I am struck by the strength of the two major stylized facts in this paper. The mean adoption lag for spindles, classified as a 1779 technology, was 130 years, or in other words that is how long it took for the technology to move to poorer countries. For ships, listed as a 1788 technology, the mean lag is 110 years. Synthetic fiber is a 1931 technology, with a mean adoption lag of 29 years. For the internet, a 1983 technology (is that right?), the mean adoption lag is only 6 years.
But the overall story is not so simple. The more advanced countries use more of these technologies, and use them more effectively (“intensity”), and that gap has been growing over time. Yes, Ghana has the internet, but it is Silicon Valley that is working wonders with it. Some technology use begs more technology use.
If you calibrate those parameters properly, it turns out you can explain about 3/4 of the evolution of income divergence across rich and poor countries.
Few movies serve up more social science. Imagine three identical triplets, separated at a young age, and then reared separately in a poor family, in a middle class family, and in a well-off family. I can’t say much more without spoiling it all, but I’ll offer these points: listen closely, don’t take the apparent conclusion at face value, ponder the Pareto principle throughout, read up on “the control premium,” solve for how niche strategies change with the comparative statics (don’t forget Girard), and are they still guinea pigs? Excellent NYC cameos from the 1980s, and see Project Nim once you are done.
Definitely recommended, and I say don’t read any other reviews before going (they are mostly strongly positive).
Spiders can fly. Here’s the story from an excellent piece by Ed Yong in The Atlantic.
Spiders have no wings, but they can take to the air nonetheless. They’ll climb to an exposed point, raise their abdomens to the sky, extrude strands of silk, and float away. This behavior is called ballooning. It might carry spiders away from predators and competitors, or toward new lands with abundant resources. But whatever the reason for it, it’s clearly an effective means of travel. Spiders have been found two-and-a-half miles up in the air, and 1,000 miles out to sea.
That part has long been known (although it was news to me). What is new is evidence about how spiders fly: electrostatic forces!
Erica Morley and Daniel Robert have an explanation. The duo, who work at the University of Bristol, have shown that spiders can sense the Earth’s electric field, and use it to launch themselves into the air.
Every day, around 40,000 thunderstorms crackle around the world, collectively turning Earth’s atmosphere into a giant electrical circuit. The upper reaches of the atmosphere have a positive charge, and the planet’s surface has a negative one. Even on sunny days with cloudless skies, the air carries a voltage of around 100 volts for every meter above the ground. In foggy or stormy conditions, that gradient might increase to tens of thousands of volts per meter.
Ballooning spiders operate within this planetary electric field. When their silk leaves their bodies, it typically picks up a negative charge. This repels the similar negative charges on the surfaces on which the spiders sit, creating enough force to lift them into the air. And spiders can increase those forces by climbing onto twigs, leaves, or blades of grass. Plants, being earthed, have the same negative charge as the ground that they grow upon, but they protrude into the positively charged air. This creates substantial electric fields between the air around them and the tips of their leaves and branches—and the spiders ballooning from those tips.
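A quick back-of-envelope calculation shows why the plant-tip detail matters. The field strengths below come from the article (roughly 100 V/m in fair weather, up to tens of kV/m in charged conditions); the silk charge and spider mass are illustrative assumptions, not measured values from the study.

```python
# Back-of-envelope check of electrostatic ballooning: lift F = q*E versus
# weight W = m*g. Field strengths are from the article; the silk charge
# and spider mass are assumed for illustration only.

g = 9.81                      # m/s^2
spider_mass = 20e-6           # kg (assumed: a ~20 mg ballooning spider)
silk_charge = 30e-9           # C (assumed net negative charge on extruded silk)

weight = spider_mass * g      # ~2e-4 N

for label, field in [("fair-weather field (100 V/m)", 100.0),
                     ("field near a plant tip (10 kV/m)", 10_000.0)]:
    lift = silk_charge * field          # electrostatic force, F = q*E
    verdict = "takes off" if lift > weight else "stays put"
    print(f"{label}: lift {lift:.1e} N vs weight {weight:.1e} N -> {verdict}")
```

Under these assumed numbers the fair-weather field alone is far too weak, but the amplified field at a grounded plant tip clears the spider's weight — consistent with the observation that spiders climb to exposed points before ballooning.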
…Morley and Robert have tested it with actual spiders.
First, they showed that spiders can detect electric fields. They put the arachnids on vertical strips of cardboard in the center of a plastic box, and then generated electric fields between the floor and ceiling of similar strengths to what the spiders would experience outdoors. These fields ruffled tiny sensory hairs on the spiders’ feet, known as trichobothria. “It’s like when you rub a balloon and hold it up to your hairs,” Morley says.
In response, the spiders performed a set of movements called tiptoeing—they stood on the ends of their legs and stuck their abdomens in the air. “That behavior is only ever seen before ballooning,” says Morley. Many of the spiders actually managed to take off, despite being in closed boxes with no airflow within them. And when Morley turned off the electric fields inside the boxes, the ballooning spiders dropped.
I will be doing a Conversation with Tyler with Michael Pollan, no associated public event. Here is his home page, and the About section. Here is Wikipedia on Pollan. Here is a Sean Iling Vox interview with Pollan, on his recent work on LSD and other psychedelics, and his most recent book is How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence. Pollan is perhaps best known for his books on food, cooking, and food supply chains.
So what should I ask him?
The idea that concepts depend on their reference class isn’t new. A short basketball player is tall and a poor American is rich. One might have thought, however, that a blue dot is a blue dot. Blue can be defined by wavelength so unlike a relative concept like short or rich there is some objective reality behind blue even if the boundaries are vague. Nevertheless, in a thought-provoking new paper in Science the all-star team of Levari, Gilbert, Wilson, Sievers, Amodio and Wheatley show that what we identify as blue expands as the prevalence of blue decreases.
In the figure below, for example, the authors ask respondents to identify a dot as blue or purple. The figure on the left shows that as the objective shading increases from very purple to very blue, more people identify the dot as blue, just as one would expect. (The initial and final 200 trials indicate that there is no tendency for identifications to change over time.) In the figure at right, however, blue dots were made less prevalent in the final 200 trials and, after the decrease in prevalence, the tendency to identify a dot as blue increases dramatically. In the decreasing-prevalence condition on the right, a dot that was previously identified as blue only 25% of the time now becomes identified as blue 50% of the time! (Read upwards from the horizontal axis and compare the yellow and blue prediction lines.)
Clever. But so what? What the authors then go on to show, however, is that the same phenomenon occurs with complex concepts for which we arguably would like to have a consistent and constant identification.
Are people susceptible to prevalence-induced concept change? To answer this question, we showed participants in seven studies a series of stimuli and asked them to determine whether each stimulus was or was not an instance of a concept. The concepts ranged from simple (“Is this dot blue?”) to complex (“Is this research proposal ethical?”). After participants did this for a while, we changed the prevalence of the concept’s instances and then measured whether the concept had expanded—that is, whether it had come to include instances that it had previously excluded.
…When blue dots became rare, purple dots began to look blue; when threatening faces became rare, neutral faces began to appear threatening; and when unethical research proposals became rare, ambiguous research proposals began to seem unethical. This happened even when the change in the prevalence of instances was abrupt, even when participants were explicitly told that the prevalence of instances would change, and even when participants were instructed and paid to ignore these changes.
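One toy mechanism that produces exactly this pattern is an observer whose criterion for "blue" tracks the stimuli recently seen, rather than staying fixed. The adaptive-threshold rule below is an assumed model for illustration; the paper documents the effect but does not commit to this specific mechanism.

```python
import random

# Toy model of prevalence-induced concept change: an observer calls a dot
# "blue" when its hue exceeds the mean hue of the last 100 stimuli seen.
# The adaptive criterion is an illustrative assumption, not the authors' model.

random.seed(0)

def judge_last(trials):
    """Run a sequence of hues (0 = pure purple, 1 = pure blue) through the
    adaptive observer and return the judgment on the final trial."""
    window = []
    verdict = False
    for hue in trials:
        window.append(hue)
        window = window[-100:]                   # criterion tracks recent stimuli
        threshold = sum(window) / len(window)
        verdict = hue > threshold
    return verdict

borderline = 0.5   # an objectively ambiguous purple-blue dot

common = [random.uniform(0.3, 1.0) for _ in range(500)]   # blue dots common
rare   = [random.uniform(0.0, 0.6) for _ in range(500)]   # blue dots rare

judged_when_common = judge_last(common + [borderline])
judged_when_rare   = judge_last(common + rare + [borderline])
print("borderline dot judged blue when blue is common:", judged_when_common)
print("borderline dot judged blue when blue is rare:  ", judged_when_rare)
```

Because the criterion drifts down when blue stimuli become scarce, the identical borderline dot flips from "purple" to "blue" — concept expansion without any change in the stimulus itself.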
Assuming the result replicates (the authors have seven studies, which appear to me to be independent, although each is fairly small (20-100 participants) and drawn from Harvard undergrads), it has many implications.
…in 1960, Webster’s dictionary defined “aggression” as “an unprovoked attack or invasion,” but today that concept can include behaviors such as making insufficient eye contact or asking people where they are from. Many other concepts, such as abuse, bullying, mental disorder, trauma, addiction, and prejudice, have expanded of late as well.
… Many organizations and institutions are dedicated to identifying and reducing the prevalence of social problems, from unethical research to unwarranted aggressions. But our studies suggest that even well-meaning agents may sometimes fail to recognize the success of their own efforts, simply because they view each new instance in the decreasingly problematic context that they themselves have brought about. Although modern societies have made extraordinary progress in solving a wide range of social problems, from poverty and illiteracy to violence and infant mortality, the majority of people believe that the world is getting worse. The fact that concepts grow larger when their instances grow smaller may be one source of that pessimism.
The paper also gives us a way of thinking more clearly about shifts in the Overton window. When strong sexism declines, for example, the Overton window shrinks on one end and expands on the other so that what was once not considered sexism at all (e.g. “men and women have different preferences which might explain job choice“) now becomes violently sexist.
Nicholas Christakis and the fearless Gabriel Rossman point out on twitter (see at right) that it works the other way as well. Namely, the presence of extremes can help others near the middle by widening the set of issues that can be discussed or studied without fear of opprobrium.
But why shouldn’t our standards change over time? Most of the people in the 1850s who thought slavery was an abomination would have rejected the idea of inter-racial marriage. Wife beating wasn’t considered a violent crime in just the very recent past. What racism and sexism mean has changed over time. Are these examples of concept creep or progress? I’d argue progress but the blue dot experiment of Levari et al. suggests that if even objective concepts morph under prevalence inducement then subjective concepts surely will. The issue then is not to prevent progress but to recognize it and not be fooled into thinking that progress hasn’t been made just because our identifications have changed.
I’ve already put this Scott Aaronson paper in Assorted Links, but here are two passages I liked in particular:
…finding a fixed point might require Nature to solve an astronomically-hard computational problem! To illustrate, consider a science-fiction scenario wherein you go back in time and dictate Shakespeare’s plays to him. Shakespeare thanks you for saving him the effort, publishes verbatim the plays that you dictated, and centuries later the plays come down to you, whereupon you go back in time and dictate them to Shakespeare, etc. Notice that, in contrast to the grandfather paradox, here there is no logical contradiction: the story as we told it is entirely consistent. But most people find the story “paradoxical” anyway. After all, somehow Hamlet gets written, without anyone ever doing the work of writing it! As Deutsch perceptively observed, if there is a “paradox” here, then it is not one of logic but of computational complexity…
Now, some people have asked how such a claim could possibly be consistent with modern physics. For didn’t Einstein teach us that space and time are merely two aspects of the same structure? One immediate answer is that, even within relativity theory, space and time are not interchangeable: space has a positive signature whereas time has a negative signature. In complexity theory, the difference between space and time manifests itself in the straightforward fact that you can reuse the same memory cells over and over, but you can’t reuse the same moments of time.
Yet, as trivial as that observation sounds, it leads to an interesting thought. Suppose that the laws of physics let us travel backwards in time. In such a case, it’s natural to imagine that time would become a “reusable resource” just like space is—and that, as a result, arbitrary PSPACE computations would fall within our grasp. But is that just an idle speculation, or can we rigorously justify it?
It is in general quite an interesting paper.