I’d like to see the cost-benefit analysis on this one before signing up, but an intriguing idea:
Vertical Forest is a model for a sustainable residential building, a project for metropolitan reforestation contributing to the regeneration of the environment and urban biodiversity without expanding the city upon the territory. It is a model of vertical densification of nature within the city that operates in relation to policies for reforestation and naturalization of large urban and metropolitan borders. The first example of the Vertical Forest, consisting of two residential towers of 110 and 76 meters in height, was realized in the centre of Milan, on the edge of the Isola neighborhood, hosting 800 trees (each measuring 3, 6 or 9 meters), 4,500 shrubs and 15,000 plants from a wide range of shrubs and floral plants distributed according to the sun exposure of the facade. On flat land, each Vertical Forest equals, in number of trees, an area of 20,000 square meters of forest. In terms of urban densification it is the equivalent of an area of single-family dwellings of nearly 75,000 square meters. The vegetal system of the Vertical Forest contributes to the construction of a microclimate, produces humidity, absorbs CO2 and dust particles and produces oxygen.
That new paper by Daniel Barth, Nicholas W. Papageorge and Kevin Thom is attracting a great deal of attention and also some controversy. Here is the first sentence of the abstract:
We show that genetic endowments linked to educational attainment strongly and robustly predict wealth at retirement.
But it’s not mainly about IQ. I found this to be the most interesting part of the paper, noting that EA is a polygenic score:
Our use of the EA score as a measure of biological traits linked to human capital is related to previous attempts in the literature to measure ability through the use of test scores such as IQ or the AFQT…We note two important differences between the EA score and a measure like IQ that make it valuable to study polygenic scores. First, a polygenic score like the EA score can overcome some interpretational challenges related to IQ and other cognitive test scores. Environmental factors have been found to influence intelligence test results and to moderate genetic influences on IQ (Tucker-Drob and Bates, 2015). It is true that differences in the EA score may reflect differences in environments or investments because parents with high EA scores may also be more likely to invest in their children. However, the EA score is fixed at conception, which means that post-birth investments cannot causally change the value of the score. A measure like IQ suffers from both of these interpretational challenges. High IQ parents might have high IQ children because of the genes that they pass on, but also because of the positive investments that they make…Compared to a cognitive test score like IQ, the EA score may also measure a wider variety of relevant endowments. This is especially important given research, including relatively recent papers in economics, emphasizing the importance of both cognitive and non-cognitive skills in shaping life-cycle outcomes (Heckman and Rubinstein, 2001). Existing evidence suggests a correlation of approximately 0.20 between a cognitive test score available for HRS respondents and the EA score (Papageorge and Thom, 2016). This relatively modest correlation could arise if both variables measure the same underlying cognitive traits with error, or if they measure different traits.
However, Papageorge and Thom (2016) find that the relationship between the EA score and income differs substantially from the relationship between later-life cognition scores and income, suggesting that the EA score contains unique information…
…we interpret the EA score as measuring a basket of genetic factors that influence traits relevant for human capital accumulation.
If I understand the paper correctly, the polygenic score is what predicts well from the genetic data set, it is not a “thing with a known nature.” And I believe the results are drawn from the same 1.1 million person data set as is used in this Nature paper.
That is the new Journal of Economic Perspectives article by Nicholas Bloom, John Van Reenen, and Heidi Williams. Most of all, such articles should be more frequent and receive greater attention and higher status, as Progress Studies would suggest. Here is one excerpt:
…moonshots may be justified on the basis of political economy considerations. To generate significant extra resources for research, a politically sustainable vision needs to be created. For example, Gruber and Johnson (2019) argue that increasing federal funding of research as a share of GDP by half a percent—from 0.7 percent today to 1.2 percent, still lower than the almost 2 percent share observed in 1964 in Figure 1—would create a $100 billion fund that could jump-start new technology hubs in some of the more educated but less prosperous American cities (such as Rochester, New York, and Pittsburgh, Pennsylvania). They argue that such a fund could generate local spillovers and, by alleviating spatial inequality, be more politically sustainable than having research funds primarily flow to areas with highly concentrated research, such as Palo Alto, California, and Cambridge, Massachusetts.
In general I agree with their points, but would have liked to have seen more on freedom to build, and of course on culture, culture, culture. At the very least, policy is endogenous to culture, and culture shapes many economic outcomes more directly as well. I’m fine with tax credits for R&D, but I just don’t see them as in the driver’s seat.
Nonstate actors appear to have increasing power, in part due to new technologies that alter actors’ capacities and incentives. Although solar geoengineering is typically conceived of as centralized and state-deployed, we explore highly decentralized solar geoengineering. Done perhaps through numerous small high-altitude balloons, it could be provided by nonstate actors such as environmentally motivated nongovernmental organizations or individuals. Conceivably tolerated or even covertly sponsored by states, highly decentralized solar geoengineering could move presumed action from the state arena to that of direct intervention by nonstate actors, which could, in turn, disrupt international politics and pose novel challenges for technology and environmental policy. We conclude that this method appears technically possible, economically feasible, and potentially politically disruptive. Decentralization could, in principle, make control by states difficult, perhaps even rendering such control prohibitively costly and complex.
That is from Jesse L. Reynolds & Gernot Wagner, and injecting fine aerosols into the air, as if to mimic some features of volcanic eruptions, seems to be one of the major possible approaches. I am not able to judge the scientific merits of their claims, but it has long seemed to me evident that some version of this idea would prove possible.
Solve for the equilibrium! What is it? Too much enthusiasm for correction and thus disastrous climate cooling? Preemptive government regulation? It requires government subsidy? It becomes controlled by concerned philanthropists? It starts a climate war between America/Vietnam and Russia/Greenland? Goldilocks? I wonder if we will get to find out.
Via the excellent Kevin Lewis.
Progress itself is understudied. By “progress,” we mean the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries. For a number of reasons, there is no broad-based intellectual movement focused on understanding the dynamics of progress, or targeting the deeper goal of speeding it up. We believe that it deserves a dedicated field of study. We suggest inaugurating the discipline of “Progress Studies.”
Plenty of existing scholarship touches on these topics, but it takes place in a highly fragmented fashion and fails to directly confront some of the most important practical questions.
Imagine you want to know how to most effectively select and train the most talented students. While this is an important challenge facing educators, policy makers, and philanthropists, knowledge about how best to do so is dispersed across a very long list of different fields. The psychometrics literature investigates which tests predict success. Sociologists consider how networks are used to find talent. Anthropologists investigate how talent depends on circumstances, and a historiometric literature studies clusters of artistic creativity. There’s a lively debate about when and whether “10,000 hours of practice” are required for truly excellent performance. The education literature studies talent-search programs such as the Center for Talented Youth. Personality psychologists investigate the extent to which openness or conscientiousness affect earnings. More recently, there’s work in sportometrics, looking at which numerical variables predict athletic success. In economics, Raj Chetty and his co-authors have examined the backgrounds and communities liable to best encourage innovators. Thinkers in these disciplines don’t necessarily attend the same conferences, publish in the same journals, or work together to solve shared problems.
You may have seen there is a small cottage industry on Twitter suggesting that we ignore antecedents to Progress Studies, but of course that is not the case, as evidenced by the paragraph above, not to mention claims like: “Progress Studies has antecedents, both within fields and institutions. The economics of innovation is a critical topic and should assume a much larger place within economics.” In fact we consider antecedents in at least nine different paragraphs of a relatively short piece.
The piece is interesting throughout, and I can assure you that Patrick is a very productive and diligent co-author.
Our team in Russia received a tip from the local research community to a new form of publication fraud. The tip led to a website, [redacted], set up by unscrupulous operators to serve as a virtual marketplace where authors can buy or sell authorship in academic manuscripts accepted for publication. This kind of peer-to-peer sharing, in “broad daylight,” is not something we’ve seen before – so we conducted a quick analysis of the site, and its data, before taking swift action to alert our friends and colleagues in the scientific community.
There are no author names, or journal names indicated on the site – the journal name is available to buyers only. Sometimes as many as five authorships in a single article are offered for sale, with prices varying depending on place in the list of authors.
Here is the full story, via Brandon.
That is the title of the new Bill Bryson book, and it delivers in all the ways you would expect a Bryson book to do. Here is one sample paragraph:
Before penicillin, the closest thing to a wonder drug that existed was Salvarsan, developed by the German immunologist Paul Ehrlich in 1910, but Salvarsan was effective against only a few things, principally syphilis, and had a lot of drawbacks. For a start, it was made from arsenic, so was toxic, and treatment consisted in injecting roughly a pint of solution into the patient’s arm once a week for fifty weeks or more. If it wasn’t administered exactly right, fluid could seep into muscle, causing painful and sometimes serious side effects, including the need for amputation. Doctors who could administer it safely became celebrated. Ironically, one of the most highly regarded was Alexander Fleming.
By the way:
…the average grave is visited for only about fifteen years…
You can pre-order the book here; I would be interested to read more about Bryson’s work, writing, and research habits.
Here is the transcript and audio, and here is the CWT summary:
If you want to speculate on the development of tech, no one has a better brain to pick than Neal Stephenson. Across more than a dozen books, he’s created vast story worlds driven by futuristic technologies that have both prophesied and even provoked real-world progress in crypto, social networks, and the creation of the web itself. Though Stephenson insists he’s more often wrong than right, his technical sharpness has even led to a half-joking suggestion that he might be Satoshi Nakamoto, the shadowy creator of bitcoin. His latest novel, Fall; or, Dodge in Hell, involves a more literal sort of brain-picking, exploring what might happen when digitized brains can find a second existence in a virtual afterlife.
So what’s the implicit theology of a simulated world? Might we be living in one, and does it even matter? Stephenson joins Tyler to discuss the book and more, including the future of physical surveillance, how clothing will evolve, the kind of freedom you could expect on a Mars colony, whether today’s media fragmentation is trending us towards dystopia, why the Apollo moon landings were communism’s greatest triumph, whether we’re in a permanent secular innovation starvation, Leibniz as a philosopher, Dickens and Heinlein as writers, and what storytelling has to do with giving good driving directions.
Here is one excerpt:
COWEN: If we had a Mars colony, how politically free do you think it would be? Or would it just be like perpetual martial law? Like living on a nuclear submarine?
STEPHENSON: I think it would be a lot like living on a nuclear submarine because you can’t — being in space is almost like being in an intensive care unit in a hospital, in the sense that you’re completely dependent on a whole bunch of machines working in order to keep you alive. A lot of what we associate with freedom, with personal freedom, becomes too dangerous to contemplate in that kind of environment.
COWEN: Is there any Heinlein-esque scenario — The Moon Is a Harsh Mistress, where there’s a rebellion? People break free from the constraints of planet Earth. They chart their own institutions. It becomes like the settlements in the New World were.
STEPHENSON: Well, the settlements in the New World, I don’t think are a very good analogy because there it was possible — if you’re a white person in the New World and you have some basic skills, you can go anywhere you want.
An unheralded part of what happened there is that, when those people got into trouble, a lot of times, they were helped out by the indigenous peoples who were already there and who knew how to do stuff. None of those things are true in a space colony kind of environment. You don’t have indigenous people who know how to get food and how to get shelter. You don’t have that ability to just freely pick up stakes and move about.
COWEN: What will people wear in the future? Say a hundred years from now, will clothing evolve at all?
STEPHENSON: I think clothing is pretty highly evolved, right? If you look at, yeah, at any garment, say, a shirt — I was ironing a shirt today in my hotel room, and it is a frickin’ complicated object. We take it for granted, but you think about the fabric, the way the seams are laid out.
That’s just one example, of course, but you take any — shirts, shoes, any kind of specific item of clothing you want to talk about — once you take it apart and look at all the little decisions and innovations that have gone into it, it’s obvious that people have been optimizing this thing for hundreds or thousands of years.
New materials come along that enable people to do new kinds of things with clothing, but overall, I don’t think that a lot is going to change.
COWEN: Is there anything you would want smart clothing to do for you that, say, a better iPad could not?
STEPHENSON: The thing about clothing is that you change your clothes all the time. So if you become dependent on a particular technology that’s built into your shirt, that’s great as long as you’re wearing that shirt, but then as soon as you change to a different shirt, you don’t have it.
So what are you going to do? Are you going to make sure that every single one of your shirts has that same technology built into it? It seems easier to have it separate from the clothing that you wear, so that you don’t have to think about all those complications.
There is much more at the link, including discussions of some of his best-known novels…
As he prepared for Apollo 11’s lift-off, Neil Armstrong thought he had a 10 per cent chance of dying during the mission, and a 50 per cent chance of not walking on the Moon. “There was still a debate about if you stepped on to the Moon, would you step into 10ft of dust?” says former Nasa official Scott Hubbard.
The entire mission was vulnerable to a single-point failure: if the service module’s engine had failed, for example, there was no back-up.
Nasa’s whole attitude to risk has now changed. Until recently, each system was built to tolerate any two faults. This is now seen as a blunt approach, treating all components as equally important. So Nasa instead tries to limit the probability of failure. The chance of losing SLS and Orion on its first mission is one in 140, according to the agency’s analysis.
That is by Henry Mance and Yuichiro Kanematsu, in the FT, from their splendid look at the current attempt to drive a moon mission. And this:
“We do not have time or funds to build unique, one-of-a-kind systems,” William Gerstenmaier, a senior Nasa official, said recently. The agency’s biggest rocket — Boeing’s troubled Space Launch System (SLS) — will use some of the same engines as the Space Shuttle. Blake Rogers, an engineer at the Aerospace Corporation, a government-funded research agency, told the FT: “2024 is really soon. So there’s not a lot of brand-new technology…Today, Orion’s processing power will still be below 500MHz — significantly less than a MacBook.”
Recommended, gated but of course you should subscribe to the FT.
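The shift NASA describes, from tolerating a fixed number of faults to bounding the probability of failure, amounts to simple reliability arithmetic. Here is a toy sketch with invented subsystem numbers, not NASA's actual risk model:

```python
# Toy probabilistic risk sketch: probability of losing a mission when
# independent subsystems each have a small chance of a fatal failure.
# The per-subsystem numbers below are invented for illustration; NASA's
# actual estimate for SLS/Orion's first flight was 1 in 140.

def mission_loss_probability(subsystem_failure_probs):
    """Mission is lost if any subsystem suffers a fatal failure."""
    p_all_survive = 1.0
    for p_fail in subsystem_failure_probs:
        p_all_survive *= (1.0 - p_fail)
    return 1.0 - p_all_survive

# Four critical subsystems, each with a 1-in-500 fatal failure chance.
p = mission_loss_probability([1 / 500] * 4)
print(f"mission loss ≈ 1 in {1 / p:.0f}")  # → mission loss ≈ 1 in 125
```

The point of the probabilistic approach is visible even in the toy: adding redundancy to the riskiest subsystem moves the overall number more than hardening an already-reliable one, which a flat "tolerate any two faults" rule cannot express.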
But the concept of coercion isn’t very central to my presumption. At a basic level, I embrace the usual economists’ market failure analysis, preferring interventions that fix large market failures, relative to obvious to-be-expected government failures.
But at a meta level, I care more about having good feedback/learning/innovation processes. The main reason that I tend to be wary of government intervention is that it more often creates processes with low levels of adaptation and innovation regarding technology and individual preferences. Yes, in principle dissatisfied voters can elect politicians who promise particular reforms. But voters have quite limited spotlights of attention and must navigate long chains of accountability to detect and induce real lasting gains.
Yes, low-government mechanisms often also have big problems with adaptation and innovation, especially when customers mainly care about signaling things like loyalty, conformity, wealth, etc. Even so, the track record I see, at least for now, is that these failures have been less severe than comparable government failures. In this case, the devil we know more does in fact tend to be better than the devil we know less.
So when I try to design better social institutions, and to support the proposals of others, I’m less focused than many on assuring zero government intervention, or on minimizing “coercion” however conceived, and more concerned to ensure healthy competition overall.
Here is the full post.
Using the University of British Columbia as a case study, we investigated whether the faculty at our institution who flew the most were also the most successful. We found that beyond a small threshold there was no relationship between scholarly output and how much an individual academic flies…
We certainly did find evidence that researchers fly more than is likely necessary. In the portion of our sample composed of only full-time faculty, we categorized 10% of trips as “easily avoidable”. These were trips like going to your destination and flying back in the same day, or flying a short-distance trip that could have been replaced by ground travel. Interestingly, green academics (those studying subjects like climate change or sustainability) not only had the same level of emissions from air travel as their peers, but they were indistinguishable in the category of “easily avoidable” trips as well.
But success isn’t just measured by scholarly output, and so we also checked for relationships between how much academics flew and their annual salaries (which are publicly available). We did find a significant relationship: people who fly more get paid more. Causation, though, could lie in the other direction. Prestigious scholars with more grant money may have extra funds with which to book air travel, for instance.
Hal of course was in top form, here is the audio and transcript. Excerpt:
COWEN: Why doesn’t business use more prediction markets? They would seem to make sense, right? Bet on ideas. Aggregate information. We’ve all read Hayek.
VARIAN: Right. And we had a prediction market. I’ll tell you the problem with it. The problem is, the things that we really wanted to get a probability assessment on were things that were so sensitive that we thought we would violate the SEC rules on insider knowledge because, if a small group of people knows about some acquisition or something like that, there is a secret among this small group.
You might like to have a probability assessment of whether that would go through. But then, anybody who looks at the auction is now an insider. So there’s a problem in you have to find things that (a) are of interest to the company but (b) do not reveal financially critical information. That’s not so easy to do.
COWEN: But there are plenty of times when insider trading is either illegal or not enforced. Plenty of countries where it’s been legal, and there we don’t see many prediction markets in companies, if any. So it seems like there ought to be some more general explanation, or no?
VARIAN: Well, I’m just referring to our particular case. There was another example at the same time: Ford was running a market, and Ford would have futures markets on the price of gasoline, which was very relevant to them. It was an external price and so on. And it extended beyond the usual futures market.
That’s the other thing. You’re not going to get anywhere if you’re just duplicating a market that already exists. You have to add something to it to make it attractive to insiders.
So we ran a number of cases internally. We found some interesting behavior. There’s an article by Bo Cowgill on our experience with this auction. But ultimately, we ran into this problem that I described. The most valuable predictions would be the most sensitive predictions, and you didn’t want to do that in public.
COWEN: But then you must think we’re not doing enough theory today. Or do you think it’s simply exhausted for a while?
VARIAN: Well, one area of theory that I’ve found very exciting is algorithmic mechanism design. With algorithmic mechanism design, it’s a combination of computer science and economics.
The idea is, you take the economic model, and you bring in computational costs, or show me an algorithm that actually solves that maximization problem. Then on the other side, the computer side, you build incentives into the algorithms. So if multiple people are using, let’s say, some communications protocol, you want them all to have the right incentives to have the efficient use of that protocol.
So that’s a case where it really has very strong real-world applications to doing this — everything from telecommunications to AdWords auctions.
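Varian’s AdWords example can be made concrete. Below is a minimal sketch of a generalized second-price (GSP) slot auction, the mechanism family behind search-ad pricing; the advertiser names and bids are invented for illustration, and real ad auctions also weight bids by predicted click quality:

```python
# Minimal sketch of a generalized second-price (GSP) ad auction.
# Bids are hypothetical; real systems also factor in ad quality scores.

def gsp_auction(bids, num_slots):
    """Assign ad slots to the highest bidders; each winner pays the
    next-highest bid per click, which blunts the incentive to shade bids.

    bids: dict mapping advertiser -> bid per click
    Returns a list of (advertiser, price_paid_per_click), top slot first.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(num_slots, len(ranked))):
        winner, _ = ranked[i]
        # Price is the bid just below the winner's (0 if no lower bidder).
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        results.append((winner, price))
    return results

allocation = gsp_auction({"alice": 4.0, "bob": 2.5, "carol": 1.0}, num_slots=2)
print(allocation)  # → [('alice', 2.5), ('bob', 1.0)]
```

The "build incentives into the algorithm" point is visible here: because your payment is set by the bid below yours, not your own, overstating or understating your value mostly just risks losing a slot you wanted or winning one you didn't.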
VARIAN: Yeah. I would like to separate the blockchain from just cryptographic protocols in general. There’s a huge demand for various kinds of cryptography.
Blockchain seems to be, by its nature, relatively inefficient. As an economist, I don’t like this reliance on proof of work. I don’t like the fact that there’s one version of the blockchain that has to keep being updated. I don’t like the fact that it’s so slow. There are lots of things that you could fix, and I expect to see them fixed in the future, but I would say, crypto in general — big deal. Blockchain — not so much.
COWEN: Now, users seem to like them both, but if I just look at the critics, why does it seem to me that Facebook is more hated than Google?
VARIAN: Well, you know, I actually don’t use Facebook. I don’t have any moral objection to it. I just don’t have the time to do it. [laughs] There are other things of this sort that can end up soaking up a substantial amount of time.
I think that one of the reasons — and this is, of course, quite speculative — I think that one of the reasons people are most worried about Facebook is they don’t really understand the limits of what can be done at Facebook. Whereas at Google, I think we’re pretty clear that we’re showing you ads. We’re showing you ads that are targeted to one thing or another, but that’s how the information’s used.
So, you’ve got this specific application in our case. In Facebook’s case, it’s more amorphous, I think.
There is much, much more at the link.
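Varian’s objection to proof of work is easier to see with a toy version of the mechanism. The sketch below, with an illustrative difficulty far below Bitcoin’s real target, shows why finding a valid nonce burns many hashes while verifying one is cheap:

```python
import hashlib

# Toy proof-of-work sketch: find a nonce such that SHA-256(block_data + nonce)
# starts with `difficulty` zero hex digits. The difficulty here is tiny and
# illustrative; Bitcoin's real target is vastly harder, which is exactly the
# energy cost an economist objects to.

def proof_of_work(block_data: str, difficulty: int) -> int:
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce  # this nonce is the "proof" that work was done
        nonce += 1

nonce = proof_of_work("block 42", difficulty=4)
# The asymmetry: finding the nonce took many hash attempts,
# but verifying it takes exactly one.
assert hashlib.sha256(f"block 42{nonce}".encode()).hexdigest().startswith("0000")
```

The inefficiency is by design: the wasted hashing is what makes rewriting history expensive, which is why alternatives like proof of stake try to buy the same security without the computation.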
The author is Charles Fishman, and the subtitle is The Impossible Mission That Flew Us to the Moon. Here is one excerpt:
It [NASA’s Mission Control] was the first real-time computing facility IBM had ever installed.
…the Apollo flight computer was the first anywhere to have responsibility for human lives.
That computer had 73 kilobytes of memory and 0.000002 percent of the computing capacity of an iPhone. And don’t forget this:
At least while you were headed outbound, you’d have plenty of fuel to correct things. Coming home from the Moon is a lot less forgiving. The heat of reentry, the splashdown targeting into the ocean, and the g-forces piling up on the spaceship and the astronauts inside combine to create a very thin slice of air you need to slide your spaceship into. The command module had just 1 degree of latitude on reentry. Too shallow an angle, and your space capsule skips off the top of the atmosphere like a flat stone — out into space and a wide orbit around the Earth, from which there was no rescue. Too steep a cut into the atmosphere, and the speed, heat, and g-forces would combine to incinerate your space capsule. And unlike on the way out, on the way back there are no go-arounds.
Definitely recommended, gripping from start to finish. Overall the best history of how the space revolution and the computer revolution were interconnected.
To provide storage space for the huge coils of wire, three great tanks were carved into the heart of the ship. The drums, sheaves, and dynamometers of the laying mechanism occupied a large part of the stern decking, and one funnel with its associated boilers had been removed to give additional storage space. When the ship sailed from the Medway on June 24, 1865, she carried seven thousand tons of cable, eight thousand tons of coal, and provisions for five hundred men. Since this was before the days of refrigeration, she also became a seagoing farm. Her passenger list included one cow, a dozen oxen, twenty pigs, one hundred twenty sheep, and a whole poultry-yard of fowl.
That is 1865 we are talking about here, remarkably early (in my view) for laying a cable across the bottom of the entire Atlantic.
The passage is from Arthur C. Clarke’s excellent How the World Was One: Beyond the Global Village.
…if you give away a genetic profile for yourself. Elizabeth Joh (NYT) writes:
You may decide that the police should use your DNA profile without qualification and may even post your information online with that purpose in mind. But your DNA is also shared in part with your relatives. When you consent to genetic sleuthing, you are also exposing your siblings, parents, cousins, relatives you’ve never met and even future generations of your family.
Unless you are going to gain something very specific, I generally recommend that people should not give away their genetic information.