
“If economists are so smart, why aren’t they rich?”

Peter Coy (NYT) considers a few hypotheses.  My take here is pretty simple.  Here are three of the main ways to beat market returns:

1. Build a new product and sell it successfully.

2. Assemble and maintain an especially talented team of quants.  (It is a separate but still relevant question at what scale you can do this and thus how rich you can become.)

3. “See” something about the market, at least for a limited period of time, that other people do not and invest accordingly.  That might be falling interest rates, the rise of consumer tech, or the persistence of low inflation (all until recently!).  Note that #3 requires you to have some money in the first place, and for your run to be long enough that you truly become rich.

Putting aside generic demographic factors, there is no particular reason to expect #1 or #2 to be much correlated with expertise in economics.

You might think that #3 is somewhat correlated with expertise in economics, but I don’t think it is very much.  You can pile up a bunch of ancillary reasons why economists might not be practically oriented enough to succeed at #3.  But even putting all that aside, economic theories of “regime change” just aren’t very good!  (It is comparative statics that we excel at, but that knowledge can be replicated and sold cheaply to the rest of the investment community, if it turns out to be valuable.)  So knowing economics won’t correlate much with success at strategy #3.  And some of those non-economists who succeed at #3 are just lucky anyway.

And that is why, dear reader, most economists are not very rich.  You are correct in downgrading their intelligence for these reasons, though there are still some regards in which they are quite smart, such as skill at hypothesis testing, or the ability to ask very good and penetrating questions about economic issues.

Mexicans die from the cold

We examine the impact of temperature on mortality in Mexico using daily data over the period 1998–2017 and find that 3.8 percent of deaths in Mexico are caused by suboptimal temperature (26,000 every year). However, 92 percent of weather-related deaths are induced by cold (<12 degrees C) or mildly cold (12–20 degrees C) days and only 2 percent by outstandingly hot days (>32 degrees C). Furthermore, temperatures are twice as likely to kill people in the bottom half of the income distribution. Finally, we show causal evidence that the Seguro Popular, a universal health care policy, has saved at least 1,600 lives per year from cold weather since 2004.

That is from a new paper in American Economic Journal: Economic Policy, authored by François Cohen and Antoine Dechezleprêtre.  Here are ungated copies.

What is your most underappreciated paper?

If you are a scholar working primarily in the social sciences and/or humanities with at least 4000 Google Scholar citations, we hereby invite you to identify one or two publications with publication date 2012 or prior, and for which the count is lower than your present h-index, that you consider underappreciated. It is OK that the publication is coauthored.

…We encourage you to remark briefly on why you select the publication, and to provide a link to it. However, your entire contribution, including the referenced item(s) should be no more than 200 words.

That is from Econ Journal Watch; instructions for participating are at the link.

Friston’s theory of everything

Bayesian Brain theory flips this idea around again so that cognition is a cybernetic or autopoietic loop. The brain instead attempts to predict its inputs. The output kind of comes first. The brain anticipates the likely states of its environment to allow it to react with fast, unthinking, habit. The shortcut basal ganglia level of processing. It is only when there is a significant prediction error—some kind of surprise encountered—that the brain has to stop and attend, and spend time forming a more considered response. So output leads the way. The brain maps the world not as it is, but as it is about to unfold. And more importantly, how it is going to unfold in terms of the actions and intentions we are just about to impose on it. Cognition is embodied or enactive…

Friston is largely a modest person, but he is not afraid to bang the table a little more these days. At the 2021 Brain Connectivity Workshop, Friston asserted he has done nothing less than found a fourth branch of physics. You have Newtonian mechanics, quantum mechanics, and statistical mechanics (that is, thermodynamics), and now you can have Bayesian mechanics—the physics of systems which can exert a predictive control over their worlds. We can debate the truth of this claim. However, I applaud the ambition. Neuroscience establishing its own deep mathematical foundation at last. This is why I pitch the Bayesian Brain as the big thing of the past 20 years.

That is John McCrone summarizing Friston in a short piece.  And if you wish to read further, here is one famous paper by Friston.  I do not have an opinion of my own here, but am always happy to pass along (relatively) new ideas.  And here is Friston (with co-authors) applying his framework to autism.

Via Michelle Dawson.

Emergent Ventures winners, 19th cohort

Avi Schiffman, Harvard University, a second award to Avi, for his Ukraine Take Shelter project.

Carol Vieria de Magelhaes, Brazil and Northwestern University, to support a visiting research internship at Harvard Medical School.

BioDojo House, “A 3 month long co-living community in the Boston/Cambridge area from June-Aug, hosting 6-10 next generation builders & young emerging scientists between 18-25 years old.”

Serene Han, a free speech project, to expand Tor/Snowflake for Russian and other access to the uncensored internet.

Hector Alberto Diaz Gomez, Peru, Amazonas, general career development and travel, and for research into multilingual search engines.

Louise Perry and Fiona Mackenzie, London area, The Other Half, “a feminist think tank with a post-liberal agenda.”

Bridget Pegg, St. Louis and Mizzou, for general career development, and intellectual and policy outreach for Missouri and the broader Midwest.

Marius Hobbhahn, Tübingen, AI safety and for writings on many other topics as well.

Zeel Patel, Harvard and Broad Institute of MIT, applying machine learning to health care.

Dwarkesh Patel, Austin, podcasting and general career support.

Tim Farrelly, Dublin, working on AI and vision issues and for general career development and conference travel.

Yang Zheng, North Hollywood, a project to crowdsource AI problems.

Ben Smith, University of Oregon, from New Zealand.  For his project on “multi-objective reinforcement learning with an exponential-log function.”

Paulina M Paiz, San Francisco/Toronto, travel grant to attend scientific conferences, and to continue with her work using DeepChem.

Congratulations!

Why are skyscrapers so short?

Brian Potter has a delightful primer on the physical, economic, and regulatory barriers to building height, beginning with the Great Pyramid of Giza and running to today. He concludes that the limit today isn’t technological–we could build much higher–but regulatory:

…we can estimate the magnitude of building height restrictions by comparing the cost of rent to the marginal cost of adding an additional floor. When Glaeser et al. 2005 did this for Manhattan, they found that the cost of rent was approximately twice the marginal cost of an additional floor, concluding, “the best explanation for why [developers] do not take advantage of this opportunity is the reason they tell us themselves: New York’s maze of building regulations effectively cap their building heights.” Cheshire et al. 2007 found similar magnitudes of rent-to-cost ratios in a variety of major European cities. When Glaeser et al. tried to estimate the size of building height externalities in New York, they concluded it was nowhere near the magnitude of the rent/construction cost difference, suggesting current height limits are far stricter than necessary.

These building height restrictions make us all poorer – not only do they cause a deadweight loss by artificially restricting the supply of available building space where it’s needed the most, but they also screen off the potential agglomeration benefits that accrue from increased density. This makes workers and businesses less productive and innovative than they could be, which not only hurts them, but everyone else who would benefit from cheaper and better goods and services.

The upshot is that there’s a lot of low-hanging fruit in building taller buildings. We don’t need to invent any new technology for pushing the boundaries of what’s possible to build, we just need to stop getting in our own way.
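The Glaeser-style test quoted above can be written out as a toy calculation: capitalize rent into an asset price and compare it with the marginal cost of one more square foot of floor space. All the numbers below are illustrative assumptions, not figures from the paper.

```python
def rent_to_cost_ratio(annual_rent_per_sqft, cap_rate, marginal_cost_per_sqft):
    """Free-market benchmark: developers should keep adding floors until
    the capitalized value of a square foot (rent / cap rate) falls to
    its marginal construction cost, i.e. until this ratio is ~1.
    A ratio near 2, as Glaeser et al. found for Manhattan, suggests
    binding height restrictions (a 'regulatory tax')."""
    price_per_sqft = annual_rent_per_sqft / cap_rate
    return price_per_sqft / marginal_cost_per_sqft

# Hypothetical numbers: $60/sqft/yr rent, 6% cap rate, $500/sqft marginal cost
print(round(rent_to_cost_ratio(60, 0.06, 500), 2))  # 2.0
```

In an unconstrained market that gap would be competed away; its persistence is the evidence that regulation, not engineering, is the binding constraint.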

I concluded the same thing when I looked at building height in Mumbai, India. This video also contains a very nice explanation of the Floor Space Index, also known as the Floor Area Ratio.
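For readers new to the term, the Floor Space Index / Floor Area Ratio is simply built floor area divided by lot area, so a cap on it limits total buildable space however the floors are stacked. A minimal sketch, with made-up numbers:

```python
def floor_area_ratio(total_floor_area, lot_area):
    """FAR (a.k.a. FSI): total floor area summed across all stories,
    divided by the area of the lot. A regulatory cap on FAR limits
    total buildable space regardless of how it is arranged vertically."""
    return total_floor_area / lot_area

# A 10-story tower whose footprint covers half of a 1,000 m^2 lot:
print(floor_area_ratio(10 * 500, 1000))  # FAR = 5.0
```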

Infrared Mind Control

Optogenetics, tested in rodents, can control electrical activity in a few carefully selected neurons, and may hold new insights into our disorders. The science of mind control is advancing at a furious rate. In 2011, the NYTimes reported that scientists had engineered neurons to be sensitive to light; then, using fiber-optic cables and brain implants, they were able to quickly turn mouse neurons on and off, instantly changing the behavior of the mice (paper here). The setup, which required genetic engineering, brain implants, and cables, worked incredibly well but was a rather expensive method of mind control.

In 2021, scientists eliminated the fiber optic cable (see also the NYTimes coverage):

Nature: Advanced technologies for controlled delivery of light to targeted locations in biological tissues are essential to neuroscience research that applies optogenetics in animal models. Fully implantable, miniaturized devices with wireless control and power-harvesting strategies offer an appealing set of attributes in this context, particularly for studies that are incompatible with conventional fiber-optic approaches or battery-powered head stages. Limited programmable control and narrow options in illumination profiles constrain the use of existing devices. The results reported here overcome these drawbacks via two platforms…Neuroscience applications demonstrate that induction of interbrain neuronal synchrony in the medial prefrontal cortex shapes social interaction within groups of mice, highlighting the power of real-time subject-specific programmability of the wireless optogenetic platforms introduced here.

Now, if I am reading this 2022 paper and announcement correctly, scientists have eliminated the need for any implant or cable but can instead program the brain using infrared:

Now, scientists at the Wu Tsai Neurosciences Institute at Stanford University have developed the first non-invasive technique for controlling targeted brain circuits in behaving animals from a distance. The tool has the potential to solve one of the biggest unmet needs in neuroscience: a way to flexibly test the functions of particular brain cells and circuits deep in the brain during normal behavior — such as mice freely socializing with one another.

The research was published March 21, 2022 in Nature Biomedical Engineering by Guosong Hong and colleagues at Stanford and Singapore’s Nanyang Technological University. Hong is a Wu Tsai Neurosciences Institute Faculty Scholar and assistant professor of materials science and engineering in the Stanford School of Engineering who uses his background in chemistry and materials science to devise biocompatible tools and materials to advance the study of the brain.

Moreover, the scientists think that the genetic modifications can be done with a simple injection:

Hong and colleagues are also developing nanoscopic beads that can convert focused beams of ultrasound into light, and which can be injected directly into the bloodstream, making it possible to optogenetically target cells anywhere in the brain and to change this targeting at will within a single experiment.

If the tools for mind control can be injected, then it doesn’t seem fanciful to think that they can be ingested. Mind control in a bottle. The fear is of outside mind control, but the reality will be self mind-control. Humans have been altering their minds for thousands of years, but neuronal-level adjustment at the flick of a switch is something new.

Hat tip: Kevin Lewis.

When Can/Should We Pull the Plug?

At Less Wrong a plea that “It’s time for EA leadership to pull the short-timelines fire alarm.”

Based on the past week’s worth of papers, it seems very possible (>30%) that we are now in the crunch-time section of a short-timelines world, and that we have 3-7 years until Moore’s law and organizational prioritization put these systems at extremely dangerous levels of capability.[1]

The papers I’m thinking about:

…For those who haven’t grappled with what actual advanced AI would mean, especially if many different organizations can achieve it:

  • No one knows how to build an AI system that accomplishes goals, that also is fine with you turning it off. It’s an unsolved research problem. Researchers have been trying for decades, but none of them think they’ve succeeded yet.
  • Unfortunately, for most conceivable goals you could give an AI system, the best way to achieve that goal (taken literally, which is the only thing computers know how to do) is to make sure it can’t be turned off. Otherwise, it might be turned off, and then (its version of) the goal is much less likely to happen.
  • If the AI has any way of accessing the internet, it will copy itself to as many places as it can, and then continue doing whatever it thinks it’s supposed to be doing. At this point, it becomes quite likely that we cannot limit its impact, which is likely to involve much more mayhem, possibly including making itself smarter and making sure that humans aren’t capable of creating other AIs that could turn it off. There’s no off button for the internet.
  • Most AI researchers do not believe in ~AGI, and thus have not considered the technical details of reward-specification for human-level AI models. Thus, it is as of today very likely that someone, somewhere, will do this anyway. Getting every AI expert in the world, and those they work with, to think through this is the single most important thing we can do.
  • It is functionally impossible to build a complex system without ever getting to iterate (which we can’t do without an off-switch), and then get lucky and it just works. Every human invention ever has required trial and error to perfect (e.g. planes, computer software). If we have no off-switch, and the system just keeps getting smarter, and we made anything other than the perfect reward function (which, again, no one knows how to do), the global consequences are irreversible.
  • Do not make it easier for more people to build such systems. Do not build them yourself. If you think you know why this argument is wrong, please please please post it here or elsewhere. Many people have spent their lives trying to find the gap in this logic; if you raise a point that hasn’t previously been refuted, I will personally pay you $1,000.

There are several interesting things about this argument. First, in response to pushback, the author retracted the argument.

This post was rash and ill-conceived, and did not have clearly defined goals nor met the vaguely-defined ones. I apologize to everyone on here; you should probably update accordingly about my opinions in the future. In retrospect, I was trying to express an emotion of exasperation related to the recent news I later mention, which I do think has decreased timelines broadly across the ML world.

LessWrong is thus one of the few places in the world you can be shamed for not being Bayesian enough!

I am more interested, however, in the general question: when will we know to pull the plug? And will that be too late? A pandemic is much easier to deal with early, before it “goes viral.” But it’s very difficult to convince people that strong actions are required early. Why lock down a city for fear of a virus when more people are dying daily in car accidents? Our record on acting early isn’t great. Moreover, AI risk also has a strong chance of going viral. Everything seems under control and then there’s a “lab leak” to the internet and foom! Maybe foom doesn’t happen but maybe it does. So when should we pull the plug? What are the signals to watch?

What should we expect from the most accurate sources?

The best estimates of a maximally accurate source would be very frequently updated and follow a random walk. And authoritative sources like WHO are often said to be our most accurate sources. Even so, such sources do not tend to act this way. They instead update their estimates rarely, and are especially reluctant to issue estimates that seem to “backtrack” on previous ones. Why?
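The benchmark here, that a maximally accurate source updates constantly and its estimates follow a random walk, is the martingale property of Bayesian forecasts: the current estimate already incorporates everything known, so the next revision is unpredictable in advance. A minimal sketch of such a forecaster (my illustration, not Hanson's):

```python
import random

def bayesian_estimates(true_p, n_obs, seed=0):
    """Posterior mean of a Beta(1, 1) observer watching coin flips with
    heads-probability true_p. The estimate changes after every single
    observation, and each change is unpredictable in advance -- the
    'random walk' signature of an honest, maximally accurate source."""
    rng = random.Random(seed)
    heads, tails = 1, 1  # uniform prior
    estimates = []
    for _ in range(n_obs):
        if rng.random() < true_p:
            heads += 1
        else:
            tails += 1
        estimates.append(heads / (heads + tails))
    return estimates

est = bayesian_estimates(0.3, 5000)
print(abs(est[-1] - 0.3) < 0.05)  # True: the estimate homes in on the truth
print(all(a != b for a, b in zip(est, est[1:])))  # True: it updates every period
```

An authority that revises rarely and never backtracks is, by this standard, leaving accuracy on the table, which is Hanson's point about what else such sources must be optimizing for.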

First, authoritative sources serve as a coordination point for the behavior of others, and it is easier to coordinate when estimates change less often. Second, authoritative sources need to signal that they have power; they influence others far more than others influence them. Both of these pressures push them toward making infrequent changes. Ideally only one change, from “we don’t know”, to “here is the answer”. But if so, why do they feel pressures to issue estimates more often than this?

…authoritative sources prefer a strong consensus on what are the big sources of info that force them to update. This pushes for making very simple, stable, and clear distinctions between “scientific” info sources, on which one must update, and “unscientific” sources, where it is considered inappropriate to update. Those latter sources must be declared not just less informative, but uninformative, and slandered in enough ways to make few tempted to rely on them.

Due to the third of these pressures, authoritative sources will work hard to prevent challengers competing on track record accuracy. Authorities will issue vague estimates that are hard to compare, prevent the collection of data that would support comparisons, and accuse challengers of crimes (e.g., moral positions) to make them seem ineligible for authority. And other kinds of powers, who prefer a single authority source they can defer to in order to avoid responsibility for their decisions, will help to suppress such competitors.

Here is more from Robin Hanson.

Far UVC Sanitization Kills COVID

A new type of ultraviolet light that is safe for people took less than five minutes to reduce the level of indoor airborne microbes by more than 98%, a joint study by scientists at Columbia University Vagelos College of Physicians and Surgeons and in the U.K. has found.

…Far-UVC light has a shorter wavelength than conventional germicidal UVC, so it can’t penetrate into living human skin cells or eye cells. But it is equally efficient at killing bacteria and viruses, which are much smaller than human cells.

In the past decade, many studies around the world have shown that far-UVC is both efficient at destroying airborne bacteria and viruses without causing damage to living tissue. But until now these studies had only been conducted in small experimental chambers, not in full-sized rooms mimicking real-world conditions.

…The efficacy of different approaches to reducing indoor virus levels is usually measured in terms of equivalent air changes per hour. In this study, far-UVC lamps produced the equivalent of 184 equivalent air exchanges per hour. This surpasses any other approach to disinfecting occupied indoor spaces, where five to 20 equivalent air changes per hour is the best that can be achieved practically.

“Our trials produced spectacular results, far exceeding what is possible with ventilation alone,” says Kenneth Wood, PhD, lecturer in the School of Physics and Astronomy at the University of St. Andrews and senior author of the study. “In terms of preventing airborne disease transmission, far-UVC lights could make indoor places as safe as being outside on the golf course on a breezy day at St. Andrews.”
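For intuition about the “184 equivalent air changes per hour” figure: in a simple well-mixed-room model (my simplification, not the study’s own methodology), airborne pathogen load decays exponentially at the eACH rate, so far-UVC clears 99% of it in minutes rather than the better part of an hour.

```python
import math

def minutes_to_reduction(each, fraction_remaining):
    """Minutes for airborne pathogen concentration in a well-mixed room
    to fall to fraction_remaining, assuming exponential removal
    C(t) = C0 * exp(-eACH * t), with t in hours. A textbook
    simplification, not the study's own calculation."""
    return 60 * -math.log(fraction_remaining) / each

print(round(minutes_to_reduction(184, 0.01), 1))  # ~1.5 min with far-UVC
print(round(minutes_to_reduction(20, 0.01), 1))   # ~13.8 min with best-case ventilation
```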

Summary here. Study here.

Here and here and here are my previous posts on UV-C sanitization. We are moving slower than I would like, but the picture is of a UVC robot being used at Pittsburgh airport, so kudos to them.

The virtues of seaweed collecting

Nineteenth-century seaweed albums have a baked-in melancholy. Despite the best intentions, they do not flatter seaweed. The samples are brittle where the plant was pliant, opaque when once translucent, flaccid where previously ballooned. The displacement from sea to paper steals a measure of the plants’ integrity, and time leaches away the rest. In every respect, the wonders of seaweed have fled the book. And yet, these albums still speak — not of seaweed exactly, but of the collector’s care and devotion. There is a particular kind of eros that thrums between a receptive human and the natural world; the contours and depth of this eros is the true subject of a seaweed album.

Here is much more, interesting throughout, via Jodi Ettenberg.

The cost of applying

These applications are hefty. Before you even start applying for an NSF grant, you should probably read the 79 pages of instructions. The NIH helpfully provides a 10-part instructional video series. Rhodes Scholarships require a mind-boggling eight letters of recommendation. I tell college seniors to expect fellowship applications to be a six-month part-time job.

All this applying doesn’t just burden applicants. Professors run themselves ragged writing recommendations. The NSF relies on volunteers to complete 240,000 reviews every year. Entire university offices exist just to manage the paperwork that grants generate; universities bill this back to funders in the form of “indirect costs,” which at Harvard go as high as 70% of incoming grant funding. Grant agencies seem not to realize that by making everything about their grants burdensome, they allow universities to spend much of the grant money managing the grant itself!

Here is more from Adam Mastroianni.  Via Anecdotal.

New nuclear security grants

Here is the link with more information, via Stefan Schubert.

US Pandemic Policy: Failures, Successes, and Lessons

My talk at Bowling Green State University on US Pandemic Policy: Failures, Successes, and Lessons

This was not a black swan event. This was an entirely predicted and predictable event. We knew it was going to happen….And yet, we weren’t ready.

I am told that my talk made many people angry (not at me, natch).