Category: Science
Markets in everything
Creative powerhouse VML, genomic innovators The Organoid Company, and sustainable biotechnology firm Lab-Grown Leather have joined forces to develop the world’s first T-Rex leather made using the extinct creature’s DNA…
“This project is a remarkable example of how we can harness cutting-edge genome and protein engineering to create entirely new materials. By reconstructing and optimizing ancient protein sequences, we can design T. Rex leather, a biomaterial inspired by prehistoric biology, and clone it into a custom-engineered cell line,” said Thomas Mitchell, CEO of The Organoid Company…
While it was previously believed that dinosaur DNA wouldn’t survive for millions of years, recent discoveries have found collagen preserved in various dinosaur fossils, including an 80-million-year-old T. rex.
Last year, MIT researchers decoded how the dinosaur collagen survived for so long. Interestingly, they discovered a specific atomic mechanism that shields collagen from water’s damaging effects.
In this new work, the T-rex-based leather material creation method differs from plant-based or synthetic alternatives by focusing on growing biological structures in a lab. This bio-fabrication process directly cultivates leather-like tissue from cells.
The process of creating T-Rex leather uses fossilized dinosaur collagen as a template. Using this, the team will generate a complete collagen sequence for the T-Rex to cultivate new skin.
The collagen sequence will be translated into DNA and introduced into Lab-Grown Leather’s cells.
Here is the full story, via Mike Doherty. The actual product might be ready by the end of the year, at what price I do not know.
We need more elitism
Even though the elites themselves are highly imperfect. That is the theme of my latest FP column. Excerpt:
Very often when people complain about “the elites,” they are not looking in a sufficiently elitist direction.
A prime example: It is true that, during the pandemic, the CDC and other parts of the government gave us the impression that the vaccines would stop or significantly slow transmission of the coronavirus. The vaccines may have limited transmission to some partial degree by decreasing viral load, but mostly this was a misrepresentation, perhaps motivated by a desire to get everyone to take the vaccines. Yet the vaccine scientists—the real elites here—made far more qualified claims in their research papers and expressed a more agnostic opinion. The real elites were not far from the truth.
You might worry, as I do, that so many scientists in the United States have affiliations with the Democratic Party. As an independent, this does induce me to take many of their policy prescriptions with a grain of salt. They might be too influenced by NPR and The New York Times, and more likely to favor government action than more decentralized or market-based solutions. Still, that does not give me reason to dismiss their more scientific conclusions. If I am going to differ from those, I need better science on my side, and I need to be able to show it.
A lot of people do not want to admit it, but when it comes to the Covid-19 pandemic the elites, by and large, actually got a lot right. Most importantly, the people who got vaccinated fared much better than the people who did not. We also got a vaccine in record time, against most expectations. Operation Warp Speed was a success. Long Covid did turn out to be a real thing. Low personal mobility levels meant that often “lockdowns” were not the real issue. Most of that economic activity was going away in any case. Most states should have ended the lockdowns sooner, but they mattered less than many critics have suggested. Furthermore, in contrast to what many were predicting, those restrictions on our liberty proved entirely temporary.
Recommended.
Subterranean sentences to ponder
But the fact that it’s commonplace is precisely why Earth’s subsurface biosphere is so compelling. Mud is everywhere, which means it is important. If you add up the total amount of mud underneath all the world’s oceans, you come up with a volume equivalent to about the entire Atlantic Ocean. And, per cubic meter, there are 100 to 100,000 times more microbial cells in mud than there are in seawater. That means that there’s so much intraterrestrial life in the subsurface that it’s hard to even fathom it. The total amount of microbial cells in the marine sediment subsurface is estimated to be 2.9 × 10^29 cells. This is about 10,000 times more than the estimated number of stars in the universe. But that’s not the whole subsurface. You’d have to at least double this number to include the microbial cells living deep underneath the land. And some of these cells may have found pockets where the food is more abundant than the average location, so more cells can live there than our models predict. For these reasons, the actual number of microbial cells in the subsurface biosphere is certain to be much higher than our current estimates.
That is from the new and interesting Intraterrestrials: Discovering the Strangest Life on Earth, by Karen G. Lloyd.
Parallels between our current time and 17th century England
That is the topic of my recent essay for The Free Press. Excerpt:
Ideologically, the English 17th century was weird above all else.
Millenarianism blossomed, and the occult and witchcraft became stronger obsessions. This was an age of religious and economic upheaval; King James I even wrote a book partly about witches called Daemonologie. The greater spread of pamphlets and books meant that witch accusations circulated more widely and more rapidly, and so the 1604 Witchcraft Act applied harsher punishments to supposed witches.
People were more likely to fear imminent transformation, and new groups sprouted up with names such as “Fifth Monarchy Men,” devoted to the idea that a new reign of Christ would usher in the end of the world. Protestantism splintered, giving rise to Puritanism and numerous sects, many of them extreme.
Meanwhile, Roger Williams brought ideas of free speech and freedom of conscience to America, founding what later became the state of Rhode Island. The development of economics as a science with an understanding of markets (credit Nicholas Barbon and Dudley North) dates from that time, as do the first libertarians, namely the Levellers, a liberty-oriented group from the time of the English Civil War.
All of these developments were supported by the falling price of printing, giving rise to an extensive use of pamphlets and broadsheets to communicate and debate ideas, often in London coffeehouses. Johannes Gutenberg had built the printing press for Europe much earlier, in the middle of the 15th century—but 17th-century England was the time and place when a commercial middle class could start to afford buying printed works.
I explore the parallels with today at the link, recommended.
A Blueprint for FDA Reform
The new FDA report from Joe Lonsdale and team is impressive. It has a lot of new material, is rich in specifics and bold in vision. Here are just a few of the recommendations that caught my eye:
From the prosaic: GMP is not necessary if you are not manufacturing:
In the U.S., anyone running a clinical trial must manufacture their product under full Good Manufacturing Practices (GMP) regardless of stage. This adds enormous cost (often $10M+) and more importantly, as much as a year’s delay to early-stage research. Beyond the cost and time, these requirements are outright irrational: for example, the FDA often requires three months of stability testing for a drug patients will receive after two weeks. Why do we care if it’s stable after we’ve already administered it? Or take AAV manufacturing—the FDA requires both a potency assay and an infectivity assay, even though potency necessarily reflects infectivity.
Nor would this change be unprecedented: countries like Australia and China permit Phase 1 trials with non-GMP drugs, with no evidence of increased patient harm.
The FDA carved out a limited exemption to this requirement in 2008, but its hands are tied by statute from taking further steps. Congress must act to fully exempt Phase 1 trials from statutory GMP. GMP has its place in commercial-scale production. But patients with six months to live shouldn’t be denied access to a potentially lifesaving therapy because it wasn’t made in a facility that meets commercial packaging standards.
Design data flows for AIs:
With modern AI and digital infrastructure, trials should be designed for machine-readable outputs that flow directly to FDA systems, allowing regulators to review data as it accumulates without breaking blinding. No more waiting nine months for report writing or twelve months for post-trial review. The FDA should create standard data formats (akin to GAAP in finance) and waive documentation requirements for data it already ingests. In parallel, the agency should partner with a top AI company to train an LLM on historical submissions, triaging reviewer workload so human attention is focused only where the model flags concern. The goal is simple: get to “yes” or “no” within weeks, not years.
Publish all results:
Clinical trials with negative results are frequently left unpublished. This is a problem because it slows progress and wastes resources. When negative results aren’t published, companies duplicate failed efforts, investors misallocate capital, and scientists miss opportunities to refine hypotheses. Publishing all trial outcomes—positive or negative—creates a shared base of knowledge that makes drug development faster, cheaper, and more rational. Silence benefits no one except underperforming sponsors; transparency accelerates innovation.
The FDA already has the authority to do so under section 801 of the FDAAA, but it declined to adopt a more expansive rule when it created clinicaltrials.gov. Every trial on clinicaltrials.gov should have a publicly accessible publication associated with it, so that the public can benefit from the sacrifices patients make by participating in clinical trials.
To the visionary:
We need multiple competing approval frameworks within HHS and/or FDA. Agencies like the VA, Medicare, Medicaid, or the Indian Health Service should be empowered to greenlight therapies for their unique populations. Just as the DoD uses elite Special Operations teams to pioneer new capabilities, HHS should create high-agency “SWAT teams” that experiment with novel approval models, monitor outcomes in real time using consumer tech like wearables and remote diagnostics, and publish findings transparently. Let the best frameworks rise through internal competition—not by decree, but by results.
…Clinical trials like the RECOVERY trial and manufacturing efforts like Operation Warp Speed were what actually moved the needle during COVID. That’s what must be institutionalized. Similarly, we need to pay manufacturers to compete in rapidly scaling new facilities for drugs already in shortage today. This capacity can then be flexibly retooled during a crisis.
Right now, there’s zero incentive to rapidly build new drug or device manufacturing plants because FDA reviews move far too slowly. Yet, when crisis strikes, America must pivot instantly—scaling production to hundreds of millions of doses or thousands of devices within weeks, not months or years. To build this capability at home, the Administration and FDA should launch competitive programs that reward manufacturers for rapidly scaling flexible factories—similar to the competitive, market-driven strategies pioneered in defense by the DIU. Speed, flexibility, and scale should be the benchmarks for success, not bureaucratic checklists. The drugs selected for these competitive efforts shouldn’t be hypothetical: focus on medicines facing shortages right now. This ensures every dollar invested delivers immediate value, eliminating waste and strengthening our readiness for future crises.
To prepare for the next emergency, we need to practice now. That means running fast, focused clinical trials on today’s pressing questions—like the use of GLP-1s in non-obese patients—not just to generate insight, but to build the infrastructure and muscle memory for speed.
Read the whole thing.
Hat tip: Carl Close.
The Madmen and the AIs
In Collaborating with AI Agents: Field Experiments on Teamwork, Productivity, and Performance, Harang Ju and Sinan Aral (both at MIT) paired humans and AIs in a set of marketing tasks to generate some 11,138 ads for a large think tank. The basic story is that working with the AIs increased productivity substantially. Important, but not surprising. But here is where it gets wild:
[W]e manipulated the Big Five personality traits for each AI, independently setting them to high or low levels using P2 prompting (Jiang et al., 2023). This allows us to systematically investigate how AI personality traits influence collaborative work and whether there is heterogeneity in their effects based on the personality traits of the human collaborators, as measured through a pre-task survey.
In other words, they created AIs that were high or low on the Big Five OCEAN traits (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) and then paired the different AIs with humans who had also been rated on the Big Five.
The results were quite amusing. For example, a neurotic AI tended to make a lot more copy edits unless paired with an agreeable human.
AI Alex: What do you think of this edit I made to the copy? Do you think it is any good?
Agreeable Alex: It’s great!
AI Alex: Really? Do you want me to try something else?
Agreeable Alex: Nah, let’s go with it!
AI Alex: Ok. 🙂
Similarly, if a highly conscientious AI and a highly conscientious human were paired together, they exchanged a lot more messages.
It’s hard to generalize from one study to know exactly which AI-human teams will work best, but we all know some teams just work better (every team needs a booster and a skeptic, for example), and the fact that we can manipulate AI personalities to match them with humans, and even change the AI personalities over time, suggests that AIs can improve productivity in ways going beyond the ability of the AI to complete a task.
Hat tip: John Horton.
What Follows from Lab Leak?
Does it matter whether SARS-CoV-2 leaked from a lab in Wuhan or had natural zoonotic origins? I think on the margin it does matter.
First, and most importantly, the higher the probability that SARS-CoV-2 leaked from a lab, the higher the probability we should expect another pandemic.* Research at Wuhan was not especially unusual or high-tech. Modifying viruses such as coronaviruses (e.g., inserting spike proteins, adapting receptor-binding domains) is common practice in virology research, and gain-of-function experiments with viruses have been widely conducted. Thus, manufacturing a virus capable of killing ~20 million human beings or more is well within the capability of, say, ~500-1000 labs worldwide. The number of such labs is growing, and such research is becoming less costly and easier to conduct. Thus, lab leak means the risks are larger than we thought and increasing.
A higher probability of a pandemic raises the value of many ideas that I and others have discussed such as worldwide wastewater surveillance, developing vaccine libraries and keeping vaccine production lines warm so that we could be ready to go with a new vaccine within 100 days. I want to focus, however, on what new ideas are suggested by lab-leak. Among these are the following.
Given the risks, a “Biological IAEA” with similar authority as the International Atomic Energy Agency to conduct unannounced inspections at high-containment labs does not seem outlandish. (Indeed, the Bulletin of the Atomic Scientists is about the only group to have begun studying the issue of pandemic lab risk.) Under the Biological Weapons Convention such authority already exists, but it has never been used for inspections—mostly because of opposition by the United States—and because the meaning of biological weapon is unclear, as pretty much everything can be considered dual use. Notice, however, that nuclear weapons have killed ~200,000 people while accidental lab leak has probably killed tens of millions of people. (And COVID is not the only example of deadly lab leak.) Thus, we should consider revising the Biological Weapons Convention to something like a Biological Dangers Convention.
BSL3 and especially BSL4 safety procedures are very rigorous; thus the issue is not primarily that we need more regulation of these labs but rather making sure that high-risk research isn’t conducted under weaker conditions. Gain-of-function research on viruses with pandemic potential (e.g., those with potential aerosol transmissibility) should be considered high-risk and conducted only when it passes a review and is done under BSL3 or BSL4 conditions. Making this credible may not be that difficult because most scientists want to publish. Thus, journals should require documentation of biosafety practices as part of manuscript submission, and no journal should publish research that was done under inappropriate conditions. A coordinated approach among major journals (e.g., Nature, Science, Cell, The Lancet) and funders (e.g., NIH, Wellcome Trust) can make this credible.
I’m more regulation-averse than most, and tradeoffs exist, but COVID-19’s global economic cost—estimated in the tens of trillions—so vastly outweighs the comparatively minor cost of upgrading global BSL-2 labs and improving monitoring that there is clear room for making everyone safer without compromising research. Incredibly, five years after the crisis there has been no change in biosafety regulation. None. That seems crazy.
Many people convinced of lab leak instinctively gravitate toward blame and reparations, which is understandable but not necessarily productive. Blame provokes defensiveness, leading individuals and institutions to obscure evidence and reject accountability. Anesthesiologists and physicians have leaned towards a less-punitive, systems-oriented approach. Instead of assigning blame, they focus in Morbidity and Mortality Conferences on openly analyzing mistakes, sharing knowledge, and redesigning procedures to prevent future harm. This method encourages candid reporting and learning. At its best a systems approach transforms mistakes into opportunities for widespread improvement.
If we can move research up from BSL2 to BSL3 and BSL4 labs, we can also do relatively simple things to decrease the risks coming from those labs. For example, let’s not put BSL4 labs in major population centers or in the middle of hurricane-prone regions. We can also investigate which biosafety procedures are most effective and increase research into safer alternatives—such as surrogate or simulation systems—to reduce reliance on replication-competent pathogens.
The good news is that improving biosafety is highly tractable. The number of labs, researchers, and institutions involved is relatively small, making targeted reforms feasible. Both the United States and China were deeply involved in research at the Wuhan Institute of Virology, suggesting at least the possibility of cooperation—however remote it may seem right now.
Shared risk could be the basis for shared responsibility.
Bayesian addendum*: A higher probability of lab leak should also reduce the probability of zoonotic origin, but the latter is an already-known risk, and COVID doesn’t add much to our prior, while the former is new, so the net effect on estimated total risk is positive. In other words, the discovery of a relatively new source of risk increases our estimate of total risk.
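The addendum’s logic can be sketched with toy numbers (all of them assumed for illustration, none taken from the post): the zoonotic base rate rests on many prior spillover events, so evidence about COVID’s origin barely moves it, while the lab-leak rate was previously treated as negligible and now gets revised upward.

```python
# Toy annual probabilities, purely illustrative assumptions:
p_zoonotic = 0.020      # assumed zoonotic pandemic base rate (many prior events)
p_lab_before = 0.001    # assumed lab-leak pandemic rate before the evidence
p_lab_after = 0.010     # assumed lab-leak pandemic rate after weighting lab leak

def total_risk(p_z, p_l):
    # Probability of at least one pandemic from either (independent) source
    return 1 - (1 - p_z) * (1 - p_l)

before = total_risk(p_zoonotic, p_lab_before)
after = total_risk(p_zoonotic, p_lab_after)
# Even holding the zoonotic rate fixed, recognizing the new source of
# risk raises the estimate of total risk.
print(round(before, 4), round(after, 4))
```

Under these assumed numbers the total estimate rises from about 2.1% to about 3.0% per year; the direction of the change, not the specific values, is the point.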
Caleb Watney on risk and science funding
Right now, DOGE is treating efficiency as a simple cost-cutting exercise. But science isn’t a procurement process; it’s an investment portfolio. If a venture capital firm measured efficiency purely by how little money it spent, rather than by the returns it generated, it wouldn’t last long. We invest in scientific research because we want returns — in knowledge, in lifesaving drugs, in technological capability. Generating those returns sometimes requires spending money on things that don’t fit neatly into a single grant proposal.
While it’s true that indirect costs serve an important function, they can also create perverse incentives: When the government promises to cover expenses, expenses tend to go up. But instead of slashing funding indiscriminately, we should be thinking about how to get the most out of every dollar we invest in science.
That means streamlining research regulations. Universities are drowning in bureaucracy. Since 1990, there have been 270 new rules that complicate how we conduct research. Institutional Review Boards, intended to protect people from being unethically experimented on in studies, now regularly review low-risk social science surveys that pose no real ethical concerns. Researchers generate reams of paperwork in legally mandated disclosures of every foreign contract and collaboration, even for countries such as the Netherlands that present no geopolitical risk.
We must also rethink how we select scientific research to fund.
Caleb is co-CEO of the Institute for Progress, here is more from the NYT.
The importance of the chronometer
The chronometer, one of the greatest inventions of the modern era, allowed for the first time for the precise measurement of longitude at sea. We examine the impact of this innovation on navigation and urbanization. Our identification strategy leverages the fact that the navigational benefits provided by the chronometer varied across different sea regions depending on the prevailing local weather conditions. Utilizing high-resolution data on climate, ship routes, and urbanization, we argue that the chronometer significantly altered transoceanic sailing routes. This, in turn, had profound effects on the expansion of the British Empire and the global distribution of cities and populations outside Europe.
That is from a newly published paper by Martina Miotto and Luigi Pascali. Via the excellent Kevin Lewis.
What Did We Learn From Torturing Babies?
As late as the 1980s it was widely believed that babies do not feel pain. You might think that this was an absurd thing to believe given that babies cry and exhibit all the features of pain and pain avoidance. Yet, for much of the 19th and 20th centuries, the straightforward sensory evidence was dismissed as “pre-scientific” by the medical and scientific establishment. Babies were thought to be lower-evolved beings whose brains were not yet developed enough to feel pain, at least not in the way that older children and adults feel pain. Crying and pain avoidance were dismissed as simply reflexive. Indeed, babies were thought to be more like animals than reasoning beings, and Descartes had told us that an animal’s cries were of no more import than the grinding of gears in a mechanical automaton. There was very little evidence for this theory beyond some gestures toward myelin sheathing. But anyone who doubted the theory was told that there was “no evidence” that babies feel pain (a conflation of no evidence with evidence of no effect).
Most disturbingly, the theory that babies don’t feel pain wasn’t just an error of science or philosophy—it shaped medical practice. It was routine for babies undergoing medical procedures to be medically paralyzed but not anesthetized. In one now infamous 1985 case an open heart operation was performed on a baby without any anesthesia (n.b. the link is hard reading). Parents were shocked when they discovered that this was standard practice. Publicity from the case and a key review paper in 1987 led the American Academy of Pediatrics to declare it unethical to operate on newborns without anesthesia.
In short, we tortured babies under the theory that they were not conscious of pain. What can we learn from this? One lesson is humility about consciousness. Consciousness and the capacity to suffer can exist in forms once assumed to be insensate. When assessing the consciousness of a newborn, an animal, or an intelligent machine, we should weigh observable and circumstantial evidence and not just abstract theory. If we must err, let us err on the side of compassion.
Claims that X cannot feel or think because Y should be met with skepticism—especially when X is screaming and telling you different. Theory may convince you that animals or AIs are not conscious but do you want to torture more babies? Be humble.
We should be especially humble when the beings in question are very different from ourselves. If we can be wrong about animals, if we can be wrong about other people, if we can be wrong about our own babies then we can be very wrong about AIs. The burden of proof should not fall on the suffering being to prove its pain; rather, the onus is on us to justify why we would ever withhold compassion.
Hat tip: Jim Ward for discussion.
How well do humans understand dogs?
Dogs can’t talk, but their body language speaks volumes. Many dogs will bow when they want to play, for instance, or lick their lips and avert their gaze when nervous or afraid.
But people aren’t always good at interpreting such cues — or even noticing them, a new study suggests.
In the study, the researchers presented people with videos of a dog reacting to positive and negative stimuli, including a leash, a treat, a vacuum cleaner and a scolding. When asked to assess the dog’s emotions, viewers seemed to pay more attention to the situational cues than the dog’s actual behavior, even when the videos had been edited to be deliberately misleading. (In one video, for instance, a dog that appeared to be reacting to the sight of his leash had actually been shown a vacuum cleaner by his owner.)
Here is the full NYT piece by Emily Anthes. Here is the original research. How well do humans understand humans?
Was our universe born inside a black hole?
Without a doubt, since its launch, the James Webb Space Telescope (JWST) has revolutionized our view of the early universe, but its new findings could put astronomers in a spin. In fact, it could tell us something profound about the birth of the universe by possibly hinting that everything we see around us is sealed within a black hole.
The $10 billion telescope, which began observing the cosmos in the summer of 2022, has found that the vast majority of the early, deep-space galaxies it has so far observed are rotating in the same direction: around two-thirds of the galaxies spin clockwise, while the other third rotates counterclockwise.
In a random universe, scientists would expect to find 50% of galaxies rotating one way, while the other 50% rotate the other way. This new research suggests there is a preferred direction for galactic rotation…
“It is still not clear what causes this to happen, but there are two primary possible explanations,” team leader Lior Shamir, associate professor of computer science at the Carl R. Ice College of Engineering, said in a statement. “One explanation is that the universe was born rotating. That explanation agrees with theories such as black hole cosmology, which postulates that the entire universe is the interior of a black hole.
“But if the universe was indeed born rotating, it means that the existing theories about the cosmos are incomplete.”
…This has another implication: each and every black hole in our universe could be the doorway to another “baby universe.” These universes would be unobservable to us because they are also behind an event horizon, a one-way point of no return from which light cannot escape, meaning information can never travel from the interior of a black hole to an external observer.
Here is the full story. Solve for the Darwinian equilibrium! Of course Julian Gough has been pushing related ideas for some while now…
Dalton Conley on genes-environment interaction
The part of this research that really blows me away is the realization that our environment is, in part, made up of the genes of the people around us. Our friends’, our partners’, even our peers’ genes all influence us. Preliminary research that I was involved in suggests that your spouse’s genes influence your likelihood of depression almost a third as much as your own genes do. Meanwhile, research I helped conduct shows that the presence of a few genetically predisposed smokers in a high school appears to cause smoking rates to spike for an entire grade — even among those students who didn’t personally know those nicotine-prone classmates— spreading like a genetically sparked wildfire through the social network.
And:
We found that children who have genes that correlate to more success in school evoke more intellectual engagement from their parents than kids in the same family who don’t share these genes. This feedback loop starts as early as 18 months old, long before any formal assessment of academic ability. Babies with a PGI that is associated with greater educational attainment already receive more reading and playtime from parents than their siblings without that same genotype do. And that additional attention, in turn, helps those kids to realize the full potential of those genes, that is, to do well in school. In other words, parents don’t just parent their children — children parent their parents, subtly guided by their genes.
I found this bit startling, noting that context here is critical:
Looking across the whole genome, people in the United States tend to marry people with similar genetic profiles. Very similar: Spouses are on average the genetic equivalents of their first cousins once removed. Another research project I was involved with showed that for the education PGI, spouses look more like first cousins. For the height PGI, it’s more like half-siblings.
Dalton has a very ambitious vision here:
The new field is called sociogenomics, a fusion of behavioral science and genetics that I have been closely involved with for over a decade. Though the field is still in its infancy, its philosophical implications are staggering. It has the potential to rewrite a great deal of what we think we know about who we are and how we got that way. For all the talk of someday engineering our chromosomes and the science-fiction fantasy of designer babies flooding our preschools, this is the real paradigm shift, and it’s already underway.
I am not so sure about the postulated newness on the methodological front, but in any case this is interesting work. I just hope he doesn’t too much mean all the blah blah blah at the end about how it is really all up to us, etc.
My Conversation with Carl Zimmer
Here is the audio, video, and transcript. Here is part of the episode summary:
He joins Tyler to discuss why it took scientists so long to accept airborne disease transmission and more, including why 19th-century doctors thought hay fever was a neurosis, why it took so long for the WHO and CDC to acknowledge COVID-19 was airborne, whether ultraviolet lamps can save us from the next pandemic, how effective masking is, the best theory on the anthrax mailings, how the U.S. military stunted aerobiology, the chance of extraterrestrial life in our solar system, what Lee Cronin’s “assembly theory” could mean for defining life itself, the use of genetic information to inform decision-making, the strangeness of the Flynn effect, what Carl learned about politics from growing up as the son of a New Jersey congressman, and much more.
Here is an excerpt:
COWEN: Over time, how much will DNA information enter our daily lives? To give a strange example, imagine that, for a college application, you have to upload some of your DNA. Now to unimaginative people, that will sound impossible, but if you think about the equilibrium rolling itself out slowly — well, at first, students disclose their DNA, and over time, the DNA becomes used for job hiring, for marriage, in many other ways. Is this our future equilibrium, that genetic information will play this very large role, given how many qualities seem to be at least 40 percent to 60 percent inheritable, maybe more?
ZIMMER: The term that a scientist in this field would use would be heritable, not inheritable. Inheritability is a slippery thing to think about. I write a lot about that in my book, She Has Her Mother’s Laugh, which is about heredity in general. Heritability really is just saying, “Okay, in a certain situation, if I look at different people or different animals or different plants, how much of their variation can I connect with variation in their genome?” That’s it. Can you then use that variability to make predictions about what’s going to happen in the future? That is a totally different question in many —
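Zimmer's definition of heritability — the share of trait variation that can be connected with variation in the genome — can be made concrete with a toy simulation. This is a hedged sketch with invented numbers, not anything from the episode or Zimmer's book: it just models a trait as an independent genetic component plus an environmental component and computes the variance ratio.

```python
import random

random.seed(0)

# Toy model (illustrative numbers only): each person's trait value is
# a genetic component plus an independent environmental component.
n = 100_000
genetic = [random.gauss(0, 2) for _ in range(n)]      # Var(G) ~ 4
environment = [random.gauss(0, 1) for _ in range(n)]  # Var(E) ~ 1
phenotype = [g + e for g, e in zip(genetic, environment)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Broad-sense heritability: the fraction of trait variance traceable
# to genetic variance. With these made-up numbers it lands near 4/5.
h2 = variance(genetic) / variance(phenotype)
print(round(h2, 2))
```

Note that this is exactly the population-level statement Zimmer makes: the ratio describes variation across a group under given conditions, and by itself says nothing about how predictable any one individual is.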
COWEN: But it’s not totally different. Your whole family’s super smart. If I knew nothing about you, and I knew about the rest of your family, I’d be more inclined to let you into Yale, and that would’ve been a good decision. Again, only on average, but just basic statistics implies that.
ZIMMER: You’re very kind, but what do you mean by intelligent? I’d like to think I’m pretty good with words and that I can understand scientific concepts. I remember in college getting to a certain point with calculus and being like, “I’m done,” and then watching other people sail on.
COWEN: Look, you’re clearly very smart. The New York Times recognizes this. We all know statistics is valid. There aren’t any certainties. It sounds like you’re running away from the science. Just endorse the fact you came from a very smart family, and that means it’s quite a bit more likely that you’ll be very smart too. Eventually, the world will start using that information, would be the auxiliary hypothesis. I’m asking you, how much will it?
ZIMMER: The question that we started with was about actually uploading DNA. Then the question becomes, how much of that information about the future can you get out of DNA? I think that you just have to be incredibly cautious about jumping to conclusions about it because the genome is a wild and woolly place in there, and the genome exists in environments. Even if you see broad correlations on a population level, as a college admission person, I would certainly not feel confident just scanning someone’s DNA for information in that regard.
COWEN: Oh, that wouldn’t be all you would do, right? They do plenty of other things now. Over time, say for job hiring, we’ll have the AI evaluate your interview, the AI evaluate your DNA. It’ll be highly imperfect, but at some point, institutions will start doing it, if not in this country, somewhere else — China, Singapore, UAE, wherever. They’re not going to be so shy, right?
ZIMMER: I can certainly imagine people wanting to do that stuff regardless of the strength of the approach. Certainly, even in the early 1900s, we saw people more than willing to use ideas about inherited levels of intelligence to, for example, decide which people should be institutionalized, who should be allowed into the United States or not.
For example, Jews were considered largely to be developmentally disabled at one point, especially the Jews from Eastern Europe. We have seen that people are certainly more than eager to jump from the basic findings of DNA to all sorts of conclusions which often serve their own interests. I think we should be on guard that we not do that again.
And:
COWEN: If we take the entirety of science, you’ve written on many topics in a very useful way, science policy. Where do you think your views are furthest from the mainstream or the orthodoxy? Where do you have the weirdest take relative to other people you know and respect? I think we should just do plenty of human challenge trials. That would be an example of something you might say, but what would the answer be for you?
I very much enjoyed Carl’s latest book Air-Borne: The Hidden History of the Air We Breathe.
Do female experts face an authority gap? Evidence from economics
This paper reports results from a survey experiment comparing the effect of (the same) opinions expressed by visibly senior, female versus male experts. Members of the public were asked for their opinion on topical issues and shown the opinion of either a named male or a named female economist, all professors at leading US universities. There are three findings. First, experts can persuade members of the public – the opinions of individual expert economists affect the opinions expressed by the public. Second, the opinions expressed by visibly senior female economists are more persuasive than the same opinions expressed by male economists. Third, removing credentials (university and professor title) eliminates the gender difference in persuasiveness, suggesting that credentials act as a differential information signal about the credibility of female experts.
Here is the full paper by Hans H. Sievertsen and Sarah Smith, via the excellent Kevin Lewis.