Science

Today is Asteroid Day, the anniversary of the largest asteroid impact in recent history, the June 30, 1908 Siberian Tunguska asteroid strike. The Tunguska asteroid was only about 40 meters in size but the impact was 1000 times more powerful than the Hiroshima nuclear bomb.

Large asteroid strikes are low-probability, high-death events–so high-death that by some estimates the probability of dying from an asteroid strike is on the same order as dying in an airplane crash. To mark asteroid day, events around the world, including here at the observatory at George Mason University, will discuss asteroids and how we can protect our civilization.

Tyler and I are signatories to the 100X Declaration which reads in part:

There are a million asteroids in our solar system that have the potential to strike Earth and destroy a city, yet we have discovered less than 10,000….

Therefore, we, the undersigned, call for…A rapid hundred-fold acceleration of the discovery and tracking of Near-Earth Asteroids to 100,000 per year within the next ten years.

I am also a contributor to an Indiegogo campaign to develop a planetary defense system–yes, seriously! I don’t expect the campaign to succeed because, as our principles of economics textbook explains, too many people will try to free ride. But perhaps the campaign will generate some needed attention. In the meantime, check out this video on public goods and asteroid defense from our MRU course (as always the videos are free for anyone to use in the classroom.)

By the way, can you identify the easter egg to growing up in the 1980s?

Stephen Curry set a record in May of this year:

It took Reggie Miller 22 games to set an NBA playoff record of 58 three-pointers for the Indiana Pacers in the 2000 playoffs. Now, Stephen Curry has broken that mark in just 13 games.

He is now up in the 80s I believe.  Curry, by the way, is NBA MVP and his team is probably on the verge of winning the Finals.  The three-point strategy seems to be working: for Curry, for the Golden State Warriors, and also for last year’s champions, the San Antonio Spurs.

Yet the three-point shot has been in the NBA since 1979 (!), and for most of those years it was not a dominant weapon.

What took so long?  At first the shot was thought to be a cheesy gimmick.  Players had to master the longer shot, preferably from their earliest training.  Coaches had to figure out three-point strategies, which include rethinking the fast break and different methods of floor spacing and passing; players had to learn those techniques too.  The NBA had to change its rules to encourage more three-pointers (e.g., allowing zone defenses, discouraging isolation plays).  General managers had to realize that Rick Pitino, though perhaps a bad NBA coach, was not a total fool, and that the Phoenix Suns were not a fluke.  People had to ponder the expected value concept a little more carefully.  Line-ups had to be smaller.  And so on.  Most of all, coaches and general managers needed the vision to see how all these pieces could fit together — Arnold Kling’s patterns of sustainable trade and specialization.

In other words, this “technology” has been legal since 1979, yet only recently has it started to come into its own.  (Some teams still haven’t figured out how to use it properly.)  And what a simple technology it is: it involves only placing your feet on a different spot on the floor and then moving your arms and legs in a coordinated (one hopes) motion.  The incentives of money, fame, and sex to get this right have been high from the beginning, and there are plenty of different players and teams in the NBA, not to mention college or even high school ball, to figure it out.  There is plenty of objective data in basketball, most of all when it comes to scoring.

Dell Curry, Stephen’s father, was in his time also known as a three-point shooter in the NBA.  But he didn’t come close to his son’s later three-point performance.

So how long do ordinary scientific inventions need to serve up their fruits?  I am a big fan of Stephen Curry, but in fact his family tale is ultimately a sobering one.

Addendum: Tom Haberstroh fills in the history.

Hybrid reviewing systems


Gabriel Power emails me:

I have not seen anyone discuss this possibility of a hybrid open/classic referee process: Editors approve set of X reviewers (say 300). Then, every submitted paper can be reviewed by any approved reviewer. Editor then considers all reviews. If no review, then editor assigns or desk rejects. Issues: Incentives? Selection bias? Matching author quality with reviewer quality? Conflicts of interest? Pros and cons? I think it is intriguing. Sincerely yours — Gabriel Power

I would stress the goal is not to find the “best” reviewing system.  Rather we are looking to set different reviewing systems in competition with each other.

Laugh Now (while you can)


Here’s a video of robots falling over on the first day of Darpa’s 2015 robot challenge, a challenge set up after Japan’s nuclear disaster at Fukushima in order to encourage development of robots capable of navigating a disaster area.

Keep in mind that the first Darpa Grand Challenge for driverless vehicles was held in 2004, and not a single vehicle came close to finishing the course; most failed within a few hundred metres. Did I mention that was in 2004?

AI Darwin


Wired: For the first time ever a computer has managed to develop a new scientific theory using only its artificial intelligence, and with no help from human beings.

Computer scientists and biologists from Tufts University programmed the computer so that it was able to develop a theory independently when it was faced with a scientific problem. The problem they chose was one that has been puzzling biologists for 120 years. The genes of sliced-up flatworms are capable of regenerating in order to form new organisms — this is a long-documented phenomenon, but scientists have been mystified for years over exactly what happens to the cells to make this possible.

The first sentence exaggerates (this is neither the first such discovery nor did the AI work entirely without human help), but the discovery of how flatworms regenerate is real and, unlike (say) proofs of the four-color theorem, the theory is readily comprehensible to humans:

What the computer discovered was that the process requires three known molecules and two proteins that were previously unknown. This discovery, says Levin, “represents the most comprehensive model of planarian regeneration found to date”.

“One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend,” he adds. “All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data.”

Addendum: Original paper and another useful discussion from Popular Mechanics.

Until “effective altruism” figures out what drives innovation, those recommendations simply aren’t that reliable.

Addendum: John Sterling just wrote this in the MR comments section:

I think Steven Landsburg made the definitive “pro-Paulson gift” argument in his classic Slate piece defending Ebenezer Scrooge. Paulson could have pulled a “Larry Ellison” and built himself a $200 mm yacht. He decided to forgo (some) of his conspicuous consumption and instead let the Harvard Management Company steward some additional capital.

I’ve sometimes wondered whether the Harvard endowment is the ultimate way to be an “effective altruist” for an Austrian-leaning type. If you believe, like Baldy Harper did, “that savings invested in privately owned economic tools of production amount to … the greatest economic charity of all,” then the Harvard endowment makes a pretty interesting beneficiary. I can’t think of another institution in the world today that is more likely to hold on to its capital in perpetuity than the folks in Cambridge.

I am not saying he is right, just don’t be so quick to conclude he is wrong.  By the way, I do not in fact donate my own money to Harvard.

Stephen Hawking fears that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk and Bill Gates offer similar warnings. Many researchers in artificial intelligence are less concerned primarily because they think that the technology is not advancing as quickly as doom scenarios imagine, as Ramez Naam discussed. I have a different objection.

Why should we be worried about the end of the human race? Oh sure, there are some Terminator-like scenarios in which many future-people die in horrible ways and I’d feel good if we avoided those scenarios. The more likely scenario, however, is a glide path to extinction in which most people adopt a variety of bionic and germ-line modifications that over time evolve them into post-human cyborgs. A few holdouts to the old ways would remain, but birth rates would be low and the non-adapted would be regarded as quaint, as we regard the Amish today. Eventually the last humans would go extinct and 46andMe customers would kid each other over how much of their DNA was of the primitive kind while holo-commercials advertised products “so easy a homo sapiens could do it”.  I see nothing objectionable in this scenario.

Aside from greater plausibility, a glide path means that dealing with the Terminator scenario is easier. In the Terminator scenario, humans must continually be on guard. In the glide path scenario we only have to avoid the Terminator until we become them and then the problem is resolved with little fuss. No human race but no mass murder either.

More generally, what’s so great about the human race? I agree, there are lots of great things to point to such as the works of Shakespeare, Mozart, and Grothendieck. We should revere the greatness of the works, however, not the substrate on which the works were created. If what is great about humanity is the great things that we have done then the future may hold greater things yet. If we work to pass on our best values and aspirations to our technological progeny then we can be proud of future generations even if they differ from us in some ways. I delight to think of the marvels that future generations may produce. But I see no reason to hope that such marvels will be produced by beings indistinguishable from myself, indeed that would seem rather disappointing.

Thanks to computerized aiming, HEL MD can operate in wholly autonomous mode, which Boeing tested successfully in May 2014 – although the trials uncovered an unexpected challenge. The weapon’s laser beam is silent and invisible, and not all targets explode as they are destroyed, so an automated battle can be over before operators have noticed anything. ‘The engagements happen quickly, and unless you’re staring at a screen 24-7 you’ll never see them,’ Blount says. ‘So we’ve built sound in for whenever we fire the laser. We plan on taking advantage of lots of Star Trek and Star Wars sound bites.’

More generally, fibre-laser weapons may be on their way:

Despite their modest capabilities, Scharre claims that fibre-laser weapons could find a niche in US military defence in 5–10 years. “They may not be as grand and strategic as the Star Wars concept,” he says, “but they could save lives, protect US bases, ships and service members.”

The full article is here, via the excellent Kevin Lewis.

*From the Earth to the Moon*


I read this 1865 Jules Verne book lately and very much enjoyed it.  It’s a poke at scientific rationalists and project-happy obsessives, and humorous throughout.  It mocks those who wish to bet on ideas, compares the American and French versions of excess grandiosity, and asks in subtle ways what are the limits of progress.  It reminded me of John Gray far more than I had been expecting.  And it’s about an America with no NIMBY, where everyone wants the projects right in their backyard.  The space program in fact sets off a rivalry between Texas and Florida to house the first moon shot…and it is to be done with a very large gun.

Definitely recommended, here is the book’s Wikipedia page.

A resident of Mountain View writes about their interactions with self-driving cars (from the Emerging Technologies Blog):

I see no less than 5 self-driving cars every day. 99% of the time they’re the Google Lexuses, but I’ve also seen a few other unidentified ones (and one that said BOSCH on the side). I have never seen one of the new “Google-bugs” on the road, although I’ve heard they’re coming soon. I also don’t have a good way to tell if the cars were under human control or autonomous control during the stories I’m going to relate.

Anyway, here we go: Other drivers don’t even blink when they see one. Neither do pedestrians – there’s no “fear” from the general public about crashing or getting run over, at least not as far as I can tell.

Google cars drive like your grandma – they’re never the first off the line at a stop light, they don’t accelerate quickly, they don’t speed, and they never take any chances with lane changes (cut people off, etc.).

…Google cars are very polite to pedestrians. They leave plenty of space. A Google car would never do that rude thing where a driver inches impatiently into a crosswalk while people are crossing because he/she wants to make a right turn. However, this can also lead to some annoyance to drivers behind, as the Google car seems to wait for the pedestrian to be completely clear. On one occasion, I saw a pedestrian cross into a row of human-thickness trees and this seemed to throw the car for a loop for a few seconds. The person was a good 10 feet out of the crosswalk before the car made the turn.

…Once, I [on motorcycle, AT] got a little caught out as the traffic transitioned from slow moving back to normal speed. I was in a lane between a Google car and some random truck and, partially out of experiment and partially out of impatience, I gunned it and cut off the Google car sort of harder than maybe I needed to… The car handled it perfectly (maybe too perfectly). It slowed down and let me in. However, it left a fairly significant gap between me and it. If I had been behind it, I probably would have found this gap excessive and the lengthy slowdown annoying. Honestly, I don’t think it will take long for other drivers to realize that self-driving cars are “easy targets” in traffic.

Overall, I would say that I’m impressed with how these things operate. I actually do feel safer around a self-driving car than most other California drivers.

Hat tip: Chris Blattman.

Joel Shurkin reports:

Ants — most are teeny creatures with brains smaller than pinheads — engineer traffic better than humans do. Ants never run into stop-and-go traffic or gridlock on the trail. In fact, the more ants of one species there are on the road, the faster they go, according to new research.

Researchers from two German institutions — the University of Potsdam and the Martin Luther University of Halle-Wittenberg — found a nest of black meadow ants (Formica pratensis) in the woods of Saxony. The nest had four trunk trails leading to foraging areas, some of them 60 feet long. The researchers set up a camera that took time-lapse photography, and recorded the ants’ comings and goings.

…Oddly, the heavier the traffic, the faster the ants marched. Unlike humans driving cars, their velocity increased as their numbers did, and the trail widened as the ants spread out.

In essence ants vary the number of open lanes, but they have another trick as well:

“Ant vision is not that great, so I suspect that most of the information comes from tactile senses (antennas, legs). This means they are actually aware of not only the ant in front, but the ant behind as well,” he wrote in an e-mail. “That reduces the instability found in automobile highways, where drivers only know about the car in front.”

Driverless vehicles can of course in this regard be more like ants than humans.

How realistic is it to directly send data in and out of the brain? That is the core scientific innovation underlying my novels. From a longer piece in which I discuss neurotechnology (The Ultimate Interface: Your Brain):

Neural implants could accomplish things no external interface could: Virtual and augmented reality with all 5 senses (or more); augmentation of human memory, attention, and learning speed; even multi-sense telepathy — sharing what we see, hear, touch, and even perhaps what we think and feel with others.

What’s actually been done in humans?

In clinical trials today there are brain implants that have given men and women control of robot hands and fingers. [..] More radical technologies have sent vision straight into the brain. And recently, brain scanners have succeeded in deciphering what we’re looking at.

In animals, we’ve boosted cognitive performance:

In rats, we’ve restored damaged memories via a ‘hippocampus chip’ implanted in the brain. Human trials are starting this year. [..] This chip can actually improve memory. And researchers can capture the neural trace of an experience, record it, and play it back any time they want later on.

In monkeys, we’ve done better, using a brain implant to “boost monkey IQ” in pattern matching tests.

The real challenges remain hardware and brain surgery:

getting even 256 channels in generally requires invasive brain surgery, with its costs, healing time, and the very real risk that something will go wrong. That’s a huge impediment, making neural interfaces only viable for people who have a huge amount to gain, such as those who’ve been paralyzed or suffered brain damage.

Quite a bit of R&D is going into solving those hardware and surgery problems:

Researchers across the world, many funded by DARPA, are working to radically improve the interface hardware, boosting the number of neurons it can connect to (and thus making it smoother, higher resolution, and more precise), and making it far easier to implant. They’ve shown recently that carbon nanotubes, a thousand times thinner than current electrodes, have huge advantages for brain interfaces. They’re working on silk-substrate interfaces that melt into the brain. Researchers at Berkeley have a proposal for neural dust that would be sprinkled across your brain.

You can read the whole thing here: The Ultimate Interface: Your Brain.

Over the last 5 years, the price of new wind power in the US has dropped 58% and the price of new solar power has dropped 78%. That’s the conclusion of investment firm Lazard Capital. The key graph is here (here’s a version with US grid prices marked). Lazard’s full report is here.

Utility-scale solar in the West and Southwest is now at times cheaper than new natural gas plants. Here’s UBS on the most recent record set by solar. (Full UBS solar market flash here.)

We see the latest proposed PPA price for Xcel’s SPS subsidiary by NextEra (NEE) in NM as setting a new record low for utility-scale solar. [..] The 25-year contracts for the New Mexico projects have levelized costs of $41.55/MWh and $42.08/MWh.

That is 4.155 cents / kwh and 4.21 cents / kwh, respectively. Even after removing the federal solar Investment Tax Credit of 30%, the New Mexico solar deal is priced at 6 cents / kwh. By contrast, new natural gas electricity plants have costs between 6.4 and 9 cents per kwh, according to the EIA.

(Note that the same EIA report from April 2014 expects the lowest price solar power purchases in 2019 to be $91 / MWh, or 9.1 cents / kwh before subsidy. Solar prices are below that today.)
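For those who want to check the arithmetic, here is a minimal sketch of the conversion and subsidy adjustment above, assuming the roughly 6 cents / kwh unsubsidized figure is obtained simply by dividing the PPA price by one minus the 30% ITC (real project finance is more involved than this):

```python
# Minimal sketch of the unit conversion and subsidy adjustment above.
# Assumption: the unsubsidized figure is backed out by dividing the PPA
# price by (1 - ITC); actual project financing is more complicated.

def dollars_per_mwh_to_cents_per_kwh(price_mwh):
    """$ per MWh -> cents per kWh (1 MWh = 1,000 kWh, $1 = 100 cents)."""
    return price_mwh * 100.0 / 1000.0

ppa = dollars_per_mwh_to_cents_per_kwh(41.55)   # ~4.16 cents/kwh
unsubsidized = ppa / (1 - 0.30)                 # ~5.9 cents/kwh, roughly the 6 cents cited
print(round(ppa, 2), round(unsubsidized, 2))    # 4.16 5.94
```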

The New Mexico plant is the latest in a string of ever-cheaper solar deals. SEPA’s 2014 solar market snapshot lists other low-cost solar Power Purchase Agreements. (Full report here.)

  • Austin Energy (Texas) signed a PPA for less than $50 per megawatt-hour (MWh) for 150 MW.
  • TVA (Alabama) signed a PPA for $61 per MWh.
  • Salt River Project (Arizona) signed a PPA for roughly $53 per MWh.

Wind prices are also at all-time lows. Here’s Lawrence Berkeley National Laboratory on the declining price of wind power (full report here):

After topping out at nearly $70/MWh in 2009, the average levelized long-term price from wind power sales agreements signed in 2013 fell to around $25/MWh.

After adding in the wind Production Tax Credit, that is still substantially below the price of new coal or natural gas.
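A rough back-of-the-envelope version of that claim, assuming a Production Tax Credit worth about $23/MWh (my assumption, not a figure from the report):

```python
# Rough check of the wind claim above. The PTC value is an assumption
# (roughly $23/MWh for projects of that vintage), not a figure from the post.

wind_ppa = 25.0        # $/MWh, average 2013 sales price cited above
ptc = 23.0             # $/MWh, assumed value of the Production Tax Credit
unsubsidized_cents = (wind_ppa + ptc) / 10.0    # $/MWh -> cents/kwh
print(unsubsidized_cents)   # ~4.8 cents/kwh, vs. 6.4-9 cents/kwh for new gas (EIA, above)
```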

Wind and solar compensate for each other’s variability, with solar providing power during the day, and wind primarily at dusk, dawn, and night.

Energy storage is also reaching disruptive prices at utility scale. The Tesla battery is cheap enough to replace natural gas ‘peaker’ plants. And much cheaper energy storage is on the way.

Renewable prices are not static, and generally head only in one direction: Down. Cost reductions are driven primarily by the learning curve. Solar and wind power prices improve reasonably predictably following a power law. Every doubling of cumulative solar production drives module prices down by 20%. Similar phenomena are observed in numerous manufactured goods and industrial activities, dating back to the Ford Model T. Subsidies are a clumsy policy (I’d prefer a tax on carbon) but they’ve scaled deployment, which in turn has dropped present and future costs.
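To make that power law concrete, here is a minimal sketch of the learning-curve relationship; the 20% decline per doubling comes from the paragraph above, while the starting price and cumulative volume are purely illustrative:

```python
import math

# Learning-curve ("Wright's law") sketch: module price falls ~20% for every
# doubling of cumulative production. Starting values below are illustrative only.

LEARNING_RATE = 0.20
EXPONENT = math.log2(1 - LEARNING_RATE)   # ~ -0.322

def projected_price(start_price, start_cumulative, future_cumulative):
    """Module price after cumulative production grows along a power law."""
    return start_price * (future_cumulative / start_cumulative) ** EXPONENT

# Two doublings (4x cumulative production) -> price falls to 0.8**2 = 64% of the start.
print(projected_price(0.70, 200.0, 800.0))   # ~0.45 ($/W, illustrative)
```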

By the way, the common refrain that solar prices are so low primarily because of Chinese dumping exaggerates the impact of Chinese manufacturing. Solar modules from the US, Japan, and SE Asia are all similar in price to those from China.

Fossil fuel technologies, in contrast to renewables, have a slower learning curve, and also compete with resource depletion curves as deposits are drawn down and new deposits must be found and accessed.  From a 2007 paper by Farmer and Trancik, at the Santa Fe Institute, Dynamics of Technology Development in the Energy Sector:

Fossil fuel energy costs follow a complicated trajectory because they are influenced both by trends relating to resource scarcity and those relating to technology improvement. Technology improvement drives resource costs down, but the finite nature of deposits ultimately drives them up. […] Extrapolations suggest that if these trends continue as they have in the past, the costs of reaching parity between photovoltaics and current electricity prices are on the order of $200 billion

Renewable electricity prices are likely to continue to drop, particularly for solar, which has a faster learning curve and is earlier in its development than wind. The IEA expects utility-scale solar prices to average 4 cents per kwh around the world by mid century, and solar to be the number 1 source of electricity worldwide. (Full report here.)

Bear in mind that the IEA has also underestimated the growth of solar in every projection made over the last decade.

Germany’s Fraunhofer Institute expects solar in southern and central Europe (similar in sunlight to the bulk of the US) to drop below 4 cents per kwh in the next decade, and to reach 2 cents per kwh by mid century. (Their report is here. If you want to understand the trends in solar costs, read this link in particular.)

Analysts at wealth management firm Alliance Bernstein put this drop in prices into a long term context in their infamous “Welcome to the Terrordome” graph, which shows the cost of solar energy plunging from more than 10 times the cost of coal and natural gas to near parity. The full report outlines their reason for invoking terror. The key quote:

At the point where solar is displacing a material share of incremental oil and gas supply, global energy deflation will become inevitable: technology (with a falling cost structure) would be driving prices in the energy space.

They estimate that solar must grow by an order of magnitude, a point they see as a decade away. For oil, it may in fact be further away. Solar and wind are used to create electricity, and today, do not substantially compete with oil. For coal and natural gas, the point may be sooner.

Unless solar, wind, and energy storage innovations suddenly and unexpectedly falter, the technology-based falling cost structure of renewable electricity will eventually outprice fossil fuel electricity across most of the world. The question appears to be less “if” and more “when”.

Elon Musk, Stephen Hawking, and Bill Gates have recently expressed concern that development of AI could lead to a ‘killer AI’ scenario, and potentially to the extinction of humanity.

None of them are AI researchers or have worked substantially with AI that I know of. (Disclosure: I know Gates slightly from my time at Microsoft, when I briefed him regularly on progress in search. I have great respect for all three men.)

What do actual AI researchers think of the risks of AI?

Here’s Oren Etzioni, a professor of computer science at the University of Washington, and now CEO of the Allen Institute for Artificial Intelligence:

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. It assumes that with intelligence comes free will, but I believe those two things are entirely different.

Here’s Michael Littman, an AI researcher and computer science professor at Brown University. (And former program chair for the Association for the Advancement of Artificial Intelligence):

there are indeed concerns about the near-term future of AI — algorithmic traders crashing the economy, or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population. […] These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic.

Here’s Yann LeCun, Facebook’s director of research, a legend in neural networks and machine learning (‘LeCun nets’ are a type of neural net named after him), and one of the world’s top experts in deep learning.  (This is from an Erik Sofge interview of several AI researchers on the risks of AI. Well worth reading.)

Some people have asked what would prevent a hypothetical super-intelligent autonomous benevolent A.I. to “reprogram” itself and remove its built-in safeguards against getting rid of humans. Most of these people are not themselves A.I. researchers, or even computer scientists.

Here’s Andrew Ng, who founded Google’s Google Brain project, and built the famous deep learning net that learned on its own to recognize cat videos, before he left to become Chief Scientist at Chinese search engine company Baidu:

“Computers are becoming more intelligent and that’s useful as in self-driving cars or speech recognition systems or search engines. That’s intelligence,” he said. “But sentience and consciousness is not something that most of the people I talk to think we’re on the path to.”

Here’s my own modest contribution, talking about the powerful disincentives for working towards true sentience. (I’m not an AI researcher, but I managed AI researchers and work on neural networks and other types of machine learning for many years.)

Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that’s extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dish washer, and your personal computer.

Yesterday I outlined why genetically engineered children are not imminent. The Chinese CRISPR embryo-editing experiment was lethal to around 20% of embryos, inserted off-target errors into roughly 10% of embryos (with some debate there), and produced the desired genetic change in only around 5% of embryos, and even then only in a subset of cells in those embryos.

Over time, the technology will become more efficient and the combined error and lethality rates will drop, though likely never to zero.

Human genome editing should be regulated. But it should be regulated primarily to assure safety and informed consent, rather than being banned as it is in most developed countries (see figure 3). It’s implausible that human genome editing will lead to a Gattaca scenario, as I’ll show below. And bans only make the societal outcomes worse.

1. Enhancing Human Traits is Hard (And Gattaca is Science Fiction)

The primary fear of human germline engineering, beyond safety, appears to be a Gattaca-like scenario, where the rich are able to enhance the intelligence, looks, and other traits of their children, and the poor aren’t.

But boosting desirable traits such as intelligence and height to any significant degree is implausible, even with a very low error rate.

The largest ever survey of genes associated with IQ found 69 separate genes, which together accounted for less than 8% of the variance in IQ scores, implying that at least hundreds of genes, if not thousands, are involved in IQ. (See paper, here.) As Nature reported, even the three genes with the largest individual impact added up to less than two points of IQ:

The three variants the researchers identified were each responsible for an average of 0.3 points on an IQ test. … That means that a person with two copies of each variant would score 1.8 points higher on an intelligence test than a person with none of them.

Height is similarly controlled by hundreds of genes. 697 genes together account for just one fifth of the heritability of adult height. (Paper at Nature Genetics here.)

For major personality traits, identified genes account for less than 2% of variation, and it’s likely that hundreds or thousands of genes are involved.

Manipulating IQ, height, or personality is thus likely to involve making a very large number of genetic changes. Even then, genetic changes are likely to produce a moderate rather than overwhelming impact.

Conversely, for those unlucky enough to be conceived with the wrong genes, a single genetic change could prevent Cystic Fibrosis, or dramatically reduce the odds of Alzheimer’s disease, breast cancer or ovarian cancer, or cut the risk of heart disease by 30-40%.

Reducing disease is orders of magnitude easier and safer than augmenting abilities.

2. Parents are risk averse

We already trust parents to make hundreds of impactful decisions on behalf of their children: Schooling, diet and nutrition, neighborhood, screen time, media exposure, and religious upbringing are just a few.  Each of these has a larger impact on the average child – positive or negative – than one is likely to see from a realistic gene editing scenario any time in the next few decades.

And in general, parents are risk averse when their children are involved. Using gene editing to reduce the risk of disease is quite different than taking on new risks in an effort to boost a trait like height or IQ. That’s even more true when it takes dozens or hundreds of genetic tweaks to make even a relatively small change in those traits – and when every genetic tweak adds to the risk of an error.
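To see why each added tweak matters, here is a minimal sketch of how error risk compounds across many independent edits; the 1% per-edit error rate is an assumption for illustration, not a figure from the studies above:

```python
# Illustration of how error risk compounds with the number of edits.
# The 1% per-edit error rate is an assumed, illustrative value.

def prob_any_error(per_edit_error, num_edits):
    """Probability of at least one error across independent edits."""
    return 1 - (1 - per_edit_error) ** num_edits

for n in (1, 10, 100):
    print(n, round(prob_any_error(0.01, n), 3))
# 1   -> 0.01
# 10  -> 0.096
# 100 -> 0.634
```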

(Parents could go for a more radical approach: Inserting extra copies of human genes, or transgenic variants not found in humans at all. It seems likely that parents will be even more averse to venturing into such uncharted waters with their children.)

If a trait like IQ could be safely increased to a marked degree, that would constitute a benefit to both the child and society. And while it would pose issues for inequality, the best solution might be to try to rectify inequality of access, rather than ban the technique. (Consider that IVF is subsidized in places as different as Singapore and Sweden.) But significant enhancements don’t appear to be on the horizon any time soon.

Razib Khan points out one other thing we trust parents to do, which has a larger impact on the genes of a child than any plausible technology of the next few decades:

 “the best bet for having a smart child is picking a spouse with a deviated phenotype. Look for smart people to marry.”

3. Bans make safety and inequality worse

A ban on human germline gene editing would cut off medical applications that could reduce the risk of disease in an effort to control the far less likely and far less impactful enhancement and parental control scenarios.

A ban is also unlikely to be global. Attitudes towards genetic engineering vary substantially by country. In the US, surveys find 4% to 14% of the population supports genetic engineering for enhancement purposes. Only around 40% support its use to prevent disease. Yet, as David Macer pointed out as early as 1994:

in India and Thailand, more than 50% of the 900+ respondents in each country supported enhancement of physical characters, intelligence, or making people more ethical.

While most of Europe has banned genetic engineering, and the US looks likely to follow suit, it’s likely to go forward in at least some parts of Asia. (That is, indeed, one of the premises of Nexus and its sequels.)

If the US and Europe do ban the technology, while other countries don’t, then genetic engineering will be accessible to a smaller set of people: Those who can afford to travel overseas and pay for it out-of-pocket. Access will become more unequal. And, in all likelihood, genetic engineering in Thailand, India, or China is likely to be less well regulated for safety than it would be in the US or Europe, increasing the risk of mishap.

The fear of genetic engineering is based on unrealistic views of the genome, the technology, and how parents would use it. If we let that fear drive us towards a ban on genetic engineering – rather than legalization and regulation – we’ll reduce safety and create more inequality of access.

I’ll give the penultimate word to Jennifer Doudna, the inventor of the technique (this is taken from a truly interesting set of responses to Nature Biotechnology’s questions, which they posed to a large number of leaders in the field):

Doudna, Carroll, Martin & Botchan: We don’t think an international ban would be effective by itself; it is likely some people would ignore it. Regulation is essential to ensure that dangerous, trivial or cosmetic uses are not pursued.

Legalize and regulate genetic engineering. That’s the way to boost safety and equality, and to guide the science and ethics.