Science

Until “effective altruism” figures out what drives innovation, those recommendations simply aren’t that reliable.

Addendum: John Sterling just wrote this in the MR comments section:

I think Steven Landsburg made the definitive “pro-Paulson gift” argument in his classic Slate piece defending Ebenezer Scrooge. Paulson could have pulled a “Larry Ellison” and built himself a $200 mm yacht. He decided to forgo (some of) his conspicuous consumption and instead let the Harvard Management Company steward some additional capital.

I’ve sometimes wondered whether the Harvard endowment is the ultimate way to be an “effective altruist” for an Austrian-leaning type. If you believe, like Baldy Harper did, “that savings invested in privately owned economic tools of production amount to … the greatest economic charity of all,” then the Harvard endowment makes a pretty interesting beneficiary. I can’t think of another institution in the world today that is more likely to hold on to its capital in perpetuity than the folks in Cambridge.

I am not saying he is right; just don’t be so quick to conclude he is wrong. By the way, I do not in fact donate my own money to Harvard.

Stephen Hawking fears that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk and Bill Gates offer similar warnings. Many researchers in artificial intelligence are less concerned, primarily because they think the technology is not advancing as quickly as doom scenarios imagine, as Ramez Naam discussed. I have a different objection.

Why should we be worried about the end of the human race? Oh sure, there are some Terminator-like scenarios in which many future people die in horrible ways and I’d feel good if we avoided those scenarios. The more likely scenario, however, is a glide path to extinction in which most people adopt a variety of bionic and germ-line modifications that over time evolve them into post-human cyborgs. A few holdouts to the old ways would remain but birth rates would be low and the non-adapted would be regarded as quaint, as we regard the Amish today. Eventually the last humans would go extinct and 46andMe customers would kid each other over how much of their DNA was of the primitive kind while holo-commercials advertised products “so easy a homo sapiens could do it”. I see nothing objectionable in this scenario.

Aside from greater plausibility, a glide path means that dealing with the Terminator scenario is easier. In the Terminator scenario, humans must continually be on guard. In the glide path scenario we only have to avoid the Terminators until we become them, and then the problem is resolved with little fuss. No human race but no mass murder either.

More generally, what’s so great about the human race? I agree, there are lots of great things to point to, such as the works of Shakespeare, Mozart, and Grothendieck. We should revere the greatness of the works, however, not the substrate on which the works were created. If what is great about humanity is the great things that we have done, then the future may hold greater things yet. If we work to pass on our best values and aspirations to our technological progeny then we can be proud of future generations even if they differ from us in some ways. I delight to think of the marvels that future generations may produce. But I see no reason to hope that such marvels will be produced by beings indistinguishable from myself; indeed, that would seem rather disappointing.

Thanks to computerized aiming, HEL MD can operate in wholly autonomous mode, which Boeing tested successfully in May 2014 – although the trials uncovered an unexpected challenge. The weapon’s laser beam is silent and invisible, and not all targets explode as they are destroyed, so an automated battle can be over before operators have noticed anything. ‘The engagements happen quickly, and unless you’re staring at a screen 24-7 you’ll never see them,’ Blount says. ‘So we’ve built sound in for whenever we fire the laser. We plan on taking advantage of lots of Star Trek and Star Wars sound bites.’

More generally, fibre-laser weapons may be on their way:

Despite their modest capabilities, Scharre claims that fibre-laser weapons could find a niche in US military defence in 5–10 years. “They may not be as grand and strategic as the Star Wars concept,” he says, “but they could save lives, protect US bases, ships and service members.”

The full article is here, via the excellent Kevin Lewis.

*From the Earth to the Moon*


I read this 1865 Jules Verne book lately and very much enjoyed it. It’s a poke at scientific rationalists and project-happy obsessives, and humorous throughout. It mocks those who wish to bet on ideas, compares the American and French versions of excess grandiosity, and asks in subtle ways what the limits of progress are. It reminded me of John Gray far more than I had been expecting. And it’s about an America with no NIMBY, where everyone wants the projects right in their backyard. The space program in fact sets off a rivalry between Texas and Florida to house the first moon shot…and it is to be done with a very large gun.

Definitely recommended, here is the book’s Wikipedia page.

A resident of Mountain View writes about their interactions with self-driving cars (from the Emerging Technologies Blog):

I see no less than 5 self-driving cars every day. 99% of the time they’re the Google Lexuses, but I’ve also seen a few other unidentified ones (and one that said BOSCH on the side). I have never seen one of the new “Google-bugs” on the road, although I’ve heard they’re coming soon. I also don’t have a good way to tell if the cars were under human control or autonomous control during the stories I’m going to relate.

Anyway, here we go: Other drivers don’t even blink when they see one. Neither do pedestrians – there’s no “fear” from the general public about crashing or getting run over, at least not as far as I can tell.

Google cars drive like your grandma – they’re never the first off the line at a stop light, they don’t accelerate quickly, they don’t speed, and they never take any chances with lane changes (cut people off, etc.).

…Google cars are very polite to pedestrians. They leave plenty of space. A Google car would never do that rude thing where a driver inches impatiently into a crosswalk while people are crossing because he/she wants to make a right turn. However, this can also lead to some annoyance to drivers behind, as the Google car seems to wait for the pedestrian to be completely clear. On one occasion, I saw a pedestrian cross into a row of human-thickness trees and this seemed to throw the car for a loop for a few seconds. The person was a good 10 feet out of the crosswalk before the car made the turn.

…Once, I [on motorcycle, AT] got a little caught out as the traffic transitioned from slow moving back to normal speed. I was in a lane between a Google car and some random truck and, partially out of experiment and partially out of impatience, I gunned it and cut off the Google car sort of harder than maybe I needed to… The car handled it perfectly (maybe too perfectly). It slowed down and let me in. However, it left a fairly significant gap between me and it. If I had been behind it, I probably would have found this gap excessive and the lengthy slowdown annoying. Honestly, I don’t think it will take long for other drivers to realize that self-driving cars are “easy targets” in traffic.

Overall, I would say that I’m impressed with how these things operate. I actually do feel safer around a self-driving car than most other California drivers.
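The gap-keeping and cut-off behavior described above is consistent with a simple time-headway rule. Here is a minimal, purely illustrative sketch; the function names and parameter values are hypothetical, and this is not Google’s actual control logic:

```python
# Minimal sketch of a time-headway following policy, purely illustrative.
# The car targets a gap proportional to its speed plus a safety margin,
# which is why it opens a large gap and slows noticeably when cut off.

def target_gap(speed_mps, headway_s=2.5, margin_m=5.0):
    """Desired following distance in meters for a given speed."""
    return headway_s * speed_mps + margin_m

def acceleration(current_gap_m, speed_mps, gain=0.3, max_brake=-3.0, max_accel=1.5):
    """Proportional response to the gap error, clipped to comfortable limits."""
    error = current_gap_m - target_gap(speed_mps)
    return max(max_brake, min(max_accel, gain * error))

# A motorcycle cuts in 8 m ahead while we travel at 15 m/s (~34 mph):
print(acceleration(current_gap_m=8.0, speed_mps=15.0))   # -3.0: firm braking
# Once the gap has reopened to ~45 m, the controller is nearly satisfied:
print(acceleration(current_gap_m=45.0, speed_mps=15.0))  # 0.75: gentle acceleration
```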

Hat tip: Chris Blattman.

Joel Shurkin reports:

Ants — most are teeny creatures with brains smaller than pinheads — engineer traffic better than humans do. Ants never run into stop-and-go traffic or gridlock on the trail. In fact, the more ants of one species there are on the road, the faster they go, according to new research.

Researchers from two German institutions — the University of Potsdam and the Martin Luther University of Halle-Wittenberg — found a nest of black meadow ants (Formica pratensis) in the woods of Saxony. The nest had four trunk trails leading to foraging areas, some of them 60 feet long. The researchers set up a camera that took time-lapse photography, and recorded the ants’ comings and goings.

…Oddly, the heavier the traffic, the faster the ants marched. Unlike humans driving cars, their velocity increased as their numbers did, and the trail widened as the ants spread out.

In essence, ants vary the number of open lanes, but they have another trick as well:

“Ant vision is not that great, so I suspect that most of the information comes from tactile senses (antennas, legs). This means they are actually aware of not only the ant in front, but the ant behind as well,” he wrote in an e-mail. “That reduces the instability found in automobile highways, where drivers only know about the car in front.”

Driverless vehicles can, of course, be more like ants than humans in this regard.
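To see why sensing the neighbor behind as well as the one ahead damps traffic instability, here is a toy one-dimensional platoon simulation. It is a spring-like model of my own for illustration, not the researchers’ model, and every parameter in it is made up:

```python
# Toy 1-D platoon: does awareness of the neighbor BEHIND (as well as the one
# in front) damp disturbances? A made-up spring-like model for illustration;
# parameters are arbitrary.

def simulate(bilateral, n=20, steps=400, dt=0.1, k=1.0, damping=0.6, spacing=2.0):
    pos = [-i * spacing for i in range(n)]   # agent 0 leads the column
    vel = [1.0] * n                          # everyone starts at the cruise speed
    pos[5] -= 0.5                            # one agent starts out of position
    worst_spread = 0.0
    for _ in range(steps):
        acc = [0.0] * n
        for i in range(1, n):
            front_err = (pos[i - 1] - pos[i]) - spacing
            if bilateral and i < n - 1:
                back_err = (pos[i] - pos[i + 1]) - spacing
                # React to both neighbors (symmetric, ant-like coupling)
                acc[i] = k * (front_err - back_err) / 2 - damping * (vel[i] - 1.0)
            else:
                # React only to the agent in front (car-like coupling)
                acc[i] = k * front_err - damping * (vel[i] - 1.0)
        for i in range(1, n):
            vel[i] += acc[i] * dt
            pos[i] += vel[i] * dt
        pos[0] += vel[0] * dt                # leader cruises at constant speed
        gaps = [pos[i - 1] - pos[i] for i in range(1, n)]
        worst_spread = max(worst_spread, max(gaps) - min(gaps))
    return worst_spread                      # larger = more uneven, less stable column

print("front-only worst spread:", round(simulate(bilateral=False), 2))
print("bilateral  worst spread:", round(simulate(bilateral=True), 2))
```

In the front-only case the initial disturbance amplifies as it propagates down the column, the phantom-jam effect familiar from highways; the symmetric coupling keeps the column nearly even.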

How realistic is it to directly send data in and out of the brain? That is the core scientific innovation underlying my novels. From a longer piece in which I discuss neurotechnology (The Ultimate Interface: Your Brain):

Neural implants could accomplish things no external interface could: Virtual and augmented reality with all 5 senses (or more); augmentation of human memory, attention, and learning speed; even multi-sense telepathy — sharing what we see, hear, touch, and even perhaps what we think and feel with others.

What’s actually been done in humans?

In clinical trials today there are brain implants that have given men and women control of robot hands and fingers. [..] More radical technologies have sent vision straight into the brain. And recently, brain scanners have succeeded in deciphering what we’re looking at.

In animals, we’ve boosted cognitive performance:

In rats, we’ve restored damaged memories via a ‘hippocampus chip’ implanted in the brain. Human trials are starting this year. [..] This chip can actually improve memory. And researchers can capture the neural trace of an experience, record it, and play it back any time they want later on.

In monkeys, we’ve done better, using a brain implant to “boost monkey IQ” in pattern matching tests.

The real challenges remain hardware and brain surgery:

getting even 256 channels in generally requires invasive brain surgery, with its costs, healing time, and the very real risk that something will go wrong. That’s a huge impediment, making neural interfaces only viable for people who have a huge amount to gain, such as those who’ve been paralyzed or suffered brain damage.

Quite a bit of R&D is going into solving those hardware and surgery problems:

Researchers across the world, many funded by DARPA, are working to radically improve the interface hardware, boosting the number of neurons it can connect to (and thus making it smoother, higher resolution, and more precise), and making it far easier to implant. They’ve shown recently that carbon nanotubes, a thousand times thinner than current electrodes, have huge advantages for brain interfaces. They’re working on silk-substrate interfaces that melt into the brain. Researchers at Berkeley have a proposal for neural dust that would be sprinkled across your brain.

You can read the whole thing here: The Ultimate Interface: Your Brain.

Over the last 5 years, the price of new wind power in the US has dropped 58% and the price of new solar power has dropped 78%. That’s the conclusion of investment firm Lazard Capital. The key graph is here (here’s a version with US grid prices marked). Lazard’s full report is here.

Utility-scale solar in the West and Southwest is now at times cheaper than new natural gas plants. Here’s UBS on the most recent record set by solar. (Full UBS solar market flash here.)

We see the latest proposed PPA price for Xcel’s SPS subsidiary by NextEra (NEE) in NM as setting a new record low for utility-scale solar. [..] The 25-year contracts for the New Mexico projects have levelized costs of $41.55/MWh and $42.08/MWh.

That is 4.155 cents/kWh and 4.21 cents/kWh, respectively. Even after removing the federal solar Investment Tax Credit of 30%, the New Mexico solar deal is priced at 6 cents/kWh. By contrast, new natural gas electricity plants have costs between 6.4 and 9 cents per kWh, according to the EIA.

(Note that the same EIA report from April 2014 expects the lowest-priced solar power purchases in 2019 to be $91/MWh, or 9.1 cents/kWh before subsidy. Solar prices are below that today.)
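For readers checking the arithmetic, here is the unit conversion behind those numbers. The only assumption is treating the 30% ITC as a simple price discount, which appears to be how the ~6 cents figure is derived:

```python
# Unit arithmetic for the figures above: 1 MWh = 1,000 kWh, so $1/MWh = 0.1 cents/kWh.

def dollars_per_mwh_to_cents_per_kwh(dollars_per_mwh):
    return dollars_per_mwh / 10.0

for price in (41.55, 42.08):
    print(price, "$/MWh =", dollars_per_mwh_to_cents_per_kwh(price), "cents/kWh")
# 41.55 $/MWh = 4.155 cents/kWh; 42.08 $/MWh = 4.208 cents/kWh

# Backing out the 30% Investment Tax Credit, treated here as a simple price
# discount (an approximation -- the credit actually applies to capital cost):
unsubsidized = dollars_per_mwh_to_cents_per_kwh(42.0) / (1 - 0.30)
print(round(unsubsidized, 1), "cents/kWh")   # ~6 cents/kWh vs. EIA's 6.4-9 for new gas
```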

The New Mexico plant is the latest in a string of ever-cheaper solar deals. SEPA’s 2014 solar market snapshot lists other low-cost solar Power Purchase Agreements. (Full report here.)

  • Austin Energy (Texas) signed a PPA for less than $50 per megawatt-hour (MWh) for 150 MW.
  • TVA (Alabama) signed a PPA for $61 per MWh.
  • Salt River Project (Arizona) signed a PPA for roughly $53 per MWh.

Wind prices are also at all-time lows. Here’s Lawrence Berkeley National Laboratory on the declining price of wind power (full report here):

After topping out at nearly $70/MWh in 2009, the average levelized long-term price from wind power sales agreements signed in 2013 fell to around $25/MWh.

Even after adding back the wind Production Tax Credit, that price is still substantially below the cost of new coal or natural gas.
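A back-of-the-envelope version of that comparison, with the one outside number flagged as an assumption:

```python
# Rough check of the wind comparison. The PTC figure below is an assumption
# (roughly $23/MWh for projects of that vintage), not a number from the LBNL
# report, and simple addition overstates the adjustment somewhat because the
# credit applies only during the early years of a contract.
ppa_price_2013 = 25.0      # $/MWh, average 2013 wind PPA price cited above
assumed_ptc = 23.0         # $/MWh, assumed Production Tax Credit
unsubsidized_estimate = ppa_price_2013 + assumed_ptc
print(unsubsidized_estimate, "$/MWh =", unsubsidized_estimate / 10.0, "cents/kWh")
# ~48 $/MWh (~4.8 cents/kWh), still below the 6.4-9 cents/kWh EIA range for
# new natural gas cited earlier.
```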

Wind and solar compensate for each other’s variability, with solar providing power during the day, and wind primarily at dusk, dawn, and night.

Energy storage is also reaching disruptive prices at utility scale. The Tesla battery is cheap enough to replace natural gas ‘peaker’ plants. And much cheaper energy storage is on the way.

Renewable prices are not static, and generally head only in one direction: Down. Cost reductions are driven primarily by the learning curve. Solar and wind power prices improve reasonably predictably following a power law. Every doubling of cumulative solar production drives module prices down by 20%. Similar phenomena are observed in numerous manufactured goods and industrial activities, dating back to the Ford Model T. Subsidies are a clumsy policy (I’d prefer a tax on carbon) but they’ve scaled deployment, which in turn has dropped present and future costs.
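The learning-curve relationship is simple enough to put in code. A minimal sketch, where the 20% learning rate is the figure from the text above and the rest is illustrative arithmetic:

```python
# Learning-curve (Wright's law) sketch: each doubling of cumulative production
# cuts module prices by ~20%. The 20% rate is from the text; the example
# growth factor is purely illustrative.
import math

LEARNING_RATE = 0.20                      # price falls 20% per doubling

def price_after_growth(initial_price, cumulative_growth_factor):
    """Price after cumulative production grows by the given factor."""
    doublings = math.log2(cumulative_growth_factor)
    return initial_price * (1 - LEARNING_RATE) ** doublings

# If cumulative solar production grows 10x, prices fall to roughly half:
print(round(price_after_growth(1.0, 10), 2))   # ~0.48, i.e. a ~52% decline
# Equivalent power-law form: price ~ production ** log2(1 - LEARNING_RATE)
print(round(math.log2(1 - LEARNING_RATE), 2))  # exponent ~ -0.32
```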

By the way, the common refrain that solar prices are so low primarily because of Chinese dumping exaggerates the impact of Chinese manufacturing. Solar modules from the US, Japan, and SE Asia are all similar in price to those from China.

Fossil fuel technologies, by contrast to renewables, have a slower learning curve, and also compete with resource depletion curves as deposits are drawn down and new deposits must be found and accessed. From a 2007 paper by Farmer and Trancik at the Santa Fe Institute, Dynamics of Technology Development in the Energy Sector:

Fossil fuel energy costs follow a complicated trajectory because they are influenced both by trends relating to resource scarcity and those relating to technology improvement. Technology improvement drives resource costs down, but the finite nature of deposits ultimately drives them up. […] Extrapolations suggest that if these trends continue as they have in the past, the costs of reaching parity between photovoltaics and current electricity prices are on the order of $200 billion

Renewable electricity prices are likely to continue to drop, particularly for solar, which has a faster learning curve and is earlier in its development than wind. The IEA expects utility-scale solar prices to average 4 cents per kWh around the world by mid-century, and that solar will be the number one source of electricity worldwide. (Full report here.)

Bear in mind that the IEA has also underestimated the growth of solar in every projection made over the last decade.

Germany’s Fraunhofer Institute expects solar in southern and central Europe (similar in sunlight to the bulk of the US) to drop below 4 cents per kWh in the next decade, and to reach 2 cents per kWh by mid-century. (Their report is here. If you want to understand the trends in solar costs, read this link in particular.)

Analysts at wealth management firm Alliance Bernstein put this drop in prices into a long-term context in their infamous “Welcome to the Terrordome” graph, which shows the cost of solar energy plunging from more than 10 times the cost of coal and natural gas to near parity. The full report outlines their reason for invoking terror. The key quote:

At the point where solar is displacing a material share of incremental oil and gas supply, global energy deflation will become inevitable: technology (with a falling cost structure) would be driving prices in the energy space.

They estimate that solar must grow by an order of magnitude, a point they see as a decade away. For oil, it may in fact be further away. Solar and wind are used to create electricity, and today, do not substantially compete with oil. For coal and natural gas, the point may be sooner.

Unless solar, wind, and energy storage innovations suddenly and unexpectedly falter, the technology-based falling cost structure of renewable electricity will eventually outprice fossil fuel electricity across most of the world. The question appears to be less “if” and more “when”.

Elon Musk, Stephen Hawking, and Bill Gates have recently expressed concern that development of AI could lead to a ‘killer AI’ scenario, and potentially to the extinction of humanity.

None of them is an AI researcher or has worked substantially with AI, so far as I know. (Disclosure: I know Gates slightly from my time at Microsoft, when I briefed him regularly on progress in search. I have great respect for all three men.)

What do actual AI researchers think of the risks of AI?

Here’s Oren Etzioni, a professor of computer science at the University of Washington, and now CEO of the Allen Institute for Artificial Intelligence:

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. It assumes that with intelligence comes free will, but I believe those two things are entirely different.

Here’s Michael Littman, an AI researcher and computer science professor at Brown University (and former program chair for the Association for the Advancement of Artificial Intelligence):

there are indeed concerns about the near-term future of AI — algorithmic traders crashing the economy, or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population. […] These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic.

Here’s Yann LeCun, Facebook’s director of research, a legend in neural networks and machine learning (‘LeCun nets’ are a type of neural net named after him), and one of the world’s top experts in deep learning.  (This is from an Erik Sofge interview of several AI researchers on the risks of AI. Well worth reading.)

Some people have asked what would prevent a hypothetical super-intelligent autonomous benevolent A.I. to “reprogram” itself and remove its built-in safeguards against getting rid of humans. Most of these people are not themselves A.I. researchers, or even computer scientists.

Here’s Andrew Ng, who founded the Google Brain project at Google and built the famous deep learning net that learned on its own to recognize cat videos, before he left to become Chief Scientist at Chinese search engine company Baidu:

“Computers are becoming more intelligent and that’s useful as in self-driving cars or speech recognition systems or search engines. That’s intelligence,” he said. “But sentience and consciousness is not something that most of the people I talk to think we’re on the path to.”

Here’s my own modest contribution, talking about the powerful disincentives for working towards true sentience. (I’m not an AI researcher, but I managed AI researchers and work on neural networks and other types of machine learning for many years.)

Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that’s extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dish washer, and your personal computer.

Yesterday I outlined why genetically engineered children are not imminent. The Chinese CRISPR gene editing of embryos experiment was lethal to around 20% of embryos, inserted off-target errors into roughly 10% of embryos (with some debate there), and only produced the desired genetic change in around 5% of embryos, and even then only in a subset of cells in those embryos.

Over time, the technology will become more efficient and the combined error and lethality rates will drop, though likely never to zero.

Human genome editing should be regulated. But it should be regulated primarily to assure safety and informed consent, rather than being banned, as it is in most developed countries (see figure 3). It’s implausible that human genome editing will lead to a Gattaca scenario, as I’ll show below. And bans only make the societal outcomes worse.

1. Enhancing Human Traits is Hard (And Gattaca is Science Fiction)

The primary fear of human germline engineering, beyond safety, appears to be a Gattaca-like scenario, where the rich are able to enhance the intelligence, looks, and other traits of their children, and the poor aren’t.

But boosting desirable traits such as intelligence and height to any significant degree is implausible, even with a very low error rate.

The largest-ever survey of genes associated with IQ found 69 separate genes, which together accounted for less than 8% of the variance in IQ scores, implying that hundreds of genes, if not thousands, are involved in IQ. (See paper, here.) As Nature reported, even the three genes with the largest individual impact added up to less than two points of IQ:

The three variants the researchers identified were each responsible for an average of 0.3 points on an IQ test. … That means that a person with two copies of each variant would score 1.8 points higher on an intelligence test than a person with none of them.

Height is similarly controlled by hundreds of genes: 697 genes together account for just one-fifth of the heritability of adult height. (Paper at Nature Genetics here.)

For major personality traits, identified genes account for less than 2% of variation, and it’s likely that hundreds or thousands of genes are involved.

Manipulating IQ, height, or personality is thus likely to involve making a very large number of genetic changes. Even then, genetic changes are likely to produce a moderate rather than overwhelming impact.
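A back-of-the-envelope calculation with the numbers from the studies cited above makes the scale concrete. The 10-point target below is hypothetical, and because most variants have smaller effects than the top three, this understates the number of edits required:

```python
# Additive back-of-the-envelope using only numbers from the studies cited above:
# the top IQ-associated variants add ~0.3 points per copy. The 10-point target
# is hypothetical, and since most variants have much smaller effects than the
# top three, this UNDERSTATES the number of edits needed.
points_per_copy = 0.3        # from the Nature report quoted above
copies_per_variant = 2

# The study's own example: two copies of each of the three top variants
print(round(3 * copies_per_variant * points_per_copy, 1))   # 1.8 IQ points

# A hypothetical 10-point boost under this naive additive model:
target_gain = 10.0
variants_needed = target_gain / (copies_per_variant * points_per_copy)
print(round(variants_needed))   # ~17 variants, each needing both copies edited,
# with every edit adding its own off-target and mosaicism risk.
```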

Conversely, for those unlucky enough to be conceived with the wrong genes, a single genetic change could prevent Cystic Fibrosis, or dramatically reduce the odds of Alzheimer’s disease, breast cancer or ovarian cancer, or cut the risk of heart disease by 30-40%.

Reducing disease is orders of magnitude easier and safer than augmenting abilities.

2. Parents are risk averse

We already trust parents to make hundreds of impactful decisions on behalf of their children: Schooling, diet and nutrition, neighborhood, screen time, media exposure, and religious upbringing are just a few.  Each of these has a larger impact on the average child – positive or negative – than one is likely to see from a realistic gene editing scenario any time in the next few decades.

And in general, parents are risk averse when their children are involved. Using gene editing to reduce the risk of disease is quite different from taking on new risks in an effort to boost a trait like height or IQ. That’s even more true when it takes dozens or hundreds of genetic tweaks to make even a relatively small change in those traits – and when every genetic tweak adds to the risk of an error.

(Parents could go for a more radical approach: Inserting extra copies of human genes, or transgenic variants not found in humans at all. It seems likely that parents will be even more averse to venturing into such uncharted waters with their children.)

If a trait like IQ could be safely increased to a marked degree, that would constitute a benefit to both the child and society. And while it would pose issues for inequality, the best solution might be to try to rectify inequality of access, rather than ban the technique. (Consider that IVF is subsidized in places as different as Singapore and Sweden.) But significant enhancements don’t appear to be anywhere on the horizon.

Razib Khan points out one other thing we trust parents to do, which has a larger impact on the genes of a child than any plausible technology of the next few decades:

 “the best bet for having a smart child is picking a spouse with a deviated phenotype. Look for smart people to marry.”

3. Bans make safety and inequality worse

A ban on human germline gene editing would cut off medical applications that could reduce the risk of disease in an effort to control the far less likely and far less impactful enhancement and parental control scenarios.

A ban is also unlikely to be global. Attitudes towards genetic engineering vary substantially by country. In the US, surveys find that 4% to 14% of the population supports genetic engineering for enhancement purposes, while only around 40% support its use to prevent disease. Yet, as David Macer pointed out as early as 1994:

in India and Thailand, more than 50% of the 900+ respondents in each country supported enhancement of physical characters, intelligence, or making people more ethical.

While most of Europe has banned genetic engineering, and the US looks likely to follow suit, it’s likely to go forward in at least some parts of Asia. (That is, indeed, one of the premises of Nexus and its sequels.)

If the US and Europe do ban the technology, while other countries don’t, then genetic engineering will be accessible to a smaller set of people: Those who can afford to travel overseas and pay for it out-of-pocket. Access will become more unequal. And, in all likelihood, genetic engineering in Thailand, India, or China is likely to be less well regulated for safety than it would be in the US or Europe, increasing the risk of mishap.

The fear of genetic engineering is based on unrealistic views of the genome, the technology, and how parents would use it. If we let that fear drive us towards a ban on genetic engineering – rather than legalization and regulation – we’ll reduce safety and create more inequality of access.

I’ll give the penultimate word to Jennifer Doudna, a co-inventor of the technique (this is taken from a truly interesting set of responses to Nature Biotechnology’s questions, which they posed to a large number of leaders in the field):

Doudna, Carroll, Martin & Botchan: We don’t think an international ban would be effective by itself; it is likely some people would ignore it. Regulation is essential to ensure that dangerous, trivial or cosmetic uses are not pursued.

Legalize and regulate genetic engineering. That’s the way to boost safety and equality, and to guide the science and ethics.

Dr. Greene, working with a student, has also found that “squirrels understand ‘bird-ese,’ and birds understand ‘squirrel-ese.’ ” When red squirrels hear a call announcing a dangerous raptor in the air, or they see such a raptor, they will give calls that are acoustically “almost identical” to the birds, Dr. Greene said. (Researchers have found that eastern chipmunks are attuned to mobbing calls by the eastern tufted titmouse, a cousin of the chickadee.)

The titmice are in on it too.  The article has numerous further points of interest.

Don’t Fear the CRISPR


I’m honored to be here guest-blogging for the week. Thanks, Alex, for the warm welcome.

I want to start with a topic that has recently been in the news, and that I’ve written about in both fiction and non-fiction.

In April, Chinese scientists announced that they’d used the CRISPR gene editing technique to modify non-viable human embryos. The experiment focused on modifying the gene that causes the quite serious hereditary blood disease Beta-thalassemia.

You can read the paper here. Carl Zimmer has an excellent write-up here. Tyler has blogged about it here. And Alex here.

Marginal Revolution aside, the response to this experiment has been largely negative. Science and Nature, the two most prestigious scientific journals in the world, reportedly rejected the paper on ethical grounds. Francis Collins, director of the NIH, announced that NIH will not fund any CRISPR experiments that involve human embryos.

NIH will not fund any use of gene-editing technologies in human embryos. The concept of altering the human germline in embryos for clinical purposes has been debated over many years from many different perspectives, and has been viewed almost universally as a line that should not be crossed.

This is a mistake, for several reasons.

  1. The technology isn’t as mature as reported. Most responses to it are over-reactions.
  2. Parents are likely to use genetic technologies in the best interests of their children.
  3. Using gene editing to create ‘superhumans’ will be tremendously harder, riskier, and less likely to be embraced by parents than using it to prevent disease.
  4. A ban on research funding or clinical application will only worsen safety, inequality, and other concerns expressed about the research.

Today I’ll talk about the maturity of the technology. Tomorrow I’ll be back to discuss the other points. (You can read that now in Part 2: Don’t Fear Genetically Engineered Babies.)

CRISPR Babies Aren’t Near

Despite the public reaction (and the very real progress with CRISPR in other domains) we are not near a world of CRISPR gene-edited children.

First, the technique was focused on very early stage embryos made up of just a few cells. Genetically engineering an embryo at that very early stage is the only realistic way to ensure that the genetic changes reach all or most cells in the body. That limits the possible parents to those willing to go through in-vitro fertilization (IVF). It takes an average of roughly 3 IVF cycles, with numerous hormone injections and a painful egg extraction at each cycle, to produce a live birth. In some cases, it takes as many as 6 cycles. Even after 6 cycles, perhaps a third of women going through IVF will not have become pregnant (see table 3, here). IVF itself is a non-trivial deterrent to genetically engineering children.

Second, the Chinese experiment resulted in more dead embryos than successfully gene edited embryos. Of 86 original embryos, only 71 survived the process. 54 of those were tested to see if the gene had successfully inserted. Press reports have mentioned that 28 of those 54 tested embryos showed signs of CRISPR/Cas9 activity.

Yet only 4 embryos showed the intended genetic change. And even those 4 showed the new gene in only some of their cells, becoming ‘mosaics’ of multiple different genomes.

From the paper:

~80% of the embryos remained viable 48 h after injection (Fig. 2A), in agreement with low toxicity of Cas9 injection in mouse embryos  […]

ssDNA-mediated editing occurred only in 4 embryos… and the edited embryos were mosaic, similar to findings in other model systems.

So the risk of destroying an embryo (~20%) was substantially higher than the likelihood of successfully inserting a gene into the embryo (~5%) and much higher than the chance of inserting the gene into all of the embryo’s cells (0%).
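For reference, a quick recomputation of those rates from the counts reported above:

```python
# The rates above, recomputed from the counts reported for the experiment.
embryos_injected = 86
embryos_viable   = 71    # still viable 48 h after injection
embryos_tested   = 54    # tested for the insertion
showed_activity  = 28    # showed signs of CRISPR/Cas9 activity
correctly_edited = 4     # carried the intended edit, and only in some cells

print(f"lethality:              {1 - embryos_viable / embryos_injected:.0%}")  # ~17% (paper: ~80% viable)
print(f"Cas9 activity, tested:  {showed_activity / embryos_tested:.0%}")       # ~52% of tested embryos
print(f"intended edit, overall: {correctly_edited / embryos_injected:.0%}")    # ~5% of injected embryos
print(f"intended edit, tested:  {correctly_edited / embryos_tested:.0%}")      # ~7% of tested embryos
print(f"edited in all cells:    0 of {embryos_injected}")
```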

There were also off-target mutations. Doug Mortlock believes the off-target mutation rate was actually much lower than the scientists believed, but in general CRISPR has a significantly non-zero chance of inducing an unintended genetic change.

CRISPR is a remarkable breakthrough in gene editing, with applications to agriculture, gene therapy, pharmaceutical production, basic science, and more. But in many of those scenarios, error can be tolerated. Cells with off-target mutations can be weeded out to find the few perfectly edited ones. Getting one complete success out of tens, hundreds, or even thousands of modified cells can suffice, when that one cell can then be replicated to create a new cell line or seed line.

In human fertility, where embryos are created in single digit quantities rather than hundreds or thousands – and where we hope at least one of those embryos comes to term as a child – our tolerance for error is dramatically lower. The efficiency, survivability, and precision of CRISPR all need to rise substantially before many parents are likely to consider using it for an unborn embryo, even to prevent disease.

That is, indeed, the conclusion of the Chinese researchers, who wrote, “Our study underscores the challenges facing clinical applications of CRISPR/Cas9.”

More in part two of this post on the ethics of allowing genetic editing of the unborn, and why a ban in this area is counterproductive.

Tyler and I are delighted to have the great Ramez Naam guest blogging for us this week. Ramez spent many years at Microsoft leading teams working on search and artificial intelligence. His first book, More Than Human: Embracing the Promise of Biological Enhancement was a thought provoking look at the science and ethics of enhancing the human mind, body, and lifespan. More recently, I enjoyed Ramez’s The Infinite Resource: The Power of Ideas on a Finite Planet, an excellent Simonesque guide to climate change, energy and innovation.

Frankly, I didn’t expect much when I bought Ramez’s science fiction novel, Nexus. Good non-fiction authors don’t necessarily make good fiction authors. I was, however, blown away. Nexus is about a near future in which a new technology allows humans to take control of their biological operating system and communicate mind to mind. Nexus combines the rush of a great thriller, the fascination of hard science fiction, and the intrigue of a realistic world of spy-craft and geo-politics. I loved Nexus and immediately bought the second in the trilogy, Crux. I finished that quickly and I am now about half-way through the just-released Apex. Thus it’s great to have Ramez guest blogging as I race towards the end of his exciting trilogy! The trilogy is highly recommended.

Please welcome Ramez to MR.

Nexus Cover


NYTimes: While everyone welcomes Crispr-Cas9 as a strategy to treat disease, many scientists are worried that it could also be used to alter genes in human embryos, sperm or eggs in ways that can be passed from generation to generation. The prospect raises fears of a dystopian future in which scientists create an elite population of designer babies with enhanced intelligence, beauty or other traits.

Does the author really think that smart, beautiful people are a bad thing? Should we shoot the ones we have now? (It seems unlikely that we are at a local maximum).

Sometimes my fellow humans depress me. But I hope for better ones in the future.

NBC: A poker showdown between professional players and an artificial intelligence program has ended with a slim victory for the humans — so slim, in fact, that the scientists running the show said it’s effectively a tie. The event began two weeks ago, as the four pros — Bjorn Li, Doug Polk, Dong Kim and Jason Les — settled down at Rivers Casino in Pittsburgh to play a total of 80,000 hands of Heads-Up, No-Limit Texas Hold ’em with Claudico, a poker-playing bot made by Carnegie Mellon University computer science researchers.

…No actual money was being bet — the dollar amount was more of a running scoreboard, and at the end the humans were up a total of $732,713 (they will share a $100,000 purse based on their virtual winnings). That sounds like a lot, but over 80,000 hands and $170 million of virtual money being bet, three-quarters of a million bucks is pretty much a rounding error, the experimenters said, and can’t be considered a statistically significant victory.

The computer bluffed and bet against the best poker players the world has ever known, and over 80,000 hands the humans were not able to discover an exploitable flaw in the computer’s strategy. Thus, a significant win for the computer. Moreover, the computers will get better at a faster pace than the humans.
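A quick calculation, using only the figures from the NBC piece, shows why the margin counts as “effectively a tie”:

```python
# The humans' edge per hand is tiny relative to the money in play.
human_edge    = 732_713       # dollars, humans' combined final lead
hands_played  = 80_000
total_wagered = 170_000_000   # dollars of (virtual) bets over the match

print(round(human_edge / hands_played, 2))        # ~$9.16 per hand
print(f"{human_edge / total_wagered:.2%}")        # ~0.43% of money wagered
# With an average of over $2,000 wagered per hand, an edge this small is well
# within the noise of an 80,000-hand sample -- hence "not statistically
# significant."
```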

In my post on opaque intelligence I said that algorithms were becoming so sophisticated that we humans can’t really understand what they are doing, quipping that “any sufficiently advanced logic is indistinguishable from stupidity.” We see hints of that here:

“There are spots where it plays well and others where I just don’t understand it,” Polk said in a Carnegie Mellon news release….”Betting $19,000 to win a $700 pot just isn’t something that a person would do,” Polk continued.

Polk’s careful wording (he doesn’t say the computer’s strategy was wrong, but that it was inhuman and beyond his understanding) is a telling indicator of respect.