
The wisdom of Ramez Naam, on climate change

From his tweetstorm here are a few bits:

Our biggest climate problems – the sectors that are both large and that lack obvious solutions – are: a) Agriculture and land use changes (AFOLU in the graphic) and b) Manufacturing / Industry. Together, these are 45% of global emissions. And solutions are scarce. 11/

I’m not saying that clean electricity or transport are solved. They’re not. But in electricity, we have solar, wind, batteries growing & getting cheaper & on path for 70-80% decarbonization *at least*. Same with electric cars and trucks. We have momentum in those sectors. 12/

We do NOT have momentum in reducing carbon emissions of agriculture or manufacturing. In agriculture, livestock methane emissions + deforestation to graze livestock are biggest problems. And meat consumption is doubling in next 40 yrs. This should scare you more than coal. 13/

In industry, despite progress in recycling steel, *primary* steel production is still incredibly carbon intensive. As is cement. As is much of manufacturing. We haven’t reached the “solar cheaper than coal” or “EVs cheaper than gasoline” tipping points there. We need to. 14/

If the US is serious about climate policy, it ought to focus on these two sectors – agriculture and industry – that are soon to be the two largest emissions sources, and lack solutions. We should press to invent solutions, drive them down in price, and spread them globally. 16/

Do read the whole thing.

Ramez Naam to Guest Blog at MR!

Tyler and I are delighted to have the great Ramez Naam guest blogging for us this week. Ramez spent many years at Microsoft leading teams working on search and artificial intelligence. His first book, More Than Human: Embracing the Promise of Biological Enhancement, was a thought-provoking look at the science and ethics of enhancing the human mind, body, and lifespan. More recently, I enjoyed Ramez’s The Infinite Resource: The Power of Ideas on a Finite Planet, an excellent Simonesque guide to climate change, energy and innovation.

Frankly, I didn’t expect much when I bought Ramez’s science fiction novel, Nexus. Good non-fiction authors don’t necessarily make good fiction authors. I was, however, blown away. Nexus is about a near future in which a new technology allows humans to take control of their biological operating system and communicate mind to mind. Nexus combines the rush of a great thriller, the fascination of hard science fiction and the intrigue of a realistic world of spy-craft and geo-politics. I loved Nexus and immediately bought the second in the trilogy, Crux. I finished that quickly and I am now about half-way through the just-released Apex. Thus it’s great to have Ramez guest blogging as I race towards the end of his exciting trilogy! The trilogy is highly recommended.

Please welcome Ramez to MR.

Nexus Cover


Netflix economics and the future of Netflix

Ted Gioia writes:

Netflix’s market share has been declining steadily, and has now fallen below 50%. One estimate claims that the company’s share of consumers fell more than 30% in a single year. Netflix’s recent quarterly report was a disaster, spurring a share sell-off. You could easily conclude that “Netflix’s long awaited funeral is finally here”—as Bloomberg hinted in its blunt assessment of the results.

Of course the company is still worth quite a bit, so my own view is no more or no less optimistic than what the market indicates.  Still, it is worth asking what the equilibrium here looks like.  There is also AppleTV, Disney, Showtime, HBOMax, Hulu, AmazonPrime, and more.  I don’t think it quite works to argue that we all end up subscribing to all of them, so where are matters headed?  I see a few options:

1. Netflix and its competitors keep on producing new shows until all the rents are exhausted and those companies simply earn the going rate of return on capital, with possible ongoing rents on longstanding properties of real value (e.g., older Disney content).  These scenarios could involve either additional entry, or more (and better?) shows from the incumbent producers.

2. Due to economies of scale, one or two of those companies will produce the best shows and buy up the best content.  We end up with a monopoly or duopoly in the TV streaming market, noting there still would be vigorous competition from other media sources.

3. The companies are allowed to collude in some manner.  One option is they form a consortium where you get “all access” for a common fee, divvied out in proper proportion.  Would the antitrust authorities allow this?  Or might the mere potential for antitrust intervention make this a collusive solution but one without a strict monopolizing, profit-maximizing price?

4. The companies are allowed to collude in a more partial and less obvious manner.  Rather than a complete consortium, some of the smaller companies will evolve into “feeder” services for one or two of the larger companies.  Those smaller companies will rely increasingly on the feeder contracts and less on subscription revenue.  This perhaps resembles the duopoly solution analytically, though a head count would show more than two firms in the market.

It seems to me that only the first scenario is very bad for Netflix.  That said, it seems that along all of these paths short-run rent exhaustion is going on, and that short-run rent exhaustion is costly for Netflix.  They keep on having to pump out “stuff” to keep viewer attention.  It doesn’t matter that new shows are cheap, because as long as the market profits are there the “bar” for retaining customers will continue to grow.  Very few of their shows are geared to produce long-term customer loyalty toward that show – in contrast, people are still talking about Columbo!

Putting the law aside, which economic factors determine which solution will hold?  My intuition is that there are marketing economies of scale, but production diseconomies of scale, as the media companies grow too large and sclerotic.  So maybe that militates in favor of scenario #4?  That to me also suggests an “at least OK” future for Netflix.  The company would continue its investments and marketing and an easy to use website, while increasingly going elsewhere for superior content.

WWBTS?

A highly qualified reader emails me on heterogeneity

I won’t indent further, all the rest is from the reader:

“Some thoughts on your heterogeneity post. I agree this is still bafflingly under-discussed in “the discourse” & people are grasping onto policy arguments but ignoring the medical/bio aspects since ignorance of those is higher.

Nobody knows the answer right now, obviously, but I did want to call out two hypotheses that remain underrated:

1) Genetic variation

This means variation in the genetics of people (not the virus). We already know that (a) mutation in single genes can lead to extreme susceptibility to other infections, e.g. Epstein–Barr (usually harmless but sometimes severe), tuberculosis; (b) mutation in many genes can cause disease susceptibility to vary — diabetes (WHO link), heart disease are two examples, which is why when you go to the doctor you are asked if you have a family history of these.

It is unlikely that COVID was type (a), but it’s quite likely that COVID is type (b). In other words, I expect that there are a certain set of genes which (if you have the “wrong” variants) pre-dispose you to have a severe case of COVID, another set of genes which (if you have the “wrong” variants) predispose you to have a mild case, and if you’re lucky enough to have the right variants of these you are most likely going to get a mild or asymptomatic case.

There has been some good preliminary work on this which was also under-discussed:

You will note that the majority of doctors/nurses who died of COVID in the UK were South Asian. This is quite striking. https://www.nytimes.com/2020/04/08/world/europe/coronavirus-doctors-immigrants.html — Goldacre et al’s excellent paper also found this on a broader scale (https://www.medrxiv.org/content/10.1101/2020.05.06.20092999v1). From a probability point of view, this alone should make one suspect a genetic component.

There is plenty of other anecdotal evidence to suggest that this hypothesis is likely as well (e.g. entire families all getting severe cases of the disease suggesting a genetic component), happy to elaborate more but you get the idea.

Why don’t we know the answer yet? We unfortunately don’t have a great answer yet for lack of sufficient data, i.e. you need a dataset that has patient clinical outcomes + sequenced genomes, for a significant number of patients; with this dataset, you could then correlate the presence of genes {a,b,c} with severe disease outcomes and draw some tentative conclusions. These are known as GWAS (genome-wide association studies), as you probably know.

The dataset needs to be global in order to be representative. No such dataset exists, because of the healthcare data-sharing problem.
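The per-variant core of the GWAS the reader describes is just an association test between genotype and outcome. A minimal sketch, with entirely made-up allele counts (the real analysis runs millions of such tests and demands genome-wide significance thresholds):

```python
# Toy sketch of a single-variant association test, the per-SNP core of a
# GWAS: compare allele counts between severe and mild cases.
# All numbers here are hypothetical, purely for illustration.
from scipy.stats import chi2_contingency

# Rows: severe outcome, mild outcome; columns: risk allele, other allele.
table = [
    [130, 70],   # severe patients: allele counts (made up)
    [90, 110],   # mild patients: allele counts (made up)
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# A real GWAS repeats this (or a logistic regression with covariates)
# across millions of variants and corrects for multiple testing,
# conventionally requiring p < 5e-8.
```

This is only the statistical skeleton; the hard part, as the reader says, is assembling a large, globally representative dataset of outcomes plus genomes in the first place.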

2) Strain

It’s now mostly accepted that there are two “strains” of COVID, that the second arose in late January and contains a spike protein variant that wasn’t present in the original ancestral strain, and that this new strain (“D614G”) now represents ~97% of new isolates. The Sabeti lab (Harvard) paper from a couple of days ago is a good summary of the evidence. https://www.biorxiv.org/content/10.1101/2020.07.04.187757v1 — note that in cell cultures it is 3-9x more infective than the ancestral strain. Unlikely to be that big of a difference in humans for various reasons, but still striking/interesting.

Almost nobody was talking about this for months, and only recently was there any mainstream coverage of this. You’ve already covered it, so I won’t belabor the point.

So could this explain Asia/heterogeneities? We don’t know the answer, and indeed it is extremely hard to figure out the answer (because as you note each country had different policies, chance plays a role, there are simply too many factors overall).

I will, however, note that the distribution of each strain by geography is very easy to look up, and the results are at least suggestive:

  • Visit Nextstrain (Trevor Bedford’s project)
  • Select the most significant variant locus on the spike protein (614)
  • This gives you a global map of the balance between the more infective variant (G) and the less infective one (D) https://nextstrain.org/ncov/global?c=gt-S_614
  • The “G” strain has grown and dominated global cases everywhere, suggesting that it really is more infective
  • A cursory look here suggests that East Asia mostly has the less infective strain (in blue) whereas rest of the world is dominated by the more infective strain:
  • [Nextstrain map: East Asia dominated by the blue “D” strain]

– Compare Western Europe, dominated by the “yellow” (more infective) strain:

[Nextstrain map: Western Europe dominated by the yellow “G” strain]

You can do a similar analysis of West Coast/East Coast in February/March on Nextstrain and you will find a similar scenario there (NYC had the G variant, Seattle/SF had the D).

Again, the point of this email is not that I (or anyone!) knows the answers at this point, but I do think the above two hypotheses are not being discussed enough, largely because nobody feels qualified to reason about them. So everyone talks about mask-wearing or lockdowns instead. The parable of the streetlight effect comes to mind.”

India’s Demonetization–What is Next?

Devangshu Datta has a good run down of the basic facts of India’s demonetization of Rs 500 and Rs 1,000 notes:

About 85% of all currency in circulation has just been turned into coupons that can only be exchanged in specific places. These notes can be converted into currency again only with identity proofs (which hundreds of millions don’t have) and the additional hardship of standing in many queues for many hours.

Over half of India’s population doesn’t have any sort of bank account at the moment and about 300 million don’t have basic ID such as Aadhaar either and hence, cannot access the banking system at all. About 130 million Indians have mobile wallets (about 25 million have credit cards) and there are maybe 550-600 million debit cards in circulation. So access to cash is very, very important for average Indians.

…India is a cash economy. Well over 90% of all transactions are done in cash.

So how is India responding? Lineups at banks and ATMs are long. Tourists at the Taj Mahal have had minor troubles because ticket collectors aren’t accepting the demonetized notes. Demand for gold is up giving some jewelers a temporary albeit welcome windfall. Perhaps most telling is that at Zaveri Bazaar in Mumbai old notes were going at a 60% discount–that is a very heavy discount and indicates that the demonetization is, as I argued earlier, working as a tax on the black market. Other sources, however, indicate that discounts can be had for as low as 20%. Discounting, however, is illegal and the government is cracking down.

What, however, is the point? As Amit Varma argues:

…most truly rich people don’t keep their wealth in the form of cash, but in the form of real estate, gold, deposits in foreign bank accounts and other benaami investments. They will be largely unhurt…it is the poor who will be hurt the most by this.

And Ajay Shah notes:

Controlling corruption is not about blocking access to a non-traceable store of value. There will always be precious metals, US dollars, bitcoin, and jars of Tide…. Solving the problem of corruption requires deeper changes to institutions.

I see this as the main point of the exercise (quoting Datta):

The Income Tax and Excise Departments’ ability to gather data will increase exponentially. So will their discretionary powers, when they can query people who pay large sums in cash into their accounts.

That is not necessarily a bad thing. A key point is that only 1% of India’s population pays income tax–India would be a libertarian paradise if it had a libertarian government but it doesn’t. As a result, what low income tax payments mean is that India is forced to raise money in less efficient ways and to govern through regulation. Some of the only people who pay tax, for example, are those working in large multinational corporations but those are precisely the high-productivity sectors that need to grow.

India’s dilemma is that its high productivity sectors are taxed while its low-productivity sectors aren’t, so valuable resources are trapped in low productivity sectors. Modi knows this and if he is serious then his surprise demonetization will be followed by more efforts to bring India’s informal sector into the formal sector, leveling the playing field, and increasing total wealth.

Addendum: Mostly Economics has a discussion of two earlier demonetizations in India, one in 1946 and one in 1978; both focused on larger bills, unlike the current demonetization.

The Top Ten MR Posts of 2015

Here are the top ten MR posts from 2015, mostly as measured by page views. The number one viewed post was:

  1. Apple Should Buy a University. People really like to talk about Apple and this post was picked up all over the web, most notably at Reddit where it received over 2500 comments.

Next most highly viewed were my post(s) on the California water shortage.

2. The Economics of California’s Water Shortage, followed closely (at number 4) by The Misallocation of Water.

3. Our guest blogger Ramez Naam earned the number 3 spot with his excellent post on Crispr, Genetically Engineering Humans Isn’t So Scary.

5. My post explaining why Martin Shkreli was able to jack up the price of Daraprim and how this argued in favor of drug reciprocity was timely and got attention: Daraprim Generic Drug Regulation and Pharmaceutical Price-Jacking

6. What was Gary Becker’s Biggest Mistake? generated lots of views and discussion.

7. Tyler’s post Bully for Ben Carson provided plenty of fodder for argument.

8. The Effect of Police Body Cameras–they work and should be mandatory.

9. Do workers benefit when laws require that employers provide them with benefits? I discussed the economics in The Happy Meal Fallacy.

10. Finally, Tyler discussed What Economic Theories are Especially Misunderstood.

Posts on immigration tend to get the most comments. The Case for Getting Rid of Borders generated over 700 comments here and over 1700 comments and 57 thousand likes at The Atlantic where the longer article appeared.

Other highly viewed posts included two questions, Is it Worse if foreigners kill us? from Tyler and Should we Care if the Human Race Goes Extinct? from myself.

The Ferguson Kleptocracy and Tyler’s posts, Greece and Syriza lost the public relations battle and a Simple Primer for Understanding China’s downturn (see also Tyler’s excellent video on this topic) were also highly viewed.

I would also point to Tyler’s best of lists as worthy of review including Best Fiction of 2015, Best Non-Fiction of 2015 and Best Movies of 2015. You can also see Tyler’s book recommendations from previous years here.

The ghost in the machine

I visited two wonderful churches in Barcelona. The first, of course, was La Sagrada Familia. Ramez Naam put it best: this is “the kind of church that Elves from the 22nd Century would build.” I can’t add to that, however, so let me turn to the second church.

The Chapel Torre Girona at the Polytechnic University of Catalonia in Barcelona is home to the MareNostrum, not the world’s fastest but certainly the world’s most beautiful supercomputer.

BSCC

Although off the usual tourist path, it’s possible to get a tour if you arrange in advance. As you walk around the nave, the hum of the supercomputer mixes with Gregorian chants. What is this computer thinking, you wonder? Appropriately enough, the MareNostrum is thinking about the secrets of life and the universe.

In this picture, I managed to capture within the cooling apparatus a saintly apparition from a stained glass window.

The ghost in the machine.

ComputerSaint

Hat tip: Atlas Obscura.

Should we care if the human race goes extinct?

Stephen Hawking fears that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk and Bill Gates offer similar warnings. Many researchers in artificial intelligence are less concerned primarily because they think that the technology is not advancing as quickly as doom scenarios imagine, as Ramez Naam discussed. I have a different objection.

Why should we be worried about the end of the human race? Oh sure, there are some Terminator-like scenarios in which many future-people die in horrible ways and I’d feel good if we avoided those scenarios. The more likely scenario, however, is a glide path to extinction in which most people adopt a variety of bionic and germ-line modifications that over time evolve them into post-human cyborgs. A few holdouts to the old ways would remain but birth rates would be low and the non-adapted would be regarded as quaint, as we regard the Amish today. Eventually the last humans would go extinct and 46andMe customers would kid each other over how much of their DNA was of the primitive kind while holo-commercials advertised products “so easy a homo sapiens could do it”. I see nothing objectionable in this scenario.

Aside from greater plausibility, a glide path means that dealing with the Terminator scenario is easier. In the Terminator scenario, humans must continually be on guard. In the glide path scenario we only have to avoid the Terminator until we become them and then the problem is resolved with little fuss. No human race but no mass murder either.

More generally, what’s so great about the human race? I agree, there are lots of great things to point to such as the works of Shakespeare, Mozart, and Grothendieck. We should revere the greatness of the works, however, not the substrate on which the works were created. If what is great about humanity is the great things that we have done then the future may hold greater things yet. If we work to pass on our best values and aspirations to our technological progeny then we can be proud of future generations even if they differ from us in some ways. I delight to think of the marvels that future generations may produce. But I see no reason to hope that such marvels will be produced by beings indistinguishable from myself, indeed that would seem rather disappointing.

Can We Network (and Augment) the Human Brain?

How realistic is it to directly send data in and out of the brain? That is the core scientific innovation underlying my novels. From a longer piece in which I discuss neurotechnology (The Ultimate Interface: Your Brain):

Neural implants could accomplish things no external interface could: Virtual and augmented reality with all 5 senses (or more); augmentation of human memory, attention, and learning speed; even multi-sense telepathy — sharing what we see, hear, touch, and even perhaps what we think and feel with others.

What’s actually been done in humans?

In clinical trials today there are brain implants that have given men and women control of robot hands and fingers. [..] More radical technologies have sent vision straight into the brain. And recently, brain scanners have succeeded in deciphering what we’re looking at.

In animals, we’ve boosted cognitive performance:

In rats, we’ve restored damaged memories via a ‘hippocampus chip’ implanted in the brain. Human trials are starting this year. [..] This chip can actually improve memory. And researchers can capture the neural trace of an experience, record it, and play it back any time they want later on.

In monkeys, we’ve done better, using a brain implant to “boost monkey IQ” in pattern matching tests.

The real challenges remain hardware and brain surgery:

getting even 256 channels in generally requires invasive brain surgery, with its costs, healing time, and the very real risk that something will go wrong. That’s a huge impediment, making neural interfaces only viable for people who have a huge amount to gain, such as those who’ve been paralyzed or suffered brain damage.

Quite a bit of R&D is going into solving those hardware and surgery problems:

Researchers across the world, many funded by DARPA, are working to radically improve the interface hardware, boosting the number of neurons it can connect to (and thus making it smoother, higher resolution, and more precise), and making it far easier to implant. They’ve shown recently that carbon nanotubes, a thousand times thinner than current electrodes, have huge advantages for brain interfaces. They’re working on silk-substrate interfaces that melt into the brain. Researchers at Berkeley have a proposal for neural dust that would be sprinkled across your brain.

You can read the whole thing here: The Ultimate Interface: Your Brain.

Renewables Are Disruptive to Coal and Gas

Over the last 5 years, the price of new wind power in the US has dropped 58% and the price of new solar power has dropped 78%. That’s the conclusion of investment firm Lazard Capital. The key graph is here (here’s a version with US grid prices marked). Lazard’s full report is here.

Utility-scale solar in the West and Southwest is now at times cheaper than new natural gas plants. Here’s UBS on the most recent record set by solar. (Full UBS solar market flash here.)

We see the latest proposed PPA price for Xcel’s SPS subsidiary by NextEra (NEE) in NM as setting a new record low for utility-scale solar. [..] The 25-year contracts for the New Mexico projects have levelized costs of $41.55/MWh and $42.08/MWh.

That is 4.155 cents / kwh and 4.21 cents / kwh, respectively. Even after removing the federal solar Investment Tax Credit of 30%, the New Mexico solar deal is priced at 6 cents / kwh. By contrast, new natural gas electricity plants have costs between 6.4 to 9 cents per kwh, according to the EIA.

(Note that the same EIA report from April 2014 expects the lowest price solar power purchases in 2019 to be $91 / MWh, or 9.1 cents / kwh before subsidy. Solar prices are below that today.)
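The unit conversion and subsidy arithmetic above can be checked in a few lines. A simple sketch, assuming (as the text implicitly does) that removing the 30% Investment Tax Credit scales the contract price by 1 / (1 − 0.30); real PPA finance is more complicated:

```python
# Quick check of the price arithmetic above. Treats the 30% ITC as a
# simple price scaling, which is a rough approximation.
def mwh_to_cents_per_kwh(usd_per_mwh):
    # $/MWh -> cents/kWh: divide by 1000 kWh/MWh, multiply by 100 cents/$.
    return usd_per_mwh / 10.0

ppa = 41.55                                   # $/MWh, New Mexico contract
itc = 0.30                                    # federal Investment Tax Credit
subsidized = mwh_to_cents_per_kwh(ppa)        # 4.155 cents/kWh
unsubsidized = subsidized / (1 - itc)         # ~5.9, i.e. roughly 6 cents/kWh

print(f"{subsidized:.3f} cents/kWh with ITC, "
      f"~{unsubsidized:.1f} cents/kWh without")
```

That reproduces both figures quoted in the text: 4.155 cents/kWh subsidized, roughly 6 cents/kWh without the credit, still below the EIA's 6.4–9 cents/kWh range for new natural gas.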

The New Mexico plant is the latest in a string of ever-cheaper solar deals. SEPA’s 2014 solar market snapshot lists other low-cost solar Power Purchase Agreements. (Full report here.)

  • Austin Energy (Texas) signed a PPA for less than $50 per megawatt-hour (MWh) for 150 MW.
  • TVA (Alabama) signed a PPA for $61 per MWh.
  • Salt River Project (Arizona) signed a PPA for roughly $53 per MWh.

Wind prices are also at all-time lows. Here’s Lawrence Berkeley National Laboratory on the declining price of wind power (full report here):

After topping out at nearly $70/MWh in 2009, the average levelized long-term price from wind power sales agreements signed in 2013 fell to around $25/MWh.

After adding in the wind Production Tax Credit, that is still substantially below the price of new coal or natural gas.

Wind and solar compensate for each other’s variability, with solar providing power during the day, and wind primarily at dusk, dawn, and night.

Energy storage is also reaching disruptive prices at utility scale. The Tesla battery is cheap enough to replace natural gas ‘peaker’ plants. And much cheaper energy storage is on the way.

Renewable prices are not static, and generally head only in one direction: Down. Cost reductions are driven primarily by the learning curve. Solar and wind power prices improve reasonably predictably following a power law. Every doubling of cumulative solar production drives module prices down by 20%. Similar phenomena are observed in numerous manufactured goods and industrial activities,  dating back to the Ford Model T. Subsidies are a clumsy policy (I’d prefer a tax on carbon) but they’ve scaled deployment, which in turn has dropped present and future costs.
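The "20% per doubling" learning curve above is a power law in cumulative production, and it is easy to sketch:

```python
# Sketch of the solar learning curve: a 20% module-price drop per
# doubling of cumulative production, i.e. price ~ production^log2(0.8).
import math

LEARNING_RATE = 0.20                      # fractional price drop per doubling
EXPONENT = math.log2(1 - LEARNING_RATE)   # ~ -0.322

def module_price(cumulative, base_cumulative, base_price):
    """Power-law module price after scaling up cumulative production."""
    return base_price * (cumulative / base_cumulative) ** EXPONENT

p0 = 1.0   # normalized starting price at normalized cumulative production 1
for doublings in range(4):
    print(doublings, round(module_price(2 ** doublings, 1, p0), 3))
# Each doubling multiplies price by 0.8: 1.0, 0.8, 0.64, 0.512
```

This is why subsidies that scale deployment lower future costs: every doubling of cumulative production moves you further down the curve.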

By the way, the common refrain that solar prices are so low primarily because of Chinese dumping exaggerates the impact of Chinese manufacturing. Solar modules from the US, Japan, and SE Asia are all similar in price to those from China.

Fossil fuel technologies, by contrast to renewables, have a slower learning curve, and also compete with resource depletion curves as deposits are drawn down and new deposits must be found and accessed. From a 2007 paper by Farmer and Trancik, at the Santa Fe Institute, Dynamics of Technology Development in the Energy Sector:

Fossil fuel energy costs follow a complicated trajectory because they are influenced both by trends relating to resource scarcity and those relating to technology improvement. Technology improvement drives resource costs down, but the finite nature of deposits ultimately drives them up. […] Extrapolations suggest that if these trends continue as they have in the past, the costs of reaching parity between photovoltaics and current electricity prices are on the order of $200 billion

Renewable electricity prices are likely to continue to drop, particularly for solar, which has a faster learning curve and is earlier in its development than wind. The IEA expects utility scale solar prices to average 4 cents per kwh around the world by mid century, and that solar will be the number 1 source of electricity worldwide. (Full report here.)

Bear in mind that the IEA has also underestimated the growth of solar in every projection made over the last decade.

Germany’s Fraunhofer Institute expects solar in southern and central Europe (similar in sunlight to the bulk of the US) to drop below 4 cents per kwh in the next decade, and to reach 2 cents per kwh by mid century. (Their report is here. If you want to understand the trends in solar costs, read this link in particular.)

Analysts at wealth management firm Alliance Bernstein put this drop in prices into a long term context in their infamous “Welcome to the Terrordome” graph, which shows the cost of solar energy plunging from more than 10 times the cost of coal and natural gas to near parity. The full report outlines their reason for invoking terror. The key quote:

At the point where solar is displacing a material share of incremental oil and gas supply, global energy deflation will become inevitable: technology (with a falling cost structure) would be driving prices in the energy space.

They estimate that solar must grow by an order of magnitude, a point they see as a decade away. For oil, it may in fact be further away. Solar and wind are used to create electricity, and today, do not substantially compete with oil. For coal and natural gas, the point may be sooner.

Unless solar, wind, and energy storage innovations suddenly and unexpectedly falter, the technology-based falling cost structure of renewable electricity will eventually outprice fossil fuel electricity across most of the world. The question appears to be less “if” and more “when”.

What Do AI Researchers Think of the Risks of AI?

Elon Musk, Stephen Hawking, and Bill Gates have recently expressed concern that development of AI could lead to a ‘killer AI’ scenario, and potentially to the extinction of humanity.

None of them are AI researchers or have worked substantially with AI that I know of. (Disclosure: I know Gates slightly from my time at Microsoft, when I briefed him regularly on progress in search. I have great respect for all three men.)

What do actual AI researchers think of the risks of AI?

Here’s Oren Etzioni, a professor of computer science at the University of Washington, and now CEO of the Allen Institute for Artificial Intelligence:

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. It assumes that with intelligence comes free will, but I believe those two things are entirely different.

Here’s Michael Littman, an AI researcher and computer science professor at Brown University. (And former program chair for the Association for the Advancement of Artificial Intelligence):

there are indeed concerns about the near-term future of AI — algorithmic traders crashing the economy, or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population. […] These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic.

Here’s Yann LeCun, Facebook’s director of research, a legend in neural networks and machine learning (‘LeCun nets’ are a type of neural net named after him), and one of the world’s top experts in deep learning.  (This is from an Erik Sofge interview of several AI researchers on the risks of AI. Well worth reading.)

Some people have asked what would prevent a hypothetical super-intelligent autonomous benevolent A.I. to “reprogram” itself and remove its built-in safeguards against getting rid of humans. Most of these people are not themselves A.I. researchers, or even computer scientists.

Here’s Andrew Ng, who founded Google’s Google Brain project, and built the famous deep learning net that learned on its own to recognize cat videos, before he left to become Chief Scientist at Chinese search engine company Baidu:

“Computers are becoming more intelligent and that’s useful as in self-driving cars or speech recognition systems or search engines. That’s intelligence,” he said. “But sentience and consciousness is not something that most of the people I talk to think we’re on the path to.”

Here’s my own modest contribution, talking about the powerful disincentives for working towards true sentience. (I’m not an AI researcher, but I managed AI researchers and work in neural networks and other types of machine learning for many years.)

Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that’s extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dish washer, and your personal computer.

Genetically Engineering Humans Isn’t So Scary (Don’t Fear the CRISPR, Part 2)

Yesterday I outlined why genetically engineered children are not imminent. The Chinese CRISPR gene editing of embryos experiment was lethal to around 20% of embryos, inserted off-target errors into roughly 10% of embryos (with some debate there), and only produced the desired genetic change in around 5% of embryos, and even then only in a subset of cells in those embryos.

Over time, the technology will become more efficient and the combined error and lethality rates will drop, though likely never to zero.

Human genome editing should be regulated. But it should be regulated primarily to assure safety and informed consent, rather than being banned as it is in most developed countries (see figure 3). It’s implausible that human genome editing will lead to a Gattaca scenario, as I’ll show below. And bans only make the societal outcomes worse.

1. Enhancing Human Traits is Hard (And Gattaca is Science Fiction)

The primary fear of human germline engineering, beyond safety, appears to be a Gattaca-like scenario, where the rich are able to enhance the intelligence, looks, and other traits of their children, and the poor aren’t.

But boosting desirable traits such as intelligence and height to any significant degree is implausible, even with a very low error rate.

The largest-ever survey of genes associated with IQ found 69 separate genes, which together accounted for less than 8% of the variance in IQ scores, implying that hundreds if not thousands of genes are involved in IQ. (See paper, here.) As Nature reported, even the three genes with the largest individual impact added up to less than two points of IQ:

The three variants the researchers identified were each responsible for an average of 0.3 points on an IQ test. … That means that a person with two copies of each variant would score 1.8 points higher on an intelligence test than a person with none of them.
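
The arithmetic in that quote is easy to check. A quick sketch (the 0.3-point-per-copy effect size comes from the Nature quote above; nothing else is assumed):

```python
# Per the Nature quote: three variants, each worth ~0.3 IQ points per copy,
# and a person can carry up to two copies (one from each parent).
variants = 3
points_per_copy = 0.3
copies = 2

max_gain = variants * copies * points_per_copy
print(round(max_gain, 1))  # 1.8 points, matching the quote
```

Which underlines the point: even the strongest known variants, stacked in the best case, move the needle by less than two points.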

Height is similarly controlled by hundreds of genes. 697 genes together account for just one fifth of the heritability of adult height. (Paper at Nature Genetics here).

For major personality traits, identified genes account for less than 2% of variation, and it’s likely that hundreds or thousands of genes are involved.

Manipulating IQ, height, or personality is thus likely to involve making a very large number of genetic changes. Even then, genetic changes are likely to produce a moderate rather than overwhelming impact.

Conversely, for those unlucky enough to be conceived with the wrong genes, a single genetic change could prevent Cystic Fibrosis, or dramatically reduce the odds of Alzheimer’s disease, breast cancer or ovarian cancer, or cut the risk of heart disease by 30-40%.

Reducing disease is orders of magnitude easier and safer than augmenting abilities.

2. Parents are risk averse

We already trust parents to make hundreds of impactful decisions on behalf of their children: Schooling, diet and nutrition, neighborhood, screen time, media exposure, and religious upbringing are just a few.  Each of these has a larger impact on the average child – positive or negative – than one is likely to see from a realistic gene editing scenario any time in the next few decades.

And in general, parents are risk averse when their children are involved. Using gene editing to reduce the risk of disease is quite different from taking on new risks in an effort to boost a trait like height or IQ. That’s even more true when it takes dozens or hundreds of genetic tweaks to make even a relatively small change in those traits – and when every genetic tweak adds to the risk of an error.

(Parents could go for a more radical approach: Inserting extra copies of human genes, or transgenic variants not found in humans at all. It seems likely that parents will be even more averse to venturing into such uncharted waters with their children.)

If a trait like IQ could be safely increased to a marked degree, that would constitute a benefit to both the child and society. And while it would pose issues for inequality, the best solution might be to try to rectify inequality of access, rather than ban the technique. (Consider that IVF is subsidized in places as different as Singapore and Sweden.) But significant enhancements don’t appear likely any time soon.

Razib Khan points out one other thing we trust parents to do, which has a larger impact on the genes of a child than any plausible technology of the next few decades:

 “the best bet for having a smart child is picking a spouse with a deviated phenotype. Look for smart people to marry.”

3. Bans make safety and inequality worse

A ban on human germline gene editing would cut off medical applications that could reduce the risk of disease in an effort to control the far less likely and far less impactful enhancement and parental control scenarios.

A ban is also unlikely to be global. Attitudes towards genetic engineering vary substantially by country. In the US, surveys find 4% to 14% of the population supports genetic engineering for enhancement purposes. Only around 40% support its use to prevent disease. Yet, as David Macer pointed out as early as 1994:

in India and Thailand, more than 50% of the 900+ respondents in each country supported enhancement of physical characters, intelligence, or making people more ethical.

While most of Europe has banned genetic engineering, and the US looks likely to follow suit, it’s likely to go forward in at least some parts of Asia. (That is, indeed, one of the premises of Nexus and its sequels.)

If the US and Europe do ban the technology, while other countries don’t, then genetic engineering will be accessible to a smaller set of people: Those who can afford to travel overseas and pay for it out-of-pocket. Access will become more unequal. And, in all likelihood, genetic engineering in Thailand, India, or China is likely to be less well regulated for safety than it would be in the US or Europe, increasing the risk of mishap.

The fear of genetic engineering is based on unrealistic views of the genome, the technology, and how parents would use it. If we let that fear drive us towards a ban on genetic engineering – rather than legalization and regulation – we’ll reduce safety and create more inequality of access.

I’ll give the penultimate word to Jennifer Doudna, a co-inventor of the technique (this is taken from a truly interesting set of responses to Nature Biotechnology’s questions, which they posed to a large number of leaders in the field):

Doudna, Carroll, Martin & Botchan: We don’t think an international ban would be effective by itself; it is likely some people would ignore it. Regulation is essential to ensure that dangerous, trivial or cosmetic uses are not pursued.

Legalize and regulate genetic engineering. That’s the way to boost safety and equality, and to guide the science and ethics.

Don’t Fear the CRISPR

I’m honored to be here guest-blogging for the week. Thanks, Alex, for the warm welcome.

I want to start with a topic recently in the news, and that I’ve written about in both fiction and non-fiction.

In April, Chinese scientists announced that they’d used the CRISPR gene editing technique to modify non-viable human embryos. The experiment focused on modifying the gene that causes the quite serious hereditary blood disease Beta-thalassemia.

You can read the paper here. Carl Zimmer has an excellent write-up here. Tyler has blogged about it here. And Alex here.

Marginal Revolution aside, the response to this experiment has been largely negative. Science and Nature, the two most prestigious scientific journals in the world, reportedly rejected the paper on ethical grounds. Francis Collins, director of the NIH, announced that NIH will not fund any CRISPR experiments that involve human embryos.

NIH will not fund any use of gene-editing technologies in human embryos. The concept of altering the human germline in embryos for clinical purposes has been debated over many years from many different perspectives, and has been viewed almost universally as a line that should not be crossed.

This is a mistake, for several reasons.

  1. The technology isn’t as mature as reported. Most responses to it are over-reactions.
  2. Parents are likely to use genetic technologies in the best interests of their children.
  3. Using gene editing to create ‘superhumans’ will be tremendously harder, riskier, and less likely to be embraced by parents than using it to prevent disease.
  4. A ban on research funding or clinical application will only worsen safety, inequality, and other concerns expressed about the research.

Today I’ll talk about the maturity of the technology. Tomorrow I’ll be back to discuss the other points. (You can read that now in Part 2: Don’t Fear Genetically Engineered Babies.)

CRISPR Babies Aren’t Near

Despite the public reaction (and the very real progress with CRISPR in other domains) we are not near a world of CRISPR gene-edited children.

First, the technique was focused on very early stage embryos made up of just a few cells. Genetically engineering an embryo at that very early stage is the only realistic way to ensure that the genetic changes reach all or most cells in the body. That limits the possible parents to those willing to go through in-vitro fertilization (IVF). It takes an average of roughly 3 IVF cycles, with numerous hormone injections and a painful egg extraction at each cycle, to produce a live birth. In some cases, it takes as many as 6 cycles. Even after 6 cycles, perhaps a third of women going through IVF will not have become pregnant (see table 3, here). IVF itself is a non-trivial deterrent to genetically engineering children.
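
A rough illustration of why that table looks the way it does. If roughly a third of women are still not pregnant after 6 cycles, and we assume (purely for the sketch; real per-cycle success rates vary from woman to woman and cycle to cycle) the same independent success probability each cycle, the implied per-cycle rate is modest:

```python
# If (1 - p)**6 ≈ 1/3, i.e. about a third of women remain not pregnant
# after 6 cycles, solve for the implied constant per-cycle success rate p.
# This constant-p assumption is illustrative only.
still_not_pregnant_after_6 = 1 / 3
p = 1 - still_not_pregnant_after_6 ** (1 / 6)
print(f"implied per-cycle success rate: {p:.1%}")  # roughly 17%
```

Even under this simplified model, a prospective parent faces a long, uncertain slog of injections and extractions before any gene editing could even begin.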

Second, the Chinese experiment resulted in more dead embryos than successfully gene edited embryos. Of 86 original embryos, only 71 survived the process. 54 of those were tested to see if the gene had successfully inserted. Press reports have mentioned that 28 of those 54 tested embryos showed signs of CRISPR/Cas9 activity.

Yet only 4 embryos showed the intended genetic change. And even those 4 showed the new gene in only some of their cells, becoming ‘mosaics’ of multiple different genomes.

From the paper:

~80% of the embryos remained viable 48 h after injection (Fig. 2A), in agreement with low toxicity of Cas9 injection in mouse embryos  […]

ssDNA-mediated editing occurred only in 4 embryos… and the edited embryos were mosaic, similar to findings in other model systems.

So the risk of destroying an embryo (~20%) was substantially higher than the likelihood of successfully inserting a gene into the embryo (~5%) and much higher than the chance of inserting the gene into all of the embryo’s cells (0%).
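
The paper’s headline numbers, as summarized above, reduce to a short calculation:

```python
# Figures from the Chinese CRISPR embryo experiment, per the summary above.
injected = 86
survived = 71
tested = 54
crispr_activity = 28   # embryos showing signs of CRISPR/Cas9 activity
intended_edit = 4      # embryos with the intended edit (all of them mosaic)

death_rate = (injected - survived) / injected
edit_rate = intended_edit / injected
full_success_rate = 0 / injected  # no embryo carried the edit in every cell

print(f"embryo loss:       {death_rate:.0%}")        # ~17%, the '~20%' above
print(f"intended edit:     {edit_rate:.0%}")         # ~5%
print(f"edit in all cells: {full_success_rate:.0%}") # 0%
```

A procedure roughly three to four times more likely to kill an embryo than to edit it, with zero complete successes, is nowhere near clinical use.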

There were also off-target mutations. Doug Mortlock believes the off-target mutation rate was actually much lower than the scientists believed, but in general CRISPR has a significantly non-zero chance of inducing an unintended genetic change.

CRISPR is a remarkable breakthrough in gene editing, with applications to agriculture, gene therapy, pharmaceutical production, basic science, and more. But in many of those scenarios, error can be tolerated. Cells with off-target mutations can be weeded out to find the few perfectly edited ones. Getting one complete success out of tens, hundreds, or even thousands of modified cells can suffice, when that one cell can then be replicated to create a new cell line or seed line.

In human fertility, where embryos are created in single digit quantities rather than hundreds or thousands – and where we hope at least one of those embryos comes to term as a child – our tolerance for error is dramatically lower. The efficiency, survivability, and precision of CRISPR all need to rise substantially before many parents are likely to consider using it for an unborn embryo, even to prevent disease.

That is, indeed, the conclusion of the Chinese researchers, who wrote, “Our study underscores the challenges facing clinical applications of CRISPR/Cas9.”

More in part two of this post on the ethics of allowing genetic editing of the unborn, and why a ban in this area is counterproductive.

How disruptive will the Tesla battery be?

Ramez Naam has an opinion, backed up by some reasonable estimates:

For most of the US, this battery isn’t quite cheap enough. But it’s in the right ballpark. And that means a lot. Net Metering plans in the US are filling up. California’s may be full by the end of 2016 or 2017, modulo additional legal changes. That would severely impact the economics of solar. But another factor of 2 price reduction in storage would make it cheap enough that, as Net Metering plans fill up or are reduced around the country, the battery would allow solar owners to save power for the evening or night-time hours in a cost effective way.

That is also a policy tool in debates with utilities. If they see Net Metering reductions as a tool to slow rooftop solar, they’ll be forced to confront the fact that solar owners with cheap batteries are less dependent on Net Metering.

That same factor of 2 price reduction would also make batteries effective for day-night electricity cost arbitrage, wherein customers fill up the battery with cheap grid power at night, and use stored battery power instead of the grid during the day. In California, where there’s a 19 cent gap between middle of the night power and peak-of-day power, those economics look very attractive.

And the cost of batteries is plunging fast. Tesla will get that 2x price reduction within 3-5 years, if not faster.
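
The arbitrage case Naam describes can be sketched in a few lines. The 19-cent gap is from the quote above; the round-trip efficiency and installed battery cost are my own illustrative assumptions, not figures from the post:

```python
# Day-night arbitrage back-of-the-envelope, using the 19-cent California
# price gap quoted above. Efficiency and cost figures are assumptions.
price_gap = 0.19             # $/kWh between off-peak and peak (from the quote)
round_trip_efficiency = 0.90 # assumed battery round-trip efficiency
cycles_per_year = 365        # one charge/discharge cycle per day

# Revenue per kWh of usable capacity per year:
annual_revenue = price_gap * round_trip_efficiency * cycles_per_year
print(f"${annual_revenue:.0f} per kWh-year")  # $62 per kWh-year

# Simple payback for an assumed $350/kWh installed battery cost:
battery_cost = 350.0
print(f"payback: {battery_cost / annual_revenue:.1f} years")  # 5.6 years
```

Under those assumptions the payback lands within a plausible battery lifetime, which is why a further 2x price reduction changes the economics so decisively.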

Read the whole thing, and note the discussion of India too.
