Science

How realistic is it to directly send data in and out of the brain? That is the core scientific innovation underlying my novels. Here’s an excerpt from a longer piece in which I discuss neurotechnology (The Ultimate Interface: Your Brain):

Neural implants could accomplish things no external interface could: Virtual and augmented reality with all 5 senses (or more); augmentation of human memory, attention, and learning speed; even multi-sense telepathy — sharing what we see, hear, touch, and even perhaps what we think and feel with others.

What’s actually been done in humans?

In clinical trials today there are brain implants that have given men and women control of robot hands and fingers. [..] More radical technologies have sent vision straight into the brain. And recently, brain scanners have succeeded in deciphering what we’re looking at.

In animals, we’ve boosted cognitive performance:

In rats, we’ve restored damaged memories via a ‘hippocampus chip’ implanted in the brain. Human trials are starting this year. [..] This chip can actually improve memory. And researchers can capture the neural trace of an experience, record it, and play it back any time they want later on.

In monkeys, we’ve done better, using a brain implant to “boost monkey IQ” in pattern matching tests.

The real challenges remain hardware and brain surgery:

getting even 256 channels in generally requires invasive brain surgery, with its costs, healing time, and the very real risk that something will go wrong. That’s a huge impediment, making neural interfaces only viable for people who have a huge amount to gain, such as those who’ve been paralyzed or suffered brain damage.

Quite a bit of R&D is going into solving those hardware and surgery problems:

Researchers across the world, many funded by DARPA, are working to radically improve the interface hardware, boosting the number of neurons it can connect to (and thus making it smoother, higher resolution, and more precise), and making it far easier to implant. They’ve shown recently that carbon nanotubes, a thousand times thinner than current electrodes, have huge advantages for brain interfaces. They’re working on silk-substrate interfaces that melt into the brain. Researchers at Berkeley have a proposal for neural dust that would be sprinkled across your brain.

You can read the whole thing here: The Ultimate Interface: Your Brain.

Over the last 5 years, the price of new wind power in the US has dropped 58% and the price of new solar power has dropped 78%. That’s the conclusion of the investment firm Lazard. The key graph is here (here’s a version with US grid prices marked). Lazard’s full report is here.

Utility-scale solar in the West and Southwest is now at times cheaper than new natural gas plants. Here’s UBS on the most recent record set by solar. (Full UBS solar market flash here.)

We see the latest proposed PPA price for Xcel’s SPS subsidiary by NextEra (NEE) in NM as setting a new record low for utility-scale solar. [..] The 25-year contracts for the New Mexico projects have levelized costs of $41.55/MWh and $42.08/MWh.

That is 4.155 cents/kWh and 4.21 cents/kWh, respectively. Even after backing out the federal solar Investment Tax Credit of 30%, the New Mexico solar deal is priced at roughly 6 cents/kWh. By contrast, new natural gas electricity plants have costs between 6.4 and 9 cents per kWh, according to the EIA.

(Note that the same EIA report from April 2014 expected the lowest-priced solar power purchases in 2019 to be $91/MWh, or 9.1 cents/kWh before subsidy. Solar prices are already below that today.)
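The unit conversion and subsidy math above are easy to check. A quick sketch (note that treating the 30% ITC as a simple price divisor is a rough approximation; the credit actually applies to capital cost):

```python
# Convert the New Mexico PPA prices from $/MWh to cents/kWh, then
# approximate the unsubsidized price by backing out the 30% federal
# Investment Tax Credit (ITC).
def mwh_to_cents_per_kwh(usd_per_mwh):
    # $1/MWh = 100 cents / 1,000 kWh = 0.1 cents/kWh
    return usd_per_mwh * 0.1

def without_itc(subsidized_cents_per_kwh, itc=0.30):
    # Rough approximation: divide the subsidized price by (1 - ITC).
    return subsidized_cents_per_kwh / (1 - itc)

for ppa in (41.55, 42.08):
    cents = mwh_to_cents_per_kwh(ppa)
    print(f"${ppa}/MWh = {cents:.3f} c/kWh, "
          f"~{without_itc(cents):.1f} c/kWh without the ITC")
```

Both contracts come out around 6 cents/kWh unsubsidized, which is how they land just below the EIA’s 6.4-9 cent range for new natural gas.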

The New Mexico plant is the latest in a string of ever-cheaper solar deals. SEPA’s 2014 solar market snapshot lists other low-cost solar Power Purchase Agreements. (Full report here.)

  • Austin Energy (Texas) signed a PPA for less than $50 per megawatt-hour (MWh) for 150 MW.
  • TVA (Alabama) signed a PPA for $61 per MWh.
  • Salt River Project (Arizona) signed a PPA for roughly $53 per MWh.

Wind prices are also at all-time lows. Here’s Lawrence Berkeley National Laboratory on the declining price of wind power (full report here):

After topping out at nearly $70/MWh in 2009, the average levelized long-term price from wind power sales agreements signed in 2013 fell to around $25/MWh.

Even after adding back the wind Production Tax Credit, that is still substantially below the price of new coal or natural gas.

Wind and solar compensate for each other’s variability, with solar providing power during the day, and wind primarily at dusk, dawn, and night.

Energy storage is also reaching disruptive prices at utility scale. The Tesla battery is cheap enough to replace natural gas ‘peaker’ plants. And much cheaper energy storage is on the way.

Renewable prices are not static, and generally head in only one direction: down. Cost reductions are driven primarily by the learning curve: solar and wind power prices fall reasonably predictably following a power law, with every doubling of cumulative solar production driving module prices down by 20%. Similar patterns appear in numerous manufactured goods and industrial activities, dating back to the Ford Model T. Subsidies are a clumsy policy (I’d prefer a tax on carbon), but they’ve scaled deployment, which in turn has dropped present and future costs.
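That learning-curve relationship (often called Wright’s law) is simple to sketch; the 20% rate is an empirical estimate for solar modules, not a physical constant:

```python
import math

LEARNING_RATE = 0.20  # ~20% module-price drop per doubling of cumulative production

def module_price(price_0, cumulative, cumulative_0):
    # Wright's law: each doubling of cumulative production cuts
    # the price by a fixed fraction (the learning rate).
    doublings = math.log2(cumulative / cumulative_0)
    return price_0 * (1 - LEARNING_RATE) ** doublings

# One doubling: the price falls to 80% of where it started.
print(module_price(1.00, 2, 1))  # 0.8
# Doublings needed to cut prices in half at a 20% learning rate: ~3.1
print(math.log(0.5) / math.log(1 - LEARNING_RATE))
```

At a 20% learning rate, roughly three doublings of cumulative production halve the price, which is why scaling deployment matters so much for future costs.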

By the way, the common refrain that solar prices are so low primarily because of Chinese dumping exaggerates the impact of Chinese manufacturing. Solar modules from the US, Japan, and SE Asia are all similar in price to those from China.

Fossil fuel technologies, by contrast to renewables, have a slower learning curve, and they also compete against resource-depletion curves as deposits are drawn down and new deposits must be found and accessed. From a 2007 Santa Fe Institute paper by Farmer and Trancik, Dynamics of Technology Development in the Energy Sector:

Fossil fuel energy costs follow a complicated trajectory because they are influenced both by trends relating to resource scarcity and those relating to technology improvement. Technology improvement drives resource costs down, but the finite nature of deposits ultimately drives them up. […] Extrapolations suggest that if these trends continue as they have in the past, the costs of reaching parity between photovoltaics and current electricity prices are on the order of $200 billion

Renewable electricity prices are likely to continue to drop, particularly for solar, which has a faster learning curve and is earlier in its development than wind. The IEA expects utility-scale solar prices to average 4 cents per kWh around the world by mid-century, and solar to be the number-one source of electricity worldwide. (Full report here.)

Bear in mind that the IEA has also underestimated the growth of solar in every projection made over the last decade.

Germany’s Fraunhofer Institute expects solar in southern and central Europe (similar in sunlight to the bulk of the US) to drop below 4 cents per kWh in the next decade, and to reach 2 cents per kWh by mid-century. (Their report is here. If you want to understand the trends in solar costs, read this link in particular.)

Analysts at the wealth-management firm AllianceBernstein put this drop in prices into long-term context in their infamous “Welcome to the Terrordome” graph, which shows the cost of solar energy plunging from more than 10 times the cost of coal and natural gas to near parity. The full report outlines their reason for invoking terror. The key quote:

At the point where solar is displacing a material share of incremental oil and gas supply, global energy deflation will become inevitable: technology (with a falling cost structure) would be driving prices in the energy space.

They estimate that solar must grow by an order of magnitude, a point they see as a decade away. For oil, it may in fact be further away. Solar and wind are used to create electricity, and today, do not substantially compete with oil. For coal and natural gas, the point may be sooner.

Unless solar, wind, and energy storage innovations suddenly and unexpectedly falter, the technology-based falling cost structure of renewable electricity will eventually outprice fossil fuel electricity across most of the world. The question appears to be less “if” and more “when”.

Elon Musk, Stephen Hawking, and Bill Gates have recently expressed concern that development of AI could lead to a ‘killer AI’ scenario, and potentially to the extinction of humanity.

None of them is an AI researcher, nor, to my knowledge, has any of them worked substantially with AI. (Disclosure: I know Gates slightly from my time at Microsoft, when I briefed him regularly on progress in search. I have great respect for all three men.)

What do actual AI researchers think of the risks of AI?

Here’s Oren Etzioni, a professor of computer science at the University of Washington, and now CEO of the Allen Institute for Artificial Intelligence:

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. It assumes that with intelligence comes free will, but I believe those two things are entirely different.

Here’s Michael Littman, an AI researcher and computer science professor at Brown University (and former program chair of the Association for the Advancement of Artificial Intelligence):

there are indeed concerns about the near-term future of AI — algorithmic traders crashing the economy, or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population. […] These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic.

Here’s Yann LeCun, Facebook’s director of AI research, a legend in neural networks and machine learning (the ‘LeNet’ convolutional networks are named after him), and one of the world’s top experts in deep learning. (This is from an Erik Sofge interview of several AI researchers on the risks of AI; well worth reading.)

Some people have asked what would prevent a hypothetical super-intelligent autonomous benevolent A.I. to “reprogram” itself and remove its built-in safeguards against getting rid of humans. Most of these people are not themselves A.I. researchers, or even computer scientists.

Here’s Andrew Ng, who founded the Google Brain project and built the famous deep learning network that learned on its own to recognize cat videos, before leaving to become Chief Scientist at the Chinese search engine company Baidu:

“Computers are becoming more intelligent and that’s useful as in self-driving cars or speech recognition systems or search engines. That’s intelligence,” he said. “But sentience and consciousness is not something that most of the people I talk to think we’re on the path to.”

Here’s my own modest contribution, talking about the powerful disincentives to work towards true sentience. (I’m not an AI researcher, but I managed AI researchers and work on neural networks and other types of machine learning for many years.)

Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that’s extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dish washer, and your personal computer.

Yesterday I outlined why genetically engineered children are not imminent. The Chinese CRISPR embryo-editing experiment was lethal to around 20% of embryos, inserted off-target errors into roughly 10% of embryos (with some debate there), and produced the desired genetic change in only around 5% of embryos, and even then only in a subset of cells within those embryos.

Over time, the technology will become more efficient and the combined error and lethality rates will drop, though likely never to zero.

Human genome editing should be regulated. But it should be regulated primarily to assure safety and informed consent, rather than banned outright, as it is in most developed countries (see figure 3). It’s implausible that human genome editing will lead to a Gattaca scenario, as I’ll show below. And bans only make the societal outcomes worse.

1. Enhancing Human Traits is Hard (And Gattaca is Science Fiction)

The primary fear of human germline engineering, beyond safety, appears to be a Gattaca-like scenario, where the rich are able to enhance the intelligence, looks, and other traits of their children, and the poor aren’t.

But boosting desirable traits such as intelligence and height to any significant degree is implausible, even with a very low error rate.

The largest-ever survey of genes associated with IQ found 69 separate genes, which together accounted for less than 8% of the variance in IQ scores, implying that hundreds, if not thousands, of genes are involved in IQ. (See the paper here.) As Nature reported, even the three genes with the largest individual impact added up to less than two points of IQ:

The three variants the researchers identified were each responsible for an average of 0.3 points on an IQ test. … That means that a person with two copies of each variant would score 1.8 points higher on an intelligence test than a person with none of them.
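Nature’s arithmetic is straightforward to verify, three variants, two inherited copies of each, about 0.3 IQ points per copy:

```python
# Upper bound from the Nature figures: a person carrying both copies
# of all three top IQ-associated variants.
variants = 3
copies_each = 2
points_per_copy = 0.3  # average effect of one copy of one variant

max_gain = variants * copies_each * points_per_copy
print(round(max_gain, 1))  # 1.8 IQ points
```

Less than two IQ points from the three strongest known variants combined is why stacking enough edits to matter would require an enormous number of changes.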

Height is similarly controlled by hundreds of genes: 697 genes together account for just one-fifth of the heritability of adult height. (Paper at Nature Genetics here.)

For major personality traits, identified genes account for less than 2% of variation, and it’s likely that hundreds or thousands of genes are involved.

Manipulating IQ, height, or personality is thus likely to involve making a very large number of genetic changes. Even then, genetic changes are likely to produce a moderate rather than overwhelming impact.

Conversely, for those unlucky enough to be conceived with the wrong genes, a single genetic change could prevent Cystic Fibrosis, or dramatically reduce the odds of Alzheimer’s disease, breast cancer or ovarian cancer, or cut the risk of heart disease by 30-40%.

Reducing disease is orders of magnitude easier and safer than augmenting abilities.

2. Parents are risk averse

We already trust parents to make hundreds of impactful decisions on behalf of their children: Schooling, diet and nutrition, neighborhood, screen time, media exposure, and religious upbringing are just a few.  Each of these has a larger impact on the average child – positive or negative – than one is likely to see from a realistic gene editing scenario any time in the next few decades.

And in general, parents are risk averse when their children are involved. Using gene editing to reduce the risk of disease is quite different from taking on new risks in an effort to boost a trait like height or IQ. That’s even more true when it takes dozens or hundreds of genetic tweaks to make even a relatively small change in those traits – and when every genetic tweak adds to the risk of an error.

(Parents could go for a more radical approach: Inserting extra copies of human genes, or transgenic variants not found in humans at all. It seems likely that parents will be even more averse to venturing into such uncharted waters with their children.)

If a trait like IQ could be safely increased to a marked degree, that would benefit both the child and society. And while it would pose issues for inequality, the best solution might be to rectify inequality of access, rather than ban the technique. (Consider that IVF is subsidized in places as different as Singapore and Sweden.) But significant enhancements don’t appear likely anytime soon.

Razib Khan points out one other thing we trust parents to do, which has a larger impact on the genes of a child than any plausible technology of the next few decades:

 “the best bet for having a smart child is picking a spouse with a deviated phenotype. Look for smart people to marry.”

3. Bans make safety and inequality worse

A ban on human germline gene editing would cut off medical applications that could reduce the risk of disease in an effort to control the far less likely and far less impactful enhancement and parental control scenarios.

A ban is also unlikely to be global. Attitudes towards genetic engineering vary substantially by country. In the US, surveys find that 4% to 14% of the population supports genetic engineering for enhancement purposes, and only around 40% support its use to prevent disease. Yet, as David Macer pointed out as early as 1994:

in India and Thailand, more than 50% of the 900+ respondents in each country supported enhancement of physical characters, intelligence, or making people more ethical.

While most of Europe has banned genetic engineering, and the US looks likely to follow suit, it’s likely to go forward in at least some parts of Asia. (That is, indeed, one of the premises of Nexus and its sequels.)

If the US and Europe ban the technology while other countries don’t, then genetic engineering will be accessible to a smaller set of people: those who can afford to travel overseas and pay for it out of pocket. Access will become more unequal. And genetic engineering in Thailand, India, or China is likely to be less well regulated for safety than it would be in the US or Europe, increasing the risk of mishap.

The fear of genetic engineering is based on unrealistic views of the genome, the technology, and how parents would use it. If we let that fear drive us towards a ban on genetic engineering – rather than legalization and regulation – we’ll reduce safety and create more inequality of access.

I’ll give the penultimate word to Jennifer Doudna, a co-inventor of the technique (this is taken from a truly interesting set of responses to Nature Biotechnology’s questions, which they posed to a large number of leaders in the field):

Doudna, Carroll, Martin & Botchan: We don’t think an international ban would be effective by itself; it is likely some people would ignore it. Regulation is essential to ensure that dangerous, trivial or cosmetic uses are not pursued.

Legalize and regulate genetic engineering. That’s the way to boost safety and equality, and to guide the science and ethics.

Dr. Greene, working with a student, has also found that “squirrels understand ‘bird-ese,’ and birds understand ‘squirrel-ese.’ ” When red squirrels hear a call announcing a dangerous raptor in the air, or they see such a raptor, they will give calls that are acoustically “almost identical” to the birds, Dr. Greene said. (Researchers have found that eastern chipmunks are attuned to mobbing calls by the eastern tufted titmouse, a cousin of the chickadee.)

The titmice are in on it too.  The article has numerous further points of interest.

Don’t Fear the CRISPR

May 18, 2015 at 7:00 am in Medicine, Science | Permalink

I’m honored to be here guest-blogging for the week. Thanks, Alex, for the warm welcome.

I want to start with a topic recently in the news, and that I’ve written about in both fiction and non-fiction.

In April, Chinese scientists announced that they’d used the CRISPR gene editing technique to modify non-viable human embryos. The experiment focused on modifying the gene that causes the quite serious hereditary blood disease Beta-thalassemia.

You can read the paper here. Carl Zimmer has an excellent write-up here. Tyler has blogged about it here. And Alex here.

Marginal Revolution aside, the response to this experiment has been largely negative. Science and Nature, the two most prestigious scientific journals in the world, reportedly rejected the paper on ethical grounds. Francis Collins, director of the NIH, announced that NIH will not fund any CRISPR experiments that involve human embryos.

NIH will not fund any use of gene-editing technologies in human embryos. The concept of altering the human germline in embryos for clinical purposes has been debated over many years from many different perspectives, and has been viewed almost universally as a line that should not be crossed.

This is a mistake, for several reasons.

  1. The technology isn’t as mature as reported. Most responses to it are over-reactions.
  2. Parents are likely to use genetic technologies in the best interests of their children.
  3. Using gene editing to create ‘superhumans’ will be tremendously harder, riskier, and less likely to be embraced by parents than using it to prevent disease.
  4. A ban on research funding or clinical application will only worsen safety, inequality, and other concerns expressed about the research.

Today I’ll talk about the maturity of the technology. Tomorrow I’ll be back to discuss the other points. (You can read that now in Part 2: Don’t Fear Genetically Engineered Babies.)

CRISPR Babies Aren’t Near

Despite the public reaction (and the very real progress with CRISPR in other domains) we are not near a world of CRISPR gene-edited children.

First, the technique was focused on very early stage embryos made up of just a few cells. Genetically engineering an embryo at that very early stage is the only realistic way to ensure that the genetic changes reach all or most cells in the body. That limits the possible parents to those willing to go through in-vitro fertilization (IVF). It takes an average of roughly 3 IVF cycles, with numerous hormone injections and a painful egg extraction at each cycle, to produce a live birth. In some cases, it takes as many as 6 cycles. Even after 6 cycles, perhaps a third of women going through IVF will not have become pregnant (see table 3, here). IVF itself is a non-trivial deterrent to genetically engineering children.
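As a purely illustrative calculation (it assumes independent cycles with a constant per-cycle success rate, which real IVF does not satisfy): if about a third of women are still not pregnant after 6 cycles, the implied per-cycle success rate is modest.

```python
# Illustrative only: assume independent IVF cycles with a constant
# per-cycle pregnancy rate p. If ~1/3 of women are still not
# pregnant after 6 cycles, then (1 - p)**6 ~= 1/3.
p_not_pregnant_after_6 = 1 / 3
p_per_cycle = 1 - p_not_pregnant_after_6 ** (1 / 6)
print(f"implied per-cycle success rate: {p_per_cycle:.1%}")  # ~16.7%
```

A roughly one-in-six chance per grueling cycle is part of why IVF alone deters casual use of embryo engineering.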

Second, the Chinese experiment resulted in more dead embryos than successfully gene edited embryos. Of 86 original embryos, only 71 survived the process. 54 of those were tested to see if the gene had successfully inserted. Press reports have mentioned that 28 of those 54 tested embryos showed signs of CRISPR/Cas9 activity.

Yet only 4 embryos showed the intended genetic change. And even those 4 showed the new gene in only some of their cells, becoming ‘mosaics’ of multiple different genomes.

From the paper:

~80% of the embryos remained viable 48 h after injection (Fig. 2A), in agreement with low toxicity of Cas9 injection in mouse embryos  […]

ssDNA-mediated editing occurred only in 4 embryos… and the edited embryos were mosaic, similar to findings in other model systems.

So the risk of destroying an embryo (~20%) was substantially higher than the likelihood of successfully inserting a gene into the embryo (~5%) and much higher than the chance of inserting the gene into all of the embryo’s cells (0%).
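Those percentages fall straight out of the reported embryo counts (a sketch; the ~20% loss figure in the text is the paper’s rounded viability number, while the raw counts give about 17%, and the edit rate is approximate because only 54 embryos were sequenced):

```python
# Embryo counts from the Chinese CRISPR paper, as reported above.
injected = 86      # embryos injected with CRISPR/Cas9
viable = 71        # still viable after the procedure
tested = 54        # sequenced for the intended edit
cas9_active = 28   # showed signs of CRISPR/Cas9 activity
edited = 4         # carried the intended change (all mosaic)

loss_rate = 1 - viable / injected  # ~17% by raw count, ~20% as reported
edit_rate = edited / injected      # ~5%, approximate: only 54 were tested
print(f"destroyed: {loss_rate:.0%}, intended edit: {edit_rate:.1%}, "
      f"edited in every cell: 0%")
```

However you round, the embryo was several times more likely to be destroyed than to be successfully edited.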

There were also off-target mutations. Doug Mortlock believes the off-target mutation rate was actually much lower than the scientists believed, but in general CRISPR has a significantly non-zero chance of inducing an unintended genetic change.

CRISPR is a remarkable breakthrough in gene editing, with applications to agriculture, gene therapy, pharmaceutical production, basic science, and more. But in many of those scenarios, error can be tolerated. Cells with off-target mutations can be weeded out to find the few perfectly edited ones. Getting one complete success out of tens, hundreds, or even thousands of modified cells can suffice, when that one cell can then be replicated to create a new cell line or seed line.

In human fertility, where embryos are created in single digit quantities rather than hundreds or thousands – and where we hope at least one of those embryos comes to term as a child – our tolerance for error is dramatically lower. The efficiency, survivability, and precision of CRISPR all need to rise substantially before many parents are likely to consider using it for an unborn embryo, even to prevent disease.

That is, indeed, the conclusion of the Chinese researchers, who wrote, “Our study underscores the challenges facing clinical applications of CRISPR/Cas9.”

More in part two of this post on the ethics of allowing genetic editing of the unborn, and why a ban in this area is counterproductive.

Tyler and I are delighted to have the great Ramez Naam guest blogging for us this week. Ramez spent many years at Microsoft leading teams working on search and artificial intelligence. His first book, More Than Human: Embracing the Promise of Biological Enhancement, was a thought-provoking look at the science and ethics of enhancing the human mind, body, and lifespan. More recently, I enjoyed Ramez’s The Infinite Resource: The Power of Ideas on a Finite Planet, an excellent Simonesque guide to climate change, energy, and innovation.

Frankly, I didn’t expect much when I bought Ramez’s science fiction novel, Nexus. Good non-fiction authors don’t necessarily make good fiction authors. I was, however, blown away. Nexus is about a near future in which a new technology allows humans to take control of their biological operating system and communicate mind to mind. It combines the rush of a great thriller, the fascination of hard science fiction, and the intrigue of a realistic world of spycraft and geopolitics. I loved Nexus and immediately bought the second in the trilogy, Crux. I finished that quickly and am now about halfway through the just-released Apex. Thus it’s great to have Ramez guest blogging as I race towards the end of his exciting trilogy! The trilogy is highly recommended.

Please welcome Ramez to MR.

Nexus Cover


NYTimes: While everyone welcomes Crispr-Cas9 as a strategy to treat disease, many scientists are worried that it could also be used to alter genes in human embryos, sperm or eggs in ways that can be passed from generation to generation. The prospect raises fears of a dystopian future in which scientists create an elite population of designer babies with enhanced intelligence, beauty or other traits.

Does the author really think that smart, beautiful people are a bad thing? Should we shoot the ones we have now? (It seems unlikely that we are at a local maximum).

Sometimes my fellow humans depress me. But I hope for better ones in the future.

NBC: A poker showdown between professional players and an artificial intelligence program has ended with a slim victory for the humans — so slim, in fact, that the scientists running the show said it’s effectively a tie. The event began two weeks ago, as the four pros — Bjorn Li, Doug Polk, Dong Kim and Jason Les — settled down at Rivers Casino in Pittsburgh to play a total of 80,000 hands of Heads-Up, No-Limit Texas Hold ‘em with Claudico, a poker-playing bot made by Carnegie Mellon University computer science researchers.

…No actual money was being bet — the dollar amount was more of a running scoreboard, and at the end the humans were up a total of $732,713 (they will share a $100,000 purse based on their virtual winnings). That sounds like a lot, but over 80,000 hands and $170 million of virtual money being bet, three-quarters of a million bucks is pretty much a rounding error, the experimenters said, and can’t be considered a statistically significant victory.

The computer bluffed and bet against the best poker players the world has ever known and over 80,000 hands the humans were not able to discover an exploitable flaw in the computer’s strategy. Thus, a significant win for the computer. Moreover, the computers will get better at a faster pace than the humans.
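The “rounding error” claim is easy to sanity-check from the reported figures:

```python
# The humans' edge, as a fraction of total money wagered and per hand.
human_edge = 732_713         # virtual dollars won by the four pros
total_wagered = 170_000_000  # total virtual money bet over the match
hands = 80_000

print(f"edge: {human_edge / total_wagered:.2%} of money wagered")  # ~0.43%
print(f"average: ${human_edge / hands:.2f} per hand")              # ~$9.16
```

A 0.43% edge on money wagered, in a game as high-variance as no-limit hold ’em, is well within the noise over 80,000 hands, which is why the experimenters called it a statistical tie.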

In my post on opaque intelligence I said that algorithms were becoming so sophisticated that we humans can’t really understand what they are doing, quipping that “any sufficiently advanced logic is indistinguishable from stupidity.” We see hints of that here:

“There are spots where it plays well and others where I just don’t understand it,” Polk said in a Carnegie Mellon news release….”Betting $19,000 to win a $700 pot just isn’t something that a person would do,” Polk continued.

Polk’s careful wording–he doesn’t say the computer’s strategy was wrong but that it was inhuman and beyond his understanding–is a telling indicator of respect.

The University of Toronto’s commercialization office states that it is “in a class with the likes of MIT and Stanford.” But Stanford has generated $1.3-billion (U.S.) in royalties for itself and the Massachusetts Institute of Technology issued 288 U.S. patents last year alone; U of T generates annual licensed IP income of less than $3-million (Canadian) and averages eight U.S. patents a year. Statistics Canada reports that in 2009, just $10-million was netted by all Canadian universities for their licences and IP. Even when accounting for universities that have open IP policies, this is a trivial amount by global standards.

That is from Jim Balsillie, and is interesting more generally, most of all on Canada and innovation.  For the pointer I thank Scott Barlow.  My previous post on this topic is here.

Whale fact of the day

May 7, 2015 at 1:47 pm in Food and Drink, Science | Permalink

Scientists at UBC have discovered — by accident — a rorqual whale can take a gulp of water that’s bigger than its massive body, then bounce back to its normal shape.

The whale has nerves to its mouth and tongue that can stretch to double their normal length, then snap back without damage, said Wayne Vogl, a professor in the department of cellular and physiological sciences at UBC.

“The nerves that supply these remarkably expandable tissues in the floor of the mouth of rorqual whales … are very stretchy, they’re like bungee cords,”

It was a surprising discovery, as most vertebrate nerves are more of a fixed length, said Vogl.


There is more here.

The end of doggie privacy?

May 6, 2015 at 1:05 pm in Data Source, Law, Science | Permalink

Dogs can run, but they can’t hide from PooPrints.

BioPet Vet Lab, which specializes in canine genetic testing, is partnering with the appropriately named London borough of Barking and Dagenham to track down dog owners who fail to remove their pets’ public deposits.

Starting in September 2016, people who don’t pick up after their dogs could be fined 80 pounds, or about $125. The registration of dogs’ DNA could become mandatory five months earlier if a pilot program proves successful.

There is more here, via Ray Lopez.  And here is a related story from Vancouver.

*Digital Gold*

May 6, 2015 at 9:37 am in Books, Economics, History, Science | Permalink

The author is Nathaniel Popper and the subtitle is Bitcoin and the Inside Story of the Misfits and Millionaires Trying to Reinvent Money.

This excellent work is the book on Bitcoin you’ve been waiting for; most importantly, it doesn’t require that you be the kind of person who wants to read a book on Bitcoin. I devoured my copy right away; it is full of information, explanation, and good humor. Definitely recommended and entertaining throughout.

Here is Popper’s piece on Bitcoin and Argentina, here is Popper on Twitter.

Ramez Naam has an opinion, backed up by some reasonable estimates:

For most of the US, this battery isn’t quite cheap enough. But it’s in the right ballpark. And that means a lot. Net Metering plans in the US are filling up. California’s may be full by the end of 2016 or 2017, modulo additional legal changes. That would severely impact the economics of solar. But another factor of 2 price reduction in storage would make it cheap enough that, as Net Metering plans fill up or are reduced around the country, the battery would allow solar owners to save power for the evening or night-time hours in a cost effective way.

That is also a policy tool in debates with utilities. If they see Net Metering reductions as a tool to slow rooftop solar, they’ll be forced to confront the fact that solar owners with cheap batteries are less dependent on Net Metering.

That same factor of 2 price reduction would also make batteries effective for day-night electricity cost arbitrage, wherein customers fill up the battery with cheap grid power at night, and use stored battery power instead of the grid during the day. In California, where there’s a 19 cent gap between middle of the night power and peak-of-day power, those economics look very attractive.

And the cost of batteries is plunging fast. Tesla will get that 2x price reduction within 3-5 years, if not faster.
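A back-of-envelope version of that arbitrage math. All of the battery numbers here are my own illustrative assumptions (pack price, cycle life, round-trip efficiency), not figures from the post; only the 19-cent California price gap comes from the excerpt above.

```python
# Illustrative day-night arbitrage economics for a home battery.
battery_cost = 3000.0  # $ (assumed: a ~7 kWh daily-cycling pack)
capacity_kwh = 7.0     # assumed usable capacity
cycles = 3650          # assumed: one cycle per day for ten years
efficiency = 0.92      # assumed round-trip efficiency

# Levelized storage cost per kWh that passes through the battery.
storage_cost = battery_cost / (capacity_kwh * cycles * efficiency)
price_gap = 0.19       # $/kWh between off-peak and peak in California

print(f"storage: {storage_cost:.3f} $/kWh cycled, "
      f"margin: {price_gap - storage_cost:.3f} $/kWh")
# Halving the battery cost roughly doubles the arbitrage margin:
print(f"at half the cost, margin: {price_gap - storage_cost / 2:.3f} $/kWh")
```

Under these assumptions the battery clears roughly 6 cents per kWh cycled today, and a 2x price drop pushes the margin over 12 cents, which is the “factor of 2” point the excerpt is making.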

Read the whole thing, and note the discussion of India too.

No Font of Wisdom

May 3, 2015 at 7:49 am in Science | Permalink

You will not understand this post better just because it is hard to read. A small-n study, magnified by Gladwell, Kahneman et al., doesn’t replicate. Again.

More here.

Hat tip: Nathaniel Bechhofer