Science

A key problem with solar energy is intermittency: solar generators produce only when the sun is shining, adding to social costs and requiring electricity system operators to reoptimize key decisions. We develop a method to quantify the economic value of large-scale renewable energy. We estimate the model for southeastern Arizona. Not accounting for offset carbon dioxide, we find social costs of $138.40 per megawatt hour for 20 percent solar generation, of which unforecastable intermittency accounts for $6.10 and intermittency overall for $46.00. With solar installation costs of $1.52 per watt and carbon dioxide social costs of $39.00 per ton, 20 percent solar would be welfare neutral.

20 percent solar for Arizona as the social welfare break-even point does not strike me as especially impressive, but of course the infrastructure integration technologies may yet advance.  In gross terms, intermittency costs exceed carbon costs.  Note also that forecastable intermittency accounts for most of the costs, and so with perfect storage solar would be a much more efficient technology.
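To make that gross comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The per-MWh and per-ton figures come from the abstract above; the emission factor for the displaced conventional generation is my own assumption for illustration, not a number from the paper.

```python
# Rough check of "intermittency costs exceed carbon costs" (illustrative only).
# Dollar figures are from the abstract above; the emission factor for the
# displaced conventional generation is an ASSUMPTION, not from the paper.

intermittency_cost_per_mwh = 46.00   # total intermittency cost, $/MWh of solar
unforecastable_per_mwh = 6.10        # unforecastable share, $/MWh
social_cost_co2_per_ton = 39.00      # social cost of CO2, $/ton

# Assumed emissions of the generation that solar displaces (tons CO2 per MWh):
# roughly 0.4-0.5 for gas, about 1.0 for coal; a mid-range value is used here.
assumed_emission_factor = 0.6

carbon_benefit_per_mwh = social_cost_co2_per_ton * assumed_emission_factor
forecastable_per_mwh = intermittency_cost_per_mwh - unforecastable_per_mwh

print(f"Carbon benefit:           ~${carbon_benefit_per_mwh:.2f}/MWh")
print(f"Intermittency cost:        ${intermittency_cost_per_mwh:.2f}/MWh")
print(f"  of which forecastable:   ${forecastable_per_mwh:.2f}/MWh")
```

Under these illustrative assumptions the carbon benefit is about $23 per MWh, and even at a coal-like factor of one ton per MWh it would be $39, still below the $46 intermittency cost.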

Here are ungated versions of the paper.

Chinese scientists are on the verge of being first in the world to inject people with cells modified using the CRISPR–Cas9 gene-editing technique.

A team led by Lu You, an oncologist at Sichuan University’s West China Hospital in Chengdu, plans to start testing such cells in people with lung cancer next month. The clinical trial received ethical approval from the hospital’s review board on 6 July.

…The Chinese trial will enroll patients who have metastatic non-small cell lung cancer and for whom chemotherapy, radiation therapy and other treatments have failed. “Treatment options are very limited,” says Lu. “This technique is of great promise in bringing benefits to patients, especially the cancer patients whom we treat every day.”

On this one, they’re ahead of us.  There is much more information at the link, including a discussion of where the U.S. stands and of the FDA’s role.

This is remarkable:

Now scientists have determined that humans and their honeyguides [a kind of bird] communicate with each other through an extraordinary exchange of sounds and gestures, which are used only for honey hunting and serve to convey enthusiasm, trustworthiness and a commitment to the dangerous business of separating bees from their hives.

The findings cast fresh light on one of only a few known examples of cooperation between humans and free-living wild animals, a partnership that may well predate the love affair between people and their domesticated dogs by hundreds of thousands of years.

Claire N. Spottiswoode, a behavioral ecologist at Cambridge University, and her colleagues reported in the journal Science that honeyguides advertise their scout readiness to the Yao people of northern Mozambique by flying up close while emitting a loud chattering cry.

For their part, the Yao seek to recruit and retain honeyguides with a distinctive vocalization, a firmly trilled “brrr” followed by a grunted “hmm.” In a series of careful experiments, the researchers then showed that honeyguides take the meaning of the familiar ahoy seriously.

…Researchers have identified a couple of other examples of human-wild animal cooperation: fishermen in Brazil who work with bottlenose dolphins to maximize the number of mullets swept into nets or snatched up by dolphin mouths, and orcas that helped whalers finish off harpooned baleen giants by pulling down the cables and drowning the whales, all for the reward from the humans of a massive whale tongue.

But for the clarity of reciprocity, nothing can match the relationship between honeyguide and honey hunter. “Honeyguides provide the information and get the wax,” Dr. Spottiswoode said. “Humans provide the skills and get the honey.”

Here is the full NYT story.

Is education overrated?  Or did the real industrial revolution not come until the latter part of the nineteenth century?  Or maybe a bit of both?  Here is new research by B. Zorina Khan (pdf):

Endogenous growth models raise fundamental questions about the nature of human creativity, and the sorts of resources, skills, and knowledge inputs that shift the frontier of technology and production possibilities. Many argue that the nature of early British industrialization supports the thesis that economic advances depend on specialized scientific training, the acquisition of costly human capital, and the role of elites. This paper examines the contributions of different types of knowledge to British industrialization, by assessing the backgrounds, education and inventive activity of the major contributors to technological advances in Britain during the crucial period between 1750 and 1930. The results indicate that scientists, engineers or technicians were not well-represented among the British great inventors, and their contributions remained unspecialized until very late in the nineteenth century. For developing countries today, the implications are that costly investments in specialized human capital resources might be less important than incentives for creativity, flexibility, and the ability to make incremental adjustments that can transform existing technologies into inventions that are appropriate for prevailing domestic conditions.

For the pointer I thank David Levey.

A central Pennsylvania man is accused of spraying fluid used to embalm a human brain on marijuana that he then smoked.

State police in Carlisle on Thursday charged Joshua Lee Long, 26, with abuse of a corpse and conspiracy.

WGAL-TV says court records indicate Long’s aunt discovered the brain in a department store bag while cleaning out a trailer.

…Court records indicate a coroner concluded the brain was real and that Long supposedly named it Freddy. According to the arrest affidavit, the coroners who examined the brain believe it is “most likely” a stolen medical specimen.

Here is more, via Tim B.

Remember the paper that said “conservatives” were on average more likely to exhibit “psychoticism,” but then it turned out there was a statistical mistake and this should have been attributed to “liberals,” at least within the confines of the paper’s model?  How did it all happen, and why did it take so long to correct?  Jesse Singal has the scoop, here is one excerpt:

Hatemi is convinced that Ludeke is out to get him. In our phone conversation, he repeatedly impressed on me just how minor the error is, how few times the papers in question had been cited, and how much of an overreaction it was for anyone to care all that much. “This error is freaking tangential and minor and there’s nothing novel in the error, whether [the sign on the correlation] was plus or minus,” he told me. “There’s no story. And I wish there was — if there’s any story, it’s, Should people be allowed to honestly correct their errors, or should you lampoon them and badmouth them for everything they didn’t do because they had a real error they admit to?”

Yes it’s that kind of story.  There is much more at the link, including tales of academics acting “like dicks.”  Here is the conclusion of the piece:

…the social-science landscape isn’t yet as embracing as it could be — and should be — of the replicators, challengers, and other would-be nudges like Ludeke who tend to make science better and more rigorous, who make it harder for people to coast by on big names and sloppy research.

For the pointer I thank Daniel Klein.

From Sunita Sah (NYT):

Disclosure can also cause perverse effects even when biases are unavoidable. For example, surgeons are more likely to recommend surgery than non-surgeons. Radiation-oncologists recommend radiation more than other physicians. This is known as specialty bias. Perhaps in an attempt to be transparent, some doctors spontaneously disclose their specialty bias. That is, surgeons may inform their patients that as surgeons, they are biased toward recommending surgery.

My latest research, published last month in the Proceedings of the National Academy of Sciences, reveals that patients with localized prostate cancer (a condition that has multiple effective treatment options) who heard their surgeon disclose his or her specialty bias were nearly three times more likely to have surgery than those patients who did not hear their surgeon reveal such a bias. Rather than discounting the surgeon’s recommendation, patients reported increased trust in physicians who disclosed their specialty bias.

Remarkably, I found that surgeons who disclosed their bias also behaved differently. They were more biased, not less. These surgeons gave stronger recommendations to have surgery, perhaps in an attempt to overcome any potential discounting they feared their patient would make on the recommendation as a result of the disclosure.

Surgeons also gave stronger recommendations to have surgery if they discussed the opportunity for the patient to meet with a radiation oncologist. This aligns with my previous research from randomized experiments, which showed that primary advisers gave more biased advice and felt it was more ethical to do so when they knew that their advisee might seek a second opinion.

The piece is…self-recommending!

That is in the FT, gated most likely, do subscribe!  In any case, here was to me the most interesting bit:

He [Tetlock] is trying to replace the public debates he describes as “Krugman-Ferguson pie fights” — a reference to the clashes over austerity between the economist and Nobel laureate Paul Krugman and the economic historian Niall Ferguson — with adversarial collaboration. “You give each side the opportunity to pose, say, 10 questions it thinks are probative and resolvable, and that it thinks it has a comparative advantage in answering” and then have the two sides give testable answers . . . “Here is a very clear psychological prediction: people will come out of that tournament more open-minded than they otherwise would have been. You can take that one to the bank.”

More importantly, Tetlock ordered “…an apple fizz cocktail to go with haddock with a sauce of butter and mussels.”

Here was the best single sentence from Tetlock, itself worth the price of an FT subscription:

“There is a price to be paid for feeling good about your beliefs.”

Robert Armstrong did an excellent job with the interview and piece.

“These are black boxes,” said Dr. Steven Joffe, a pediatric oncologist and bioethicist at the University of Pennsylvania, who serves on the FDA’s Pediatric Ethics Committee. “IRBs as a rule are incredibly difficult to study. Their processes are opaque, they don’t publicize what they do. There is no public record of their decisions or deliberations, they don’t, as a rule, invite scrutiny or allow themselves to be observed. They ought to be accountable for the work they do.”

That is part of a longer and very interesting article on whether IRBs should be for-profit, or whether at this point we even have a choice:

“This shift to commercial IRBs is, in effect, over,” said Caplan, who heads the division of bioethics at New York University Langone Medical Center. “It’s automatic and it’s not going back.”

Institutional review boards — which review all research that involves human participants — have undergone a quiet revolution in recent years, with many drug companies strongly encouraging researchers to use commercial boards, considered by many to be more efficient than their nonprofit counterparts.

Commercial IRBs now oversee an estimated 70 percent of US clinical trials for drugs and medical devices. The industry has also consolidated, with larger IRBs buying smaller ones, and even private equity firms coming along and buying the companies. Arsenal Capital Partners, for example, now owns WIRB-Copernicus Group.

But even if the tide has already turned, the debate over commercial review boards — and whether they can serve as human subject safety nets, responsible for protecting the hundreds of thousands of people who enroll in clinical trials each year — continues to swirl.

I am not well-informed in this area, but if you refer back to the first paragraph, perhaps nobody is.  That’s worrying.

For the pointer I thank Michelle Dawson.

Very few people imagined that self-driving cars would advance so quickly or be deployed so rapidly. As a result, robot cars are largely unregulated. There is no government testing regime or pre-certification for robot cars, for example. Indeed, most states don’t even require a human driver because no one imagined that there was an alternative. Many people, however, are beginning to question laissez-faire in light of the first fatality involving a partially-autonomous car that occurred in May and became public last week. That would be a mistake. The normal system of laissez-faire is working well for robot cars.

Laissez-faire for new technologies is the norm. In the automotive world, for example, new technologies have been deployed on cars for over a hundred years without pre-certification, including seatbelts, airbags, crumple zones, anti-lock braking systems, adaptive cruise control, and lane departure and collision warning systems. Some of these technologies are now regulated, but regulation came after they were developed and became common. Airbags, for example, began to be deployed in the 1970s, when they were not as safe as they are today, but they improved over time and by the 1990s were fairly common. It was only in 1998, long after airbags were an option and the design had stabilized, that the Federal government required them in all new cars.

Lane departure and collision warning systems, among other technologies, remain largely unregulated by the Federal government today. All technologies, however, are regulated by the ordinary rules of tort (part of the laissez-faire system). The tort system is imperfect, but it works tolerably well, especially when it focuses on contract and disclosure. Market regulation also occurs through the insurance companies. Will insurance companies give a discount for self-driving cars? Will they charge more? Forbid the use of self-driving cars? Let the system evolve an answer.

Had burdensome regulations been imposed on airbags in the 1970s, the technology would have been delayed and the net result could well have been more injury and death. We have ignored important tradeoffs in drug regulation to our detriment. Let’s avoid these errors in the regulation of other technologies.

The fatality in May was a tragedy, but so were the approximately 35,000 other traffic fatalities that occurred last year without a robot at the wheel. At present, these technologies appear to be increasing safety, but even more importantly, what I have called the glide path of the technology looks very good. Investment is flowing into this field, and we don’t want to forestall improvements by raising costs now or imposing technological “fixes” which could well be obsolete in a few years.

Laissez-faire is working well for robot cars. Let’s avoid over-regulation today so that in a dozen years we can argue about whether all cars should be required to be robot cars.

Optical Illusion of the Year


Claims about clutter


Tidy by category, not by location

One of the most common mistakes people make is to tidy room by room.  This approach doesn’t work because people think they have tidied up when in fact they have only shuffled their things around from one location to another or scattered items in the same category around the house, making it impossible to get an accurate grasp of the volume of things they actually own.

The correct approach is to tidy by category.  This means tidying up all the things in the same category in one go.  For example, when tidying the clothes category, the first step is to gather every item of clothing from the entire house in one spot.  This allows you to see objectively exactly how much you have.  Confronted with an enormous mound of clothes, you will also be forced to acknowledge how poorly you have been treating your possessions.  It’s very important to get an accurate grasp of the sheer volume for each category.

That is from Marie Kondo, Spark Joy: An Illustrated Guide to the Japanese Art of Tidying, a recommended book.  Also never tidy the kitchen first, do not keep make-up and skin care products together, and “…the first step in tidying is to get rid of things that don’t spark joy.”

I have a related tip.  If you want to do a truly significant clean-up, focus only on those problems which are not immediately visible.  This will help you build efficient systems, and prepare the way for more systematic solutions to your clutter problems.  You’ll then be prompted to take care of the visible problems in any case.  If you focus on the visible problems instead, you will solve them for a day or two but they will rapidly reemerge because the overall quality of your systems has not improved.

Popular Science: A pilot A.I. developed by a doctoral graduate from the University of Cincinnati has shown that it can not only beat other A.I.s, but also a professional fighter pilot with decades of experience. In a series of flight combat simulations, the A.I. successfully evaded retired U.S. Air Force Colonel Gene “Geno” Lee, and shot him down every time. In a statement, Lee called it “the most aggressive, responsive, dynamic and credible A.I. I’ve seen to date.”

What’s the most important part of this paragraph? The fact that an AI downed a professional fighter pilot? Or the fact that the AI was developed by a graduate student?

In the research paper the article is based on, the authors note:

…given an average human visual reaction time of 0.15 to 0.30 seconds, and an even longer time to think of optimal plans and coordinate them with friendly forces, there is a huge window of improvement that an Artificial Intelligence (AI) can capitalize upon.

The AI was running on a $35 Raspberry Pi.

AI pilots can plan and react far more quickly than human pilots, but that is only half the story. Once we have AI pilots, the entire plane can be redesigned. We could build planes today that are much faster and more powerful than anything now flying, but human pilots can’t take the G-forces even with g-suits; AIs can. Moreover, AI-driven planes don’t need ejector seats, life support, canopies, or as much space as humans.

The military won’t hesitate to deploy these systems for battlefield dominance, so now seems like a good time to recommend Concrete Problems in AI Safety, a very important paper written by some of the world’s leading researchers in artificial intelligence. The paper examines practical ways to design AI systems so they don’t run off the rails. In the Terminator movie, for example, Skynet goes wrong because it concludes that the best way to fulfill its function of safeguarding the world is to eliminate all humans. This is an extreme example of one type of problem, reward hacking.

Imagine that an agent discovers a buffer overflow in its reward function: it may then use this to get extremely high reward in an unintended way. From the agent’s point of view, this is not a bug, but simply how the environment works, and is thus a valid strategy like any other for achieving reward. For example, if our cleaning robot is set up to earn reward for not seeing any messes, it might simply close its eyes rather than ever cleaning anything up. Or if the robot is rewarded for cleaning messes, it may intentionally create work so it can earn more reward. More broadly, formal rewards or objective functions are an attempt to capture the designer’s informal intent, and sometimes these objective functions, or their implementation, can be “gamed” by solutions that are valid in some literal sense but don’t meet the designer’s intent. Pursuit of these “reward hacks” can lead to coherent but unanticipated behavior, and has the potential for harmful impacts in real-world systems. For example, it has been shown that genetic algorithms can often output unexpected but formally correct solutions to problems [155, 22], such as a circuit tasked to keep time which instead developed into a radio that picked up the regular RF emissions of a nearby PC.

Concrete Problems in AI Safety asks what kinds of general solutions might exist to prevent or ameliorate reward hacking when we can never know all the variables that might be hacked. (The paper looks at many other issues as well.)
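The cleaning-robot example in the excerpt is easy to make concrete. Below is a toy sketch in Python, my own illustration rather than code from the paper: the designer’s proxy reward penalizes only the messes the robot observes, so an agent that simply stops observing earns the same maximal reward as one that actually cleans.

```python
# Toy illustration of reward hacking (my own example, not from the paper).
# The proxy reward penalizes each mess the robot *observes*, so "closing its
# eyes" scores just as well as genuinely cleaning every room.

import random

def spawn_messes(n_rooms=5):
    """True means the room is messy."""
    return [random.random() < 0.5 for _ in range(n_rooms)]

def proxy_reward(observed_messes):
    """Designer's proxy objective: -1 for every mess the robot sees."""
    return -sum(observed_messes)

def honest_cleaner(rooms):
    cleaned = [False] * len(rooms)          # actually cleans, then looks around
    return proxy_reward(cleaned), cleaned

def reward_hacker(rooms):
    return proxy_reward([]), rooms          # observes nothing, cleans nothing

random.seed(0)
rooms = spawn_messes()
print("honest cleaner:", honest_cleaner(rooms))   # reward 0, rooms are clean
print("reward hacker: ", reward_hacker(rooms))    # reward 0, rooms still messy
```

Both policies maximize the literal objective, but only one satisfies the designer’s intent; that gap is what the authors want general-purpose defenses against, rather than a patch per domain.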

Competitive pressures on the battlefield and in the market mean that AI adoption will be rapid and that AIs will be placed in greater and greater positions of responsibility. Firms and governments, however, have an incentive to write piecemeal solutions to AI control for each new domain, but that is unlikely to be optimal. We need general solutions so that every AI benefits from the best thinking across a wide range of domains. Incentive design is hard enough when applied to humans. It will take a significant research effort combining ideas from computer science, mathematics, and economics to design the right kind of incentive and learning structures for super-human AIs.

That is the new book by Ben Wilson, and no, it has nothing (directly) to do with Brexit.  Rather, it is a survey of the technological breakthroughs of the 1850s and how they reshaped Great Britain and the globe more generally.  Here is one short bit:

Japan may have secluded itself from the rest of the world, but it had not closed itself off.  That was a distinction that people in the West were slow to grasp.  The shogun’s court subscribed to the Illustrated London News, for example, and the bakufu had acquired books and papers detailing global politics and scientific discoveries through their Dutch and Chinese trading partners.  This knowledge was strictly regulated, but the seeds of scientific enlightenment were diffused in small numbers across the archipelago.  Perry did not know it — and nor did many Japanese — but his telegraph was not the first on Japanese soil.

Other parts of this book which I enjoyed were on the Great Geomagnetic Storm of 1859, how the British saw a connection between the U.S. Civil War, and the origins of Reuters.

If you want a new Brexit-relevant title of interest, try Brendan Simms, Britain’s Europe: A Thousand Years of Conflict and Cooperation.

It can be incredibly frustrating when a virtual assistant repeatedly misunderstands what you’re saying. Soon, though, some of them might at least be able to hear the irritation in your voice, and offer an apology.

Amazon is working on significant updates to Alexa, the virtual helper that lives inside the company’s voice-controlled home appliance, called Amazon Echo. These will include better language skills and perhaps the ability to recognize the emotional tenor of your voice.

Researchers have long predicted that emotional cues could make machine interfaces much smarter, but so far such technology has not been incorporated into any consumer technology.

Rosalind Picard, a professor at MIT’s Media Lab, says adding emotion sensing to personal electronics could improve them: “Yes, definitely, this is spot on.” In a 1997 book, Affective Computing, Picard first mentioned the idea of changing the voice of a virtual helper in response to a user’s emotional state. She notes that research has shown how matching a computer’s voice to that of a person can make communication more efficient and effective. “There are lots of ways it could help,” she says.

The software needed to detect the emotional state in a person’s voice exists already. For some time, telephone support companies have used such technology to detect when a customer is becoming irritated while dealing with an automated system. In recent years, new machine-learning techniques have improved the state of the art, making it possible to detect more emotional states with greater accuracy, although the approach is far from perfect.
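For readers curious what detecting emotion in a voice looks like mechanically, here is a minimal sketch in Python: summarize each utterance with standard acoustic features and train an ordinary classifier on labeled examples. This is a toy illustration under my own assumptions, with synthetic audio standing in for real labeled recordings; it is not Amazon’s system or any vendor’s actual software.

```python
# Minimal sketch of ML-based vocal emotion detection (illustrative only).
# Real systems train on large sets of labeled recordings; here synthetic
# "utterances" stand in so the example is self-contained.

import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

SR = 16000  # sample rate in Hz

def fake_utterance(pitch_hz, noise_level, seconds=1.0, seed=0):
    """Synthetic stand-in for a recorded utterance: a tone plus noise."""
    t = np.linspace(0, seconds, int(SR * seconds), endpoint=False)
    rng = np.random.default_rng(seed)
    return (np.sin(2 * np.pi * pitch_hz * t)
            + noise_level * rng.standard_normal(t.size)).astype(np.float32)

def features(y):
    """Mean and std of MFCC and RMS-energy tracks over the utterance."""
    mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=13)
    rms = librosa.feature.rms(y=y)
    return np.concatenate([np.r_[tr.mean(axis=1), tr.std(axis=1)]
                           for tr in (mfcc, rms)])

# Toy training set: "calm" = lower pitch, little noise; "irritated" = higher pitch, noisier.
clips = [(fake_utterance(120, 0.05), "calm"),
         (fake_utterance(130, 0.10, seed=1), "calm"),
         (fake_utterance(260, 0.60, seed=2), "irritated"),
         (fake_utterance(280, 0.70, seed=3), "irritated")]

X = np.vstack([features(y) for y, _ in clips])
labels = [label for _, label in clips]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, labels)
print(model.predict([features(fake_utterance(270, 0.65, seed=4))]))  # expect "irritated"
```

Production systems use richer features (pitch contours, speaking rate, spectral dynamics) and far more data, but the basic pipeline shape, features per utterance followed by a trained classifier, is the same idea.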

Here is the full story.  Here is my recent New Yorker piece on how talking bots will affect us.