Category: Science

“Ethos of the Unit”

This is from a child and adolescent mental health group at University College London, but it could and should also count as “Ethos of the Blogger”:

• All research is provisional
• All research raises as many questions as it answers
• All research is difficult to interpret and to draw clear conclusions from
• Qualitative research may be vital to elaborate experience, suggest narratives for understanding phenomena and generate hypotheses but it can’t be taken to prove anything
• Quantitative research may be able to show hard findings but can rarely (never?) give clear answers to complex questions

And yet, despite all the challenges, it is still worth attempting to encourage an evidence-based approach, since the alternative is to continue to develop practice based only on assumption and belief.

For the pointer I thank Michelle Dawson.

Beware the estate of James Joyce

J. Craig Venter and his fellow scientists managed to replace the genetic code of a bacterium with a synthetic code they made on a computer. Which is how they got sued by the estate of James Joyce.

In order to distinguish their synthetic DNA from that naturally present in the bacterium, Venter’s team coded several famous quotes into their DNA, including one from James Joyce’s A Portrait of the Artist as a Young Man: “To live, to err, to fall, to triumph, to recreate life out of life.”

After announcing their work, Venter explained, his team received a cease-and-desist letter from Joyce’s estate, saying that he’d used the Irish writer’s work without permission. “We thought it fell under fair use,” said Venter.

That is from Jessa Crispin.

Professionalism vs. amateurism

Here was one MR reader request, from Philip W:

Professionalism vs. Amateurism, the merits and demerits of each. And the relationship of these to science, or “science.” How large is the role of “common sense” in your way of thinking about the world? Should we wish that policymakers would have more professionalism, or more common sense?

Amateurism is splendid when amateurs actually can make contributions.  A lot of the Industrial Revolution was driven by the inventions of so-called amateurs.  One of the most revolutionary economic sectors today — social networking — has been led by amateurs.  Maybe it is stretching the concept, but you can interpret Bill Gates and Steve Jobs as amateurs too.

Amateurs are associated with free entry and a lot of experimentation.  Barbecue quality is very often driven by amateurs, and in general amateurs still make contributions to food and cooking.   The difficulty of maintaining productive amateurs is one of the reasons why scientific progress periodically slows down.  Specialization, however necessary it may be, can make big breakthroughs harder at some margin.  (There is a good recent paper on this.)  This is one aspect of the division of labor which Adam Smith did not fully grasp, though he hinted at it.

Through computers, and the internet, the notion of amateurs working together is becoming more important.  This includes astronomical searches and theorem-proving, plus collection and collation of data, and Wikipedia; this is Shirky’s “cognitive surplus.”

On the latter part of the question, what is “common sense”?  Most common sense, if one can call it that, is a highly refined product of a lot of trial and error.  The real question is how to refine one’s common sense.

Policymakers need more of a sheer willingness to do the right thing, even if it means sacrificing reelection.  Selection mechanisms, however, do not much favor that bravery.  For a sane, well-adjusted person, the job is neither fun nor well-paying, so the job attracts people who love being in office and thus who fail to do the right thing.

When specialization proceeds very, very far, the difference between a professional and an amateur is sometimes no longer well-defined.

Who predicts well?

…what separated those with modest but significant predictive ability from the utterly hopeless was their style of thinking. Experts who had one big idea they were certain would reveal what was to come were handily beaten by those who used diverse information and analytical models, were comfortable with complexity and uncertainty and kept their confidence in check.

That is from Tetlock and Gardner, here is more.  On that general theme, here is Dan Gardner’s new book Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better.

Nuclear power reactors: a study in technological lock-in

The interesting Robin Cowan has a well-cited paper with that title:

Recent theory has predicted that if competing technologies operate under dynamic increasing returns, one, possibly inferior, technology will dominate the market. The history of nuclear power technology is used to illustrate these results. Light water is considered inferior to other technologies, yet it dominates the market for power reactors. This is largely due to the early adoption and heavy development by the U.S. Navy of light water for submarine propulsion. When a market for civilian power emerged, light water had a large head start, and by the time other technologies were ready to enter the market, light water was entrenched.
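The dynamic in Cowan's abstract is the standard increasing-returns adoption story, which can be sketched as a Polya-urn simulation. All the numbers below are hypothetical; the point is that when each new adopter picks the technology with probability proportional to its installed base, an early head start (light water's naval one, in the paper) tends to snowball into dominance regardless of intrinsic quality:

```python
import random

def simulate_adoption(n_adopters=10_000, seed=0):
    """Polya-urn adoption under increasing returns: each new adopter
    chooses a technology with probability proportional to its installed
    base, so an early lead tends to lock in."""
    rng = random.Random(seed)
    # Hypothetical head start for light water (the naval program).
    installed = {"light water": 50, "other": 10}
    for _ in range(n_adopters):
        total = sum(installed.values())
        r = rng.uniform(0, total)
        choice = "light water" if r < installed["light water"] else "other"
        installed[choice] += 1
    return installed

shares = simulate_adoption()
total = sum(shares.values())
print({k: round(v / total, 3) for k, v in shares.items()})
```

Note that nothing in the simulation says light water is better; the head start alone drives the outcome, which is the lock-in result.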

Here are some recent articles on light water reactors in Japan.

What is the consumer surplus of the internet?

Annie Lowrey asks:

But providing an alternative measure of what we produce or consume based on the value people derive from Wikipedia or Pandora proves an extraordinary challenge–indeed, no economist has ever really done it. Brynjolfsson says it is possible, perhaps, by adding up various "consumer surpluses," measures of how much consumers would be willing to pay for a given good or service, versus how much they do pay. (You might pony up $10 for a CD, but why would you if it is free?) That might give a rough sense of the dollar value of what the Internet tends to provide for nothing–and give us an alternative sense of the value of our technologies to us, if not their ability to produce growth or revenue for us.
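The accounting Brynjolfsson suggests is simple in principle, even if eliciting willingness to pay is the hard part. A toy sketch, with entirely made-up numbers:

```python
# Hypothetical willingness-to-pay figures for free online services;
# the point is the accounting, not the numbers.
goods = [
    {"name": "music streaming", "willing_to_pay": 10.0, "price_paid": 0.0},
    {"name": "encyclopedia",    "willing_to_pay": 25.0, "price_paid": 0.0},
    {"name": "email",           "willing_to_pay": 15.0, "price_paid": 0.0},
]

# Consumer surplus per good: what you would pay minus what you do pay.
surplus = sum(g["willing_to_pay"] - g["price_paid"] for g in goods)
print(f"Total consumer surplus: ${surplus:.2f}")  # $50.00
```

Because the price paid is zero for all of these, the entire willingness to pay shows up as surplus, which is exactly why such value never appears in measured GDP.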

Here is much more.

Genetic Enhancement v. Artificial Intelligence

Will robots and artificial intelligences take human jobs? Perhaps, but the nature of humanity is not carved in stone. Genetic enhancement (GE) is within a hairsbreadth of reality.

It's true that the practical applications of AI are moving faster than GE, but GE has a head start of over a billion years. Moreover, although GE is still impractical, the costs of GE are falling fast. The costs of sequencing a genome, for example, are falling far faster than even Moore's Law would predict. Sequencing takes us only part of the way towards H+ but it's an important part.

Genetic engineering already works wonders, even when used haphazardly. My own efforts at GE (I had the help of a PhD microbiologist) have produced two promising NIs. When used in a more controlled manner the results of GE will be even better ("it's still us, only the best of us.")

I used to worry that religious objections would prevent the evolution of H to H+, especially in the United States. But should courage fail us, the Chinese, the Indians, the Russians or perhaps even the Singaporeans will move humanity forward. In this case, the slippery slope works in favor of progress: the slide from avoiding genetic disease to making improvements will prove irresistible. You can't keep a better man down.

The contrast of GE and AI in the title is meant to remind us that AI is not the only technology relevant to debates about future jobs, but the opposition of GE and AI is obviously false. AI is helping to create GE, of course, but it's deeper than that. In the not-so-long run it's not about computers substituting for labor or even complementing labor; it's about designing labor to complement computers (and vice versa). Think about how quickly the phone has migrated from the desk, to the hand, to the ear, to the ear canal. The technology to enhance humanity with access to the internet is literally burying itself in our heads; call it I-fi. There is more to come.

And now for some music.

*The Philosophical Breakfast Club*

The author is Laura J. Snyder and the subtitle is Four Remarkable Friends Who Transformed Science and Changed the World.  This is an excellent book about the history and status of science in 19th-century England, and in particular the contributions of Charles Babbage, John Herschel, William Whewell, and Richard Jones.  Jones was an economist, and Whewell, of course, debated induction and scientific method with Mill; Babbage too wrote on economics.  Here is an excerpt from Snyder:

De Prony had been commissioned to produce a definitive set of logarithmic and trigonometric tables for the newly introduced metric system in France, to facilitate the accurate measurement of property as a basis for taxation.

De Prony had recently read Adam Smith's Wealth of Nations…Smith discussed the importance of a division of labor in the manufacture of pins…

De Prony was the first to see that a Smithian division of intellectual labor could be equally valuable in the work of computation of mathematical tables — although his idea had been anticipated by Leibniz, who believed that talented mathematicians should be freed from tedious calculations that could be done by "peasants."

If you enjoy the history of science, this book stands a good chance of being the best one in that genre to come out this year.  Here is one good review of the book.

Galton’s Bayesian Machine

Stephen Stigler has a cool piece on a machine that Francis Galton built in 1877 that calculated a posterior distribution from a prior and a likelihood function. Galton's originality continues to astound.

Here is Stigler:

The machine is reproduced in Figure 1 from the original publication. It depicts the fundamental calculation of Bayesian inference: the determination of a posterior distribution from a prior distribution and a likelihood function. Look carefully at the picture–notice it shows the upper portion as three-dimensional, with a glass front and a depth of about four inches. There are cardboard dividers to keep the beads from settling into a flat pattern, and the drawing exaggerates the smoothness of the heap from left to right, something like a normal curve. We could think of the top layer as showing the prior distribution p(θ) as a population of beads representing, say, potential values for θ, from low (left) to high (right)….

…the beads fall to the next lower level. On that second level, you can see what is intended to be a vertical screen, or wall, that is close to the glass front at both the left and the right, but recedes to the rear in the middle.

…The way the machine works its magic is that those beads to the front of the screen are retained as shown; those falling behind are rejected and discarded. (You might think of this stage as doing rejection sampling from the upper stage.)

…The final stage turns this into a standard histogram: The second support platform is removed by pulling to the right on its knob, and the beads fall to a slanted platform immediately below, rolling then to the lowest level, where the depth is again uniform–about one inch deep from the glass in front. This simply rescales the retained beads… the magic of the machine is that this lowest level is proportional to the posterior distribution!
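Galton's receding screen is doing what statisticians now call rejection sampling, and it is easy to mimic in code. A minimal sketch, assuming a standard normal prior and a single noisy observation (both assumptions mine, for illustration):

```python
import math
import random

def galton_posterior(prior_sample, likelihood, seed=0):
    """Rejection-sampling analogue of Galton's machine: drop 'beads'
    drawn from the prior, keep each with probability proportional to
    its likelihood (the receding screen), and the surviving beads are
    distributed as the posterior."""
    rng = random.Random(seed)
    lmax = max(likelihood(t) for t in prior_sample)
    return [t for t in prior_sample if rng.random() < likelihood(t) / lmax]

# Hypothetical setup: prior theta ~ N(0, 1), one observation x = 1 with
# unit noise, so the exact posterior is N(0.5, 1/2).
rng = random.Random(1)
prior = [rng.gauss(0, 1) for _ in range(50_000)]
likelihood = lambda t: math.exp(-0.5 * (1.0 - t) ** 2)
post = galton_posterior(prior, likelihood)
print(sum(post) / len(post))  # close to the exact posterior mean, 0.5
```

The retained "beads" form a histogram proportional to the posterior, exactly the magic Stigler describes, a century before rejection sampling had a name.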

Hat tip: The Endeavour.

How to make better decisions?

I never thought of this method:

What should you do when you really, REALLY have to “go”? Make important life decisions, maybe. Controlling your bladder makes you better at controlling yourself when making decisions about your future, too, according to a study to be published in Psychological Science, a journal of the Association for Psychological Science.

Sexual excitement, hunger, thirst–psychological scientists have found that activation of just one of these bodily desires can actually make people want other, seemingly unrelated, rewards more. Take, for example, a man who finds himself searching for a bag of potato chips after looking at sexy photos of women. If this man were able to suppress his sexual desire in this situation, would his hunger also subside? This is the sort of question Mirjam Tuk, of the University of Twente in the Netherlands, sought to answer in the laboratory.

Tuk came up with the idea for the study while attending a long lecture. In an effort to stay alert, she drank several cups of coffee. By the end of the talk, she says, “All the coffee had reached my bladder. And that raised the question: What happens when people experience higher levels of bladder control?” With her colleagues, Debra Trampe of the University of Groningen and Luk Warlop of the Katholieke Universiteit Leuven, Tuk designed experiments to test whether self-control over one bodily desire can generalize to other domains as well.

In one experiment, participants either drank five cups of water (about 750 milliliters), or took small sips of water from five separate cups. Then, after about 40 minutes–the amount of time it takes for water to reach the bladder–the researchers assessed participants’ self-control. Participants were asked to make eight choices; each was between receiving a small, but immediate, reward and a larger, but delayed, reward. For example, they could choose to receive either $16 tomorrow or $30 in 35 days.
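The choice task described above is a standard delay-discounting design. One common model (my illustration, not the paper's analysis) is hyperbolic discounting, V = A / (1 + kD), where larger values of the parameter k mean more impatience; the k values below are hypothetical:

```python
def hyperbolic_value(amount, delay_days, k):
    """Present value under hyperbolic discounting: V = A / (1 + k*D).
    k is a discount parameter; lower k means more patience."""
    return amount / (1 + k * delay_days)

def choose(small, small_delay, large, large_delay, k):
    """Return which option a hyperbolic discounter with parameter k picks."""
    v_small = hyperbolic_value(small, small_delay, k)
    v_large = hyperbolic_value(large, large_delay, k)
    return "large-later" if v_large > v_small else "small-sooner"

# The study's example choice: $16 tomorrow vs. $30 in 35 days.
print(choose(16, 1, 30, 35, k=0.01))  # patient chooser: large-later
print(choose(16, 1, 30, 35, k=0.10))  # impatient chooser: small-sooner
```

In this framing, the full-bladder effect amounts to the water-drinking group behaving as if they had a lower k.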

The researchers found that the people with full bladders were better at holding out for the larger reward later. Other experiments reinforced this link; for example, in one, just thinking about words related to urination triggered the same effect.

“You seem to make better decisions when you have a full bladder,” Tuk says. So maybe you should drink a bottle of water before making a decision about your stock portfolio, for example. Or perhaps stores that count on impulse buys should keep a bathroom available to customers, since they might be more willing to go for the television with a bigger screen when they have an empty bladder.

The pointer is from Michelle Dawson, although I do not take her to be necessarily endorsing (or rejecting) the results.  There is related work here and here (pdf).

I wrote this post with an empty bladder.

The Science Paparazzi

From The Onion, via James Boyle:

Members of the paparazzi say they are merely responding to public demand, providing a service to the millions of Americans who closely follow the careers of the world's top physicists, mathematicians, and botanists.

"In this country, people want to know about scientific discoveries the minute they happen," said New Haven-based freelance photographer Lance Evans. "It's only natural that the public would be interested in the personal lives of the men and women behind these discoveries."

Gould insisted that the adoring public is not the problem.

"The paparazzi are far more forceful and disruptive than they need to be," said Gould, who on Aug. 5 pleaded no-contest to a March incident in which he attacked an intrusive paparazzo with a broken graduated cylinder. "I realize they have a job to do, but there is such a thing as taking it too far."

According to Gould, paparazzi often use illegal means to secure photos for such notoriously disreputable tabloids as Science World Weekly and Starz, which bills itself as "your most trusted source for astronomy celebrity news."

The article is humorous throughout.  The closer is this:

"These scientists are the most important people in America," Krause said. "Our very future depends on them. They are enabling us to live longer and better, discovering the history of the planet we live on, and unraveling the mysteries of the universe. There's no way we'd ever let them work in obscurity. It's laughable."

Who will still be famous in 10,000 years?

Sam Hammond, a loyal MR reader, asks me:

Who do you think will still be famous in 10,000 years? People from history or now. Shakespeare? Socrates? Hawking? 

This requires a theory of 10,000 years from now, but let's say we're a lot richer, not computer uploads (if so, I know the answer to the question), and not in a collapsed dystopia.  We still look like human beings and inhabit physical space.  If you wish, postulate that not all of those 10,000 years involved strongly positive economic growth.

In that case, I'll go with the major religious leaders (Jesus, Buddha, etc.), Einstein, Turing, Watson and Crick, Hitler, the major classical music composers, Adam Smith, and Neil Armstrong.  (Addendum: Oops!  I forgot Darwin and Euclid.)

My thinking is this.  The major religions last for a long time and leave a real mark on history.  Path-dependence is critical in that area. 

Otherwise, an individual, to stay famous, will have to securely symbolize an entire area, and an area "with legs" at that.  The theory of relativity still will be true and it may well become more important.  The computer and DNA will not be irrelevant.  Hitler will remain a stand-in symbol for pure evil; if he is topped we may not have a future at all.  Beethoven and Mozart still will be splendid, but Shakespeare and other wordsmiths will require translation and thus will fade somewhat.  The propensity to truck and barter will remain and Smith will keep his role as the symbol of economics.  Keynesian economics may someday be less true, as superior biofeedback, combined with markets in self-improvement, ushers in an era of flexible wages, while market-based expected nominal gdp targeting prevents a downward deflationary spiral.

The fame of those individuals will not perish, in part, because the more distant future will produce fewer lasting mega-famous people.  Achievement will be more decentralized and more connected to teams.  The dominance of Edison and Tesla, in their breakthroughs, will not be repeated.  There won't be a mega-Einstein eighty years from now, to make everyone forget the current Einstein, even if (especially if) science goes very well.