Does using Facebook make you happier?

by on February 27, 2015 at 1:48 am in Science, Web/Tech | Permalink

I’ve long suggested that those worried about inequality, envy, and relative deprivation should tax Facebook rather than the private fortune of Bill Gates.  Most envy is local, and connected to people you know and whose lives you are in touch with.  Along these lines, here is some recent research by Verduyn et al.:

Prior research indicates that Facebook usage predicts declines in subjective well-being over time. How does this come about? We examined this issue in 2 studies using experimental and field methods. In Study 1, cueing people in the laboratory to use Facebook passively (rather than actively) led to declines in affective well-being over time. Study 2 replicated these findings in the field using experience-sampling techniques. It also demonstrated how passive Facebook usage leads to declines in affective well-being: by increasing envy. Critically, the relationship between passive Facebook usage and changes in affective well-being remained significant when controlling for active Facebook use, non-Facebook online social network usage, and direct social interactions, highlighting the specificity of this result. These findings demonstrate that passive Facebook usage undermines affective well-being.

The pointer is from Robin Hanson on Twitter.

Psychology journal bans significance testing

by on February 26, 2015 at 7:43 am in Science | Permalink

This is perhaps the first real crack in the wall for the almost-universal use of the null hypothesis significance testing procedure (NHSTP). The journal, Basic and Applied Social Psychology (BASP), has banned NHSTP and related statistical procedures from its pages. It previously had stated that use of these statistical methods was no longer required but could optionally be included. Now it has proceeded to a full ban.

The type of analysis being banned is often called a frequentist analysis, and we have been highly critical in the pages of SBM of overreliance on such methods. This is the iconic p-value, where p < 0.05 is generally considered statistically significant.
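To make concrete what BASP is banning, here is a minimal sketch, in plain Python with made-up data, of one simple frequentist procedure: a two-sided permutation test, which produces exactly the kind of p-value the journal will no longer accept.

```python
import random
import statistics

def permutation_p_value(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Returns the fraction of random label shuffles whose mean difference
    is at least as extreme as the observed one -- a frequentist p-value.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            extreme += 1
    return extreme / n_iter

# Two invented samples with a clear difference in means.
group_a = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9, 5.4, 5.2]
group_b = [4.6, 4.9, 4.4, 4.7, 4.5, 4.8, 4.3, 4.6]
p = permutation_p_value(group_a, group_b)
print(f"p = {p:.4f}")  # under NHSTP, p < 0.05 would be declared "significant"
```

The p-value says only how surprising the data would be if there were no true difference; BASP’s complaint is that this binary significant/not-significant verdict gets treated as the whole analysis.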

There is more here, with further interesting points in the piece, via Mark Thorson.

Ezekiel J. Emanuel writes:

The big problem is profitability. Unlike drugs for cholesterol or high blood pressure, or insulin for diabetes, which are taken every day for life, antibiotics tend to be given for a short time, a week or at most a few months. So profits have to be made on brief usage. Furthermore, any new antibiotics that might be developed to fight these drug-resistant bacteria are likely to be used very sparingly under highly controlled circumstances, to slow the development of resistant bacteria and extend their usefulness. This also limits the amount that can be sold.

The self-assembling chair

by on February 23, 2015 at 11:55 am in Science, Web/Tech | Permalink

There are few tasks more infuriating than assembling a piece of furniture. But a new project at MIT may eventually eliminate that pesky life chore entirely.

As Wired’s Liz Stinson reports, the loopy geniuses over at the Massachusetts Institute of Technology’s Self-Assembly Lab recently debuted a chair designed to put itself together, without the need for a single vaguely illustrated instruction manual.

There is also a good video at the link, courtesy of the excellent Samir Varma, a loyal MR reader.  I sometimes toy with the proposition that there is in fact nothing I can assemble, not even simple items.  My requested birthday gift this year was that Yana show me how to put together and operate that which I got for Christmas.

Algorithm Aversion

by on February 22, 2015 at 7:40 am in Economics, Science | Permalink

People don’t like deferring to what I earlier called an opaque intelligence. In a paper titled Algorithm Aversion the authors write:

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In five studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

People who defer to the algorithm will outperform those who don’t, at least in the short run. In the long run, however, will reason atrophy when we defer, just as our map-reading skills have atrophied with GPS? Or will more of our limited resource of reason come to be better allocated according to comparative advantage?

The Rise of Opaque Intelligence

by on February 20, 2015 at 7:31 am in Economics, Science | Permalink

Many years ago I had a job picking up and delivering packages in Toronto. Once the boss told me to deliver package A then C then B when A and B were closer together and delivering ACB would lengthen the trip. I delivered ABC and when the boss found out he wasn’t happy because C needed their package a lot sooner than B and distance wasn’t the only variable to be optimized. I recall (probably inaccurately) the boss yelling:

Listen college boy, I’m not paying you to think. I’m paying you to do what I tell you to do.

It isn’t easy suppressing my judgment in favor of someone else’s judgment, even if the other person has better judgment (ask my wife), but once it was explained to me I at least understood why my boss’s judgment made sense. More and more, however, we are being asked to suppress our judgment in favor of that of an artificial intelligence, a theme in Tyler’s Average is Over. As Tyler notes:

…there will be Luddites of a sort. “Here are all these new devices telling me what to do—but screw them; I’m a human being! I’m still going to buy bread every week and throw two-thirds of it out all the time.” It will be alienating in some ways. We won’t feel that comfortable with it. We’ll get a lot of better results, but it won’t feel like utopia.

I put this slightly differently: the problem isn’t artificial intelligence but opaque intelligence. Algorithms have now become so sophisticated that we humans can’t really understand why they are telling us what they are telling us. The WSJ writes about drivers using UPS’s super algorithm, Orion, to plan their delivery routes:

Driver reaction to Orion is mixed. The experience can be frustrating for some who might not want to give up a degree of autonomy, or who might not follow Orion’s logic. For example, some drivers don’t understand why it makes sense to deliver a package in one neighborhood in the morning, and come back to the same area later in the day for another delivery. But Orion often can see a payoff, measured in small amounts of time and money that the average person might not see.

One driver, who declined to speak for attribution, said he has been on Orion since mid-2014 and dislikes it, because it strikes him as illogical.

Human drivers think Orion is illogical because they can’t grok Orion’s super-logic. Perhaps any sufficiently advanced logic is indistinguishable from stupidity.
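The dispatcher’s logic, and presumably a far more elaborate version of it inside Orion, can be sketched as a cost function that penalizes lateness as well as distance. All stop positions, deadlines, and penalty weights below are invented for illustration; with these numbers the route A-C-B beats the shorter-looking A-B-C, just as my boss insisted.

```python
from itertools import permutations

# Hypothetical stops: name -> (position on a line, delivery deadline in minutes).
# Travel time equals distance; C is far away but urgent. All numbers invented.
STOPS = {"A": (1.0, 10), "B": (-3.0, 120), "C": (8.0, 12)}
DEPOT = 0.0
LATE_PENALTY = 5.0  # cost per minute a delivery arrives past its deadline

def route_cost(order):
    """Total travel time plus lateness penalties for visiting stops in order."""
    pos, clock, cost = DEPOT, 0.0, 0.0
    for stop in order:
        loc, deadline = STOPS[stop]
        leg = abs(loc - pos)          # drive to the stop
        clock += leg
        cost += leg
        cost += LATE_PENALTY * max(0.0, clock - deadline)
        pos = loc
    return cost

best = min(permutations(STOPS), key=route_cost)
print("best order:", "".join(best))
for order in permutations(STOPS):
    print("".join(order), round(route_cost(order), 1))
```

On these numbers A-B-C covers the least distance but delivers C late, so its total cost exceeds A-C-B’s. A driver optimizing distance alone would call the winning route illogical.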

Hat tip: Robin Hanson for discussion.

Oceanic average is over

by on February 20, 2015 at 1:41 am in History, Science, Uncategorized | Permalink

The animals in the ocean have been getting bigger, on average, since the Cambrian period – and not by chance.

That is the finding of a huge new survey of marine life past and present, published in the journal Science.

It describes a pattern of increasing body size that cannot be explained by random “drift”, but suggests bigger animals generally fare better at sea.

In the past 542 million years, the average size of a marine animal has gone up by a factor of 150.

It appears that the explosion of different life forms near the start of that time window eventually skewed decisively towards bulkier animals.

Today’s tiniest sea critter is less than 10 times smaller than its Cambrian counterpart, measured in terms of volume; both are minuscule crustaceans. But at the other end of the scale, the mighty blue whale is more than 100,000 times the size of the largest animal the Cambrian could offer: another crustacean with a clam-like, hinged shell.

There is more here, and here is Wikipedia on Cope’s Rule.  Here is one possible explanation.  Does the Rule apply to dinosaurs?  I wonder if the risk-adjusted returns to species size also are going up.

Here is a piece by Tormala, Jia, and Norton:

When people seek to impress others, they often do so by highlighting individual achievements. Despite the intuitive appeal of this strategy, we demonstrate that people often prefer potential rather than achievement when evaluating others. Indeed, compared with references to achievement (e.g., “this person has won an award for his work”), references to potential (e.g., “this person could win an award for his work”) appear to stimulate greater interest and processing, which can translate into more favorable reactions. This tendency creates a phenomenon whereby the potential to be good at something can be preferred over actually being good at that very same thing. We document this preference for potential in laboratory and field experiments, using targets ranging from athletes to comedians to graduate school applicants and measures ranging from salary allocations to online ad clicks to admission decisions.

Here are some ungated copies.  For the pointer I thank the excellent Kevin Lewis, who sent me the link in response to my earlier post on age discrimination.

That is the title of a short essay by Gary Davis, here is the essay in toto:

Teaching is commonly associated with instruction, yet in evolution, immunology, and neuroscience, instructional theories are largely defunct.

We propose a co-immunity theory of teaching, where attempts by a teacher to alter student neuronal structure to accommodate cultural ideas and practices is sort of a reverse to the function of the immune system, which exists to preserve the physical self, while teaching episodes are designed to alter the mental self.

This is a theory of teaching that is based on the inter-subjective relationship between teacher and learner. This theory posits that teaching does not, as is commonly assumed, take place via instruction from teacher to students, but rather through a process of selection in the learner’s brain, stimulated by materials and activities utilized by the teacher. In this theory, the mechanism that drives the selection process in learners’ brains is co-regulated emotional signaling between teacher and learner. From this perspective, the power of formative assessment is that it intrinsically carries with it emotional aspects for both learner and teacher, in that it provides a feedback relationship between them both, and so, according to the Greenspan & Shanker theory of cognitive symbolic development, promotes cognitive development.

That is from the Journal of Brief Ideas, a new and worthy web site, and for the pointer to the site I thank Michelle Dawson.

Here is a Valentine’s Day puzzle: five husband-and-wife couples have been awarded Nobel Prizes. Name them.

I will give you one hint. Four of the couples won for joint work. In only one couple did each spouse win a separate Nobel, and that couple included a Nobel Prize winner in economics.

Geoengineering markets in everything

by on February 5, 2015 at 2:07 pm in Economics, Science | Permalink

Oliver’s Travels, a luxury travel and rental service, has begun offering the service for prospective brides and grooms who are interested in a sunny destination wedding at certain venues in France (the company is planning on expanding availability to the UK and Italy if the concept “takes off”).

For a fee starting at £100,000 (~$150K), the company will give a team of pilots and meteorologists three weeks to plan and affect the weather on your big day. The technique is called “cloud bursting” (or “cloud seeding”): a chemical called silver iodide is used to “seed” the clouds and cause them to rain.

By inducing rain and cloud dissipation for 24 hours before the wedding day, the team can guarantee a relatively cloud-free and sunny environment on the day itself.

There is more here, and for the pointer I thank Michael Rosenwald.

That is the subtitle, the title of the paper is Killing the Golden Goose, and the authors are Ashish Arora, Sharon Belenzon, and Andrea Patacconi.  The abstract shows what an important paper this is:

Scientific knowledge is believed to be the wellspring of innovation. Historically, firms have also invested in research to fuel innovation and growth. In this paper, we document a shift away from scientific research by large corporations between 1980 and 2007. We find that publications by company scientists have declined over time in a range of industries. We also find that the value attributable to scientific research has dropped, whereas the value attributable to technical knowledge (as measured by patents) has remained stable. These effects appear to be associated with globalization and narrower firm scope, rather than changes in publication practices or a decline in the usefulness of science as an input into innovation. Large firms appear to value the golden eggs of science (as reflected in patents) but not the golden goose itself (the scientific capabilities). These findings have important implications for both public policy and management.

There is an ungated version here (pdf).  Of course, for better or worse, this means there is more of a burden on universities.

At least not too visibly:

Thijs Biersteker of digital entrepreneurs Moore has created a book jacket that will open only when a reader shows no judgment. An integrated camera and facial recognition system scans the reader’s face, only unlocking the book – in the prototype, filled with creative work for the Art Directors Club Netherlands annual – when their expression is neutral.

“My aim was to create a book cover that is human and approachable hi-tech. If you approach the book, if you’re overexcited or your face shows a sceptical expression, the book will stay locked,” explains Biersteker on his website. “But if your expression is neutral (no judgment) the system will send an audio pulse and the book will unlock itself. I often worry about my scepticism and judgement getting in the way of my amazement. Judgment should never hinder the relentless enthusiasm of seeing things for the first time.”

The full story, which includes photos, is here.  The Twitter pointer is from Ted Gioia.

“Highly specific pools of reputation information will become more useful in aggregate,” said Mr. Fertik, co-author with David C. Thompson of “The Reputation Economy,” a guide to optimizing digital footprints. “If you’re a really good Uber passenger, that may be useful information for Amtrak or American Airlines. But if you add in your reputation from Airbnb plus OpenTable plus eBay, it starts to get useful globally.”

There is more here, interesting throughout.  But will there be errors in these measurements?  As I wrote to Ashok Rao, fresh regressions are a public good.
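Fertik’s aggregation idea can be sketched as a simple weighting problem. All platform names, scores, and review counts below are invented; the point is that signals backed by many interactions should count for more than thin ones.

```python
# Toy cross-platform reputation: platform -> (average score out of 5, review count).
# All data invented for illustration.
ratings = {
    "uber":      (4.9, 320),
    "airbnb":    (4.7, 45),
    "opentable": (5.0, 3),   # too few reviews to mean much on its own
}

def aggregate(ratings):
    """Review-count-weighted mean score across platforms."""
    total = sum(score * n for score, n in ratings.values())
    count = sum(n for _, n in ratings.values())
    return total / count

score = aggregate(ratings)
print(f"global reputation: {score:.2f}")
```

Even this crude weighting shows why aggregation helps: the near-meaningless three-review signal barely moves the global score, while the well-established ones dominate. It also shows where errors creep in, since a single mislabeled platform feed would propagate into the aggregate.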

A hotel with robot staff and face recognition instead of room keys will open this summer in Huis Ten Bosch in Nagasaki Prefecture, the operator of the theme park said Tuesday.

The two-story Henn na Hotel is scheduled to open July 17. It will be promoted with the slogan “A Commitment for Evolution,” Huis Ten Bosch Co. said.

The name reflects how the hotel will “change with cutting-edge technology,” a company official said. This is a play on words: “Henn” is also part of the Japanese word for change.

Robots will provide porter service, room cleaning, front desk and other services to reduce costs and to ensure comfort.

There will be facial recognition technology so guests can enter their rooms without a key.

At least for now, the facial recognition bit means you cannot send your robot to stay there…

The story is here, alas I have forgotten whom I should thank for this pointer.