Keith Stanovich and what IQ is good for

The always-interesting-and-still-underrated Michelle Dawson points me to this batch of work.  Here is one of the papers, by Keith E. Stanovich and Richard F. West:

In 7 different studies, the authors observed that a large number of thinking biases are uncorrelated with cognitive ability. These thinking biases include some of the most classic and well-studied biases in the heuristics and biases literature, including the conjunction effect, framing effects, anchoring effects, outcome bias, base-rate neglect, “less is more” effects, affect biases, omission bias, myside bias, sunk-cost effect, and certainty effects that violate the axioms of expected utility theory. In a further experiment, the authors nonetheless showed that cognitive ability does correlate with the tendency to avoid some rational thinking biases, specifically the tendency to display denominator neglect, probability matching rather than maximizing, belief bias, and matching bias on the 4-card selection task. The authors present a framework for predicting when cognitive ability will and will not correlate with a rational thinking tendency.

Even more interesting, in my view, is that higher-IQ people are more likely to behave rationally when they are told that a rationality issue is on the table, but less so otherwise. 

If you are interested in issues of IQ, or for that matter overcoming bias, you should read Stanovich's work.  As noted above, higher-IQ people seem to be just as guilty of "myside bias."

Stanovich has a new book summarizing some of the results, namely What Intelligence Tests Miss: The Psychology of Rational Thought.  It is more idiosyncratic than the articles (he overcommits to one very particular model of the mind: cognitive laziness, without regard for margin) but recommended nonetheless.  For those who care about these issues, a must.

Comments

Yeah! The real problem is that our culture overrates the pertinence and validity of IQ tests to measure cognitive ability. Thank goodness that this researcher is shattering this pervasive myth and boldly taking popular perception in a new direction!

Countdown to Steve Sailer in 5,4,3,2....

> Even more interesting, in my view, is that higher-IQ people are more likely to behave rationally when they are told that a rationality issue is on the table, but less so otherwise.

Many high IQ people subscribe to overly rational systems of thought like libertarianism, thereby forcing things into a rational/logical realm where they are perhaps more comfortable.

i wonder if reading this stuff will help explain paul krugman's column today

"These thinking biases include some of the most classic and well-studied biases in the heuristics and biases literature, including the conjunction effect, framing effects, anchoring effects, outcome bias, base-rate neglect, 'less is more' effects, affect biases, omission bias, myside bias, sunk-cost effect, and certainty effects that violate the axioms of expected utility theory."

Take someone with a 100 IQ. See if he can explain those concepts to you after even a semester-long class.

That illustrates what IQ is good for.

Explains the extremely high incidence of people of high achievement nonetheless making idiotic decisions. Also explains Krugman... or am I repeating myself.

"One has to belong to the intelligentsia to believe things like that: no ordinary man could be such a fool." -- George Orwell

I suspect Tyler Cowen is actually an employee of Amazon.com. Judging by the number of books I buy that he recommends as "must reads", I think he deserves a promotion. I'm thinking executive VP.

The tests of myside bias in the 2007 and 2008 papers of Stanovich and West in Thinking & Reasoning don’t quite match what I would expect -- of course, they are the experts, not me, so maybe I am barking up the wrong tree. My conception of myside bias can be illustrated by the traveling call on LeBron James late in a game against the Wizards a little while ago, a call that effectively won the game for the Wizards. Myside bias predicts that, at least relative to each other, Cavs fans will tend to disagree with the call and Wizards fans will tend to agree with it. Independence of this bias from cognitive ability would mean that the extent of disagreement between the partisans is unaffected by whether the fans are basketball experts or casual fans.

It seems that Stanovich and West have instead tested something more akin to whether basketball referees prone to make a lot of traveling calls agree with the call on LeBron more often than referees prone to make few traveling calls. Independence here would mean that the disagreement between the two groups does not depend on whether a referee works professional/major college leagues or only pee-wee league games. I don’t expect differences in expertise to affect the bias among the two groups of referees, but I might expect differences in expertise among Wizards and Cavs fans to affect the bias.

A related way to put this is that the two sides may have self-selected into those sides precisely because of the arguments that subjects in Stanovich and West’s tests were asked to evaluate. Why should cognitive ability affect someone’s (subsequent) assessment of an argument that they have already decided does or does not make sense?

This is already too long, but I am also skeptical about their tests vis-a-vis one-sided bias.

Little bit later than I expected, Steve.

I love that you accuse anyone of imposing rationalizations for falsehoods, especially in relation to IQ.

Intelligence And Rhythmic Accuracy Go Hand In Hand

http://www.sciencedaily.com/releases/2008/04/080416100459.htm

One problem with IQ I've read about in the self-help books is that smarter people tend to foresee more of the risks involved in an activity, which tends to paralyze them. It was probably a good thing when we were as likely to become a sabre-tooth snack as to bag the mastodon, but in today's world luck favors the bold. So that might explain why dummies who just do it look like geniuses, until they get to the point where the Peter Principle kicks in.

Unrelated, I wonder what Hank Paulson is up to.

A serious flaw in this research is the erroneous conflation of student-reported SAT scores with cognitive ability. As a professional SAT tutor and educator who has individually taught many hundreds of students to achieve substantially higher scores, I do not believe that SAT performance can or should be considered an appropriate baseline measure of cognitive ability, as this study does.

Simply put, proper SAT training can improve student scores by literally hundreds of points, and while there is indeed a score ceiling that each student usually reaches, the lack of any statistical control for whether a student has been 'optimized' to achieve this ceiling renders the study's reliance on SAT scores as a test of a student's cognitive ability fatally flawed.

Moreover, even if a student has not had any training, the College Board's own study (PDF) shows that a student's score can improve simply by taking the SAT multiple times. Without a control even for how many times the student took the test, let alone whether the student is reporting best scores in individual subjects from the same test or different tests, there is simply no way to say that a student's final reported SAT score is a legitimate cognitive measure on which to base other experiments on heuristic biases.

It should also be noted that many other non-intelligence factors, both internal and external, also affect SAT scores. Extreme parental and peer pressure can often have a severe (usually negative) impact on student performance. Mental fatigue (often caused by student overscheduling) and physical fatigue (common among student athletes) are also big factors. Likewise, a student's maturity level (both emotional and physical) is an important variable. There are others.

In sum, the sole reliance on self-reported student SAT scores as the definitive indicator of cognitive ability skews the results of this study to such a significant degree that the results must be questioned. I'm not saying the conclusions do not have merit, but without a more accurate and controlled baseline of cognitive ability, there is just no way to tell.

Just because the SAT is less than (perhaps even way less than) 100% correlated with intelligence does not mean that a) the correlation is not positive nor that b) it is not statistically significant.

(BTW: for B, I think you meant to say 'IS statistically significant'?).

Look, I'm merely pointing out that there are myriad variables, both external and internal, that affect a student's SAT score and that the study's methodology does not control for (which I know because I did indeed read the study). So many, in fact, that basing a study of biases in cognitive ability on the scores, without accounting for what are certainly significant variations in the baseline data (possibly approaching one standard deviation in extreme cases), really doesn't lend the study much credibility.

And make no mistake, the deviation is indeed significant. For instance, take two identical scoring students (both with, say, 1000 combined Math & Reading). Now give only one of the students training that adds another 200 points to his or her score (a nice bump for a tutor worth his or her salt), so that the trained student has a 1200 and the other student remains at 1000. Does anyone seriously contend that the trained student now has greater cognitive ability than the untrained student, or, more importantly, that the 20% increase is not statistically significant?

And that's just a deviation based on a lack of training, not to mention all of the other, non-cognitive factors I mentioned above that might also affect any given student's final SAT score.

Bottom line: A lot of people mistakenly believe that the SAT is somehow a valid substitute for an IQ test. I may not be the world's greatest SAT tutor, but I have more than enough experience with students who take this test to know that belief is not valid.

Or to put it another way: if student performance on the SAT really were based solely on the intelligence of the student, how on earth could I and many others like me stay in business?

> if student performance on the SAT really were based solely on the intelligence of the student, how on earth could I and many others like me stay in business?

if investment performance really were based solely on asset class and assumed risk, how on earth could investment managers and hedge funds stay in business?

Because it is really difficult for the average customer to tell the difference, and this is ten times harder if the comparison is a one-time event (SAT scores) rather than a yearly comparison (investment performance).

I'm not saying that tutoring doesn't help, it clearly does. Just that your proof is flawed.

I was saying that your proof by "how on earth could I and many others like me stay in business?" was flawed.

There are a lot of businesses that stay in business while not providing the real benefit they claim. Another example is casinos. And all those weight-loss products.

However, you CAN show that tutoring improves students' results, by comparing their results BEFORE and AFTER tutoring.

So I agree that tutoring helps students (and not just for SATs), but the "stay in business" proof is flawed.

To put it in tutoring terms: your answer was correct, but your derivation has a logical flaw.
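To make the before/after idea concrete, here is a minimal simulation sketch in Python. All the numbers are invented for illustration (the 60-point tutoring gain and 20-point retest gain are assumptions, not estimates from any study), but it shows both why a paired comparison detects a gain and why, given the College Board retest finding mentioned above, you would still need an untutored control group to attribute the gain to tutoring:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 200                                  # hypothetical sample of tutored students
ability = rng.normal(500, 100, n)        # latent ability on an SAT-section-like scale
pre = ability + rng.normal(0, 30, n)     # first sitting: ability plus test-day noise
post = (ability + 60                     # assumed tutoring gain
        + 20                             # assumed retest gain from mere familiarity
        + rng.normal(0, 30, n))          # fresh test-day noise on the second sitting

t, p = stats.ttest_rel(post, pre)        # paired t-test on the same students
print(f"mean gain = {np.mean(post - pre):.1f} points, t = {t:.1f}, p = {p:.3g}")

# The paired test reliably detects *a* gain, but it cannot separate the
# tutoring effect (60) from the retest effect (20); only a control group
# that retakes the test without tutoring would let you split the two.
```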

> As noted above, higher-IQ people seem to be just as guilty of "myside bias."

Nixon and Obama are the obvious US political examples.

My point is they don't have to control for them.

Sorry, but you're simply wrong about that. Any serious, statistically-based, academic study of biases in cognitive ability must, by definition, control for any and all significant anomalies in the baseline data. It's simply a matter of proper scientific methodology.

The authors of the study themselves understand this. If you read the study, you'll see that it controls for variations in self-reported student scores vs. actual scores. The reason for the control is that students tend to exaggerate their scores - although not as significantly when self-reporting in a study as in, say, a job application.

So, if the authors control for that rather minor variation in score reporting, why shouldn't they also control for potentially far more significant problems related to test prep, et al.? And what does it say about the reliability of their conclusions that they don't?

All they need to do is show a positive correlation.

You're missing it. Even if there is some general, positive correlation between SAT scores and intelligence (and I'm not saying there isn't), the problem lies in the fact that we can't rely on SAT scores as a measure of that correlation because variations in test prep and other factors obscure the numbers.

It may very well be that a student who scores 1200 has greater cognitive ability than a student who scores 1000. But then assume that the 1200 student would have scored 1000 had he not had test prep. Does that prep-assisted 1200 now still correlate to greater cognitive ability than a student who scores 1000 without prep?

Or, to put it another way, how can a study that bases its conclusions on SAT score differences have any legitimacy if you can't tell what those differences really are?
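For what it's worth, both sides of this exchange can be partly right at once, and a small simulation sketch (again in Python, with every number invented for illustration) makes the point: random, unmodeled prep gains attenuate the observed correlation between SAT score and latent ability without driving it to zero. A positive, significant group-level correlation can therefore coexist with individual comparisons that are unreliable in exactly the 1000-vs-1200 way described above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

ability = rng.normal(0, 1, n)              # latent cognitive ability (standardized)
true_score = 1000 + 150 * ability          # SAT-like score absent prep and noise
prep_gain = rng.choice([0, 200], size=n)   # half the students get a 200-point prep bump
noise = rng.normal(0, 50, n)               # ordinary test-day noise

r_no_prep = np.corrcoef(ability, true_score + noise)[0, 1]
r_with_prep = np.corrcoef(ability, true_score + prep_gain + noise)[0, 1]
print(f"correlation without prep variance: {r_no_prep:.2f}")   # roughly 0.95
print(f"correlation with unmodeled prep:   {r_with_prep:.2f}") # roughly 0.80

# Attenuation: the extra error variance shrinks the correlation but leaves it
# clearly positive, so group-level results can survive even though any single
# 1000-vs-1200 comparison may just reflect who was coached.
```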
