by Tyler Cowen
on January 5, 2013 at 12:59 pm
in Uncategorized |
1. The statistics software signal.
2. The culture that is Iceland (book flood).
3. Not-new books of the year.
4. Isaac Asimov 1988 video on on-line education (very good).
5. Is education becoming more or less segregated?
6. A negative review of Les Mis.
#6 Seems like all I read are negative reviews. You can go to Tyler’s favorite website for a whole list of negative reviews/opinions (http://thebrowser.com/reports/les-miserables). It’s frustrating because I saw it having no idea about the plot, history, characters or anything, and I enjoyed it quite a lot. I didn’t even know it was a musical going in. But reading all this negative opinion just irritates me. Perhaps I just have horrible, horrible taste, but maybe this piece of culture has been a bit over-analyzed. Can’t a guy just enjoy the story, music, and performances?
Armond White has a good positive review: http://cityarts.info/2012/12/26/working-class-heroism/
Of course he does.
I can’t honestly tell if the New Yorker reviewer was all tongue in cheek or what. Suggesting “Singin’ in the Rain” as an alternative to Les Mis is…just…The Onion.
Pizza is completely overrated. I have never actually eaten pizza from New York City or any ethnic enclave in an urban setting, but I have just recently tried Dominos and it is disgusting. If people want to experience good food, they need to sample some Foie Gras or caviar.
1. Rather scathing but good points. Seems to me that statisticians would benefit from a few courses in epistemology in order to understand the limitations of and proper use of statistics. Either that or having “correlation is not necessarily causation” tattooed on a part of the body that they see every day – e.g., the forehead.
How in the hell did you infer from (1) a universal claim about statisticians benefiting from epistemology? Or the claim about correlation/causation? I did both philosophy and statistics in undergrad. The second-year epistemology course I took was mostly the analysis of knowledge, approached from an a priori standpoint. There was very little practical information you could make use of. You’d be better off taking a psychology course in learning and reasoning. At least psychology tries to incorporate some action-guiding principles (avoiding certain cognitive biases, what science knows about learning), similar to what philosophical logic courses do.
Also, the moronic “correlation doesn’t equal causation” crowd needs to learn graphical causal models and Bayesian networks. It’s embarrassing to see people trot this line out when the research of the last 10 to 20 years clearly shows we can reason about causation in a concrete manner.
“when the research of the last 10 to 20 years clearly shows we can reason about causation in a concrete manner.”
Causation is an observable property of our models, not an observable property of the universe.
“Causation is an observable property of our models, not an observable property of the universe.”
Only if you are a full blown anti-realist sperglord.
One wonders exactly how you go about living, typing on a computer, and flying in a plane with such a metaphysical worldview.
Reread what I said, dope.
I didn’t say that causation didn’t exist. I said it wasn’t observable.
This is because only correlation is observable.
… and again, you’d be an anti-realist sperglord to suggest that causation isn’t observable. That’s exactly what anti-realism means. You are denying the ability of scientists and engineers to capture accurate aspects of reality, and reason accurately about causation.
Again, you’d have to be a moronic sperglord to actually believe this.
You are universally quantifying over all aspects of science and engineering, and their inability to pinpoint and reason about causation. Again, how do you go about living? Or typing on that computer? Or flying in a plane?
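For what it’s worth, the “reason about causation concretely” point being argued above can be shown with a toy sketch (Python, invented numbers; a simplification of the graphical-causal-model idea, not the formal machinery): a confounder Z drives both X and Y, so X and Y correlate strongly, yet intervening to set X directly (Pearl’s do-operator, which breaks the Z → X arrow) leaves Y untouched.

```python
import random

random.seed(0)
n = 10_000

# confounder Z causes both X and Y; X has no causal effect on Y
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def corr(a, b):
    """Pearson correlation, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(round(corr(x, y), 2))    # strong correlation (~0.8) despite no X -> Y arrow

# intervention: set X by fiat, severing the Z -> X arrow;
# Y is unaffected because X never caused it
x_do = [random.gauss(0, 1) for _ in range(n)]
print(round(corr(x_do, y), 2))  # ~0
```

Observation alone sees the 0.8 correlation; only the causal graph tells you the intervention will do nothing.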
I disagree that 1. makes any good points. It’s just an elitist heaping scorn onto the less-fashionable plebes who don’t use R or other approved software. zbicyclist and Kevin summed it up nicely.
The signaling mechanisms of academic statisticians are still interesting from a sociological point of view, no?
#1) Not sure about the bit on ‘carefulness’ or ‘mindfulness’ of analyses varying between different program users. Having consulted with users of several programs with varying ease of spamming tests, my opinion is that it’s the person running the test that matters most.
Are there good competent people doing statistics work who prefer to use SAS, SPSS or even Excel? Of course.
But overwhelmingly, R, SciPy and to a lesser extent Matlab users have a better grasp of statistics. The difference is that the former systems (SAS, SPSS, Excel) are merely interfaces to stat algorithms, while the latter are full-featured general-purpose programming languages. The tradeoff is that the general-purpose systems are harder to learn but give you the power to do much more.
This suggests two things: first, the programming-language users are probably smarter. But more importantly, they’ve probably needed the fuller power of a system like R at some point, which justified putting in the effort to learn it. A SAS user, by contrast, has only ever needed to do out-of-the-box vanilla statistics or regressions.
Say you want to do something like assign cross-validation bins grouped by some factor, test regressions across those bins, then plot the in-sample/out-of-sample error ratio by number of included variables, and then run all of that separately for another factor group. This is either impossible or near-Herculean in SAS, whereas in R it can be done with a single command if you know what you’re doing.
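For concreteness, here is a rough sketch of that kind of workflow — simplified to one predictor, no plotting, and invented data (the commenter has R in mind; this is Python, but the logic carries over):

```python
import random

random.seed(1)

# toy data: y depends linearly on x; rows belong to factor groups A/B/C
data = []
for g in "ABC":
    for _ in range(30):
        x = random.gauss(0, 1)
        data.append((g, x, 2 * x + random.gauss(0, 0.3)))

def fit(points):
    """Least-squares slope and intercept for y ~ x."""
    n = len(points)
    mx = sum(x for _, x, _ in points) / n
    my = sum(y for _, _, y in points) / n
    slope = (sum((x - mx) * (y - my) for _, x, y in points)
             / sum((x - mx) ** 2 for _, x, _ in points))
    return slope, my - slope * mx

def mse(points, slope, intercept):
    return sum((y - (slope * x + intercept)) ** 2 for _, x, y in points) / len(points)

# leave-one-group-out cross-validation: each factor level is a fold
for held_out in "ABC":
    train = [p for p in data if p[0] != held_out]
    test = [p for p in data if p[0] == held_out]
    slope, intercept = fit(train)
    ratio = mse(train, slope, intercept) / mse(test, slope, intercept)
    print(held_out, round(slope, 2), round(ratio, 2))
```

The point stands either way: the work is in expressing the grouping and fold logic, which general-purpose languages make composable and menu-driven packages do not.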
5. is interesting, especially considering how terms are defined -
‘He focuses on black and white students, not those in other racial and ethnic groups, and he examines “exposure” and “dissimilarity” (defined below) of black and white students as two measures of desegregation. Hinrichs uses federal data from every college, filed since the era in which desegregation started. He argues that these measures illustrate the extent to which colleges are truly desegregated, which may not be reflected simply by increases or decreases in black student enrollments (which can be concentrated at certain institutions).
Exposure is the percentage of black students at colleges attended by white students, and vice versa. Here he shows that from 1968, the typical white student attended a college that was 2.3 percent black. But by 2009, the typical white student attended a college that was 9.8 percent black. This percentage gain is much larger than overall black enrollment during this period, which also rose, from 5.5 percent to 13.7 percent.
While the growth in exposure has been steady, there have been some notable changes. Throughout the ’80s and ’90s, for example, the public institutions attended by white students had higher black enrollments than did private institutions. But shortly after 2000, public and private exposure rates shifted. In the most recent data, white students at private colleges were, on average, at institutions that have a black enrollment of nearly 12 percent while those at publics were under 9 percent in black enrollment. In an interview, Hinrichs said he didn’t know why public and private positions had flipped, but that he thought this was an important issue to study.’
And yet, possibly because the article writer made a mistake, the following is apparently not mathematically correct -
‘Here he shows that from 1968, the typical white student attended a college that was 2.3 percent black. But by 2009, the typical white student attended a college that was 9.8 percent black. This percentage gain is much larger than overall black enrollment during this period, which also rose, from 5.5 percent to 13.7 percent.’
I’m not sure how a percentage gain of 7.5 points is higher than one of 8.2 points – it would seem to be reversed, somewhat changing the emphasis on what is going on.
With the percent increases, perhaps he means that the first number more than quadrupled while the other “only” underwent a 2.5-fold increase. People talk about percentage changes differently, and I’m never sure exactly what they mean unless they explicitly provide the numbers.
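Both readings can be checked directly against the article’s numbers: the exposure figure more than quadrupled while the overall enrollment share grew about 2.5-fold, even though its absolute gain in percentage points is indeed smaller:

```python
# figures quoted from the article, in percent: (1968 value, 2009 value)
white_exposure = (2.3, 9.8)   # % black at the typical white student's college
black_share = (5.5, 13.7)     # overall black enrollment share

for name, (start, end) in [("exposure", white_exposure),
                           ("enrollment share", black_share)]:
    print(f"{name}: +{end - start:.1f} points, {end / start:.2f}x")
# exposure: +7.5 points, 4.26x
# enrollment share: +8.2 points, 2.49x
```

So the article’s “much larger” claim only works on the relative (fold-change) reading, not the absolute one.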
#1 is the statistical priesthood screening out the unwashed.
When I first taught statistics, we did it by hand (sometimes with the aid of hand-crank calculators: to multiply 4 times 3, enter the 4 and turn the crank 3 times). I learned factor analysis using Benjamin Fruchter’s book, which has you do rotations using graph paper. There’s some learning value to this, but not much and not even the most Luddite of statisticians does it this way.
There’s no bloody reason to code your own estimators if you meet the assumptions of a standard analysis. THAT’s where the education is — learning what assumptions you can make, can’t make, and can make if you test them — not writing code if prepackaged, tested commercial software will do the trick.
+1, could not have been better-said.
You’ve managed to completely miss his point. He’s not arguing that you need to program your own estimators to do proper statistics. He’s arguing that you only use R if you’ve had to do that at one point or another. And having needed to do that (and presumably successfully accomplished it) is a strong signal that you know what you’re doing when it comes to statistical inference.
I don’t think I missed his point at all. He’s an R snob.
BAAAAWW, MY NERD FEELINGS ARE HURT.
#6: Denby isn’t alone.
Michael Phillips of the Chicago Tribune gave Les Mis 1.5 stars.
He also put it on his “worst of 2012” list. Making lemonade out of this lemon, he got a later column splicing together all the angry emails he got from Les Mis fans.
#1 Mostly agree, but author seems to think this of Python/JVM:
“You care about integrating your statistics code into a production codebase.”
The author seems to think that R code is not useful in a production context. Only python and JVM are because they can be used by the much more general-purpose python or java languages, whereas R is specialized. Apparently the author has never heard of the concept of foreign function interfaces, which are widely available for R -> python/C/java.
He also says about R:
“You do not care about aesthetics”
R has by far the best plotting and visualization toolkit of any of the languages he lists. I assume he’s implicitly comparing it to Matlab here. It’s true that Matlab’s visualization seems better to a naive user, but only to one who hasn’t truly learned R’s plotting capabilities. R gives the visualization user much more power and allows an extreme degree of customization, whereas Matlab’s abilities are much more standardized. The true power user prefers R.
Gnuplot is true power.
#4. I wasn’t aware how honest and sharp and constructive he was as an intellectual. Nice to see.
Thank you for posting the Asimov. I just realized I’d never heard him speak. His voice is not at all what I imagined.
My mother did a high school project on Asimov in the late 60s. He was quite famous and successful at the time, yet gave a random student over an hour of his time for her work.
#1 is great news for me. I’ve been using exclusively R for many years for two reasons:
1. R is free.
2. Matlab, SPSS, SAS etc. cost money.
I know I am sending a signal when I disclose I am coding in R, but I always assumed the signal was: “This guy is a cheapskate and he belongs to a cheap institution”. It’s nice to know that for some people the signal is different.
The institution I attended holds their statistics classes in SAS Hall. Very few students did their dissertation work using SAS. R was by far the most popular. (Except for the StatGen folks; a lot of them like Java for some reason….) Point being, even when given complete and free access to expensive and powerful software like SAS, most researchers still favor R. R is just more flexible, and oftentimes that’s what you need when you’re doing original research or analyzing complex statistical problems.
Very good comment.
As for black attendance at public schools, a lot of affirmative action, quotas, and other preferences were eliminated either by vote or by courts. Private schools, which can admit anyone they please, have no such impediment.
There is also an impact of relative quality. Private schools occupy the top 25 or so slots in the US News and World Report rankings. The top public school (Berkeley) is barely in the top 30 even though it is outstanding. If you were black and were at the top of your high school class, wouldn’t you choose Harvard, Princeton, Stanford, MIT, Yale, or another private school over Berkeley? Our president and first lady did.
Now suppose that every school in the nation wanted to “look like America”, but there simply weren’t enough black students to distribute among schools to meet the 13% target. Every school that met or exceeded their target would make the remaining schools look weak in comparison. Majority black schools would hurt the other schools even more.
Looking at the 2009 CPS data, blacks are actually overrepresented in college at more than 14% of all students. Still, some schools are going to poach opportunity for ‘diversity’ from other schools. That’s why it appears like segregation is increasing. Berkeley has about 4% black student body.
“Iceberg Slim’s Pimp: The Story of My Life (Canongate) was the book that determined the ghetto persona … In terms of that influence he’s probably the most dominant writer since Shakespeare.”
#1: I mostly agree, but I would say that for these purposes, Python (at least NumPy), Matlab and Mathematica all belong in the same bucket. That being the one *I* swim in.
numpy is very matlabish.
Agree, but I tend to pick one of them depending on what I’m doing: Python/NumPy/matplotlib when I need something standalone or that can be integrated into other custom code; Mathematica when I need heavy-duty symbolic capabilities; Matlab for everything else.
One third of that bucket is free.
Why is it that no one can write a review without a silly joke about the French?
The French probably deserve it.
#4: Something of an aside here: Have TC or AT ever refuted the most common argument against online ed since it first appeared in the ’90s? To wit, that the bottom 90% in terms of ability and motivation needs high-touch advising and pedagogy to make it through college, making online ed (w/ current technology) irrelevant for most of the population. The empirical evidence is pretty clear on this one, no? I can’t remember MR addressing this counterargument explicitly.
People get help in college? Maybe I went to a very low-touch school (Georgia Tech) but still saying 90% of the college population needs lots of help to make it seems like a gross exaggeration.
#6 lost me with the comment about Americans creating almost all the greatest musicals ever made.