Some reflections on GRE scores

The evidence indicates that GRE scores predict graduate school success and general intelligence, and that SAT scores predict later success in science.  Here is further evidence, and here is yet further evidence.

You don’t have to think that “high GRE score fields” are better than “low GRE score fields.”  Many of my friends, for instance, think string theory is intellectually bankrupt, despite many of its proponents being very, very smart.  I don’t have an opinion on string theory per se, but my friends might be right, and in any case I would rather read books from cultural studies, a lower GRE score field.

If you wish to understand the relative strengths and pathologies of theoretical physics and cultural studies, you cannot do that without knowing that the former is a relatively high GRE score field (or the equivalent) and that the latter is a relatively low GRE score field (or the equivalent).

There are many top economists on Twitter, most of them Democrats, who would never ever utter a word about GRE scores in a blog post or on Twitter.  Yet when on an admissions committee, they will ruthlessly enforce the strictest standards for math GRE scores without hesitation.  Not only in top ten programs, but in top thirty programs and even further down the line in many cases.  It is very, very hard to get into a top or even second-tier economics program without an absolutely stellar math GRE score, and yes that is enforced by the same humans who won’t talk about the issue.

Just in case you didn’t know that.

Personally, I feel it has gone too far in that direction, and economics has overinvested in one very particular kind of intelligence (I would myself put greater stress on the old GRE subject test scores for economics, thus selecting for those with an initial interest in the economy rather than in mathematics).

When I did graduate admissions for George Mason University, I very consciously moved away from an emphasis on GRE scores, and for the better.  My first goal was simply to take in more students, and a more diverse group of students, and in fact many of the later top performers were originally “marginal” students by GRE standards.  Looking back, many of our top GRE-scoring students have not done better than their peers, though they have done fine.  For GMU these admission criteria are (in my view) more like the Rosen-Roback model than anything else, though I would readily grant Harvard and MIT are not in the same position.

If you are afraid to talk about GRE scores, you are afraid to talk about reality.

Comments

If the GRE predicts intelligence and graduate school success, why did you downplay it at GMU? And if indeed it turned out not to be a great predictor there, why did you write a post about how important it is?

You must have scored really poorly on the GREs to bring up such an obvious point.

Back in 2007, I made up a table of GRE Z-scores by intended field of graduate study. The top 5 fields were:

Physics & astronomy
Philosophy
Mathematical Sciences
Materials Engineering
Economics

https://isteve.blogspot.com/2007/08/graduate-record-exam-scores-by-graduate.html
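
For readers unfamiliar with the construction, a z-score table like this just expresses each field's mean score in standard-deviation units above or below the overall test-taker mean. A minimal sketch of that arithmetic, using placeholder numbers rather than actual ETS figures:

```python
# Illustrative only: the means and standard deviation below are placeholders,
# not ETS data; the point is just the z-score arithmetic.
overall_mean = 150.0   # hypothetical all-test-taker mean (130-170 quant scale)
overall_sd = 8.5       # hypothetical all-test-taker standard deviation

field_means = {
    "Physics & astronomy": 162.0,
    "Philosophy": 157.0,
    "Economics": 160.0,
}

# z = (field mean - overall mean) / overall standard deviation
z_scores = {field: (m - overall_mean) / overall_sd for field, m in field_means.items()}

for field, z in sorted(z_scores.items(), key=lambda kv: -kv[1]):
    print(f"{field:>20}: z = {z:+.2f}")
```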

What do you mean "made up."

Are those scores just your impression of what they scored?

You must have scored super low on the quant section, huh Steve.

It’s perfectly clear what Steve meant by “made up”, and in your comment you’re just being a stupid a-hole.

I have two MAs, one in public culture and one in the sociology of TV. Yet Steve wants to pretend that just because our programs have lower GRE scores, somehow we're not as smart as the math types.

I am calling Rule 34 on that comment.

I'm pretty sure you are thinking of a different Rule than 34.

Either way, this thread is amusing

You didn't read the linked page, which explains the data source and the way he processed it.

"Yet Steve wants to pretend that just because our programs have lower GRE scores that somehow where not as smart as the math types."

Steve explained that his last column (which he uses to order the fields) was calculated in a way so as to not overemphasize math.

Steve's table is interesting, and it's almost certainly true that people in some fields are generally smarter than people in other fields.

And your comment was just stupid and obnoxious.

I'd be interested to segment the outcomes in the lower-GRE fields by the people's GRE (or equivalent, or IQ) scores. My experience in those fields leads me to the strong intuition that you'd find something relevant...

"economics has overinvested in one very particular kind of intelligence"

Uh, there's only one kind of intelligence. Don't tell me you're a disciple of Howard Gardner? Kinesthetic intelligence and the like?

You can be stronger in math than language, but the core of the discovery of intelligence is that there is a fairly tight correlation between the various cognitive abilities.

+1

This would indeed be a tricky SAT passage for determining 'what is the main idea'. I'm going with 'all of the above'. Or I might bubble both b and c kind of lightly.

Could be applied to 90% of TC posts.

No, it's definitely a None of the Above

The main idea is extremely obvious if you do a Straussian reading.

I enjoyed the post but I also enjoyed this comment. Would that explain the gap in my Quantitative and Verbal scores?

I can imagine there are intelligent people with low GRE scores. Finding those could be a "moneyball" admission strategy (which, in turn, is stereotypical GMU thinking).

That said, I doubt the strategy scales. Moneyball mostly works if you are away from the herd.

I suppose that if moneyball scaled, they'd call it ball.

Heh

Which is what happened in MLB in the years after the book was written

Look to the incentives. Grad students are the pool for selecting research assistants. A high-GRE-math grad student is an excellent worker bee. How else do you think faculty at highly ranked places manage to churn out so many research papers every year? In fact you don’t want grad students who are too unconventional in their ideas; they’ll be following their own interests.

Was the author of this post someone possessing a high GRE score? Since it is obvious that GRE or SAT scores are not particularly relevant outside of the U.S., you cannot know which field is a relatively high GRE score one and which is a relatively low GRE score one, even with the caveat 'or the equivalent'. Being loyal MR readers, we all know that academics are petty status seekers.

Or they wouldn't have been quite so obsessed about the relevance of this outside of the US. Perhaps someone with a long-standing grudge?

If Tyler was self-interested he'd say the GRE wasn't important at all. He's already a successful and powerful academic so any policy that downplays a credential anyone can attain by taking a test better protects his status.

Have they ever made it harder to get a high math GRE score? Last time I checked in the early 2000s, the GRE was like the SAT in the 1970s: math 800s were not uncommon, but verbal 800s were very rare.

Right, I haven't looked at it since then, but at the time, one would have to be awfully non-quantitative (or unprepared/hungover) to get below a 750 or so, meaning that admitting someone with a lower score to a quantitative graduate program would be a disservice to all concerned.

My recommendation is that the testing companies shouldn't go through all the political pain of making it harder to achieve a given score; they should just add a couple of standard deviations on the top end. E.g., a 700 on the SAT Math test should still be a 700, but the top score should no longer be 800 but 1000.

That's doable with interactive online testing that serves up harder problems the more right answers you get.
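
A toy sketch of the adaptive idea: serve a harder item after a correct answer and an easier one after a miss, and let the difficulty level reached act as a score with no fixed ceiling. Real computer-adaptive tests use item response theory to pick items and estimate ability; the response model and parameters below are made up purely for illustration.

```python
import math
import random

def prob_correct(ability, difficulty):
    # Made-up logistic response model: the harder the item relative to the
    # test-taker's ability, the less likely a correct answer.
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def adaptive_score(ability, n_items=30, step=0.5, seed=0):
    """Step the item difficulty up after a correct answer and down after a
    miss; the final difficulty reached serves as an unbounded score."""
    rng = random.Random(seed)
    difficulty = 0.0
    for _ in range(n_items):
        correct = rng.random() < prob_correct(ability, difficulty)
        difficulty += step if correct else -step
    return difficulty

# Two hypothetical test-takers who would both "max out" a fixed-ceiling test
# end up separated once the items keep getting harder.
print(adaptive_score(ability=3.0), adaptive_score(ability=6.0))
```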

wrt math, that is what the Putnam competition is for, right?
I am sure you know that.
and with respect to everything else, how can you really test a high schooler at a "plus 800" level in a fair way on anything but math?
For example, I am sure some home-schooled kid out there has memorized half of the Aeneid, but what does that really mean?

"I am sure some home- schooled kid out there has memorized half of some math textbook, but what does that really mean?"

good point.

that being said, if some kid has a one in a ten thousand math talent, he is going to do all right if all he wants to do in the world is be a school teacher or take on some similar job.

a kid who can memorize half the Aeneid in Latin is screwed unless he develops some non-philological skills. It is not really fair or unfair, it is just this world ----- under the blue sky that has been blue forever in the daytime and which has never had the fortitude to be blue in the middle of the night.

The Putnam competition is for college students, though a very few high school students give it a try. There are AMC tests for high schoolers.

https://www.maa.org/math-competitions/amc-1012

https://www.maa.org/math-competitions/invitational-competitions

But with adaptive testing there’s no reason for the SAT to not test at levels beyond an 800 math score.

Of course graduate programs take more than GRE scores into account, at least in economics and the hard sciences. There are no formulas for PhD admits.

and for the record, some clown in Silicon Valley fine-tuned autocorrect to change home-schooled to joke-schooled.

I noticed before I pressed the submit button.

Yes, they also made my phone such that I have to correct twice if I want the word "Trump" capitalized. So clever, they are.

That's not its most annoying tendency.

Ironic given trump's own capitalization Challenges.

"interactive online testing that serves up harder problems the more right answers you get." Or the more right answers your pal who's sitting the test for you gets.

Cheating is, I think, an underappreciated source of error in these discussions. I also think it's probably an important factor in programs becoming more and more selective while producing less and less valuable research (this is observed across fields).

Take a look at the revised GRE. They've done exactly this -- but they've switched to a different (non-overlapping) interval for scores so that any GRE score can be understood in its context.

According to Wikipedia, the old threshold for a quant 800 is now 4 points off perfect, so they have room to 'score' those groups separately.

As I recall, in the old days you could get over 800 on the GRE. They say that Larry Summers got over 1000 on the Econ GRE.

I heard they're still adding up Larry’s score

I remember around that time that 800 math was ~94 percentile, while ~750 verbal was ~97 percentile. I never understood why they didn't make the test harder to better separate those at the top.

They changed the GRE about 8 years ago. Kevin P above is incorrect. An 800 on the old GRE quant was the 96th or 97th percentile, and anything over 700 on the GRE Verbal was 99th percentile. The Verbal was one of the best IQ predictors around, but while the math section was g-loaded, the math itself wasn't hard enough.

So they changed it. They removed all the g-factor from the GRE Verbal, making it basically a grammar and reading comp test, with no particularly difficult questions--and the difficult ones became a matter of logic, rather than verbal ability. And they turned the math section into an achievement test, bumping the top math from geometry to analytic geometry.

As a fairly math-oriented guy who is not in a grad program, I do advise students applying to grad programs, and I do tell them that for the vast majority of programs, and especially the officially highly ranked ones, they need to have stellar quant GRE scores. That is just where things stand, for better or for worse.

However, I am sure you are right, and you are in a position to know, that there is less of a correlation between performance on those tests and ultimate professional performance in econ than many of those enforcing this might think. Of course the GMU program is less oriented towards being mathy than many programs, and I know some of your good performers have been people who have done well without being all that mathy.

There is, of course, the problem that given that so much of econ does involve a lot of math, some minimum standards must be met. But that does not mean totally focusing on maxing out on quant GREs as so many places insist on. There is certainly a lot more to economics knowledge than just the mathy stuff.

And how well do the math skills and facts tested on the GRE gauge quantitative skills useful in economics? How often do triangle facts come up? Prime factorization games? The list goes on.

None of the GRE math is post high-school - and it doesn't even test the math they do need to know.

But if GRE scores function as IQ tests, this doesn't matter. And if they function as a way to test whether you learned and retained the topics taught in school, it also doesn't matter.

+1

Yes, if they function as some proxy for IQ tests, then you might as well have people spend hundreds of dollars to count the number of consonants in a paragraph under time constraints.

At least this way you remove the noise of topics not being taught in school uniformly, or recently, for everyone.

If there's an argument for a high GRE cutoff it's that yeah, it tests pretty basic math, but OTOH you can prep for it and review whatever it is they're testing for. If you can't be bothered to do a little prep you're probably not that interested in grad school -- nor will you do well in grad school, where you also need to be ready to learn material that bores you.

I guess a funny tidbit about this is that math and physics programs actually don't care at all about your math GRE score. You may raise eyebrows if your score is low (in the low 80s?), but otherwise I don't even think it gets looked at.

I know someone who is involved in grad admissions in physics who believes the verbal reasoning score correlates more strongly with success.

I don't believe that physics and math programs don't care about your GRE math score. Even in my 30-50 rank economics PhD program, I don't think the staff screeners would allow an 80th percentile to get by them as worth the time of faculty readers.

I have heard that about economics programs. The difference may lie in the fact that there are other standardized assessments to look at. Both subjects have GREs specific to math and physics, which do matter.

I have been on admissions committees; for “mathiness” we looked at math courses taken and grades in those. These days you had better have at minimum a math minor, but more is better: real analysis, differential equations, mathematical statistics, numerical analysis.

An 800 ("perfect" score) math GRE is only 90th percentile. So "the strictest standards" is not actually very strict. Strangely, a 700 on the verbal GRE is 99th percentile.

You need to check your facts. An 800 isn't even an allowed score anymore, and the highest percentile achievable by a perfect math score is 96.

Yeah, he's talking about the old one. None of that's true with the new test, including the scoring.

The cynic in me says that they did this for the benefit of the fragile male ego. Men tend to do better on math tests, so lots of them can get the top score. Women tend to do better on verbal tests, so there is no problem if most of them know they weren't "perfect".

I'm not always this cynical, but if you've read how the ETS jiggers its test scores so as not to embarrass the male sex, you would be hard pressed not to harbor some cynicism.

Wrong. Stupid wrong, in fact. Women don't tend to do better on tests of verbal IQ, which is what the earlier test was. If anything they changed the test to give women a chance at better scores.

As someone who does pretty well with math, sciences, etc., I definitely think the move toward high math-GRE scores has led to a lot of the problems we run into today. It makes p-hacking and fancy mathematical models the focus of research rather than explaining human behavior, which can certainly be done with numbers, but not when the data is as poor in quality as what we have.

+1

Such as placing a very conscious emphasis on political orientation when looking at graduate admissions. Just in case you didn’t know that.

Come on prior. Say who funds Mercatus.

Mercatus does not grant degrees, and has nothing to do with GMU admissions.

So it’s all a conspiracy then. But how are the Kochs controlling GMU?

Thumbs down for cherry-picking studies that show GRE predicts grad school success. (Incidentally, nothing in the GRE predicts that a person knows how to evaluate academic literature and draw tenable conclusions from it.)

Yes, that’s the point. It is being used as a proxy for ability. The skills you mentioned are what they are supposed to learn in grad school so it doesn’t make sense to use them as admission criteria.

I prefer to focus on thickness and liquidity

While I agree that if you are afraid to talk about GRE scores you are afraid to talk about reality, this insistence that your colleagues talk about them publicly in this way feels (surely unintended) like a bit of a dirty trick.

I mean imagine someone saying that if you are going to write papers on the effects of gene X on IQ (which hypothetically differs in prevalence between blacks and whites) you should be publicly acknowledging what this says about racial IQ differences.

Sure, in that case those who didn’t would be in a sense ignoring reality but we have norms that discourage us from raising certain topics. I don’t know if I ultimately judge such norms to be beneficial but there are good reasons for them.

Similarly, even if you think math GRE is a very strong predictor of future contributions to the subject there are good reasons for norms against trumpeting it. For one, given our social attitudes about intelligence and math, it can easily discourage or demoralize people who might be very good at the subject (good prediction isn’t perfect) and it is likely to discourage certain types of applicants even though they will actually do great on the math GRE (the smart people who, because of stereotypes about either them or math people, don’t see themselves as one).

Maybe it should be discussed more but I’m just saying I understand why one might be reluctant to make a strong public argument to the point that math GRE should be a huge factor.

I mean look at the ridiculous debate over the SAT now. Even though the SAT is almost surely less gameable than the vague combo of activities and essays, which are super responsive to time and money, it has nevertheless come to pass that defending the SAT in college admissions is associated with a defense of traditional privilege on social media.

I don’t like that so many people so carefully cultivate a particular social media image rather than saying what is true but I do accept that what people say on social media is as much an attempt to sculpt an image as advance ideas they believe in.

Not to worry.

Within 15 years there won’t be a GRE or SAT at all. All top school admissions will be based on bloodline, with lower school admissions based on party loyalty and whether one’s parents are party apparatchiks.

And the nouveau riche will buy their way in regardless.

How do you test a grad program for "grad school success"? I'm confused by TC confounding individual test results and the groups those individuals self-filter into. Which is it? Whether or not the GRE or SAT predicts individual success, TC is talking about the quality of the discipline - groups of groups. Total non sequitur. And then there's the paper's (tl;dr) definitions/metrics for success. I say (off the top of my head) that "success" for a grad student is tenure, partnership, or residency, not graduate GPA or citation count. I'd also add, as an alternative, fifth-year income in the top quintile or decile for their chosen discipline.
Surely, with a bit of money (more than a bit, perhaps) we could test this in an RCT (admittedly, it would likely take years to accumulate enough data) - once, that is, we agree on what graduate programs are supposed to provide, which is not grades or citations. imho.

I think conventionally intelligent people are well represented among those with really high GRE scores, and people on the spectrum from slightly unconventionally intelligent to very unconventionally intelligent are more represented below the top GRE scores but still well above average. The latter group is almost entirely shut out of some elite programs, but I reckon they contribute just as much to society, if not more. The remedy, in my opinion, is to increase the pool of admissions by maybe 20-30%.

That's close to backwards. Many conventionally intelligent people aren't particularly high IQ, while the unconventionally intelligent tend to hit the dinger and top out the test.

To some degree this is circular reasoning. I'm sure everyone knows who has the high scores, so you haven't done double blind testing. People with high scores are going to be presumed to be smarter, so they get a lot of legs up on the competition. Better counsel from peers, free handshakes with the brass, and so on.

If you had a veil of ignorance about scores, and removed barriers to entry, you may well find the top performers didn't have the highest scores.

However - this is important - standardized tests are better for poor kids. Sure there are problems. But a poor kid who wants success can study hard. They cannot go back in time and get a last name that the dean of admissions prefers.

So test scores aren't perfect. But they may be the best we've got.

"I'm sure everyone knows who has the high scores". Honestly, no. I did for three years in row the admission in my math graduate program. No one else in the department knew anything about the GRE scores of entrants, and even I had completely forgotten them in the next Fall.

So we have these institutions run by very high IQ and very dishonest people. That sounds like a plan for disaster.

Also, my concern is that while aptitude tests are very valid, they may lose some of their predictive power when you talk about the upper, upper percentiles. And, to some limited extent, they can be gamed.

So are we getting the obedient straight-A student rather than the disagreeable geniuses we need?

"So are we getting more the obedient straight A student more than the disagreeable geniuses we need?" That's was my concern when I was doing the admission. To a mathematician always comes to mind the example of the absolute genius Galois, who failed the competitive exam of the Ecole Polytechnique because he threw his blackboard duster to the examiner. But the fact is that it is difficult to recognize an absolute genius just on a graduate application file.

"So are we getting more the obedient straight A student more than the disagreeable geniuses we need?"

Well, the obedient straight-A students are busy making models and the disagreeable geniuses are creating smartphones, electric cars and online shopping. Maybe the current sorting works pretty well at the top.

More than likely it's the second/third tiers where the not-so-bright kids with excellent connections are pushing out their brighter (but not genius-level) colleagues.

@Joel, for me Grothendieck comes to mind. I doubt he would have done particularly well on the math GRE subject test.

Cowen never ceases to impress. Ideas, clearly expressed, are better than ideas, vaguely expressed. It takes more than math skills to have a good idea and to express it clearly. Expressing a good idea clearly is analytical, just as math is analytical; and just as math can open the mind to refinement of an idea, language does too. A problem with much of economic research today is that the researchers can't express their ideas clearly. Does that mean they simply lack good communication skills, or does it mean they don't have clear ideas to express?

Bob May, a physicist with the uncanny ability to express complex ideas clearly, has died. https://www.nytimes.com/2020/05/11/science/robert-may-dead.html

Or, maybe they should have an English proficiency and reading test.

Or... you must be having a tough time recruiting graduate students into your program and have to take those who were not accepted elsewhere, telling them that the GRE doesn't mean anything anyway.

Nothing has been posted on this blog to suggest that either Tyler or Alex are good at math, or for that matter, logical thinking.

"I would myself put greater stress on the old GRE subject test scores for economics, thus selecting for those with an initial interest in the economy rather than in mathematics."

I took the GRE, and the section for (my chosen field). As I walked out of the test I said to myself "wow, that thing is for people who are really into (my chosen field)."

It turns out I was not, and I went off to do things with computers instead.

I think I spent about 2 days with a GRE prep book before taking it. Simpler times.

Wait, I'm confused. Do GREs matter or not? Can we use them as shorthand for intelligence, effectiveness, and competence, or not? Does Trump have big hands or not? Are epidemiologists stupider than economists or not?

Indeed, I feel like we touched a nerve.

I take it to mean the GRE scores tell you something real, but they might not actually substitute for passion and doing the work.

And this is something that I think a lot of us are coming to realize. That believing in American meritocracy should not limit itself to the narrow view that it's all IQ.

If someone dumber gets to a better answer by working harder and longer, it's still a better answer.

I took a professional licensing exam once years ago. It measured ability to memorize rules-based solutions to problems. It also measured ability to take this sort of test in a time-limited pressure environment.

I also paid for a preparatory course.

The exam was composed of a bunch of multiple choice type short questions, and also some more complicated long problems worth more points.

Among the long problems, every single one had been thoroughly covered in the prep course. In fact, one of the questions on the exam appeared VERBATIM as it was presented in class just a couple weeks earlier. Same exact answer.

There is a large universe of possible topics, and questions about those topics, that the exam authorities could choose from. Very large. The prep firm nailed it 100%

It's possible that the people who ran the prep class are really good at guessing what would be on the test.

There are other possible explanations.

Bar exam? I guess it's different for every state. In mine, Bar Bri was able to tell me the % of bar exams which had a long-form question covering each topic, but they were not able to predict which ones would appear, and of course they covered all the topics anyway.

I could have written this post, Tyler, my feelings exactly. I was the head of grad admissions at my top-50 department for many years. Younger, gung-ho APs on the committee would simply sort their applicant spreadsheet by GRE quant and want to admit the top N without reading anything else in their files! The older and wiser of us on the committee would read the whole file and rank applicants according to a mix of various criteria. Sometimes what I considered a promising student happened to have a “low GRE quant”, which meant around the 80th percentile! The young APs would tear their garments when I proposed to admit them. Over the years I was proven right more often than not, as the “weak GRE” students wrote good dissertations and got better placements than the perfect-GRE applicants.

Math is a tool to practice a trade, and all that is needed is to have the appropriate tools in the toolbox; extra, refined tools in the box are marginal and just signaling, unless the trade is mathematics!
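
A minimal sketch of the contrast described above: sorting the applicant spreadsheet by GRE quant alone versus ranking on a weighted mix of criteria. The applicants, fields, and weights here are hypothetical, just to show how the two orderings can diverge.

```python
# Hypothetical applicant records; every field and weight is illustrative only.
applicants = [
    {"name": "A", "gre_quant_pct": 97, "research_fit": 5, "letters": 6, "writing": 6},
    {"name": "B", "gre_quant_pct": 80, "research_fit": 9, "letters": 9, "writing": 8},
    {"name": "C", "gre_quant_pct": 93, "research_fit": 7, "letters": 7, "writing": 7},
]

# "Sort by GRE quant" approach: rank on the one column and take the top N.
by_gre = sorted(applicants, key=lambda a: a["gre_quant_pct"], reverse=True)

# "Whole file" approach: weighted mix of criteria (the weights are a judgment call).
weights = {"gre_quant_pct": 0.01, "research_fit": 0.4, "letters": 0.3, "writing": 0.3}

def composite(applicant):
    return sum(w * applicant[k] for k, w in weights.items())

by_composite = sorted(applicants, key=composite, reverse=True)

print([a["name"] for a in by_gre])        # ['A', 'C', 'B']
print([a["name"] for a in by_composite])  # ['B', 'C', 'A'] -- a different ordering
```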

Since I have high GRE scores, like Tyler, I'll also cherry-pick some papers that show GRE scores do not predict academic or later success.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0166742
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0206570
https://advances.sciencemag.org/content/5/1/eaat7550?
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0201634

I guess Tyler might not like them, as they are not Twitter links or blog posts.

Success seems to be self-fulfilling. High GREs make you successful in physics, but string theory isn't producing anything? 'Cultural studies' has low GREs.

Suppose physics decided to try some more low GRE students and cultural studies some high GRE students. Might not both fields improve a bit?

I'm reminded here of the story about Steve Jobs wandering into a class on typesetting and later on realizing Apple needed to have different fonts. To the engineers, this made no sense. Text is text and fonts just take up memory and processing power. Yet whatever marginal loss there was in performance was more than made up for in value to consumers.

I could imagine a story where a 'low GRE' chap comes in, hates the math of string theory and suspects it's just a way for the high GRE people to show off and spits out a theory that is simpler but works better.

and then he hires an army of high GRE people to execute his plan

I suspect the GRE is a bit like the Confucian civil service exam; I bet if you had data from back then, you would also discover that high Confucian exam scores correlated with high-performing civil servants.

I suspect it has less to do with the GRE or old Confucian exams actually measuring anything directly but creating a type of class consciousness that let people who pass it work together easier. In many contexts this might be more efficient but in others it could stifle innovation.

This is not exactly controversial, that standardized tests are essentially IQ tests and high performers have high IQs and therefore perform better since IQ is the largest predictor of success -- at most anything. The stuff about George Mason is opaque and hard to understand since the writer does not explain what he means by "for the better," etc. The applicants with lower scores who were admitted performed better (at what?) than the applicants with higher scores who were denied admission?

Hmmm

"“IQ” is a stale test meant to measure mental capacity but in fact mostly measures extreme unintelligence (learning difficulties), as well as, to a lesser extent (with a lot of noise), a form of intelligence, stripped of 2nd order effects — how good someone is at taking some type of exams designed by unsophisticated nerds. It is via negativa not via positiva. Designed for learning disabilities, and given that it is not too needed there (see argument further down), it ends up selecting for exam-takers, paper shufflers, obedient IYIs (intellectuals yet idiots), ill adapted for “real life”."
https://medium.com/incerto/iq-is-largely-a-pseudoscientific-swindle-f131c101ba39

But we all know that's not true, since it is the single best predictor of success in quite a variety of life outcomes, including job performance.

No company I know of uses IQ or GRE tests to hire people.

Probably because their lawyers all know about Griggs v. Duke Power Co. and later.

Also, the problem is that if you strip away the lower scores, its value as a predictor falls a lot. That's the point of his piece. A person with an IQ of 70 almost always will have serious mental deficiencies. But then you won't need an IQ test to know that. A person with an IQ of 130, though, will not be as reliably better than a person with an IQ of 100.

Take out the lower extreme, which you don't need an IQ test to determine anyway, and your 'single best predictor' gets shitty really fast. Sorry to rain on the parade of those who think Charles Murray is a dispassionate 'scientist' who just follows the data to the truth.

Now we know who scored poorly on his/her SAT/ACT/GRE etc.

IQ is about the only thing in psychology that isn’t debunked via replication failures.

But whatever

Indeed it remains reliably unreliable for all but the lowest scores.

Do you really think IQ would be statistically insignificant for a Logit model of Nobel Prize winners?

Or mathematicians? Or physicists? Or engineers ?

I don’t have an axe to grind here. It’s just absurd on its face. Obviously G has a huge impact on outcomes. You seem to agree.
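
For readers who haven't run one, the "Logit model" in question is just a logistic regression of a binary outcome (laureate or not) on a measured ability score. A sketch with synthetic, explicitly made-up data, so the fitted coefficient says nothing about the real world; it only shows the model form:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data, invented purely to illustrate the model specification.
rng = np.random.default_rng(0)
n = 5000
iq = rng.normal(100, 15, size=n)

# Assumed (not real) relationship: higher ability raises the log-odds of a
# rare binary outcome.
true_logit = -12.0 + 0.06 * iq
p = 1.0 / (1.0 + np.exp(-true_logit))
outcome = rng.binomial(1, p)

# Logistic regression of the binary outcome on the ability score.
X = sm.add_constant(iq)
result = sm.Logit(outcome, X).fit(disp=0)
print(result.summary())
```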

If you, say, were staffing a university research department and your dean told you he would judge you based on how many Nobels are produced in the next 20 years, you probably would not use IQ tests to select your candidates.

You could drop anyone below 125 and not eliminate a single Nobel winner. The one at the cutoff would be Feynman, whose brilliance was almost exclusively mathematics-based and who won the Putnam mathematics competition.

The problem is not to eliminate non-Nobel winners but to find future Nobel winners. Using IQ tests and a 125 cutoff means you eliminated 95% of the population, but that's about as impressive as it gets. 125 is the mean for PhDs in general, so all you did was tell me you'd pick from the upper half of PhDs to try to create a team of Nobel winners. Considering your goal is to get to, say, the top 1/10th of 1%, IQ has gotten you almost nowhere that a halfway intelligent person without access to IQ scores couldn't get just by applying common sense.

It's pretty well-established that measures of intelligence predict job performance. We are talking about correlations, and using terms like "reliably" or "best predictor" doesn't capture the nuance of statistics particularly well. Can Facebook's algorithms "reliably" determine which ads you will click on? They can certainly make money trying to do so. Likewise, the U.S. military can also administer the AFQT to a large pool of individuals and slot people into positions to cut down on the number of expensive and dangerous f-ups. The American Psychological Association's response to the Bell Curve did not take issue with Murray on this point.

How about an actual company outperforming its peers by hiring better quality people using IQ tests?

I think there's a term for that, oh yeah, Google.

Google hires on IQ tests?

Likewise, the U.S. military can also administer the AFQT to a large pool of individuals and slot people into positions to cut down on the number of expensive and dangerous f-ups.

Reinforces my point. It's fine for filtering out fuckups, although most organizations will simply have hiring managers who can easily tell one when they interview them.

Your point, though, was about finding and cultivating high performers.

Rather than a pass/fail admissions decision, mightn't it be wiser to have an internship system where the applicant works with someone for a month, or three months, and then is expelled if he proves a dud or uninterested? That might be done in the summer gap between students finishing their bachelors degrees and starting their graduate studies. What objections of principle are there to such an idea? After all, some firms find it worth doing.

Firms face market discipline manifest in their balance sheets and profit-and-loss statements.

For academic institutions, the internship would decay into a means for booting out the Mormons and evangelicals they didn't catch in their earlier screens.

When I did graduate admissions for George Mason University, I very consciously moved away from an emphasis on GRE scores, and for the better. My first goal was simply to take in more students, and a more diverse group of students,

IOW, to reduce the number of white gentiles admitted.

Yet strangely the proportion of whites with college degrees has been growing steadily for over a generation now. Odd how that can happen if whites aren't being admitted to college...
https://www.pewsocialtrends.org/2016/06/27/1-demographic-trends-and-economic-well-being/

It's mostly Asians who suffer from race-based academic discrimination

Oddly they have even higher rates of degrees than whites. Strange type of 'suffering'.

And he deflects by switching gears to a discussion of undergraduate admissions.

The number of doctoral degrees to white citizens of the United States in 1977 was 73,000. The number awarded in 2016 was... 73,000.

Now, consider the following: the number of doctoral degrees in toto has increased by 92% in that time, driven in part by the capture by higher education of escalating proportions of each youth cohort and in part by credential inflation, especially in the realm of junior-grade medical practitioners. (So, now you have the JD, the DPT, the PharmD, etc.) And, of course, the raw number of such degrees granted to the men among them has declined by more than 1/3 in that time (while the number of working physicians, to take one profession as an example, has in that time doubled).

https://www.equityinhighered.org/indicators/u-s-population-trends-and-educational-attainment/educational-attainment-by-race-and-ethnicity/

It appears even if you limit yourself to degrees above Bachelors Asians are more likely to have a degree than whites or blacks.

Which is, of course, irrelevant to my point. When you're ready to quit deflecting, get back to me. Blowhard.

You claimed 73K PhDs were awarded to whites back in the early '70s and that the number is the same today. Not sure if that's right or not, but about 2% of the US white population holds a PhD and 2% holds a professional degree. That would be about 4M for each.

So if your figure is accurate, at this moment there are about 55 years' worth of white people with PhDs walking around. This would imply either that white PhD holders more or less became immortal around 1960 or so, or that your numbers are just wrong.

I was admitted to my math phd program with 800 math, 790 verbal. The chair of the admissions committee later told me that 'everyone' had 800s, so they used the verbal to make admissions decisions.

It's hard to see how a reasonable candidate for a math PhD could not get 800 on the quant GRE (the scoring scale is different now but the same idea applies). The subject exam, okay sure maybe not I guess.

By misreading a histogram?

The GRE tests middle school math. Algebra II is too advanced. For someone who is a math major, the math section is a test of conscientiousness. Can you stay focused and not make silly misteaks under time pressure?

I was not a math major, and lost points between the SAT and GRE. It tests maybe another year's worth of math, but by then I was more years away from when I learned it, and at least in high school math you used the middle school math, so you were in better practice.

I guess these tests are imperfect proxies for g. Surely we have better proxies. But coordination problems, institutional path dependence... Tamper-proof test administration is not an easy thing to do. (See, for example, the movie Cheaters, about a test with much, much lower stakes than the SAT, or the time the Arkansas bar lost an entire day's test scantrons.)

This is correct.

I'm a physics professor at a major research university.

The people who scored better than me on SAT and GRE tests were smarter than me, however ill-defined my notion of "smart" is. And the obverse was also true. Those tests work, at least at that level.

Did those people who did better on the GRE end up with better careers in physics? Not all, although they all ended up doing just fine wherever they ended up. And few who did worse than me ended up with a better career in physics.

The tests cannot predict success: I don't know of any metric that does. I do know that below a certain score on the GRE students have a very low probability of passing their courses and the qualifier.

I consider the move away from the SAT, in particular, in college admissions a grave mistake. High school grades are all over the map.

An old friend once told me, "Everyone at our level is a genius. Everyone went to the right schools and was the top in their class. Everyone knows their shit. So what distinguishes is not how good or smart you are. What distinguishes is what you do when you crash into a wall. Do you fold or do you keep pushing?" This was decades before Angela Duckworth and the grit thing, but Ed is more accomplished and probably way more intelligent than Duckworth.

With a gender gap in GRE quant scores, an unintended consequence of only accepting applicants with stellar scores is that it prevents some qualified women from being admitted to top econ PhD programs. When I applied, 96 percentile verbal and 89 percentile quant wasn’t good enough.

White collar felons, felons-to-be and overall dickish sociopaths are probably over-represented in the high GRE, GMAT or LSAT categories, too. Some even swim in more than one pool.

Where is the decency score, or the honesty score, or the ethical behavior score? That latter one would be a useful supplement to those so-called ethics classes that so many lawyers seem to have skipped. Analyze those scores and observe the troubling distributions around DC and Wall Street.

Isn't this just a Moneyball situation? A lower tier program like GMU can't get the more traditionally desirable applicants, so it tries to find value by using other deprecated metrics.

The research on the effectiveness of the GRE as a predictor of graduate school success is not as conclusive as Tyler suggests. My colleague and I did some lit review on this topic as part of a conversation at our institution about dropping the GRE. Here are some refs: Peterson, Sandra, et al. 2018. “Multi-institutional study of GRE scores as predictors of STEM PhD degree completion: GRE gets a low mark.” PLOS One. (link)

“Our results suggests that GRE scores are not an effective tool for identifying students who will be successful in completing STEM doctoral programs. Considering the high cost of attrition from PhD programs and its impact on future leadership for the U.S. STEM workforce, we suggest that it is time to develop more effective and inclusive admissions strategies.”

Hall, Joshua, et al. 2017. “Predictors of Student Productivity in Biomedical Graduate School Applications.” PLOS One. (link)

“We conclude that the most commonly used standardized test (the general GRE) is a particularly ineffective predictive tool, but that qualitative assessments by previous mentors are more likely to identify students who will succeed in biomedical graduate research. Based on these results, we conclude that admissions committees should avoid over-reliance on any single component of the application and de-emphasize metrics that are minimally predictive of student productivity.”

Miller, Casey, et al. 2019. “Typical physics Ph.D. admissions criteria limit access to underrepresented groups but fail to predict doctoral completion.” Science Advances. (link)

“It is notable that completion changed by less than 10% for U.S. physics major test takers scoring in the 10th versus 90th percentile on the Quantitative test. Aside from these limitations in predicting Ph.D. completion overall, overreliance on GRE scores in admissions processes also selects against underrepresented groups.”

“This decontextualized use of GRE scores embodies an admissions process that systematically filters out women of all races and national origins, Hispanics, Blacks, and Native peoples of all genders, and gives preference to international students over U.S. students. The weight of evidence in this paper contradicts conventional wisdom and indicates that lower than average scores on admissions exams do not imply a lower than average probability of earning a physics Ph.D. Continued overreliance on metrics that do not predict Ph.D. completion but have large gaps based on demographics works against both the fairness of admissions practices and the health of physics as a discipline.”

Benderly, Beryl Lieff. 2017. “GREs don't predict grad school success. What does?” Science. (link)

“It thus stands to reason that committees evaluating scientific potential, whether in grad school applicants or would-be faculty members, might benefit from paying more attention to what the scientists who know the candidates think of their minds and characters. Reading and considering such testimony would undoubtedly take more time and effort and could feel less “scientific” than looking at numbers, whether test scores, GPAs, or tallies of publications. But it appears more likely to pay off.”

Montena-Koehler, Liane, et al. 2017. “The Limitations of the GRE in Predicting Success in Biomedical Graduate School.” PLOS One. (link)

“Overall, the GRE did not prove useful in predicating who will graduate with a Ph.D., pass the qualifying exam, have a shorter time to defense, deliver more conference presentations, publish more first author papers, or obtain an individual grant or fellowship. GRE scores were found to be moderate predictors of first semester grades, and weak to moderate predictors of graduate GPA and some elements of a faculty evaluation.”

Langin, Katie. 2019. “A wave of graduate programs drops the GRE application requirement.” Science. (link)

“Others worry that the GRE may hinder diversity and inclusion efforts. ETS data show that women and members of underrepresented racial and ethnic minority groups score lower on the GRE than white men and Asian men do. (ETS argues that this reflects educational background and unequal access to opportunities, not bias against these groups per se.) Paying for training and taking the test—which costs $205 a pop, plus travel in some cases—can be a burden for low-income students.”

“Whether dropping the GRE requirement will diversify applicant pools is far from certain. But Jon Gottesman, director of the Office of Biomedical Graduate Research, Education and Training at the University of Minnesota Medical School in Minneapolis, hopes to find out. He and his colleagues sent out a survey to biomedical graduate programs last month, asking for information about their admissions process and data on their applicant pools, such as the total number of applicants and the percentage from underrepresented groups. “We’ll have to see,” he says. “I have a feeling we’re going to have to be looking at this for more years to really get a sense.”

Sternberg, Robert and Wendy Williams. 1997. “Does the Graduate Record Examination Predict Meaningful Success in the Graduate Training of Psychologists?” American Psychologist. (link)

“The test was found to be useful in predicting 1st-year grades but not other kinds of performance, with one exception -- performance on the GRE Analytical test was predictive, but only for men. The authors conclude that there is a need to develop better theory-based tests.”

Incredible. You people are still peddling this, no matter how many times range restriction, power, and collider bias are pointed out to you, and no matter how many papers showing predictive power with n into the millions people like Cowen preemptively cite. I'm glad you included those quotes about underrepresented groups so people can understand exactly what ideological hatchet you are grinding and why you so obtusely ignore everything psychometricians and industrial/org psychologists tell you about why those results are as expected given your absurdly tiny and selected samples.

As a trans person, I know the tough part of academia is finding mentors, networking, and being treated fairly by instructors. A test is just a test. Yes, there will be effects on test scores that show up as a result of systemic discrimination, but to discount a test in favor of other measures that are themselves the system is cruel and serves the privileged.

It's very obvious to talented minorities that the dialogue about inclusivity in academia is primarily about protecting access for the well-connected and privileged. It's not conscious but it's definitely what's going on. It's not even debatable when you look at discrimination against Asians.

If you're actually in favor of diversity in academia, try not disparaging a credential that's easily accessible but difficult to attain, rather than propping up credentials that are inaccessible but easy to attain upon access.

While the proposal to look at performance on econ GRE may be useful, my feeling is that the areas that help make someone a good economist besides math skills are ones not easily captured in the regular GREs, namely knowledge and interest in history and philosophy.

What happens to Americans that makes them rant about the Democrats (or Republicans) same time every four years?

You are confusing the prospective and the retrospective value of the GRE score.

For a student, a GRE score is a prediction of what is to come. It has some predictive power for grad school grades, etc.

For an established researcher, grad school is in the past. Everything that the GRE is supposed to predict has already happened. The relevant thing to look at is the publication record.

Also, the epidemiology models that were being talked about in the original GRE post involve *high school calculus* at most. Even the underwater basket weavers are qualified to weigh in.

I'm in despair because of this very issue.

I have good grades in uni. I do especially well in maths-related subjects. I love math. And for the life of me I can't get a good GRE math score. I know all of the material, of course; it's all very easy, but I always make enough computation errors in the latter part to be heavily penalised. I've done it 3 times and scored 165 the best time, 163 the other two. Didn't study for the verbal at all and always got 167 or 168, though English is my second language. 5/6 in the essays every time.

I think I'm smart, and I was just admitted into a very good masters program in Economics. But I know I won't make it into a good PhD program with this kind of score. I know I need a 168 or higher, but it seems impossible. I know it should be easy. I just need to not fail at basic mathematics. But I really don't know how to manage that at this point.

I'd say you have 3 options:

1) If your errors on the test are mostly execution of things you are trying to do correctly, then doing practice problems will help. When I used to work in the physics help room, I found that people made more computational errors on problems where they were slightly out of their comfort zone. Expanding your comfort zone could result in a big score change without too much extra work.

2) You could enter your good masters program and reevaluate after you've been exposed to graduate level coursework. If you do well in the coursework, that would probably give you a leg up on a subsequent application. If you do poorly, you've got a well mapped exit route and a degree that's not a terrible return on your investment.

3) If (not necessarily you, but someone in a similar position) the GRE score is both an accurate representation of the abilities in question and a major factor in predicting people's success in grad school, a bad score can be a cheap indicator that this is not going to be a fruitful place to invest your time and energy.

That sounds bad, but people who are having a bad time in grad school can be *miserable* and lose several years of their life that would be better spent building a conventional career. If the test is telling you that's likely, then it's a really valuable (albeit unwelcome) piece of guidance.

I'm definitely not out of my comfort zone doing the GRE material. I do much harder stuff on an everyday basis at university, and I always excel at mental arithmetic in interviews. I just make some computation errors, and perhaps spend too much time on the graphical-analysis questions trying to measure things, but I always finish with a couple of minutes to spare.

You're right that maybe graduate school isn't for me. I'll find that out during this masters. But I really don't know how the GRE could measure that, besides in a general-intelligence way; or perhaps measuring one's response to mental fatigue, as it lasts almost 4 hours with breaks. It's basic math. I honestly think I would score much better if this was multivariable calculus or analysis, I have done great in both. Those are about understanding the mathematics, in its crudest forms. The GRE is about applying formulas almost mindlessly.

Well, the good news is that the GRE has no way of looking into your soul. It's just a test of some basic mathematical concepts.

I'm not familiar with the grading scale: what percentile does 165/170 correspond to? It can't be that low.

If you're already looking at 90th percentile, nobody is going to care -- the tests aren't that precise.

It's the 86th percentile. Top schools (starting 20-30) weed out PhD applications below 167 or 168... which is why I am so worried, and why I will continue to give money to ETS in the hope of getting a result that will make my application actually get seen by the adcom.

I've never served on a graduate admissions committee so I don't know how they evaluate applicants who already have a masters degree in econ or at any rate a year of grad school. But I would guess that such an applicant has an information resource that is vastly more powerful and accurate than mere GRE scores or even undergraduate transcripts and letters of recommendation: they've got actual academic experience at the graduate econ level.

Graduate programs often have grade inflation but they can still be used as screening tools, an A- in a core econ class e.g. might be a sign that the applicant should be rejected. Ditto an A- in the math for economists, econometrics, statistics for economists, or whatever version of that class they have.

That'll weed out the weaker applicants. To distinguish the strong ones, that's where letters of recommendation might come in. And ideally some prof in the master's program is an old buddy or former student of someone in the PhD program and can put in a good word for the applicant.

That year in the masters program is a year of opportunity cost. But even if the applicant gets rejected by all of the good PhD programs, it's not a total loss: there's a big "opportunity benefit" that offsets the opportunity cost, namely a year of graduate econ education. Unless it's a crummy masters program, the student will learn a ton more than they learned in their undergrad classes. Econ grad school, at least at decent grad schools, is an advanced degree program that pays off in the job market. Heck, even undergraduates with degrees in economics make higher salaries than undergraduates in any other art & science field (engineers and IIRC pharmacists make more). That masters degree is likely to pay off as long as it's at a good program.

I'm not an expert, I've never been on an admission committee, but 165 out of 170 max is good enough. I suspect a low score can weed your application out, but a high score is just passing. I suspect a 165 is passing and higher scores wouldn't really impress. As you said, it's easy material.

I don't think of many of your books as benefiting from first person experience, but I wonder if that will be necessary in your exploration of how to spot talent (which is what I think your next book is supposed to be about).

The GREs are definitely useful. What does one need to get something out of graduate school? You have to be able to read pretty complicated stuff in English and in mathematical notation. You have to be able to write complicated stuff in both as well. The GRE measures this moderately well.

The school I went to explicitly balanced test scores against other measures, so it valued high test scores, but if you had great grades, great outside projects or amazing enthusiasm, that counted for a lot as well. They even included a diagram showing this with the application package.

Think of it as optimization by simulated annealing. There's hill climbing, and there's random jumping around. You can get stuck climbing a hill, but if you are willing to climb back down a bit, you can often get much higher.
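
For readers who haven't seen it spelled out, simulated annealing formalizes exactly that "climb back down a bit" idea: worse moves are accepted with a probability that shrinks as the temperature cools, so the search jumps around early and hill-climbs late. A minimal sketch on a toy one-dimensional landscape (the function and cooling schedule are arbitrary choices for illustration):

```python
import math
import random

def simulated_annealing(f, x0, n_steps=10_000, step=0.5, t0=1.0, seed=0):
    """Maximize f, sometimes accepting downhill moves.

    A worse move is accepted with probability exp(delta / T); T cools as
    1/step-number, so early exploration gives way to late hill-climbing.
    """
    rng = random.Random(seed)
    x, best = x0, x0
    for i in range(1, n_steps + 1):
        temperature = t0 / i                      # simple cooling schedule
        candidate = x + rng.uniform(-step, step)
        delta = f(candidate) - f(x)
        if delta > 0 or rng.random() < math.exp(delta / temperature):
            x = candidate
        if f(x) > f(best):
            best = x
    return best

# Toy landscape: a modest hill near x = 1.6 and a taller one near x = 3.8.
f = lambda x: math.sin(x) + 2 * math.exp(-((x - 4) ** 2))
print(simulated_annealing(f, x0=0.0))  # tends to find the taller hill
```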

I had thought the people mocking you for a focus on GRE scores were overstating things; now, eh. This is speaking as someone who got a perfect verbal: it's not that important.

Is there a difference in average IQ score for participants among fields? Probably. Differences in market demand, prestige and culture will likely select different groups of people into different fields. Is this perceived difference a basis for a smart outsider to dismiss the work of an entire academic body, or to discriminate against its members based only on the perceived difference? Hell no.
