Medical school rankings

Malcolm Gladwell says there are two kinds of rankings. Comprehensive rankings compare similar things across multiple attributes (e.g. ballpoint pens, measured on weight, ink volume, smoothness, balance, grip). Heterogeneous rankings compare complex, dissimilar things on a narrow attribute (e.g. all cars, measured just on how fun they are to drive). School rankings fail when they attempt to be both comprehensive and heterogeneous at once. Schools are complex institutions with many moving parts, and their priorities differ widely (e.g. how much teaching responsibility does a given faculty member have at a particular school?). Assigning a single number to every school is a fallacy.

As a student choosing between schools, I see two problems with taking the U.S. News & World Report rankings at face value.

First, I don’t care about all the criteria; people have different priorities. Another way of saying this is that I don’t agree with the relative weights given to certain measures. It doesn’t matter to me that Harvard and its affiliated hospitals received $1.7 billion in NIH grants last year. It matters to me, a little, how well-run the one hospital I end up doing rotations at is and, much more, how much money the one lab I end up joining gets. It doesn’t matter to me, once I’ve already gotten in, that WashU has the highest matriculant MCAT average (39). It matters to me, a little more, what the average USMLE Step 1 score is and, much more, how well students match into residencies in the areas I’m interested in.
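
To make the weighting complaint concrete, here’s a toy sketch (entirely my own illustration; the school names, measures, and weights are all invented, not USNWR’s actual formula). A composite ranking is just a weighted sum, so the “right” order depends entirely on whose weights you use:

```python
# Two hypothetical schools scored on two hypothetical measures.
schools = {
    "School A": {"research": 0.9, "match_strength": 0.6},
    "School B": {"research": 0.5, "match_strength": 0.9},
}

def composite(measures, weights):
    # Weighted sum of a school's measures.
    return sum(weights[k] * v for k, v in measures.items())

magazine_weights = {"research": 0.7, "match_strength": 0.3}
my_weights = {"research": 0.2, "match_strength": 0.8}

for name, weights in [("magazine", magazine_weights), ("mine", my_weights)]:
    ranked = sorted(schools, key=lambda s: composite(schools[s], weights),
                    reverse=True)
    print(name, "->", ranked)
# magazine -> ['School A', 'School B']
# mine     -> ['School B', 'School A']
```

Same data, different weights, opposite ranking.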

Second, the measures are not independent. In particular, the reputation ratings (the peer and residency assessment scores) depend in part on a school’s research strength and student quality. (A study from the 1970s found that 70% of the variance in reputation ratings was accounted for by NIH research and development grants.) The subjective measures, therefore, are partially redundant with the objective ones. The magazine has an incentive to choose and weight the objective measures in line with the subjective ones anyway, because it needs to create “face validity”: the ranking has to be believable to the average reader. Gladwell suggests that, in fact, USNWR rankings are self-fulfilling prophecies. McGaghie and Thompson, writing in Academic Medicine in 2001, presented evidence that rankings barely change over decades despite enormous economic, social, and educational change.
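
For calibration (my own arithmetic, not from the study): “percent of variance accounted for” is an R² figure, so the implied correlation between NIH funding and reputation is its square root.

```python
import math

# R^2 = 0.70 means 70% of the variance in reputation ratings is
# accounted for by NIH grants; the implied correlation is sqrt(R^2).
r = math.sqrt(0.70)
print(f"r = {r:.2f}")  # ~0.84
```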

After the jump, I look at just the subjective criteria (reputation) for the top 10 med schools, which yields a heterogeneous and general, but not comprehensive, ranking.

I dug up the first study on the reputation of American medical schools, from 1977. Why is reputation important? The authors say it’s closely tied to quality in many ways:

  • Reputation affects whether students and faculty apply to or choose one school over another. This affects the quality of students and faculty and perpetuates the reputation.
  • Medical school is an early but critical stepping stone in your career; reputation can affect subsequent career mobility.
  • Reputation influences students’ self-esteem and self-perception within reference groups.
  • Reputation affects visibility and perceived ability of the faculty in the medical community.
  • Reputation can enhance or hinder the ability to win grants or to obtain the resources and facilities needed to carry out research.

The data are depicted on the left side of the before-after chart below. Faculty were asked to rate the quality of faculty, as well as the quality of medical training, at a number of other schools on a 1-6 scale. The two measures correlated at r = 0.99. The margin of error across the entire data set of 94 schools was 0.16. The caveat is that the response rate was just 30%.
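
For the curious, that correlation is an ordinary Pearson r across schools. A minimal sketch with made-up ratings (the real study covered 94 schools):

```python
import numpy as np

# Hypothetical per-school mean ratings (1-6 scale) on the study's two
# questions: quality of faculty and quality of medical training.
faculty = np.array([5.8, 5.2, 5.1, 4.9, 4.4, 3.7, 3.1])
training = np.array([5.7, 5.3, 5.0, 4.8, 4.5, 3.8, 3.0])

# Pearson correlation between the two questions; the 1977 study
# reports r = 0.99.
r = np.corrcoef(faculty, training)[0, 1]
print(f"r = {r:.2f}")
```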

[Side note: Other notable findings of the study are that 1) there is evidence of self-aggrandizement — faculty rate their current institution and their alma mater significantly higher than others rate them; 2) there is a halo effect — medical schools that are part of universities with national reputations are rated higher than faculty productivity predicts; 3) schools in the south are rated a little lower than predicted; 4) private schools in the northeast are rated higher than predicted; 5) older schools are rated (weakly) higher than predicted.]

My interpretation is that the tiers (in 1977) are 1) Harvard, 2) Hopkins, Stanford, UCSF, Yale, Columbia, and 3) everyone else in a smooth gradation.

On the right side of the graph, I added together the USNWR 2011 peer (deans) assessment and residency (program directors) assessment and rescaled the sum to a maximum of 6. The caveats are that no margin of error is reported and that the response rates were 46% and 17% for the two measures, respectively.
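
The rescaling itself is trivial. A sketch, assuming both USNWR assessments are reported on 5-point scales (the example numbers are hypothetical):

```python
# Combine the two USNWR reputation measures and rescale so the maximum
# possible score is 6, matching the 1977 study's 1-6 scale.
# Assumes each assessment maxes out at 5, so the sum maxes out at 10.
def rescale(peer: float, residency: float, new_max: float = 6.0) -> float:
    return (peer + residency) * new_max / 10.0

# Hypothetical example: peer 4.8 + residency 4.7 -> 9.5 * 0.6 = 5.7
print(rescale(4.8, 4.7))
```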

There are two somewhat conflicting trends. 1) Everyone is closer together: there are no longer discernible tiers, and Harvard no longer dominates reputations in the minds of people in medicine. 2) The rank list is relatively static: it’s remarkably similar to the one from 35 years ago. The one point I left off the graph is Cornell (which was just above WashU in 1977 but has since fallen out of the top 10). Other than that, everyone in the top 10 is still there. A previous analysis of the top 25 showed a relatively high correlation of 0.79 between the 1977 scores and the 2000 USNWR scores. The main difference over the last 35 years, then, is that WashU moved up 4-5 spots while Yale and Columbia moved down 4-5 spots.
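
That stability figure is just the correlation between the two years’ scores. A sketch with invented numbers (Spearman’s rho is the rank-based alternative if you care about order rather than score):

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical reputation scores for the same seven schools in 1977
# and 2000; the top-25 analysis cited above reports r = 0.79.
scores_1977 = [5.9, 5.4, 5.3, 5.2, 5.1, 4.9, 4.7]
scores_2000 = [4.8, 4.7, 4.7, 4.4, 4.6, 4.3, 4.5]

r, _ = pearsonr(scores_1977, scores_2000)      # correlation of scores
rho, _ = spearmanr(scores_1977, scores_2000)   # correlation of ranks
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```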

All this said, I think many people place undue significance on the “top 10” because it’s a nice round number. Northwestern has an institutional goal of being in the top 10 medical schools and top 10 hospitals by 2020. A dean of something at Pitt told applicants on my interview day that, yeah, they’re aware they’re not in the top 10; they’re pretty strong and have certain aspirations, but really, who would we kick out of the top 10? I took this to mean: look, those top 10 schools have been in the top 10 forever; it’s hard to change.

-c
