Our angst-ridden 20’s

A small part of me wonders why working on Wall Street or as a consultant was never a dream for me even though so many of my college friends went into these careers. I understood their dreams and their aspirations, and even admired a few of the ones whom I could tell would be successful or work themselves dead trying. And yet I always sat by smugly in my premed world, knowing that even though our work would be equally challenging (me as a surgeon, they doing whatever it is Wall Street people do at 3 a.m. – fixing spreadsheets? Yelling into expensive phones? Flying to Dubai? Making particularly persuasive PowerPoint slides?) – even though we would both work 80-100 hour weeks – I would be engaged in what would superficially seem the more noble profession.

Yet I never questioned why having to work an 80-100 hour week, whether as a physician or a lawyer or a banker, was something that I valued. After all, what could be more boring than that classmate who graduated college and took up a “corporate job” at some rural head office, working 9-5? Doesn’t that mean you’ve already given up in life and joined the workforce alongside our well-intentioned but oh-so-boring parents?

Recently, however, I read a fabulous Financial Times article about the emergence of a generation of young 20- and 30-somethings who struggle to define their life goals, clinging to what the writer deems “the popular fallacy that you can measure the value of your job (and, therefore, the amount you are learning from it) by the amount of time you spend on it.”

Way to burst the bubble of every eagerly-overworked medical student ever.

Read the article for yourself.

What would it take for Harvard Med to drop off #1 in USNWR?

HMS has had a choke-hold on the #1 spot in the USNWR research medical school ranking for as long as the rankings have existed. However, most people are not aware of the ranking methodology and are therefore unaware of why HMS is ranked #1. They might see the ranking as a reflection of its reputation among doctors, its number of Nobel prize winners, the size of its hospitals, its MCAT average, its Step 1 average, or whether their girlfriend’s grandmother has heard of it.

T and I have wondered about (and been suspicious of) HMS’ consistently high ranking for a while. I want to answer the question: why is HMS always numero uno under modern USNWR research medical school ranking methodology? (USNWR changes its methods every once in a while just to shake things up, but I won’t go into those differences.) More specifically, is there a primary factor to which I can attribute its dominance?

To address this question, I got hold of USNWR’s raw data and tried to re-create this year’s rankings. Unfortunately, the magazine is not specific as to how the data are normalized, so my attempt is only an approximation, and its usefulness is limited to analyzing large hypothetical changes. I took the raw data, normalized each measure by dividing by the maximum value of that measure among the top 20 schools (except I inverted this logic for acceptance rate, where lower is better), multiplied each by the weighting USNWR gives (13% to MCAT, 6% to GPA, 1% to acceptance rate, 15% to total NIH funding, 15% to per-faculty NIH funding, 10% to faculty-student ratio, 20% to peer rating, 20% to residency rating), summed the weighted measures into a raw score, and then re-normalized by expressing each school’s score as a percentage of HMS’ score (which USNWR does, too).
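For the curious, the arithmetic looks roughly like this. Below is a minimal Python sketch of the re-calculation; the weights are the real ones listed above, but the school measures are made-up placeholder values rather than the actual USNWR data, and the way I invert acceptance rate is my own guess at what USNWR intends.

```python
# Rough sketch of my USNWR-style score re-calculation.
# NOTE: the measure values below are placeholders for illustration,
# not the real USNWR data.

WEIGHTS = {
    "mcat": 0.13, "gpa": 0.06, "accept_rate": 0.01,
    "nih_total": 0.15, "nih_per_faculty": 0.15,
    "faculty_student_ratio": 0.10, "peer_rating": 0.20,
    "residency_rating": 0.20,
}

schools = {
    "HMS":      {"mcat": 36.0, "gpa": 3.92, "accept_rate": 0.037, "nih_total": 1400.0,
                 "nih_per_faculty": 0.15, "faculty_student_ratio": 13.0,
                 "peer_rating": 4.8, "residency_rating": 4.8},
    "School B": {"mcat": 37.0, "gpa": 3.89, "accept_rate": 0.085, "nih_total": 600.0,
                 "nih_per_faculty": 0.30, "faculty_student_ratio": 5.0,
                 "peer_rating": 4.7, "residency_rating": 4.7},
}

def normalized(measure, value, all_values):
    """Divide by the max of that measure among the top 20 schools.
    Acceptance rate is inverted, since lower is better."""
    if measure == "accept_rate":
        return min(all_values) / value
    return value / max(all_values)

def raw_score(name):
    """Weighted sum of the normalized measures for one school."""
    total = 0.0
    for measure, weight in WEIGHTS.items():
        all_values = [s[measure] for s in schools.values()]
        total += weight * normalized(measure, schools[name][measure], all_values)
    return total

# Re-normalize by expressing each score as a percentage of HMS' score.
hms_score = raw_score("HMS")
for name in schools:
    print(f"{name}: {100 * raw_score(name) / hms_score:.0f}")
```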

[Figure: re-calculated rankings]

HMS comes out so much higher rated, more than 10% above second place, that it’s ridiculous.

I hypothesized that HMS is #1 because it has academic affiliations with so many big hospitals in an arrangement where all their physicians are Harvard faculty (including tons of ‘instructors’) and their money gets counted as Harvard money in USNWR’s book (but not in Blue Ridge‘s book).

I looked up some figures on the faculty of HMS and the NIH funding granted to its hospitals.

In an alternate universe, let’s say HMS never moved from its location next to MGH in 1847. It never settled into the open farm and marshland of the Fenway on Longwood Avenue, and it never spurred the creation of the several new hospitals that now make up the Longwood Medical Area. Let’s say the entity known as HMS was only made up of Massachusetts General Hospital and the preclinical departments of HMS. Today it could still tout having the #1 hospital in the country, a recent Nobel prize winner, a famous humanitarian, a two-time TIME 100 stem cell scientist, and case reports that go straight into the New England Journal. You’re not missing anything (not even a little children’s hospital-within-a-hospital).

That means no Brigham, Boston Children’s, Beth Israel Deaconess, Dana Farber, MEEI, Joslin, Schepens, CHA, McLean, Pilgrim, Hebrew SeniorLife, Spaulding, Forsyth (what???). Say HMS got rid of the faculty affiliations of physicians who worked only at those hospitals and could not lay claim to their grants. That’s okay; you still have a reasonable top medical school.

HMS+MGH had “only” $548.4 million in NIH grants in 2012, and “only” 3332 full-time faculty. Under this scenario, I’ll adjust the faculty:student ratio and grant figures in my ranking re-calculation accordingly and assume factors like reputation rating and student selectivity don’t change. With this grave hypothetical handicap, HMS is no longer #1!
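In terms of the sketch above, the tweak is just swapping in the MGH-only figures for HMS and re-running the score calculation; the enrollment number here is a placeholder, and everything else stays fixed.

```python
# Hypothetical MGH-only scenario: reuse the score sketch above and swap in
# the HMS+MGH-only figures; ratings and selectivity are left unchanged.
N_STUDENTS = 700  # placeholder enrollment, used only for the faculty:student ratio

schools["HMS"]["nih_total"] = 548.4                # $ millions of NIH grants, 2012
schools["HMS"]["nih_per_faculty"] = 548.4 / 3332   # 3332 full-time faculty
schools["HMS"]["faculty_student_ratio"] = 3332 / N_STUDENTS

# Re-rank relative to whoever comes out on top now.
top_score = max(raw_score(name) for name in schools)
for name in schools:
    print(f"{name}: {100 * raw_score(name) / top_score:.0f}")
```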

[Figure: re-calculated rankings, HMS+MGH-only scenario]

Thus, the perennial edge of HMS could be attributed to its promiscuous affiliations with huge research hospitals. HMS is ranked #1 because it has the benefit of holding its crimson umbrella over 9000 grant-hungry physicians and biomedical scientists spread across several of Man’s Greatest Hospitals.

Note that HMS certainly doesn’t pay all those docs. Despite its large endowment, it can’t afford to. The hospitals are all independently owned (e.g. by “Partners”). Word on the street is that Harvard and its affiliates actually pay significantly worse than many peer institutions (because it’s Harvard, because it’s in Boston).

Note that HMS certainly can’t handle 9000 docs clogging up its promotion ladders. In fact, about half of the clinical faculty who contribute to Harvard’s enormous numbers are “instructors,” an entry-level position that seems neither tenure track nor non-tenure track, an amorphous basic starter pack position that few places use to the extent that Harvard does, a directionless space where Harvard could keep a junior faculty member until s/he is 40 if it wanted to.

The effect of Harvard’s vast resources on students is smaller than you might assume. Yes, you (hypothetical HMS student) do get more selection when deciding where to do research. But eventually you pick a lab or mentor, and your work in that one lab is then your world and your future success; the hundreds of other labs you could have chosen are irrelevant. Yes, you get to have more tutorials than lectures. But the jury is still out on whether this has a meaningful impact on a student’s learning compared to traditional methods of instruction. Yes, you could get exposure to lots of top hospital environments. But since 2008, your clerkship experiences will only be at one hospital.

Yes, the school looks like it has a lot of money on paper, but actually it doesn’t. The hospitals that are responsible for HMS’ ranking are independently owned and operated and do not have a standing agreement to feed HMS any money. HMS largely relies on grants and endowment funds to operate its research and educational mission. At other schools, clinical activities at the hospital contribute significantly to the school’s operating revenue (accounting for half at my school!). When economic times get rough, like in 2009, the endowment takes a big hit, and the dean of HMS has to call in favors and ask the hospitals to chip in a few tens of millions to keep the students well-fed (with knowledge and occasional free food). This means the school is not overflowing with cash to guarantee full funding for student summer research, to fund student travel to conferences, to throw lavish formals, or to give merit scholarships.

My final impressions (TL;DR): HMS is rich, but only about as rich as other top 10 schools are. HMS is clearly a tip-top school, but not as far and away the best as USNWR might make it seem.

“Value added” in top medical schools? MCAT/GPA as predictors of USMLE scores

Previous research has established that Step 1 scores correlate moderately with pre-admission factors (MCAT, undergraduate GPA) at the individual level. The following data, from 2011/2012 and available via USNWR, demonstrate that this correlation also holds at the school-wide level among the very top ranked institutions.

The graphs below show a linear regression across the top 20 ranked schools (minus Pitt, which did not have USMLE data available). The 95% confidence interval of the best-fit line appears dashed. Schools that perform better than predicted based on pre-admission factors, beyond the 95% best-fit interval, appear in green. Schools that do worse appear in red. I’ve labeled these stand-out schools as well as the most extreme points that fall within the prediction zone.
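For anyone who wants to reproduce the plots, the recipe is an ordinary least-squares fit plus a 95% confidence band, then flagging schools whose observed scores fall outside the band. Here is a minimal Python sketch using statsmodels and matplotlib; the arrays are made-up example values, not the actual USNWR table.

```python
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Placeholder data: mean MCAT and mean Step 1 per school (NOT the real USNWR numbers).
names = ["A", "B", "C", "D", "E", "F"]
mcat  = np.array([34.0, 35.0, 35.5, 36.0, 36.5, 37.5])
step1 = np.array([228.0, 233.0, 231.0, 238.0, 236.0, 242.0])

# Ordinary least-squares fit of Step 1 on MCAT.
X = sm.add_constant(mcat)
fit = sm.OLS(step1, X).fit()
print(f"R-squared: {fit.rsquared:.2f}")

# 95% confidence band of the fitted line.
pred = fit.get_prediction(X)
ci_low, ci_high = pred.conf_int(alpha=0.05).T

# Flag over-/under-performers: schools outside the 95% band.
for name, y, lo, hi in zip(names, step1, ci_low, ci_high):
    if y > hi:
        print(f"{name}: over-performs its prediction")
    elif y < lo:
        print(f"{name}: under-performs its prediction")

# Scatter plot with the best-fit line and dashed confidence band.
plt.scatter(mcat, step1, color="gray")
plt.plot(mcat, fit.fittedvalues, color="black")
plt.plot(mcat, ci_low, "--", color="black")
plt.plot(mcat, ci_high, "--", color="black")
plt.xlabel("Mean MCAT")
plt.ylabel("Mean Step 1")
plt.show()
```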

Even at the elite school level, aggregate MCAT and GPA are moderately positively correlated with Step 1 score (R-squared > 0.5). There is no ‘ceiling’ of diminishing returns seen in this model.

There is also a significant correlation between pre-admission factors and Step 2 score, but the association is only weakly positive (R-squared >= 0.2).

On Step 1, Baylor stands out as an over-performer, and UCSF stands out as an under-performer compared to peer schools. On Step 2, Baylor and Vanderbilt stand out as over-performers; and UCSF, UCSD, Stanford, and Duke stand out as under-performers compared to their peers. Schools that have the highest admission standards and predictably high Step scores are the ones we traditionally consider as good schools: HMS, WashU, Penn, Hopkins.

There are many possible explanations for deviations from the prediction:

  1. A certain school might have different student characteristics that are not captured by MCAT/GPA statistics but are selected for during the admissions process. For instance, UCSF has a higher percentage of URM and in-state students than its peers. Other relevant factors might include % of science majors (who could have been exposed to preclinical material in college that is not reflected in the MCAT) and proportion of students aiming for less competitive specialties or non-clinical careers (who might value boards scores less highly). These are hard to tease out.
  2. A school requires students to take the boards at a different time during their training (i.e. a minority of schools have students take Step 1 after clinical clerkships).
  3. A school’s formal curriculum is aligned differently with boards-tested content.
  4. A school’s assessment policies (e.g. grading) motivate students to master boards-(ir)relevant content to different extents.
  5. A school permits different amounts of dedicated time for individual boards preparation.

With the exception of #1 above, these considerations give some insight into the effect of the school on boards scores. People often spew the wisdom that boards performance is entirely an individual function — it doesn’t matter what curriculum you have; your scores depend solely on how hard you push yourself to study. In this case, I want to make the logical hop that preadmission factors (MCAT and GPA) are an indicator of a person’s aptitude for taking standardized tests (e.g. not freaking out and shutting down the day of the test), their study efficiency, and their study ethic. These are all individual factors that carry forward into medical school. If we can assume that the pool of students that these top schools draw from is not vastly different (#1 above), and this is a big IF, we can conclude that any large deviations from the prediction are indicative of non-individual factors at play, i.e. factors within the school’s control.

If you consider boards scores a reasonable partial assessment of the learning necessary to become a good doctor, or even an important factor in achieving your future goals (e.g. getting into a competitive residency program), then you can consider this analysis one measure of the ‘value added’ of a particular medical school education.

Three final caveats I want to point out:

  1. These data are from one year only. Scores often fluctuate from year to year at each school by up to 2-3 points in the absence of any significant changes in curriculum. Unfortunately, longitudinal data are not available to do a more thorough/stable analysis.
  2. These data are probably self-reported by the schools to USNWR. Schools could lie, and there is rampant speculation on SDN that schools do manipulate their statistics when presenting them to applicants, since the NBME does not make school-level Step score data publicly available. A number of schools have admitted fudging their undergraduate data. For all we know, Baylor could be pulling BS on everyone. But I trust people. =)
  3. This is only the top 20 schools (because I’m lazy). Below that could be a different picture.

[Figure: MCAT vs Step 1]

[Figure: GPA vs Step 1]

[Figure: MCAT vs Step 2]

[Figure: GPA vs Step 2]