Dude, where’s my hover-gurney?

Star Trek Into Darkness came out when I was in med school, and so like a good little geek I went with some friends to see it.  The movie was entertaining enough, but for my fellow med students, the best part had nothing to do with spaceships and lasers.  It was a brief scene near the beginning, set in a futuristic hospital.  The building was a glass-and-steel tower and the doctors were carrying tablets, but the real gee-whiz gadgetry was the hover-gurneys floating in the background as orderlies ferried patients from place to place.  As soon as we got out of the movie, we turned to each other excitedly: “Did you see the hover-gurney?” “Neat, hover-gurneys!”  But to me, the hover-gurney was more than a cool prop – it was, probably by accident, a perfect encapsulation of how healthcare adopts new technology.

Think about it.  Let’s say you had all of this miraculous technology.  You’ve got robots with practically human intelligence, teleportation, to say nothing of whatever advances in genetics and nanotechnology they’ve invented.  Imagine how different the hospital would look if, instead of having to do an operation, you could just teleport out a tumor.  Or if you could program a nanobot to sweep out a patient’s coronary arteries.  When it comes to moving patients from place to place, you could teleport them to their destination in perfect comfort.  Or maybe you could have robotic gurneys, eliminating the need for an orderly whose whole job is to push patients around all day.

Instead, what we saw on the screen was a hospital that simply swapped out one technology (wheels) for a slightly superior technology (antigravity) while leaving the rest of the process entirely intact.  I don’t know if the producers had a healthcare consultant on staff, but that scene was a great illustration of how healthcare tends to react to change.

Take a doctor from 1900 and transport him to 1960 and he wouldn’t even know where to begin.  In that interval we developed antibiotics, chemotherapy, x-rays, and the modern medical school curriculum.  Life expectancy in the US increased by over twenty years.  Since then, progress has been much more incremental.  Our chemotherapy is more refined, and we’ve developed a few less invasive surgical techniques.  But take a doctor from 1960 and transport him to modern times, and after a little studying and a lot of computer training, he’d be ready to get back into practice.

Why the slowdown?  In part, you might chalk it up to low-hanging fruit.  Going from “no antibiotics” to “antibiotics” is a huge leap, one that’s hard to replicate by, say, developing a next-generation antibiotic with a couple fewer side effects.  But to a large extent there are also active forces that make it harder to incorporate new technology.  Many a promising idea built on proven technology has failed to revolutionize the clinic because it couldn’t navigate the institutional incentives and never made it into daily practice.  Let’s take a closer look at four promising technologies.  Two of them were successfully adopted within healthcare.  Two of them failed to make the grade.

Back when doctors still scribbled on paper charts, there were two great ideas about how the computer might revolutionize medicine.  The first was the electronic medical record: taking the same documentation doctors already produced and putting it into a computer, so it could be searched and accessed anywhere.  The second was the expert system: designing computer algorithms that could diagnose patients on their own, replicating the thought process a clinician goes through when first hearing about a patient.

Recently IBM’s Watson wowed the world by beating Jeopardy champions, and there have been some exciting news stories about using the same system for medical diagnosis.  But in fact, as early as the 1970s, researchers had developed expert systems – computer programs that could diagnose as well as fully trained doctors.  INTERNIST-I was a general-purpose diagnostic system developed at the University of Pittsburgh, and Stanford’s MYCIN beat out infectious disease specialists at diagnosing infections and recommending antibiotics.  These proven technologies had a ton of potential – just imagine deploying expert diagnostic talent in third-world countries, or having a computer double-check your doctor’s work to reduce medical errors.  And yet, while the electronic medical record has taken off, expert systems were never deployed in clinical practice and remain mothballed in academic labs.
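Under the hood, these early systems were essentially big collections of hand-written if-then rules plus a scheme for combining evidence.  Here’s a minimal, made-up sketch of the idea in Python – the findings, rules, and confidence numbers below are invented for illustration and aren’t drawn from MYCIN, INTERNIST-I, or any real knowledge base:

    # A toy rule-based diagnostic engine in the spirit of 1970s expert systems.
    # All findings, rules, and confidence values are invented for illustration.
    RULES = [
        # (required findings, suggested diagnosis, confidence when the rule fires)
        ({"fever", "stiff_neck"}, "meningitis", 0.6),
        ({"fever", "cough", "infiltrate_on_xray"}, "pneumonia", 0.7),
        ({"dysuria", "positive_urine_culture"}, "urinary_tract_infection", 0.8),
    ]

    def diagnose(findings):
        """Rank candidate diagnoses by accumulated confidence."""
        scores = {}
        for required, diagnosis, confidence in RULES:
            if required <= findings:  # a rule fires only if all its findings are present
                # Combine evidence roughly the way certainty factors did: each new
                # supporting rule closes part of the remaining gap to full certainty.
                prior = scores.get(diagnosis, 0.0)
                scores[diagnosis] = prior + confidence * (1.0 - prior)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(diagnose({"fever", "cough", "infiltrate_on_xray"}))
    # [('pneumonia', 0.7)]

The real systems had hundreds of rules and far more elaborate ways of weighing evidence, but the basic shape – encode what the specialist knows, then chain through it mechanically – is the same.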

Why did EMRs get adopted while expert systems languish in obscurity?  A big part of the answer is that EMRs slid neatly into an already-existing niche in a hospital’s workflow.  Instead of writing a paper chart, doctors and nurses typed the same stuff into a computer.  There was no need to change any other part of the process.  As time went on, EMRs became increasingly optimized for billing insurance companies.  They could then promise hospitals not just searchable medical records but also more billing revenue, and installing an EMR increasingly became a no-brainer.

Expert systems, on the other hand, didn’t fit into a niche.  There’s no way an algorithm could entirely replace doctors in practice – patients expect to see a comforting person in a white coat, and of course doctors would fight tooth and nail for their jobs.  So how would you integrate such a system into the hospital?  The best you could do is use it as a double check on a doctor’s judgment.  But busy doctors don’t have time for that delay, and no insurance company is offering to pay extra for computer diagnosis.  So even though expert systems had, in theory, the potential to produce better decisions and deliver better patient care, their very disruptiveness prevented them from making any headway.  As Peter Thiel put it, disruptive technology may be cool, but disruptive kids get sent to the principal’s office.

Our next pair of case studies involves a different sort of computer-aided diagnosis – computer reading of Pap smears and of lung cancer screening CTs.  Nearly every woman between 20 and about 60 is advised to get a Pap smear every couple of years.  This test, a screen for early signs of cervical cancer, involves interpreting a large microscope slide full of cells.  Today, almost every one of those slides is read by a computer, with no human intervention at all.  At the same time, many older smokers are advised to get a high-resolution CT scan to look for early signs of lung cancer.  Interpreting that study means scrolling through a long stack of CT images of the lungs, looking for small nodules.  But despite being a similar needle-in-a-haystack problem, almost nobody uses computers for it.

Why is the use of computer-aided diagnosis so different in these very similar cases?  Again, a lot of it goes back to history.  By the time computers came on the scene, it wasn’t doctors reading most Pap smears.  Instead, a trained technician screened all the studies – the vast majority were normal – and passed only the really tough cases on to the physician.  So the notion of a pre-reading screening step was already well established.

Not only that, but at the time there was an undersupply of these technicians, and new work-hour rules were limiting the number of slides a technician could read in a day.  This became a real bottleneck in doctors’ ability to read lots of slides (and bill for the service).  So when computer-aided diagnosis companies started peddling their wares, doctors were eager to take them up on the offer.  The largest of these companies, Cytyc, was worth $6.2 billion when it was acquired.

For radiology, there’s no such tradition of pre-reading studies.  So adding a computer into the mix would represent a much bigger disruption of the workflow.  In theory, radiologists could adopt the same workflow as the pathologists and let computers pre-read the studies, letting them read faster and bill more.  But this would represent significant behavioral change, and there isn’t as much legal precedent to make them feel comfortable trusting their medical licenses to the computer.  There might still be a fortune to be made here, but the fact that it has not happened yet suggests that the barriers are a good deal greater.

And this is the overall sense I get when it comes to incorporating technology into healthcare.  If a new gadget neatly replaces an existing gadget, it’s adopted relatively quickly.  If it directly impacts the bottom line, such as EMRs offering hospitals better billing revenues, so much the better.  But true disruption – something that fundamentally changes the way medicine is done – is extremely difficult to sell, even if the technology works and the benefits could be enormous.  That’s why doctors are happy to carry around iPads but only a few are willing to Skype with patients.  That’s why radiologists were happy to swap keyboards for Dictaphones, but screening patients’ genomes is a niche pursuit.

In some ways, this conservatism is a necessity.  There aren’t many cowboy doctors left; the trend today is towards highly specialized medicine and enormous hospitals.  A solo practitioner might be able to tinker with new technology, but for something as complex as a large hospital, any disruption has huge ripple effects that administrators are eager to avoid.  And yet there’s an unfortunate side to it, too.  A lot of the great advances in medicine – sterile surgery, radiation therapy, even the residency training process – were innately disruptive, with some doctors losing turf while others gained, and with everybody having to figure out how to deliver medical care in fundamentally different ways.  It seems no coincidence that the death of the cowboy doctor and the rise of bureaucratized medicine have come with a slowdown in medical innovation.  But savvy medical entrepreneurs still find ways to produce innovations that make patients’ lives better.  And one day, I hope, they’ll at least give us that hover-gurney, because that thing looked pretty awesome.