AI For the Lab: Transcending "Faster Horse Syndrome"

April 23, 2019

“If I had asked people what they wanted, they would have said faster horses,” automotive pioneer Henry Ford once famously said. Consumers couldn't envision a vehicle without a horse—much less a vehicle that tells the driver how to get to the destination, takes phone calls, streams every kind of music imaginable, and automatically calls for help in a crash.

Artificial intelligence (AI) is transforming our lives in even more ways than the horseless carriage. In just a few short years, many of us have gone from being slightly suspicious of AI to allowing Alexa and Siri to choose our music, order us pizza, and decide what temperature our house should be.

In healthcare, AI is currently experiencing “faster horse syndrome,” mostly used to perform familiar tasks more quickly and accurately. Patient monitors alert ICU staff to impending crises, and decision support systems flag the most recent research developments for a given diagnosis, saving the physician from a time-consuming literature search.

The healthcare AI transformation is in its earliest stages. As health information systems come to depend on AI, a plethora of new uses—and eventually a whole new infrastructure—will redefine how care is delivered, in the same way the automobile has transformed our landscape, hopefully without harmful side effects analogous to urban sprawl and air pollution.

Article highlights:

  • Artificial intelligence holds the promise of transforming healthcare the way the automobile transformed transportation.
  • AI can “see” pathology details that a human might miss, and analyze large datasets quickly enough to support medical decision making.
  • Labs must make sure AI algorithms are fully transparent to ensure patient safety.


Contributing Lab Leaders

Peter Gershkovich

Yale University

Ed Hammond

Duke University

Brian Jackson, M.D., M.S.




“Computers are beginning to program themselves and access electronic data in a timeframe that permits making decisions based on huge amounts of data,” says Ed Hammond, director of the Center for Health Informatics at Duke University, and a pioneer of clinical computing. He points to robots that can increasingly handle unscripted conversations, and he predicts that driverless cars will eventually make roads safer and solve at least some of our traffic problems. “Computers and robots aren't going to be perfect, but they're going to learn from their mistakes and not make the same mistake twice. AI systems continue to learn, and I don't think clinicians do. They make the same mistakes more than once.”

Another Set of Eyes

What does the growth of AI mean for clinical labs? Humans who are worried about computers stealing their jobs should remain calm, at least for now. Though they're getting there, computers are still not as good as a trained pathologist at identifying cancer on a slide, notes Peter Gershkovich, director of pathology informatics at Yale University Medical School. But they can make excellent partners because they are better at aggregating large amounts of data and flagging questionable areas for human readers to take a closer look.

“We have a problem that a diagnosis is being made by one person looking at a set of slides and saying, ‘This is cancer, and this is not cancer,’” he says. “That dictates patient treatment. But what if that person just made a mistake?” Having a computer “read” the slide provides a second set of “eyes” that can analyze an image pixel by pixel and catch anomalies that the human eye might pass over.

AI-enabled systems can theoretically do the same kind of data-sifting for any type of lab test, but not automatically, Gershkovich adds. “If you trust Waze to get you from point A to point B, you wouldn't then ask it to manage your investment portfolio,” he says. “So with the systems that we have in healthcare, we should be careful of not expanding the functionality beyond the scope of what they can actually do. If they can say a slide is cancer or not in one subspecialty, we can't trust that they can do it in another subspecialty.” In this respect, AI systems learn somewhat the way humans do, by experience, and building that experience base will take time.

