Welcome to the first post of Straight Talk about AI in Healthcare. In each post I’ll choose a recent research paper about AI in healthcare, explain the work in non-technical terms, and highlight practical lessons for healthcare organizations.
My goal is to share concrete insights that are applicable today. A lot of inspirational work out there will surely have a big impact in the long run, but that's not my focus. I want to help you understand what you can do today and what should be on your radar for tomorrow.
If you are not on the mailing list, please subscribe!
Let’s kick off this series with a paper from the Machine Learning for Health workshop at the December 2019 Conference on Neural Information Processing Systems (NeurIPS), arguably the most influential conference in the machine learning and artificial intelligence world.
This work, by researchers from the University of Toronto, addresses a problem in automatic medical text processing that is commonly encountered when developing products to support use cases like clinical decision support, quality reporting, and utilization management.
Medical text is rich in abbreviations and acronyms, whose meaning is sometimes ambiguous: does "RA" stand for "right atrium" or "rheumatoid arthritis"? Getting it wrong can derail downstream algorithms and lead to inaccurate results. This problem is called "abbreviation disambiguation", and the goal is to infer the intended meaning of an abbreviation from its context.
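If you're curious what this looks like in practice, here is a tiny, purely illustrative sketch in Python (not from the paper): each ambiguous abbreviation comes with a small "sense inventory" of candidate expansions, and a disambiguation model's job is to pick one of them based on the surrounding text. The abbreviations, candidates, and the word-matching rule below are made-up examples meant only to frame the task.

```python
# Purely illustrative framing of abbreviation disambiguation (not the paper's
# data or code): each abbreviation maps to candidate expansions, and a model
# picks one expansion based on the surrounding text.
SENSE_INVENTORY = {
    "RA": ["right atrium", "rheumatoid arthritis"],
    "RT": ["radiation therapy", "respiratory therapy"],
}

def disambiguate(abbreviation: str, context: str) -> str:
    """Toy baseline: score each candidate by how many of its words appear
    in the context; real systems learn this choice from data instead."""
    candidates = SENSE_INVENTORY[abbreviation]
    scores = [sum(w in context.lower() for w in c.split()) for c in candidates]
    return candidates[scores.index(max(scores))]

print(disambiguate("RA", "joint pain and swelling consistent with rheumatoid arthritis"))
# prints: rheumatoid arthritis
```

Real systems learn this choice from data rather than counting word matches, but the framing is the same: given the context, pick the right expansion.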
Most natural language processing (NLP) algorithms use context from adjacent words, and some modern methods use context from one or two adjacent sentences. Abbreviations in medical text often require more than that: in the sentence "the patient underwent RT to treat the condition", "RT" could mean "radiation therapy" or "respiratory therapy", and nothing in the sentence itself, or its immediate neighbors, settles which one. Context from the full medical note is needed.
With this insight in mind, the researchers devised the main innovation in the paper: a simple but clever method to represent context from the full note and combine it with context from the sentence containing the abbreviation. This enhancement, together with a couple of smaller tweaks, improved performance by 4-14% relative to previous methods. A 4% improvement may seem small, but it’s actually on the higher end of what you can expect when building on relatively mature existing methods.
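For the technically curious, the core idea can be sketched roughly as follows: build one vector from the sentence containing the abbreviation, build another from the entire note, glue the two together, and train a classifier to choose among the candidate expansions. Everything in the sketch below is an assumption for illustration rather than the authors' implementation: the averaged random word vectors, the scikit-learn classifier, and the toy notes are all stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

EMBEDDING_DIM = 50  # illustrative size

# Toy random word vectors standing in for pre-trained clinical embeddings.
rng = np.random.default_rng(0)
VOCAB = ["patient", "underwent", "rt", "treat", "condition", "radiation",
         "oncology", "fractions", "ventilator", "breathing", "suctioning"]
WORD_VECTORS = {w: rng.normal(size=EMBEDDING_DIM) for w in VOCAB}

def embed(text: str) -> np.ndarray:
    """Average word vectors over a span of text (a sentence or a whole note)."""
    vecs = [WORD_VECTORS[w] for w in text.lower().split() if w in WORD_VECTORS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(EMBEDDING_DIM)

def features(sentence: str, full_note: str) -> np.ndarray:
    """Concatenate local (sentence) context with global (full-note) context."""
    return np.concatenate([embed(sentence), embed(full_note)])

# The sentence is identical in both examples, so sentence context alone
# cannot tell the two meanings of "RT" apart; the rest of the note can.
sentence = "the patient underwent RT to treat the condition"
note_a = "seen in radiation oncology and received fractions before she underwent RT"
note_b = "weaned off the ventilator with breathing exercises and suctioning, then underwent RT"

X = [features(sentence, note_a), features(sentence, note_b)]
y = ["radiation therapy", "respiratory therapy"]

clf = LogisticRegression().fit(X, y)
print(clf.predict([features(sentence, note_a)]))  # expected: ['radiation therapy']
```

The point of the toy example is that the two cases look identical at the sentence level; only the note-level vector separates them, which is exactly the gap the paper's method addresses.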
Medical abbreviations are a pain in the butt, so better disambiguation directly improves medical text processing. But this work also highlights broader lessons for healthcare organizations implementing AI solutions.
The researchers’ first insight was identifying abbreviation disambiguation as an important task in automatic medical text processing and focusing on it exclusively. In general, the most powerful AI solutions are developed for very targeted use cases. Consequently, machine learning and AI teams are most effective when working on focused tasks rather than generic use cases. Identifying these tasks and evaluating their business impact should be an ongoing collaboration between the technical team and the broader organization.
The second insight was recognizing that medical text differs from the general-domain text most NLP methods are built for, and adapting existing methods specifically to the medical context. While the research team in this case was academic, this type of improvement is within reach of skilled industry data teams.
Finally, the work achieves good results on one particular challenge. But what is the broader impact on medical text processing and its practical applications? It is likely to be real, but modest, and that is typical of advances in machine learning: it’s a game of continuous iteration and small improvements, where slow and steady wins the race.
Not on the mailing list? Subscribe to this newsletter!