Why has artificial intelligence not replaced doctors yet? In the first instalment of their data science and medicine series, CodeBlue not only answers that question but also explains what it takes to bring AI from bench to bedside, and why the answer is: You!
In this (recurring) section of our column, you can find definitions that we deem to need clarification. Feel free to skip them for now and return here once you encounter them in the text!
Artificial intelligence (AI): refers either to the field of study of artificial decision-making systems, or to such systems themselves. For most applications this involves computers and machine learning, but it doesn’t have to.
Narrow AI: is any form of AI developed with a specific task in mind. Everything labelled “AI” these days is narrow AI and can be seen as a continuation of the automation that started with the industrial revolution.
Machine learning (ML): is a branch of artificial intelligence centred around the idea of using data to iteratively train an algorithm rather than design the algorithm manually.
Data Science: is the field of study of extracting knowledge from data. Tools include machine learning and statistics.
Ten years ago, the newspapers were full of artificial intelligence (AI) and how it was quickly taking over work previously done by humans: Driving a car, understanding speech, or diagnosing disease. In only a few years, it seemed, AI was set to graduate from the tool that recommends what song to play next to autonomous driving. Before starting our dissertations in radiology almost three and four years ago, we were both warned that “radiologists will be replaced by AI in a few years”.1
With that in mind, today’s perspective seems odd: Most of us have used digital assistants like Alexa and Siri but would still rather talk to a person in customer service than chat with a bot. In mainstream journalism, by contrast, AI can still seem like an all-powerful tool, and “AI-driven” remains a popular marketing term for start-ups and researchers alike.
Clinicians may unknowingly interact with AI many times throughout their workday: Modern dictation devices use speech-to-text AI to speed up writing reports. Automated image segmentation is used to “suppress” blood vessels in lung CT or to help radiation oncologists construct their target zone. More generally, these are tools that enhance physicians’ performance, speed, or ease of work. So why has AI not replaced doctors yet?
The first barrier isn't unique to AI but applies to any full automation: Every algorithm risks failure when it encounters an unforeseen scenario. Unlike humans, computer intelligence is limited to the specific task it was trained for and will therefore fail or underperform at anything even slightly outside its scope. A human customer service phone operator could switch companies with little to no training, whereas a chatbot cannot. We call this the difference between a general and a narrow (read: task-specific) AI: To date, no AI has come close to the general intelligence of humans.2 Even worse, most AI algorithms will not realise when they are facing a task out of their scope – an x-ray diagnostic algorithm might return a cancer diagnosis for a photograph of your dog3. With the stakes involved in medicine, any involvement of AI will be limited to augmenting a physician’s performance for the foreseeable future.4
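This failure mode is easy to demonstrate with a toy sketch. The snippet below (in Python with NumPy, using random numbers standing in for a trained model – a deliberately artificial example, not any real diagnostic system) shows why a classifier always answers: its final softmax layer turns any scores into a probability distribution over the known labels, so even a completely out-of-scope input yields a confident-looking "diagnosis".

```python
import numpy as np

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    shifted = np.exp(logits - logits.max())  # subtract max for numerical stability
    return shifted / shifted.sum()

rng = np.random.default_rng(42)

# Hypothetical "x-ray classifier": random weights stand in for a trained
# model -- the point here is the shape of the output, not learned values.
labels = ["healthy", "pneumonia", "cancer"]
weights = rng.normal(size=(3, 64))

# Feed it something entirely out of scope: random pixels (our "dog photo").
out_of_scope_input = rng.random(64)
probs = softmax(weights @ out_of_scope_input)

# The model still picks one of its three labels with some confidence --
# it has no built-in way to say "this is not an x-ray at all".
print(labels[int(probs.argmax())], f"({probs.max():.0%} confidence)")
```

Note that the maximum probability over three classes is always at least one third, so the output never looks like a refusal: detecting out-of-distribution inputs requires extra machinery that most deployed models lack.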
The second barrier comes from the way AI is developed: Whereas a conventional algorithm is programmed to perform a set of pre-defined operations to arrive at an unknown answer, AI reverses this process: it is provided with a set of inputs and their correct answers and tries to find the operations needed to get from one to the other5. This training data therefore determines what the algorithm can and cannot do, making it the most crucial part of development. Unfortunately, well-curated data sets are laborious (read: expensive) to create6, and bad ones can not only decrease performance but also introduce bias7.
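The contrast can be made concrete with a minimal sketch in Python. The temperature-conversion task below is our own illustrative choice, not something from the article: the conventional approach writes the operations by hand, while the "learning" approach is handed input–answer pairs and recovers the operations (here just a slope and an intercept) by least-squares fitting, one of the simplest forms of training.

```python
import numpy as np

# Conventional programming: the operations are written down by hand.
def fahrenheit_by_rule(celsius):
    return celsius * 9 / 5 + 32

# Machine learning reverses this: provide inputs and their correct
# answers, and let the algorithm find the operations itself.
celsius = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # training inputs
fahrenheit = fahrenheit_by_rule(celsius)             # correct answers ("labels")

# Fit y = a*x + b by least squares; polyfit returns [slope, intercept].
a, b = np.polyfit(celsius, fahrenheit, deg=1)
print(round(a, 2), round(b, 2))  # recovers roughly 1.8 and 32.0
```

Because the training pairs here are clean and cover the task, the fit recovers the hand-written rule almost exactly; noisy, biased, or incomplete pairs would have been baked into the learned coefficients just as faithfully, which is exactly why data curation dominates development effort.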
Third, once AI algorithms are developed, they need to be integrated into existing workflows. So, in addition to the data scientists developing the core algorithm, turning it into a useful tool takes traditional software engineers, systems engineers, and administrators.
To get AI into clinical practice, clinicians need to be trained in AI: Some to actively drive the development of new algorithms; everyone to utilise them safely and effectively. Identifying problems suited for AI, curating data sets, developing algorithms, building the tools around them, and transitioning them into the clinical setting are all steps at which physician involvement is indispensable for success. This requires clinicians and non-clinicians to have a basic understanding of each other’s work in order to communicate effectively. From a clinician’s perspective, this comes with a wealth of useful transferable skills: Data analysis, workflow design, and coding, just to name a few.
The gap between the vast potential of AI-powered tools and their current use makes it easy to answer the question constantly on our minds: “Would we recommend that students and aspiring researchers pursue artificial intelligence and data science in their respective fields?”. And despite the pessimistic introduction, that answer is a resounding yes: Data science is extremely fun! It’s a wonderful way to gain insight into familiar topics from a new, multidisciplinary perspective. It fosters creativity and skills applicable not only to other areas of research and development but also to life in general. And while the hype around AI is calming down, it’s still a discipline just starting to mature, with ample opportunities to shape the future of clinical practice. Radiologists and physicians are here to stay – but so is AI.