AI Model Accurately Identifies Tumors and Diseases with Visual Map Explanation

Category Technology

tldr #

Researchers at the Beckman Institute have developed an AI model that accurately identifies tumors and diseases in medical images and explains each diagnosis with a visual map. The model's transparency lets doctors easily verify its reasoning and explain results to patients. The model is built with deep learning, a more advanced form of machine learning that relies on deep neural networks — systems that can make complex decisions but are not infallible.


content #

Medical diagnostics expert, doctor’s assistant, and cartographer are all fair titles for an artificial intelligence model developed by researchers at the Beckman Institute for Advanced Science and Technology. Their new model accurately identifies tumors and diseases in medical images and is programmed to explain each diagnosis with a visual map. The tool’s unique transparency allows doctors to easily follow its line of reasoning, double-check for accuracy, and explain the results to patients. The study’s lead author, Sourya Sengupta, a graduate research assistant at the Beckman Institute, explains, “The idea is to help catch cancer and disease in its earliest stages — like an X on a map — and understand how the decision was made. Our model will help streamline that process and make it easier on doctors and patients alike.” This research appeared in IEEE Transactions on Medical Imaging.

This new AI model was developed by researchers at the Beckman Institute for Advanced Science and Technology.

First conceptualized in the 1950s, artificial intelligence — the concept that computers can learn to adapt, analyze, and problem-solve like humans do — has reached household recognition, due in part to ChatGPT and its extended family of easy-to-use tools. Machine learning, or ML, is one of many methods researchers use to create artificially intelligent systems. ML is to AI what driver's education is to a 15-year-old: a controlled, supervised environment to practice decision-making, calibrating to new environments, and rerouting after a mistake or wrong turn.

Deep learning — machine learning's wiser and worldlier relative — can digest larger quantities of information to make more nuanced decisions. Deep learning models derive their decisive power from the closest computer simulations we have to the human brain: deep neural networks. These networks — just like humans, onions, and ogres — have layers, which makes them tricky to navigate. The more thickly layered, or nonlinear, a network's intellectual thicket, the better it performs complex, human-like tasks.

Consider a neural network trained to differentiate between pictures of cats and pictures of dogs. The model learns by reviewing images in each category and filing away their distinguishing features (like size, color, and anatomy) for future reference. Eventually, the model learns to watch out for whiskers and cry Doberman at the first sign of a floppy tongue.

But deep neural networks are not infallible — much like overzealous toddlers, said Sengupta, who studies biomedical imaging in the University of Illinois Urbana-Champaign Department of Electrical and Computer Engineering. "They get it right sometimes, maybe even most of the time, but it might not always be for the right reasons," he said. "I'm sure everyone knows a child who saw a brown, four-legged dog once and then thought that every brown, four-legged animal was a dog." Sengupta's gripe? If you ask a toddler how they decided, they will probably tell you. "But you can't ask a deep neural network how it arrived at an answer," he said.
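The layered, nonlinear structure described above can be sketched in a few lines. This is a toy illustration, not the researchers' model: the weights are random stand-ins for what a real network would learn from labeled images, and the input is a placeholder for an 8x8 grayscale picture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity: this is what lets stacked layers model complex patterns
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes the final score into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical weights; a trained model would learn these from example images
W1 = rng.normal(size=(64, 16))   # layer 1: 64 input pixels -> 16 features
W2 = rng.normal(size=(16, 1))    # layer 2: 16 features -> 1 score

def classify(image):
    """Forward pass: each layer is a linear map followed by a nonlinearity."""
    features = relu(image @ W1)      # hidden layer extracts features
    score = sigmoid(features @ W2)   # output near 1 means "dog", near 0 "cat"
    return float(score)

image = rng.uniform(size=64)         # stand-in for a flattened 8x8 image
print(f"P(dog) = {classify(image):.3f}")
```

Asking *why* this function returned a given score is exactly the hard part: the answer is spread across every weight in every layer, which is the opacity Sengupta describes.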

The model has the ability to explain its reasoning for each diagnosis through a visual map.
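The article does not specify how the model's visual maps are computed, but one common, simple technique for producing such a map is occlusion sensitivity: hide one patch of the image at a time and watch how the model's score changes. A minimal sketch, assuming a toy scoring function in place of a trained network:

```python
import numpy as np

def score(image):
    """Toy 'diagnosis' score standing in for a trained network's output.
    This placeholder simply responds to bright pixels in the top-left."""
    return float(image[:4, :4].sum())

def occlusion_map(image, patch=2):
    """Slide a blank patch across the image. Wherever hiding pixels makes
    the score drop, the model was 'looking' — that region lights up."""
    base = score(image)
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # hide this region
            heat[i:i + patch, j:j + patch] = base - score(occluded)
    return heat

image = np.zeros((8, 8))
image[1:3, 1:3] = 1.0        # a bright "lesion" in the top-left
heat = occlusion_map(image)  # hottest cells coincide with the lesion
```

Overlaying `heat` on the original scan gives the "X on a map" effect the researchers describe: a doctor can see which pixels drove the diagnosis and double-check them directly.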
