Unlocking the Black Box: Researchers Develop Explainable AI with Human-Inspired Approach

Category Artificial Intelligence

tldr #

Researchers have developed an explainable AI called 'deep distilling' by combining brain network principles and a traditional AI approach. This AI can generate easy-to-understand explanations for its conclusions and even produce executable code. It has shown success in difficult tasks and outperformed human-designed algorithms. Explainable AI is crucial in high-risk domains like healthcare and scientific research to minimize errors. The team had previously developed an AI using 'symbolic reasoning,' but found it less accurate in complex tasks.


content #

Children are natural scientists. They observe the world, form hypotheses, and test them. Eventually, they learn to explain their (sometimes endearingly hilarious) reasoning. Artificial intelligence does not possess this same ability. Although deep learning, a form of machine learning inspired by the brain, has greatly advanced technology in recent years, it has a major weakness: it cannot explain its decisions.

This study was published in the journal Nature Computational Science on February 27, 2024.

This lack of transparency hinders its use in high-risk situations, such as medicine, where patients want to understand why they received a certain diagnosis or treatment. To address this issue, a team from the University of Texas Southwestern Medical Center turned to the human mind for inspiration. In their study, they combined principles from the study of brain networks with a more traditional AI approach to create an explainable AI.


Similar to a child, the AI simplifies complex information into 'hubs' and then translates those hubs into guidelines that humans can read in plain English. It can even generate executable programming code for testing purposes. Known as 'deep distilling,' this AI was tested on a range of tasks and demonstrated superior performance compared to human-designed algorithms. In high-risk domains like healthcare and scientific research, explainable AI is essential to minimize errors and ensure accountability.
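The article does not detail how deep distilling works internally, but the general idea it describes, condensing a learned pattern into a plain-English rule and then emitting runnable code for it, can be illustrated with a toy sketch. Everything below (the function names, the single-threshold rule) is a hypothetical simplification for illustration, not the method from the paper:

```python
# Toy sketch of the "distill a learned decision into a readable rule
# plus executable code" idea. All names here are hypothetical.

def distill_threshold_rule(samples, labels):
    """Find the single threshold that best separates two classes (a 'hub')."""
    best_t, best_acc = None, -1.0
    for t in sorted(samples):
        acc = sum((x >= t) == y for x, y in zip(samples, labels)) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def explain(threshold):
    """Translate the learned rule into plain English."""
    return f"Classify an input as positive when its value is at least {threshold}."

def emit_code(threshold):
    """Emit the learned rule as executable Python source."""
    return f"def classify(x):\n    return x >= {threshold}"

# Tiny example data set: values of 5 or more are positive.
xs = [1, 2, 3, 5, 6, 7]
ys = [False, False, False, True, True, True]

t = distill_threshold_rule(xs, ys)
print(explain(t))                     # human-readable explanation

namespace = {}
exec(emit_code(t), namespace)         # the generated rule is itself runnable
print(namespace["classify"](6))
```

The point of the sketch is the two-step output: the same learned decision is rendered once for humans (a sentence) and once for machines (code that can be executed and tested), which is the property the researchers highlight.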


Prior to this work, the team had also developed an AI that used 'symbolic reasoning' to encode explicit rules and experiences. While that approach required less computational power and processed smaller data sets faster, it struggled to handle complex tasks accurately.

