Unlocking the Black Box: Making Machine Learning Models Interpretable with Cynthia Rudin
Category: Computer Science | Sunday, April 30, 2023, 03:57 UTC

Cynthia Rudin, an expert in interpretable machine learning at Duke University, wants to replace 'black box' machine learning models with models that are easier to interpret, especially in settings where their decisions have real consequences. Her team is working to create accurate neural networks that show their work, which could be used in medical decision-making.
Machine learning models are incredibly powerful tools. They extract deeply hidden patterns in large data sets that our limited human brains can’t parse. These complex algorithms, then, need to be incomprehensible "black boxes," because a model that we could crack open and understand would be useless. Right?
That’s all wrong, at least according to Cynthia Rudin, who studies interpretable machine learning at Duke University. She’s spent much of her career pushing for transparent but still accurate models to replace the black boxes favored by her field.
The stakes are high. These opaque models are becoming more common in situations where their decisions have real consequences, like the decision to biopsy a potential tumor, grant bail or approve a loan application. Today, at least 581 AI models involved in medical decisions have received authorization from the Food and Drug Administration. Nearly 400 of them are aimed at helping radiologists detect abnormalities in medical imaging, like malignant tumors or signs of a stroke.
Many of these algorithms are black boxes — either because they’re proprietary or because they’re too complicated for a human to understand. "It makes me very nervous," Rudin said. "The whole framework of machine learning just needs to be changed when you’re working with something higher-stakes."
But changed to what? Recently, Rudin and her team set out to prove that even the most complex machine learning models, neural networks doing computer vision tasks, can be transformed into interpretable glass boxes that show their work to doctors.
Rudin, who grew up outside Buffalo, New York, came to share her father’s love of physics and math — he’s a medical physicist who helped calibrate X-ray machines — but she realized she preferred to solve problems with computers. Now she leads Duke’s Interpretable Machine Learning lab, where she and her colleagues scrutinize the most complex puzzle boxes in machine learning — neural networks — to create accurate models that show their work.
Quanta spoke with Rudin about these efforts, ethical obligations in machine learning and weird computer poetry. The interviews have been condensed and edited for clarity.
Did you always dream of being a computer scientist?
No, definitely not. As a kid, I wanted to be an orchestra conductor, or something like it. And I wanted to be a composer and write music.
What kind of music?
That’s the problem. I write French music from the turn of the previous century, like Ravel and Debussy. And then I realized that few people cared about that kind of music, so I decided not to pursue it as a career. As an undergraduate, I wanted to be an applied mathematician — but I went in the opposite direction, which was machine learning.
When did you begin thinking about interpretability?
After I graduated, I ended up working at Columbia with the New York City power company, Con Edison. And they were doing real-world work. We were supposed to predict which manholes were going to have a fire or an explosion — at the time, it was about 1% of the manholes in Manhattan every year. I joked that I was always trying to take a picture of myself on the "most likely to explode" manhole, though I never actually did.
I found out very quickly that this was not a problem that machine learning was helping with, because all the models were too complicated to understand. And that made me want to make them interpretable.