UMaine researcher aims to open the ‘black box’ of AI, putting users back in control
From interpreting a medical scan to sorting family photos, artificial intelligence (AI) makes snap judgments users often trust blindly.
Chaofan Chen, assistant professor of electrical and computer engineering at the University of Maine, aims to change that by creating AI systems that explain their results and learn from the people who use them.
To accomplish this, Chen is developing tools that show users how an AI model reaches a decision and allow them to correct it when something appears inaccurate. His team will build AI systems that illustrate their reasoning when making predictions and adjust that reasoning based on user feedback.
The goal is a two-way conversation between AI that shows its work and people who can help improve it.
“We live in an exciting era of AI breakthroughs, and my mission is to create systems that don’t just give answers but reveal their reasoning and can improve themselves based on human feedback,” said Chen, who received a National Science Foundation CAREER Award to support his work.
The five-year, $584,034 project — “CAREER: Opening the Black Box: Advancing Interpretable Machine Learning for Computer Vision” — aims to bring greater transparency and accountability to AI-powered computer vision systems used in everyday and high-stakes settings.
Modern computer-vision models can detect diseases, identify objects and generate images with remarkable accuracy, but they typically operate as “black boxes,” offering little insight into how decisions are reached. That lack of interpretability prevents users from evaluating whether a choice was sound, identifying flawed assumptions or correcting mistakes.
In fields such as health care, public safety and scientific research, those blind spots can pose serious risks.
“In high-stakes settings, black-box AI isn’t just a mystery — it’s a risk. When we can’t see how decisions are made, we can’t trust the outcomes that matter most,” Chen said. “In healthcare, for example, a black-box model recommending a diagnosis or treatment could leave clinicians guessing at its reasoning — an uncertainty that patients simply can’t afford. In this case, interpretability isn’t a luxury; it’s a safeguard for real people’s lives.”
Chen’s project seeks to replace that opacity with clarity. He will develop multimodal models that provide richer, more accessible insights into the decision-making processes of AI systems. He also plans to design generative models that break down how images are created, rather than presenting only a final result.
The research will extend into reinforcement learning, exploring ways to ensure AI decision-making policies remain interpretable.
A major component of the project is strengthening human-AI interaction. Chen aims to create methods that allow users to correct a model’s reasoning directly and integrate that feedback into the training process so the system becomes more accurate and aligned with human expectations over time.
“Dr. Chen’s CAREER project tackles one of AI’s most urgent challenges, opening the black box so computer-vision systems explain their decisions in ways people can trust, especially in high-stakes settings,” said Yifeng Zhu, professor and chair of UMaine’s Department of Electrical and Computer Engineering. “Equally exciting, he’s partnering with the Maine Mathematics and Science Alliance to bring interpretable AI into Maine classrooms, empowering teachers and inspiring the next generation of innovators.”
As part of the award, Chen will collaborate with the Maine Mathematics and Science Alliance to develop high school lesson plans introducing responsible and interpretable AI concepts. The effort aims to help students understand not only how AI works but also how to question and guide it.
The project is jointly funded by the NSF Robust Intelligence and EPSCoR programs and will run through June 30, 2030.
Story by William Bickford, graduate student writer
Contact: Marcus Wolf, 207.581.3721; marcus.wolf@maine.edu
