Nov 4, 2020

Computer scientist researches interpretable machine learning, develops AI to explain its discoveries

Posted in categories: biotech/medical, robotics/AI

Artificial intelligence helps scientists make discoveries, but not everyone can understand how it reaches its conclusions. One UMaine computer scientist is developing deep neural networks that explain their findings in ways users can comprehend, applying his work to biology, medicine and other fields.

Interpretable machine learning, or AI that generates explanations for the findings it reaches, is the focus of Chaofan Chen’s research. The assistant professor of computer science says interpretable machine learning also allows AI to compare images, make predictions from data and, at the same time, explain its reasoning.
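One way to picture the “comparisons among images” idea is a prototype-style classifier, in the spirit of interpretable models such as ProtoPNet (“This Looks Like That”), on which Chen has worked. The sketch below is illustrative only, not the UMaine group’s actual code: the class name, dimensions, and similarity formula are assumptions, and real systems learn prototypes from image patches produced by a convolutional network.

```python
import torch
import torch.nn as nn


class PrototypeClassifier(nn.Module):
    """Toy prototype-based classifier (hypothetical sketch).

    Predictions are weighted similarities to learned prototype vectors,
    so each decision can be explained as "this input looks like
    prototype k" rather than as an opaque score.
    """

    def __init__(self, feature_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        # Learned prototype vectors in feature space (illustrative sizes).
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, feature_dim))
        # Linear layer mapping prototype similarities to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, features: torch.Tensor):
        # Squared L2 distance between each input feature and each prototype.
        dists = torch.cdist(features, self.prototypes) ** 2
        # Convert distances to bounded similarity scores:
        # small distance -> large similarity.
        sims = torch.log((dists + 1.0) / (dists + 1e-4))
        return self.classifier(sims), sims


if __name__ == "__main__":
    model = PrototypeClassifier(feature_dim=64, num_prototypes=10, num_classes=3)
    x = torch.randn(2, 64)  # stand-in for CNN image features
    logits, sims = model(x)
    # The similarity scores themselves are the explanation: for each
    # input, report which prototype it most resembles.
    print(logits.shape, sims.argmax(dim=1))
```

Because the final prediction is a weighted sum of similarity scores, the model’s reasoning can be read off directly: for any input, the top-scoring prototypes are the “this looks like that” evidence behind the decision.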

Scientists can use interpretable machine learning for a variety of applications, from identifying birds in images for wildlife surveys to analyzing mammograms.