Explainable Machine Learning for Knowledge Discovery in Advanced Manufacturing Systems
Yossi Cohen, PhD
Rutgers University
Abstract: In recent years, machine learning has become popular for optimizing the operations of advanced manufacturing systems. These data-driven algorithms have enabled improvements in key performance indicators such as machine downtime, quality, yield, and sustainability. However, as advanced data analytics and digital twin platforms mature in the age of generative AI, there is an unmet need to critically evaluate model outputs to gauge trustworthiness. This presentation will focus on recent efforts in advancing explainable anomaly detection, fault diagnosis, and prognosis for high-dimensional applications in semiconductor and aerospace manufacturing systems, as well as renewable power systems. The seminar will discuss a new framework for adopting explainable AI in industrial settings under two realistic constraints: a) privacy-encoded inputs and b) weakly labeled or entirely unlabeled datasets. The talk will also detail an approach for semisupervised clustering based on Shapley value analysis, with the added capability of accurately describing clusters with measurable inputs. The problem of explaining global model behavior as it pertains to anomaly detection will be addressed via a "cluster-and-describe" approach, allowing operators to better trace the model’s decision pathways when predicting anomalous behavior.
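For readers unfamiliar with the general idea, the lines below give a minimal illustrative sketch of a Shapley-value-based "cluster-and-describe" workflow. This is not the speaker's method; the anomaly detector (scikit-learn's IsolationForest), the explainer (the shap library's TreeExplainer), the synthetic data, and the cluster count are all assumptions chosen only to make the concept concrete.

# Hypothetical sketch: cluster samples in Shapley-attribution space, then
# "describe" each cluster by its most influential measurable inputs.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest
from sklearn.cluster import KMeans
import shap

# Synthetic stand-in for high-dimensional process measurements (no labels used)
X, _ = make_blobs(n_samples=500, n_features=8, centers=3, random_state=0)

# Unsupervised anomaly detector
detector = IsolationForest(random_state=0).fit(X)

# Shapley-value attributions of the detector's scores for each sample
explainer = shap.TreeExplainer(detector)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Cluster in attribution space rather than raw input space
clusters = KMeans(n_clusters=3, random_state=0).fit_predict(shap_values)

# Summarize each cluster by the input features with the largest mean |attribution|
for c in np.unique(clusters):
    mean_attr = np.abs(shap_values[clusters == c]).mean(axis=0)
    top = np.argsort(mean_attr)[::-1][:3]
    print(f"cluster {c}: dominant features {top.tolist()}")

In this toy version, the per-cluster feature ranking plays the role of the "description" in terms of measurable inputs; the talk's framework presumably addresses this under the stated constraints of privacy-encoded inputs and weak or absent labels.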
Biography: Joseph (Yossi) Cohen is an incoming Assistant Professor in the Department of Mechanical and Aerospace Engineering at Rutgers University. He received his Bachelor’s, Master’s, and Ph.D. degrees in Mechanical Engineering from the University of Michigan. His thesis explored industrial artificial intelligence concepts for fault diagnosis, advancing prognostics and health management research for complex manufacturing systems. His research interests include human-centered augmented intelligence, responsible artificial intelligence in industry, and sustainable manufacturing. As a Schmidt AI in Science Postdoctoral Research Fellow, he is currently investigating the intersection of explainable artificial intelligence and uncertainty quantification to improve the reliability of model explanation methods. At Rutgers, his Trustworthy, Robust, and Understandable SysTems in Mechanical Engineering (TRUST-ME) Lab will develop responsible, human-centered methodologies for intelligent optimization of systems and operations to improve decision-making in industry, with applications to manufacturing, aerospace, and renewable power systems.