AutoML Process Intelligence

Project Overview

This project aims to develop Automated Machine Learning (AutoML) process intelligence that supports Explainable AI (XAI). By combining process modeling, automation, event logging, and explainability, the research seeks to transform today's opaque "black-box" machine learning workflows into transparent, interpretable systems. The study introduces methods to model and automate AutoML workflows, record execution histories as event logs, and generate explainable outcomes that support users' decision-making.
Research Subject: Research on Automated Machine Learning Process Intelligence for Supporting Explainable AI
Program: Basic Research Program
Funding Agency: National Research Foundation of Korea (NRF)
Research Period: Sep 2021 – Aug 2023 (24 months)

Research Background

Recent advances in deep learning have expanded AI applications into high-impact domains such as autonomous driving, robotics, and translation. However, the black-box nature of AI models limits their interpretability and hinders trust in decision-making. AutoML has emerged to automate the complex machine learning pipeline, but it also increases process opacity and reduces user understanding.
To address these issues, this research introduces process intelligence technologies, including process modeling, automation, logging, and mining, to give AutoML transparency, flexibility, and explainability. Combining the two fields is expected to overcome the limitations of existing AutoML platforms while supporting responsible and accountable AI.

Research Objectives

1. AutoML Process Modeling
   - Design reusable AutoML tasks using a microservice architecture (a task-composition sketch follows this list).
   - Develop a modeling language for customizable AutoML workflows, integrated with BPMN tools.
2. AutoML Automation & Event Logging
   - Implement a process engine to automate and control AutoML workflows.
   - Develop real-time, Apache Kafka-based event-logging modules to record execution histories (an event-logging sketch follows this list).
3. Explainable AI for AutoML
   - Apply XAI strategies (interpretable models, model induction, deep explanation learning) to analyze AutoML logs.
   - Design visualization interfaces (e.g., heatmaps) that give users clear, explainable insights (a heatmap sketch follows this list).
   - Conduct case studies (e.g., deep learning–based object detection) to evaluate the effectiveness of XAI integration.
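
To make the task-composition idea in objective 1 concrete, below is a minimal Python sketch of reusable AutoML tasks chained into a customizable workflow. The `AutoMLTask` and `Workflow` names and the placeholder task bodies are hypothetical illustrations under the assumptions above, not the project's actual modeling language or microservice API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class AutoMLTask:
    """One reusable AutoML step, analogous to a microservice/BPMN service task."""
    name: str
    run: Callable[[Any], Any]

@dataclass
class Workflow:
    """A customizable AutoML workflow: an ordered composition of tasks."""
    tasks: List[AutoMLTask] = field(default_factory=list)

    def add(self, task: AutoMLTask) -> "Workflow":
        self.tasks.append(task)
        return self

    def execute(self, data: Any) -> Any:
        # Each task consumes the previous task's output, mirroring a
        # BPMN sequence flow that chains service tasks.
        for task in self.tasks:
            data = task.run(data)
        return data

# Compose a pipeline from reusable tasks (placeholder bodies).
pipeline = (
    Workflow()
    .add(AutoMLTask("preprocess", lambda d: d))       # e.g., imputation, scaling
    .add(AutoMLTask("select_features", lambda d: d))  # e.g., filter methods
    .add(AutoMLTask("train", lambda d: d))            # e.g., HPO + model fitting
)
result = pipeline.execute({"X": [], "y": []})
```

Because tasks are self-contained units behind a uniform interface, the same task can be reused across workflows, which is the point of the microservice decomposition.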
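The event logging in objective 2 could be approached as sketched below, using the kafka-python client to publish one JSON event per workflow step. The broker address, the topic name `automl-events`, and the event schema are assumptions for illustration; only the Kafka producer calls themselves are standard library usage.

```python
import json
import time
from kafka import KafkaProducer  # kafka-python package

# Assumed broker address and serializer; adjust for a real deployment.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def log_event(case_id: str, activity: str, status: str) -> None:
    """Emit one execution-history event in a flat, XES-like schema."""
    event = {
        "case_id": case_id,    # one AutoML run = one process case
        "activity": activity,  # e.g., "train", "evaluate"
        "status": status,      # e.g., "started", "completed"
        "timestamp": time.time(),
    }
    producer.send("automl-events", value=event)

log_event("run-001", "train", "started")
producer.flush()  # ensure buffered events reach the broker before exit
```

Tagging every event with a case identifier is what later allows process-mining techniques to reconstruct and analyze complete AutoML runs from the log.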
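Finally, a heatmap view of the kind named in objective 3 might look like the following matplotlib sketch. The stage names, feature names, and attribution values are synthetic placeholders standing in for scores mined from the AutoML logs.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic attribution scores: rows = pipeline stages, columns = features.
stages = ["preprocess", "select_features", "train"]
features = ["f1", "f2", "f3", "f4"]
rng = np.random.default_rng(seed=0)
scores = rng.random((len(stages), len(features)))

fig, ax = plt.subplots()
im = ax.imshow(scores, cmap="viridis")
ax.set_xticks(range(len(features)))
ax.set_xticklabels(features)
ax.set_yticks(range(len(stages)))
ax.set_yticklabels(stages)
ax.set_xlabel("Input feature")
ax.set_ylabel("AutoML pipeline stage")
fig.colorbar(im, ax=ax, label="Attribution score")
ax.set_title("Per-stage feature attribution (illustrative)")
plt.show()
```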