Learning and inferencing challenges in human-in-the-loop decision systems
Abstract
The computational capabilities of AI engines, integrated with human knowledge and experience, can help create intelligent human-in-the-loop (HITL) decision systems. In safety-critical applications that require a certain level of human supervision, both human and AI engine errors can be costly. Thus, it is crucial to identify the challenges at the various levels of HITL decision systems that hinder the learning and inferencing processes, and to address them within the learning scheme. This dissertation categorizes the learning and inferencing challenges in HITL systems at three levels, namely representation-level, feature-level, and model-level challenges, and addresses them within an Active Learning (AL) context.
There are several practical hindrances: labels may be unavailable to the AL algorithm at the beginning, the external source of labels may be unreliable during the querying process, and compatible mechanisms to evaluate the Active Learner's performance may be lacking. Motivated by these challenges, this dissertation presents a hybrid query-strategy-based AL framework that addresses three practical challenges simultaneously: cold-start, oracle uncertainty, and performance evaluation of the Active Learner in the absence of ground truth. The heuristics obtained during the querying process serve as the fundamental premise for assessing the Active Learner's performance. The idea of AL is further extended to representation learning in non-Euclidean domains such as graphs. Both node attributes and topological information are incorporated in the learning scheme: the node features are exploited while training the Graph Neural Network (GNN)-based decision model, and the topological information is considered during selective sampling of the nodes.
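To make the two phases of such a querying scheme concrete, the following is a minimal sketch (not the dissertation's actual framework): a pool-based AL loop on hypothetical toy data that handles cold-start by seeding with mutually distant points when no labels exist, then switches to uncertainty-based querying with a nearest-centroid learner. All names and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled pool: two Gaussian blobs (hypothetical data).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)  # ground truth, hidden behind the oracle

def seed_diverse(X, k):
    """Cold-start phase: no labels yet, so pick k mutually distant points."""
    idx = [int(rng.integers(len(X)))]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None] - X[idx], axis=2), axis=1)
        idx.append(int(np.argmax(d)))
    return idx

def uncertainty(X, centroids):
    """Margin between distances to the two class centroids:
    a small margin means the point is uncertain, hence informative."""
    d = np.linalg.norm(X[:, None] - centroids, axis=2)
    return -np.abs(d[:, 0] - d[:, 1])

labeled = seed_diverse(X, 4)                  # cold-start seeding
for _ in range(20):                           # uncertainty-driven querying
    cents = np.array([X[[i for i in labeled if y[i] == c]].mean(axis=0)
                      for c in (0, 1)])
    scores = uncertainty(X, cents)
    scores[labeled] = -np.inf                 # never re-query a point
    labeled.append(int(np.argmax(scores)))    # "query the oracle"

pred = np.argmin(np.linalg.norm(X[:, None] - cents, axis=2), axis=1)
print(f"accuracy after {len(labeled)} labels: {(pred == y).mean():.2f}")
```

A hybrid strategy in the spirit of the abstract would further reweight or repeat queries to hedge against an unreliable oracle; that part is omitted here for brevity.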
Modeling human behavior in a collaborative human-AI decision setup is not straightforward. This dissertation, for the first time, presents a systematic framework for the simulation, modeling, tracking, and adaptation of behavioral biases in a collaborative HITL decision environment within an AL context. The issue of poor generalization performance and overfitting of decision models is addressed by incorporating observational biases while training the decision models, and two case studies demonstrate ways to incorporate such biases within the learning frameworks. Although AI-powered systems have provided competitive benefits in recent years, their black-box nature hinders the explainability of their decisions and leaves them lacking transparency. This issue prompted the development of explainable artificial intelligence (XAI), which supports AI systems that can explain their internal processes and decision-making methods. This dissertation presents two case studies demonstrating different ways of explaining the predictions made by decision models.
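As one illustration of what "simulating a behavioral bias" in the labeling loop can mean (a hypothetical sketch, not the dissertation's framework), an anchoring-biased oracle can be modeled as an annotator who sometimes echoes the AI's suggested label instead of exercising independent judgment. The function name and parameters below are illustrative assumptions.

```python
import random

def biased_oracle(true_label, ai_suggestion, anchor_prob, rng):
    """Anchoring bias: with probability anchor_prob, the human annotator
    simply accepts the AI's suggested label instead of the true label."""
    return ai_suggestion if rng.random() < anchor_prob else true_label

# Track the bias empirically: how often the oracle echoes the AI's
# suggestion when that suggestion is wrong (true label 0, suggestion 1).
rng = random.Random(0)
wrong_echoes = sum(
    biased_oracle(0, 1, 0.3, rng) == 1 for _ in range(10_000)
)
print(f"estimated anchoring rate: {wrong_echoes / 10_000:.2f}")
```

Once such a simulated oracle is in place, the downstream learner can be trained and evaluated against biased labels, which is the setting the bias-tracking and adaptation machinery operates in.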
Conventional neural networks (NNs) do not furnish uncertainty estimates with their predictions and are therefore ill-calibrated. Uncertainty quantification techniques represent the uncertainty associated with NN predictions through probability distributions or confidence intervals, rather than point predictions/estimates alone. Once the uncertainty in an NN is quantified, it is crucial to leverage this information to modify training objectives and improve the accuracy and reliability of the corresponding decision models. This dissertation establishes a novel framework that utilizes the knowledge of input and output uncertainties in NNs to guide the querying process in the context of Active Learning. Lower and upper bounds on label complexity are derived analytically.
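One common way to obtain such uncertainty estimates and use them to guide querying (a minimal sketch, not necessarily the dissertation's exact scheme) is a small ensemble: train several models from different random initializations, treat their disagreement as predictive uncertainty, and query the pool points where disagreement is highest. Logistic regression stands in for the NN here; all data and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary training data (hypothetical; stands in for the NN's task).
X = np.vstack([rng.normal(-1, 1, (60, 2)), rng.normal(1, 1, (60, 2))])
y = np.array([0.0] * 60 + [1.0] * 60)

def train_member(X, y, seed, steps=500, lr=0.1):
    """One ensemble member: logistic regression fit by gradient descent,
    randomly initialized so that members disagree where data is scarce."""
    r = np.random.default_rng(seed)
    w, b = r.normal(0, 1, 2), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(X)
        b -= lr * g.mean()
    return w, b

# A small ensemble: disagreement across members acts as the
# (epistemic) uncertainty estimate for each pool point.
members = [train_member(X, y, seed) for seed in range(5)]
pool = rng.normal(0, 2, (200, 2))                      # unlabeled pool
probs = np.array([1 / (1 + np.exp(-(pool @ w + b))) for w, b in members])
mean_p = probs.mean(axis=0)   # ensemble prediction per pool point
std_p = probs.std(axis=0)     # uncertainty estimate per pool point
query = int(np.argmax(std_p)) # query the most-uncertain point first
```

Ranking pool points by `std_p` is what turns the uncertainty estimate into a querying signal; the dissertation's framework additionally accounts for uncertainty in the inputs, which this sketch does not model.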
The methods proposed in this dissertation are highly beneficial for safety-critical applications, which demand significant human monitoring and in which any error due to the human or AI components can be expensive. For example, an effective and rigorous decision-support tool in medical diagnosis can nudge doctors/clinicians toward prescribing further tests for a better diagnosis, thereby enabling a well-informed decision with higher confidence.