Learning and inferencing challenges in human-in-the-loop decision systems

dc.contributor.author: Agarwal, Deepesh
dc.date.accessioned: 2024-04-09T14:31:14Z
dc.date.available: 2024-04-09T14:31:14Z
dc.date.graduationmonth: May
dc.date.issued: 2024
dc.description.abstract: The computational capabilities of AI engines, integrated with human knowledge and experience, can help create intelligent human-in-the-loop (HITL) decision systems. In safety-critical applications that require a certain level of human supervision, errors by either the human or the AI engine can be costly. It is therefore crucial to identify the challenges prevailing at several levels of HITL decision systems that hinder the learning and inferencing processes, and to subsequently address them within the learning scheme. This dissertation categorizes the learning and inferencing challenges in HITL systems at three levels, namely representation-level, feature-level and model-level challenges, and addresses them within an Active Learning (AL) context. Several practical hindrances arise: labels may be unavailable to the AL algorithm at the outset, the external source of labels may be unreliable during the querying process, and compatible mechanisms to evaluate the performance of the Active Learner may be lacking. Inspired by these challenges, this dissertation presents a hybrid query strategy-based AL framework that addresses three of them simultaneously: cold-start, oracle uncertainty and performance evaluation of the Active Learner in the absence of ground truth. The heuristics obtained during the querying process serve as the fundamental premise for assessing the performance of the Active Learner. The idea of AL is further extended to representation learning in non-Euclidean spaces such as graphs. Both node attributes and topological information are incorporated in the learning scheme: the node features are exploited while training the graph neural network (GNN)-based decision model, and the topological information is considered during selective sampling of the nodes. Modeling human behavior in a collaborative human-AI decision setup is not straightforward.
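The hybrid query strategy described above could be sketched roughly as follows. This is a minimal illustrative sketch only, not the dissertation's actual framework: the function names, the k-center cold-start seeding, the margin-based query criterion and the majority-vote treatment of a noisy oracle are all assumptions chosen to illustrate how the three challenges might be handled together.

```python
import numpy as np

def cold_start_seed(X, k, rng):
    """Cold-start: with no labels available, pick k diverse seed points
    via greedy k-center selection over the feature space."""
    chosen = [int(rng.integers(len(X)))]
    for _ in range(k - 1):
        dists = np.min(
            np.linalg.norm(X[:, None, :] - X[chosen], axis=2), axis=1
        )
        chosen.append(int(np.argmax(dists)))
    return chosen

def margin_query(probs, labeled):
    """Query the unlabeled point with the smallest top-2 class-probability
    margin, i.e. the point the current model finds most ambiguous."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]
    margin[list(labeled)] = np.inf  # never re-query labeled points
    return int(np.argmin(margin))

def noisy_oracle_label(true_label, n_classes, flip_prob, rng, repeats=3):
    """Oracle uncertainty: each response is wrong with probability
    flip_prob; a majority vote over repeated queries reduces the noise."""
    votes = [
        true_label if rng.random() > flip_prob
        else int(rng.integers(n_classes))
        for _ in range(repeats)
    ]
    return int(np.bincount(votes, minlength=n_classes).argmax())
```

Under this sketch, the disagreement between repeated oracle responses is one example of a querying-process heuristic that could also feed a ground-truth-free performance estimate.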
This dissertation, for the first time, presents a systematic framework for the simulation, modeling, tracking and adaptation of behavioral biases in a collaborative HITL decision environment within an AL context. The issue of poor generalization performance and overfitting of decision models is addressed by incorporating observational biases while training the decision models, and two case studies demonstrate ways to incorporate such biases within the learning frameworks. Although AI-powered systems have delivered competitive benefits in recent years, their black-box nature prevents the explainability of their decisions and leads to a lack of transparency. This issue prompted the development of explainable artificial intelligence (XAI), which supports AI systems that can explain their internal processes and decision-making methods; this dissertation presents two case studies demonstrating different ways of explaining the predictions made by decision models. Conventional neural networks (NNs) do not furnish uncertainty estimates with their predictions and are therefore ill-calibrated. Uncertainty quantification techniques represent the uncertainty associated with NN predictions through probability distributions or confidence intervals, rather than through point predictions/estimates alone. Once the uncertainty in an NN is quantified, it is crucial to leverage this information to modify training objectives and improve the accuracy and reliability of the corresponding decision models. This dissertation establishes a novel framework that utilizes the knowledge of input and output uncertainties in NNs to guide the querying process in the context of Active Learning, and lower and upper bounds on label complexity are derived analytically. The methods proposed in this dissertation are highly beneficial for safety-critical applications that demand significant human monitoring and in which any error due to the human or AI components can be expensive.
For example, an effective and rigorous decision support tool in medical diagnosis can nudge doctors/clinicians toward prescribing further tests for a better diagnosis, thereby helping them make well-informed decisions with higher confidence.
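The uncertainty-guided querying idea can be illustrated with a toy stand-in for NN uncertainty. The sketch below is an assumption-laden simplification, not the dissertation's method: it uses disagreement across a bootstrap ensemble of least-squares regressors as a cheap proxy for NN predictive uncertainty, and queries the pool point where that uncertainty is largest.

```python
import numpy as np

def fit_bootstrap_ensemble(X, y, n_models, rng):
    """Fit least-squares regressors on bootstrap resamples; disagreement
    across members stands in for NN predictive uncertainty."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(len(X), size=len(X))
        w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        models.append(w)
    return models

def predictive_uncertainty(models, X):
    """Mean prediction and per-point standard deviation across members."""
    preds = np.stack([X @ w for w in models])
    return preds.mean(axis=0), preds.std(axis=0)

def uncertainty_guided_query(models, X_pool, labeled):
    """Query the pool point with the largest predictive uncertainty."""
    _, std = predictive_uncertainty(models, X_pool)
    std = std.copy()
    std[list(labeled)] = -np.inf  # skip already-labeled points
    return int(np.argmax(std))
```

Points far from the training data receive the widest ensemble spread, so they are queried first, which is the intuition behind uncertainty-guided label acquisition.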
dc.description.advisor: Balasubramaniam Natarajan
dc.description.degree: Doctor of Philosophy
dc.description.department: Department of Electrical and Computer Engineering
dc.description.level: Doctoral
dc.description.sponsorship: National Science Foundation
dc.identifier.uri: https://hdl.handle.net/2097/44182
dc.language.iso: en_US
dc.publisher: Kansas State University
dc.rights: © the author. This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Human-in-the-loop decision systems
dc.subject: Active learning
dc.subject: Behavioral biases
dc.subject: Uncertainty-guided active learning
dc.subject: Explainability of decision models
dc.subject: Observational biases
dc.title: Learning and inferencing challenges in human-in-the-loop decision systems
dc.type: Dissertation

Files

Original bundle

Name: DeepeshAgarwal2024.pdf
Size: 8.35 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.6 KB
Format: Item-specific license agreed upon to submission