Towards explainable artificial intelligence (XAI) based on contextualizing data with knowledge graphs

dc.contributor.author: Sarker, Md Kamruzzaman
dc.date.accessioned: 2020-11-13T20:20:08Z
dc.date.available: 2020-11-13T20:20:08Z
dc.date.graduationmonth: December
dc.date.issued: 2020-12-01
dc.date.published: 2020
dc.description.abstract: Artificial intelligence (AI), including the sub-fields of machine learning and deep learning, has advanced considerably in recent years. In tandem with these performance improvements, understanding how AI systems make decisions has become increasingly difficult, due to the many nonlinear transformations of input data and the complex nature of the algorithms involved. Explainable AI (XAI) refers to techniques for examining these decision processes. A main desideratum of XAI is user understandability, and explanations should take into account the context and domain knowledge of the problem. Humans understand and reason mostly in terms of concepts and combinations thereof. A knowledge graph (KG) embodies such understanding in links between concepts; this natural conceptual network creates a pathway for using knowledge graphs in XAI applications to improve the overall understandability of complex AI algorithms. Over the course of this dissertation, we present a number of contributions towards explaining AI decisions in a human-friendly way. We show a proof of concept of how domain knowledge can be used to analyze the input and output data of AI algorithms: we materialize the domain knowledge as a knowledge graph (more technically, an ontology) and use a concept induction algorithm to find patterns between input and output. After demonstrating this, we begin to experiment at a larger scale and find that the current state-of-the-art concept induction algorithm does not scale well to large amounts of data. To solve this runtime issue, we develop a new algorithm, Efficient Concept Induction (ECII), which improves the runtime significantly. During this process, we also find that current tools are inadequate for creating and editing knowledge graphs, and that quality knowledge graphs are scarce. We make the creation and editing process easier by developing the OWLAx and ROWLTab plugins for the industry-standard ontology editor, Protégé.
We also develop a large knowledge graph from the Wikipedia category hierarchy. Overall, these research contributions improve the software support for creating knowledge graphs, provide a better knowledge graph, and show a new direction for explaining AI decision making using a contextual knowledge graph.
dc.description.advisor: Pascal Hitzler
dc.description.degree: Doctor of Philosophy
dc.description.department: Department of Computer Science
dc.description.level: Doctoral
dc.identifier.uri: https://hdl.handle.net/2097/40945
dc.language.iso: en_US
dc.subject: Explainable Artificial Intelligence
dc.subject: Black-box Deep Learning
dc.subject: Knowledge Graph
dc.subject: Ontology
dc.subject: Concept Induction
dc.subject: Ontology Builder
dc.title: Towards explainable artificial intelligence (XAI) based on contextualizing data with knowledge graphs
dc.type: Dissertation

Files

Original bundle
Name: MdKamruzzamanSarker2020.pdf
Size: 4.89 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.62 KB
Description: Item-specific license agreed upon submission