Towards explainable artificial intelligence (XAI) based on contextualizing data with knowledge graphs

Date

2020-12-01

Abstract

Artificial intelligence (AI), including the sub-fields of machine learning and deep learning, has advanced considerably in recent years. In tandem with these performance improvements, understanding how AI systems make decisions has become increasingly difficult due to the many nonlinear transformations of input data and the complex nature of the algorithms involved.

Explainable AI (XAI) refers to techniques for examining these decision processes. A main desideratum of XAI is user understandability, and explanations should therefore take into account the context and domain knowledge of the problem. Humans understand and reason mostly in terms of concepts and combinations thereof. A knowledge graph (KG) embodies such understanding in links between concepts; this natural conceptual network creates a pathway for using knowledge graphs in XAI applications to improve the overall understandability of complex AI algorithms.

Over the course of this dissertation, we outline a number of contributions towards explaining AI decisions in a human-friendly way. We show a proof of concept of how domain knowledge can be used to analyze the input and output data of AI algorithms. We materialize the domain knowledge as a knowledge graph (more technically, an ontology) and use a concept induction algorithm to find patterns between input and output. After demonstrating this, we begin to experiment at a larger scale and find that the current state-of-the-art concept induction algorithm does not scale well to large amounts of data. To address this runtime issue, we develop a new algorithm, Efficient Concept Induction (ECII), which improves the runtime significantly.
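To illustrate the idea of concept induction described above, the following is a minimal sketch in Python. It is a deliberate simplification using toy data and hypothetical names (the individuals, classes, and the `induce_concepts` helper are all invented for illustration); the dissertation's actual approach operates on description-logic ontologies via the ECII algorithm, not on plain Python sets.

```python
# Toy knowledge graph: each individual (e.g., an input image) is annotated
# with the ontology classes it belongs to. All names here are hypothetical.
kg_types = {
    "img1": {"Dog", "Animal", "Outdoor"},
    "img2": {"Cat", "Animal", "Indoor"},
    "img3": {"Car", "Vehicle", "Outdoor"},
    "img4": {"Bus", "Vehicle", "Outdoor"},
}

def induce_concepts(positives, negatives):
    """Return atomic classes shared by all positive examples and by
    no negative example -- a naive stand-in for concept induction."""
    pos_classes = set.intersection(*(kg_types[i] for i in positives))
    neg_classes = set.union(*(kg_types[i] for i in negatives)) if negatives else set()
    return pos_classes - neg_classes

# Suppose an AI system grouped img3 and img4 together; induction over the
# background knowledge suggests the shared concept "Vehicle" as an explanation.
print(induce_concepts({"img3", "img4"}, {"img1", "img2"}))  # {'Vehicle'}
```

Real concept induction searches a much richer space of class expressions (conjunctions, existential restrictions, and so on), which is what makes scalability a challenge and motivates ECII.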

During this process, we also find that current tools are inadequate for creating and editing knowledge graphs, and that quality knowledge graphs are scarce. We make the creation and editing process easier by developing the OWLAx and ROWLTab plugins for the industry-standard ontology editor Protégé. We also develop a large knowledge graph from the Wikipedia category hierarchy.

Overall, these research contributions improve software support for creating knowledge graphs, provide a better knowledge graph, and show a new direction for explaining AI decision making using a contextual knowledge graph.

Keywords

Explainable Artificial Intelligence, Black-box Deep Learning, Knowledge Graph, Ontology, Concept Induction, Ontology Builder

Graduation Month

December

Degree

Doctor of Philosophy

Department

Department of Computer Science

Major Professor

Pascal Hitzler

Date

2020

Type

Dissertation