Testing bias in analysis of convolutional neural networks

Date

2021-05-01

Abstract

Deep convolutional neural networks (DCNNs) have become extremely common in computer vision, and thanks to the availability of easy-to-use libraries, their impact extends far beyond the computer vision domain. Because a DCNN acts as a black box, it is often difficult for the user to understand which features of the image contribute to the network's learning. The purpose of this work is to explore the reliability of DCNNs as general solutions to machine vision problems and to identify weaknesses through which DCNNs can produce biased or misleading results. A first experiment shows that in a basic classification of spiral and elliptical galaxies, the position of the galaxy in the image plays a role in the classification. That small but consistent and statistically significant bias can lead to misleading results when applied to large datasets. A second experiment was performed with a variety of prominent datasets in the computer vision domain. Only a portion of the background, containing no significant content descriptor, was used, yet the LeNet-5 architecture was still able to classify the images with accuracy better than mere chance. This shows that classification accuracy, even on commonly used datasets, can be biased.
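The core statistical test implied by both experiments can be sketched as follows: a classifier trained (or evaluated) on content-free inputs, such as background-only crops, should perform no better than chance, so accuracy above chance that survives a significance test indicates bias. This is a minimal illustrative sketch, not the thesis's actual code; the function name and the example counts are hypothetical, and it uses a one-sided exact binomial test as one common way to check significance.

```python
from math import comb

def binomial_p_value(n_trials, n_correct, chance_acc):
    """One-sided exact binomial test: the probability of observing
    n_correct or more correct predictions out of n_trials if the
    classifier's true accuracy were only chance_acc (i.e., if the
    content-free inputs carried no usable signal)."""
    return sum(
        comb(n_trials, k) * chance_acc**k * (1 - chance_acc) ** (n_trials - k)
        for k in range(n_correct, n_trials + 1)
    )

# Hypothetical example: a 2-class problem (chance accuracy 0.5).
# 560 correct out of 1000 background-only crops yields a p-value
# well below 0.001, so the small excess over chance is a
# statistically significant bias rather than noise.
p = binomial_p_value(1000, 560, 0.5)
```

A small, consistent deviation like this is exactly the kind of effect that is invisible on a handful of images but becomes statistically significant, and potentially misleading, on large datasets.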

Keywords

Deep learning, Convolutional neural network, Data acquisition bias

Graduation Month

May

Degree

Master of Science

Department

Department of Computer Science

Major Professor

Lior Shamir

Type

Thesis
