Abstract

Ideally, when a neural network makes a wrong decision or encounters an out-of-distribution example, its predictive confidence should be as low as possible. Three primary contributions in this dissertation address this challenge. The first two are new approaches to mitigating overconfident predictions in modern neural networks. In the first, called competitive overcomplete output layer neural networks, several classifiers are trained simultaneously as part of the same output layer, and their consensus then produces more reliable predictions. The second approach reformulates the original classification problem into several new versions by combining classes together and training a classifier on each. Experiments show that the resulting classifier aggregate, called a fitted ensemble, rectifies predictive confidence values significantly better than conventional ensembles without sacrificing classification performance. Finally, a framework for evaluating the consistency of predictions, called separable concept learning (SCL), is introduced. Together these contributions take a step towards achieving more reliable decisions under suboptimal conditions.
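The consensus idea described above can be illustrated with a minimal sketch: average the softmax outputs of several ensemble members and abstain when the averaged confidence is low. This is a generic soft-voting rule for illustration only, not the dissertation's actual aggregation method; the function names and the threshold are assumptions.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def consensus_prediction(member_logits, threshold=0.5):
    """Hypothetical consensus rule: average per-member softmax outputs,
    predict the argmax class, and flag low-confidence predictions.

    member_logits: list of per-member logit vectors (same length each).
    Returns (predicted_class, averaged_confidence, is_confident)."""
    probs = [softmax(logits) for logits in member_logits]
    n_classes = len(probs[0])
    avg = [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]
    pred = max(range(n_classes), key=lambda c: avg[c])
    return pred, avg[pred], avg[pred] >= threshold
```

When the members agree, the averaged confidence stays high; when they disagree, the averaged probability mass spreads across classes and the prediction is flagged as unreliable, which is the behavior the abstract attributes to consensus-based approaches.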

Notes

If this is your thesis or dissertation, and you want to learn how to access it or for more information about readership statistics, contact us at STARS@ucf.edu

Graduation Date

2019

Semester

Summer

Advisor

Stanley, Kenneth

Degree

Doctor of Philosophy (Ph.D.)

College

College of Engineering and Computer Science

Department

Computer Science

Degree Program

Computer Science

Format

application/pdf

Identifier

CFE0008089; DP0023228

URL

https://purls.library.ucf.edu/go/DP0023228

Language

English

Release Date

February 2023

Length of Campus-only Access

3 years

Access Status

Doctoral Dissertation (Campus-only Access)