Keywords

Support Vector Machine, decision theory, posterior probabilities, matrix-variate normal

Abstract

In this dissertation, we review existing classification techniques and propose an entirely new procedure for classifying high-dimensional vectors on the basis of a few training samples. The proposed method follows the Bayesian paradigm and provides posterior probabilities that a new vector belongs to each of the classes; it therefore adapts naturally to any number of classes. Our classification technique rests on a small vector related to the projection of the observation onto the space spanned by the training samples, which we achieve by employing matrix-variate distributions in classification, an idea that is entirely new. In addition, our method mimics time-tested classification techniques based on the assumption of normally distributed samples. By assuming that the samples have a matrix-variate normal distribution, we are able to replace classification on the basis of a large covariance matrix with classification on the basis of a smaller matrix that describes the relationship of the sample vectors to each other.

Graduation Date

2005

Semester

Fall

Advisor

Pensky, Marianna

Degree

Doctor of Philosophy (Ph.D.)

College

College of Arts and Sciences

Department

Mathematics

Degree Program

Mathematics

Format

application/pdf

Identifier

CFE0000753

URL

http://purl.fcla.edu/fcla/etd/CFE0000753

Language

English

Release Date

January 2006

Length of Campus-only Access

None

Access Status

Doctoral Dissertation (Open Access)

