Keywords: Support Vector Machine, decision theory, posterior probabilities, matrix-variate normal
In this paper, we review existing classification techniques and propose a new procedure for classifying high-dimensional vectors on the basis of a few training samples. The proposed method follows the Bayesian paradigm and provides posterior probabilities that a new vector belongs to each of the classes; it therefore adapts naturally to any number of classes. Our classification technique is based on a small vector related to the projection of the observation onto the space spanned by the training samples. This is achieved by employing matrix-variate distributions in classification, a novel use of these distributions. In addition, our method mimics time-tested classification techniques based on the assumption of normally distributed samples. By assuming that the samples have a matrix-variate normal distribution, we are able to replace classification based on a large covariance matrix with classification based on a smaller matrix that describes the relationships of the sample vectors to each other.
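To make the idea concrete, the following is a minimal sketch (not the dissertation's actual procedure) of the general strategy the abstract describes: a new high-dimensional observation is projected onto the span of the few training samples, and posterior class probabilities are then computed from Gaussian class models in that small reduced space, so only an n-dimensional representation is needed rather than a full p-by-p covariance matrix. All function names and the shared spherical covariance assumption are illustrative choices, not taken from the thesis.

```python
import numpy as np

def span_basis(X):
    """Orthonormal basis (rows) of the span of the n training samples.

    X is the n x p training matrix with n << p; the thin SVD gives an
    n x p matrix Vt whose rows span the same subspace as the rows of X.
    """
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt

def posteriors(x_new, X, y):
    """Posterior class probabilities for x_new after projection onto
    the span of the training samples (illustrative sketch only)."""
    classes = np.unique(y)
    Vt = span_basis(X)
    Z = X @ Vt.T            # training samples in n-dim coordinates
    z = Vt @ x_new          # new observation projected into the span
    priors = {c: np.mean(y == c) for c in classes}
    means = {c: Z[y == c].mean(axis=0) for c in classes}
    # shared spherical covariance estimate in the reduced space
    resid = np.concatenate([Z[y == c] - means[c] for c in classes])
    sigma2 = resid.var() + 1e-9
    logp = np.array([np.log(priors[c])
                     - np.sum((z - means[c]) ** 2) / (2 * sigma2)
                     for c in classes])
    logp -= logp.max()       # stabilize before exponentiating
    p = np.exp(logp)
    return dict(zip(classes, p / p.sum()))
```

Because the posteriors are computed for every class label present in `y`, the sketch extends to any number of classes with no change, mirroring the adaptability claimed for the proposed method.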
Doctor of Philosophy (Ph.D.)
College of Arts and Sciences
Doctoral Dissertation (Open Access)
Bradshaw, David, "Decision Theory Classification Of High-dimensional Vectors Based On Small Samples" (2005). Electronic Theses and Dissertations. 533.