Title

Learning Affine Transformations

Keywords

Artificial neural networks; Object recognition

Abstract

Under the assumption of weak perspective, two views of the same planar object are related through an affine transformation. In this paper, we consider the problem of training a simple neural network to learn to predict the parameters of the affine transformation. Although the proposed scheme has similarities with other neural network schemes, its practical advantages are more pronounced. First of all, the views used to train the neural network are not obtained by taking pictures of the object from different viewpoints. Instead, the training views are obtained by sampling the space of affine-transformed views of the object. This space is constructed using a single view of the object. Fundamental to this procedure is a methodology, based on singular-value decomposition (SVD) and interval arithmetic (IA), for estimating the ranges of values that the parameters of the affine transformation can assume. Second, the accuracy of the proposed scheme is very close to that of a traditional least squares approach, with slightly better space and time requirements. A front-end stage to the neural network, based on principal components analysis (PCA), is shown to increase its noise tolerance dramatically and also to guide us in deciding how many training views are necessary for the network to learn a good, noise-tolerant mapping. The proposed approach has been tested using both artificial and real data. © 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
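
For reference, the affine relation the abstract describes maps a point x in one view to x' = A x + t in the other, where A is a 2x2 matrix and t a translation. The sketch below is not the paper's neural-network method; it is a minimal illustration, under assumed hypothetical point correspondences, of the traditional least squares baseline the abstract compares against, with the solve performed via SVD-based least squares (numpy.linalg.lstsq).

# Illustrative sketch only: recover 2-D affine parameters (A, t) from
# corresponding points via least squares. Data and names are hypothetical.
import numpy as np

def estimate_affine(src, dst):
    """Estimate A (2x2) and t (2,) such that dst ~= src @ A.T + t.

    src, dst: (N, 2) arrays of corresponding points, N >= 3.
    """
    n = src.shape[0]
    # Design matrix for the 6 parameters [a11, a12, tx, a21, a22, ty].
    M = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    M[0::2, 0:2] = src   # x' = a11*x + a12*y + tx
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src   # y' = a21*x + a22*y + ty
    M[1::2, 5] = 1.0
    b[0::2] = dst[:, 0]
    b[1::2] = dst[:, 1]
    # lstsq solves the overdetermined system in the least squares sense (via SVD).
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[p[0], p[1]], [p[3], p[4]]])
    t = np.array([p[2], p[5]])
    return A, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(-1.0, 1.0, size=(10, 2))
    A_true = np.array([[0.9, -0.2], [0.3, 1.1]])
    t_true = np.array([0.5, -0.25])
    dst = src @ A_true.T + t_true
    A_est, t_est = estimate_affine(src, dst)
    print(np.allclose(A_est, A_true), np.allclose(t_est, t_true))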

Publication Date

1-1-1999

Publication Title

Pattern Recognition

Volume

32

Issue

10

Pages

1783-1799

Document Type

Article

Personal Identifier

scopus

DOI Link

https://doi.org/10.1016/S0031-3203(98)00178-2

Scopus ID

0032668921 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/0032668921
