Using self-organizing maps to learn geometric hash functions for model-based object recognition



G. Bebis; M. Georgiopoulos; N. D. Lobo



Abbreviated Journal Title

IEEE Trans. Neural Netw.


Keywords

geometric hashing; neural networks; object recognition; self-organization; Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic


Abstract

A major problem associated with geometric hashing and methods which have emerged from it is the nonuniform distribution of invariants over the hash space. This has two serious effects on the performance of the method. First, it can result in an inefficient storage of data which can increase recognition time. Second, given that geometric hashing is highly amenable to parallel implementation, a nonuniform distribution of data poses difficulties in tackling the load-balancing problem. Finding a "good" geometric hash function which redistributes the invariants uniformly over the hash space is not easy. Current approaches make assumptions about the statistical characteristics of the data and then use techniques from probability theory to calculate a transformation that maps the nonuniform distribution of invariants to a uniform one. In this paper, a new approach is proposed based on an elastic hash table. In contrast to existing approaches which try to redistribute the invariants over the hash bins, we proceed oppositely by distributing the hash bins over the invariants. The key idea is to associate the hash bins with the output nodes of a self-organizing feature map (SOFM) neural network which is trained using the invariants as training examples. In this way, the location of a hash bin in the space of invariants is determined by the weight vector of the node associated with the hash bin. During training, the SOFM spreads the hash bins proportionally to the distribution of invariants (i.e., more hash bins are assigned to higher-density areas while fewer hash bins are assigned to lower-density areas) and adjusts their size so that they eventually hold almost the same number of invariants. The advantage of the proposed approach is that it is a process that adapts to the invariants through learning. Hence, it makes absolutely no assumptions about the statistical characteristics of the invariants, and the geometric hash function is actually computed through learning. Furthermore, SOFM's "topology preserving" property ensures that the computed geometric hash function should be well behaved. The proposed approach was shown to perform well on both artificial and real data.
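The abstract's key idea can be illustrated with a minimal sketch: train a small 2-D SOFM on a set of 2-D invariant points, then use the index of the nearest output node as the hash bin. This is an assumed toy implementation for illustration only (grid size, learning-rate and neighborhood schedules are arbitrary choices, not the paper's); the paper's actual method includes further machinery for adjusting bin sizes.

```python
import math
import random

def train_sofm(invariants, grid_w=4, grid_h=4, epochs=50, lr0=0.5, sigma0=2.0):
    """Train a small 2-D self-organizing feature map on 2-D invariant points.

    Each output node's weight vector becomes the location of one hash bin,
    so nodes migrate toward high-density regions of the invariant space.
    """
    random.seed(0)
    xs = [p[0] for p in invariants]
    ys = [p[1] for p in invariants]
    # Initialize node weights randomly within the data range.
    nodes = [[random.uniform(min(xs), max(xs)), random.uniform(min(ys), max(ys))]
             for _ in range(grid_w * grid_h)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighborhood
        for p in invariants:
            # Best-matching unit: node whose weight vector is closest to p.
            bmu = min(range(len(nodes)),
                      key=lambda i: (nodes[i][0] - p[0])**2 + (nodes[i][1] - p[1])**2)
            bx, by = divmod(bmu, grid_h)
            for i, w in enumerate(nodes):
                ix, iy = divmod(i, grid_h)
                d2 = (ix - bx)**2 + (iy - by)**2   # distance on the node grid
                h = math.exp(-d2 / (2 * sigma * sigma))  # neighborhood kernel
                w[0] += lr * h * (p[0] - w[0])
                w[1] += lr * h * (p[1] - w[1])
    return nodes

def hash_bin(nodes, p):
    """Geometric hash function: index of the nearest SOFM node to invariant p."""
    return min(range(len(nodes)),
               key=lambda i: (nodes[i][0] - p[0])**2 + (nodes[i][1] - p[1])**2)

if __name__ == "__main__":
    # Nonuniformly distributed invariants: two Gaussian clusters.
    invs = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(150)]
            + [(random.gauss(3, 0.3), random.gauss(3, 0.3)) for _ in range(50)])
    nodes = train_sofm(invs)
    print(hash_bin(nodes, invs[0]))
```

Because more nodes are pulled into the denser cluster, bins end up holding more nearly equal numbers of invariants than a fixed uniform grid would, which is the load-balancing property the paper targets.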

Journal Title

IEEE Transactions on Neural Networks





Publication Date


Document Type




First Page


Last Page


WOS Identifier