Title

Neural Network Number Systems

Abstract

Three fundamental representation schemes for numbers in a digital neural network are explored: the fixed-point number, the floating-point number, and the exponential number. These three numeric representation schemes are analyzed with emphasis on the memory efficiency, precision, and dynamic-range tradeoffs associated with each when used to compute neural network vector dot products. Specifically, the authors explore a small image-processing problem, an 8 × 8-pixel image with 256 shades of resolution, to investigate the effects of using these various number formats on the total required memory in a neural network. It is concluded that, by carefully matching number formats to the precision and dynamic-range requirements of each layer in a neural network, one can optimize the memory utilization for the particular class of problem involved. Because it is impractical to design and build hardware for each particular problem to be solved with a neural network, the authors emphasize the importance of building neural network hardware which can handle heterogeneous number formats, dynamically programmable from software.
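The memory tradeoff the abstract describes can be made concrete with a small sketch. The layer shape and per-format bit widths below are illustrative assumptions, not values from the paper; the point is only that total weight storage scales linearly with bits per weight, so matching each layer's format to its precision and dynamic-range needs directly reduces memory.

```python
# Illustrative sketch (assumed values, not from the paper): weight-storage
# memory for one fully connected layer under three number formats.

def layer_memory_bits(n_inputs, n_outputs, bits_per_weight):
    """Total bits needed to store the dense layer's weight matrix."""
    return n_inputs * n_outputs * bits_per_weight

# An 8 x 8 image with 256 shades implies 64 inputs of 8 bits each.
n_inputs, n_outputs = 64, 16          # hypothetical layer shape

formats = {
    "fixed-point (8-bit)":     8,     # fixed split of integer/fraction bits
    "floating-point (16-bit)": 16,    # sign + exponent + mantissa fields
    "exponential (4-bit)":     4,     # stores only a signed exponent
}

for name, bits in formats.items():
    total = layer_memory_bits(n_inputs, n_outputs, bits)
    print(f"{name:24s}: {total} bits ({total // 8} bytes)")
```

Under these assumptions the 4-bit exponential layer needs a quarter of the memory of the 16-bit floating-point one, which is the kind of per-layer saving the authors argue software-programmable hardware should exploit.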

Publication Date

12-1-1990

Publication Title

IJCNN. International Joint Conference on Neural Networks

Pages

903-908

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

Scopus ID

0025547286 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/0025547286
