DSpace Repository

A novel hardware efficient Digital Neural Network architecture implemented in 130nm technology


dc.contributor.author Gupta, Anu
dc.date.accessioned 2023-02-10T10:32:25Z
dc.date.available 2023-02-10T10:32:25Z
dc.date.issued 2010
dc.identifier.uri https://ieeexplore.ieee.org/document/5452015
dc.identifier.uri http://dspace.bits-pilani.ac.in:8080/xmlui/handle/123456789/9158
dc.description.abstract Digital Neural Network implementations based on the perceptron model require multi-bit representations of signals and weights. This results in the use of multi-bit multipliers in each neuron, leading to prohibitively large chip areas. Another problem with hardware implementations of neural networks is the low utilization of chip area caused by the complex interconnection requirements between successive neuron layers. In this paper, we propose an architecture with a single layer of digital neurons that is reused multiple times with different weight vectors in order to achieve a significant reduction in the required silicon area. The proposed architecture also yields significantly reduced power consumption (a 55% reduction for an 8-layer, 4-neuron-per-layer network). The paper also includes the results obtained by implementing the proposed architecture in 130 nm technology using the Magma Blast Fusion design tool. en_US
dc.language.iso en en_US
dc.publisher IEEE en_US
dc.subject EEE en_US
dc.subject Neural network hardware en_US
dc.subject Neurons en_US
dc.subject Biological neural networks en_US
dc.subject Computer architecture en_US
dc.subject Computer networks en_US
dc.subject Paper technology en_US
dc.title A novel hardware efficient Digital Neural Network architecture implemented in 130nm technology en_US
dc.type Article en_US
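
The abstract above describes an architecture in which one physical layer of digital neurons is time-multiplexed across the logical layers of the network, with a different weight vector loaded on each pass. Below is a minimal behavioral sketch of that layer-reuse idea in Python: the 8-layer, 4-neuron-per-layer dimensions come from the abstract, while the step activation, integer weight range, and all names (neuron_layer, run_network) are illustrative assumptions, not details of the authors' hardware.

    # Behavioral sketch (not from the paper): time-multiplexing one physical
    # neuron layer across several logical layers by swapping weight vectors.
    import numpy as np

    NUM_LAYERS = 8           # logical layers, as in the abstract's 8-layer example
    NEURONS_PER_LAYER = 4    # neurons per layer, as in the 4-neuron example

    def neuron_layer(inputs, weights, biases):
        """One physical layer of perceptron-style neurons (step activation assumed)."""
        return (weights @ inputs + biases > 0).astype(np.int8)

    def run_network(x, weight_sets, bias_sets):
        """Reuse the single physical layer once per logical layer with its own weights."""
        for w, b in zip(weight_sets, bias_sets):
            x = neuron_layer(x, w, b)   # layer output is fed back as the next input
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        weight_sets = rng.integers(-8, 8, size=(NUM_LAYERS, NEURONS_PER_LAYER, NEURONS_PER_LAYER))
        bias_sets = rng.integers(-8, 8, size=(NUM_LAYERS, NEURONS_PER_LAYER))
        x0 = rng.integers(0, 2, size=NEURONS_PER_LAYER)
        print(run_network(x0, weight_sets, bias_sets))

In the proposed scheme, the loop body corresponds to a single physical layer whose outputs are latched and fed back as the next layer's inputs while new weights are loaded, which is what removes the per-layer multiplier arrays and inter-layer wiring that the abstract identifies as the main area cost.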


