DSpace Repository

An Adaptive Block Pinning Cache for Reducing Network Traffic in Multi-core Architectures


dc.contributor.author Chaturvedi, Nitin
dc.date.accessioned 2023-03-15T07:23:58Z
dc.date.available 2023-03-15T07:23:58Z
dc.date.issued 2013
dc.identifier.uri https://ieeexplore.ieee.org/document/6658033/keywords#keywords
dc.identifier.uri http://dspace.bits-pilani.ac.in:8080/xmlui/handle/123456789/9741
dc.description.abstract With the advent of new technologies, the exponential increase in multi-core processor (CMP) cache sizes, accompanied by growing on-chip wire delays, makes it difficult to implement traditional caches with a single, uniform access latency. Non-Uniform Cache Architecture (NUCA) designs have been proposed to address this issue. A NUCA partitions the complete cache memory into multiple smaller banks and allows banks near the processor cores to have lower access latencies than those farther away, thus reducing the effects of the cache's internal wire delays. Traditionally, NUCA organizations have been classified as static (S-NUCA) and dynamic (D-NUCA). While S-NUCA maps a data block to a unique bank in the NUCA cache, D-NUCA allows a data block to be mapped to multiple banks. In D-NUCA designs, data blocks can migrate towards the processor cores that access them most frequently, and this migration increases network traffic. The short lifetime of data blocks and the low spatial locality of many applications result in the eviction of blocks whose words are largely unused, which effectively increases the miss rate and wastes on-chip network bandwidth. Transfers of unused words also waste a large fraction of on-chip energy. In this paper, we present an efficient and implementable cache design that eliminates unnecessary coherence traffic and matches data movements to an application's spatial locality. It also presents one way to scale on-chip coherence with cost-effective techniques such as shared caches augmented to track cached copies, explicit eviction notification, and hierarchical design. Based on our scalability analysis of this cache design, we predict that it consistently reduces the miss rate and improves the fraction of transmitted data that is actually utilized by the application. en_US
dc.language.iso en en_US
dc.publisher IEEE en_US
dc.subject EEE en_US
dc.subject Non-Uniform Cache Architecture (NUCA) en_US
dc.subject Last Level Cache (LLC) en_US
dc.subject Multi-core Processors (CMP) en_US
dc.title An Adaptive Block Pinning Cache for Reducing Network Traffic in Multi-core Architectures en_US
dc.type Article en_US
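The S-NUCA/D-NUCA distinction in the abstract can be sketched in a few lines of Python. This is an illustrative model only, not the paper's design: the bank count, block size, and the simple one-step promotion policy are assumptions chosen to show the idea that S-NUCA fixes each block's bank while D-NUCA lets a block migrate toward the core that accesses it.

```python
NUM_BANKS = 16   # assumed bank count
BLOCK_BITS = 6   # assumed 64-byte cache blocks

def snuca_bank(addr: int) -> int:
    """S-NUCA: the bank holding a block is a fixed function of its address."""
    return (addr >> BLOCK_BITS) % NUM_BANKS

class DNucaBankSet:
    """D-NUCA sketch: a block may reside in any bank of its bank set and
    migrates one bank closer to the requesting core on each hit (gradual
    promotion), which is what generates the extra network traffic the
    abstract refers to."""

    def __init__(self, banks):
        self.banks = list(banks)   # ordered nearest-to-farthest from the core
        self.location = {}         # block address -> index into self.banks

    def insert(self, addr: int) -> None:
        # New blocks fill in the farthest bank of the set.
        self.location[addr] = len(self.banks) - 1

    def access(self, addr: int) -> str:
        # On a hit, promote the block one bank closer to the core.
        i = self.location[addr]
        if i > 0:
            self.location[addr] = i - 1
        return self.banks[self.location[addr]]
```

For example, a block inserted into a four-bank set is first served from the third-nearest bank, then the second-nearest, and so on, until it is pinned in the bank adjacent to the core that keeps hitting on it.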


Files in this item

There are no files associated with this item.


