
Please use this identifier to cite or link to this item: http://dspace.bits-pilani.ac.in:8080/jspui/xmlui/handle/123456789/9741
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chaturvedi, Nitin | -
dc.date.accessioned | 2023-03-15T07:23:58Z | -
dc.date.available | 2023-03-15T07:23:58Z | -
dc.date.issued | 2013 | -
dc.identifier.uri | https://ieeexplore.ieee.org/document/6658033/keywords#keywords | -
dc.identifier.uri | http://dspace.bits-pilani.ac.in:8080/xmlui/handle/123456789/9741 | -
dc.description.abstract | With the advent of new technologies, the exponential increase in chip multiprocessor (CMP) cache sizes, accompanied by growing on-chip wire delays, makes it difficult to implement traditional caches with a single, uniform access latency. Non-Uniform Cache Architecture (NUCA) designs have been proposed to address this issue. A NUCA partitions the cache memory into multiple smaller banks and allows banks near the processor cores to have lower access latencies than those farther away, thus reducing the effects of the cache's internal wire delays. Traditionally, NUCA organizations have been classified as static (S-NUCA) and dynamic (D-NUCA). While S-NUCA maps a data block to a unique bank in the NUCA cache, D-NUCA allows a data block to be mapped to multiple banks. In D-NUCA designs, data blocks can migrate toward the processor cores that access them most frequently; this migration of data blocks increases network traffic. The short lifetime of data blocks and the low spatial locality of many applications result in the eviction of blocks in which few words were used, which effectively increases the miss rate and wastes on-chip network bandwidth. Transfers of unused words also waste a large fraction of on-chip energy. In this paper, we present an efficient and implementable cache design that eliminates unnecessary coherence traffic and matches data movement to an application's spatial locality. It also presents a way to scale on-chip coherence with cost-effective techniques such as shared caches augmented to track cached copies, explicit eviction notification, and hierarchical design. Based on our scalability analysis of this cache design, we predict that it consistently reduces the miss rate and improves the fraction of transmitted data that is actually utilized by the application. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.subject | EEE | en_US
dc.subject | Non-Uniform Cache Architecture (NUCA) | en_US
dc.subject | Last Level Cache (LLC) | en_US
dc.subject | Multi-core Processors (CMP) | en_US
dc.title | An Adaptive Block Pinning Cache for Reducing Network Traffic in Multi-core Architectures | en_US
dc.type | Article | en_US
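
The S-NUCA versus D-NUCA block placement described in the abstract can be sketched as follows. This is a minimal illustrative model only, not the paper's actual design: the class names, the bankset layout, and the one-step migration policy are all hypothetical assumptions.

```python
class SNuca:
    """Static NUCA: each block maps to exactly one bank (sketch)."""

    def __init__(self, num_banks):
        self.num_banks = num_banks

    def bank_for(self, block_addr):
        # Fixed mapping: a block always lives in the same bank,
        # so its access latency is determined at placement time.
        return block_addr % self.num_banks


class DNuca:
    """Dynamic NUCA sketch: a block may reside in any bank of its
    bankset and migrates one bank at a time toward the core that
    accesses it (a hypothetical migration policy)."""

    def __init__(self, num_banks, banks_per_set):
        self.num_banks = num_banks
        self.banks_per_set = banks_per_set
        self.location = {}  # block_addr -> current bank

    def bankset_for(self, block_addr):
        # Each block hashes to a contiguous group of banks.
        num_sets = self.num_banks // self.banks_per_set
        first = (block_addr % num_sets) * self.banks_per_set
        return list(range(first, first + self.banks_per_set))

    def access(self, block_addr, core_bank):
        """Look up the block and migrate it one step toward the bank
        nearest the requesting core. Each migration is extra on-chip
        traffic, which is the cost the abstract points out."""
        banks = self.bankset_for(block_addr)
        cur = self.location.get(block_addr, banks[-1])  # insert far away
        target = min(banks, key=lambda b: abs(b - core_bank))
        idx, tgt_idx = banks.index(cur), banks.index(target)
        if idx < tgt_idx:
            idx += 1
        elif idx > tgt_idx:
            idx -= 1
        self.location[block_addr] = banks[idx]
        return banks[idx]
```

For example, with 8 banks in banksets of 4, repeated accesses from a core near bank 4 pull a block from the far end of its bankset (bank 7) one bank closer per access until it is pinned at bank 4; S-NUCA, by contrast, returns the same bank for every access.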
Appears in Collections: Department of Electrical and Electronics Engineering

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.