Improved caching for modern processors

Bibliographic Details
Main Author: Zhang, Wei
Other Authors: Fan, Rui
Format: Thesis
Language: English
Published: 2016
Subjects:
Online Access:http://hdl.handle.net/10356/66398
Description
Summary: Cache memory is one of the most important components of a computer system. The cache allows frequently used data items to be accessed quickly, avoiding the high latency and limited bandwidth of main memory. Caching has been studied extensively for single-threaded computer systems, but new problems arise in modern processor architectures that allow multiple threads to execute concurrently. These systems often feature a shared cache accessible by all threads. Thus, in addition to deciding which data items to cache and evict, we also need to determine which thread's accesses to service at each time step. Moreover, since data from one thread may be evicted by other threads, cache contention becomes an important issue. Overall system performance is strongly affected by the control and allocation policies for this shared cache, and this forms the topic of the first part of this thesis.

In addition to its impact on performance, the cache also consumes substantial energy; for many processors it accounts for almost half of the overall energy consumption. Yet this energy is sometimes wasted when running applications with limited cache demand. Energy considerations are becoming increasingly important in the modern era of mobile devices and large-scale data centers, and the second part of this thesis focuses on methods to minimize the energy consumption of caching while maintaining competitive system performance.
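To illustrate the two decisions a shared-cache controller faces at each time step (which thread's pending access to service, and which cached item to evict on a miss), the following minimal Python sketch uses round-robin servicing and LRU eviction. These policies, and all names in the sketch, are assumptions chosen only for illustration; they are not the policies studied in the thesis.

    from collections import OrderedDict, deque

    class SharedCache:
        """Toy shared cache: round-robin thread servicing + LRU eviction (illustrative only)."""
        def __init__(self, capacity, num_threads):
            self.capacity = capacity
            self.cache = OrderedDict()                            # item -> None, ordered by recency
            self.queues = [deque() for _ in range(num_threads)]   # per-thread pending accesses
            self.next_thread = 0

        def request(self, thread_id, item):
            """Enqueue an access from a thread."""
            self.queues[thread_id].append(item)

        def step(self):
            """Service one access per time step; returns (thread, item, 'hit'/'miss') or None."""
            n = len(self.queues)
            for i in range(n):                                    # pick next non-empty queue, round robin
                t = (self.next_thread + i) % n
                if self.queues[t]:
                    self.next_thread = (t + 1) % n
                    item = self.queues[t].popleft()
                    if item in self.cache:                        # hit: refresh recency
                        self.cache.move_to_end(item)
                        return (t, item, "hit")
                    if len(self.cache) >= self.capacity:
                        self.cache.popitem(last=False)            # evict least recently used item
                    self.cache[item] = None
                    return (t, item, "miss")
            return None                                           # no pending accesses

    # Example: two threads sharing a 2-entry cache
    c = SharedCache(capacity=2, num_threads=2)
    c.request(0, "A"); c.request(1, "B"); c.request(0, "A")
    print(c.step(), c.step(), c.step())   # (0, 'A', 'miss'), (1, 'B', 'miss'), (0, 'A', 'hit')

Contention appears in this setting when one thread's misses evict items another thread recently cached; the thesis studies how the control and allocation policies for the shared cache can be designed to improve overall performance.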