Abstract:
Cache memory expedites data retrieval in processors of a heterogeneous multi-core architecture and is a major factor in both system performance and power consumption. Current cache replacement algorithms in heterogeneous multi-core environments are thread-blind, which lowers cache utilization. In fact, CPU and GPU applications have distinct characteristics: the CPU is responsible for task execution and serial logic control, while the GPU has a great advantage in parallel computing, so CPU workloads are more sensitive to the availability of cache blocks than GPU workloads are. With this in mind, this research incorporates thread priority into the cache replacement algorithm and adopts a novel strategy to improve the efficiency of the last-level cache (LLC), in which CPU and GPU applications share the LLC dynamically rather than in a strictly fair partition. Furthermore, our method switches between the LRU (Least Recently Used) and LFU (Least Frequently Used) policies by comparing the number of cache misses on the LLC, thereby taking both the recency and the frequency of cache-block accesses into consideration. The experimental results indicate that this optimization method can effectively improve system performance.
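The policy switch described above can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: the class name, the per-policy miss counters, and the naive switch rule (flip whenever the other policy has fewer recorded misses) are all illustrative simplifications of the idea of choosing between LRU and LFU eviction based on observed LLC misses.

```python
from collections import OrderedDict, Counter

class AdaptiveCache:
    """Toy cache that switches between LRU and LFU eviction by comparing
    miss counts attributed to each policy (illustrative sketch only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()            # recency order for LRU
        self.freq = Counter()                 # access counts for LFU
        self.policy = "LRU"
        self.misses = {"LRU": 0, "LFU": 0}

    def access(self, block):
        """Return True on a hit, False on a miss (with insertion)."""
        if block in self.store:
            self.store.move_to_end(block)     # refresh recency
            self.freq[block] += 1
            return True
        # Miss: charge it to the currently active policy.
        self.misses[self.policy] += 1
        if len(self.store) >= self.capacity:
            self._evict()
        self.store[block] = None
        self.freq[block] = 1
        # Switch if the other policy has accumulated fewer misses.
        other = "LFU" if self.policy == "LRU" else "LRU"
        if self.misses[other] < self.misses[self.policy]:
            self.policy = other
        return False

    def _evict(self):
        if self.policy == "LRU":
            victim, _ = self.store.popitem(last=False)  # oldest block
        else:
            # LFU: evict the least-frequently-used resident block.
            victim = min(self.store, key=lambda b: self.freq[b])
            del self.store[victim]
        del self.freq[victim]
```

A real LLC controller would track misses per set (e.g., via sampled "dueling" sets) rather than globally, but the core decision, recency-based versus frequency-based eviction chosen by miss feedback, is the same.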