Research Article

Cache Friendly and Capacity Conscious Scheduling in Multi-core Systems

by Sheela Kathavate, N. K. Srinath
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 178 - Number 21
Year of Publication: 2019
Authors: Sheela Kathavate, N. K. Srinath
DOI: 10.5120/ijca2019919077

Sheela Kathavate, N. K. Srinath. Cache Friendly and Capacity Conscious Scheduling in Multi-core Systems. International Journal of Computer Applications. 178, 21 (Jun 2019), 39-43. DOI=10.5120/ijca2019919077

@article{10.5120/ijca2019919077,
  author     = {Sheela Kathavate and N. K. Srinath},
  title      = {Cache Friendly and Capacity Conscious Scheduling in Multi-core Systems},
  journal    = {International Journal of Computer Applications},
  issue_date = {Jun 2019},
  volume     = {178},
  number     = {21},
  month      = {Jun},
  year       = {2019},
  issn       = {0975-8887},
  pages      = {39-43},
  numpages   = {5},
  url        = {https://ijcaonline.org/archives/volume178/number21/30662-2019919077/},
  doi        = {10.5120/ijca2019919077},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
%0 Journal Article
%A Sheela Kathavate
%A N. K. Srinath
%T Cache Friendly and Capacity Conscious Scheduling in Multi-core Systems
%J International Journal of Computer Applications
%@ 0975-8887
%V 178
%N 21
%P 39-43
%D 2019
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Current-generation high-performance multi-core processors have large cache memories that are shared by multiple cores. Threads running concurrently on the cores do not always demand the entire capacity of the shared cache, but when threads on different cores access the shared cache at the same time, inter-thread conflicts and a shortage of cache space can raise the cache miss rate and degrade performance significantly. Cache capacity here refers to the amount of physical cache memory available to the processor. Efficient use of the shared cache therefore plays a defining role in achieving higher performance on multi-core processors, and overall processor performance becomes increasingly sensitive to capacity shortage as threads sharing the cache compete for their required cache space. In this paper, a cache-friendly and capacity-conscious thread scheduling strategy is proposed for multi-core processors with multiple shared caches. The proposed scheduling policy ensures that the shared cache is used effectively by the competing threads, which minimizes inter-thread resource conflicts and hence reduces performance degradation. Experimental results show that the proposed policy reduces shared cache contention significantly, improving overall performance among threads by up to 5%.
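
As an illustration of the general idea described in the abstract, the sketch below co-schedules threads onto shared-cache domains so that the combined estimated cache footprint of each domain stays within its capacity where possible. This is not the authors' scheduler: the Thread and CacheDomain structures, the greedy largest-first heuristic, and the footprint figures in the example are assumptions made for illustration, and real footprint estimates would typically come from performance counters or profiling.

# Illustrative sketch in Python (not the paper's implementation): greedily
# assign threads to shared-cache domains so that cache-hungry threads are
# spread across caches rather than piled onto one.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Thread:
    tid: int
    footprint_kb: int          # estimated working-set size in the shared cache

@dataclass
class CacheDomain:
    capacity_kb: int           # size of the shared last-level cache
    threads: List[Thread] = field(default_factory=list)

    def used_kb(self) -> int:
        return sum(t.footprint_kb for t in self.threads)

def schedule(threads: List[Thread], domains: List[CacheDomain]) -> None:
    # Place the largest footprints first, each into the domain with the most
    # remaining cache capacity, to minimize inter-thread capacity conflicts.
    for t in sorted(threads, key=lambda t: t.footprint_kb, reverse=True):
        target = max(domains, key=lambda d: d.capacity_kb - d.used_kb())
        target.threads.append(t)

if __name__ == "__main__":
    # Two shared 8 MB caches and four threads with hypothetical footprints.
    domains = [CacheDomain(capacity_kb=8192), CacheDomain(capacity_kb=8192)]
    threads = [Thread(0, 6000), Thread(1, 5000), Thread(2, 1500), Thread(3, 1000)]
    schedule(threads, domains)
    for i, d in enumerate(domains):
        print(f"cache {i}: threads {[t.tid for t in d.threads]}, "
              f"used {d.used_kb()}/{d.capacity_kb} KB")

A production scheduler would re-estimate footprints periodically and migrate threads when contention is detected at run time; that machinery is omitted from this sketch.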

Index Terms

Computer Science
Information Sciences

Keywords

CMP, Cache Capacity, Thread Scheduling, Shared Cache