
Bridging the Performance Gap between Manual and Automatic Compilers with Intent-based Compilation

International Journal of Computer Applications
© 2014 by IJCA Journal
Volume 86 - Number 9
Year of Publication: 2014
Authors:
Waseem Ahmed
DOI: 10.5120/15010-3303

Waseem Ahmed. Article: Bridging the Performance Gap between Manual and Automatic Compilers with Intent-based Compilation. International Journal of Computer Applications 86(9):1-7, January 2014. Full text available.

@article{key:article,
	author = {Waseem Ahmed},
	title = {Article: Bridging the Performance Gap between Manual and Automatic Compilers with Intent-based Compilation},
	journal = {International Journal of Computer Applications},
	year = {2014},
	volume = {86},
	number = {9},
	pages = {1-7},
	month = {January},
	note = {Full text available}
}

Abstract

Despite years of research in automatic parallelization, progress has been slow toward tools that can consistently generate scalable, portable and efficient code for multiple architectures. Moreover, a substantial efficiency gap remains between automatically generated code and code written by an expert programmer. Although it is well known that the best sequential algorithm for a problem can be very different from the best parallel algorithm, algorithm substitution is absent from most tools available today. Automatically identifying the algorithm used in code is not trivial, however, given the nuances of programming style and the variety of algorithmic representations and expressions. This paper presents a novel Intent-based Compilation approach that uses a rule-based Expert System Engine to identify the intent of the algorithm in the code from fine- and coarse-grained features extracted from it. Using this information, the algorithm best optimized for the target architecture is then substituted. Results obtained by applying Amoeba, a framework that incorporates this methodology, to codes obtained from the public domain are presented.
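The approach described in the abstract (extract features from code, match them against expert-system rules to infer the algorithm's intent, then substitute an optimized variant for the target architecture) can be illustrated with a minimal sketch. Note that this is not the Amoeba framework itself: all names here (`Rule`, `infer_intent`, `substitute`, the feature strings and substitution table) are hypothetical, chosen only to show the general shape of rule-based intent matching.

```python
# Illustrative sketch of intent-based compilation, not Amoeba's actual API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    intent: str          # algorithm the features suggest
    required: frozenset  # features that must all be present to fire

# Hypothetical fine- and coarse-grained features a front end might extract.
RULES = [
    Rule("matrix_multiply", frozenset({"triple_nested_loop", "multiply_accumulate"})),
    Rule("bubble_sort", frozenset({"double_nested_loop", "adjacent_swap"})),
]

# (intent, target architecture) -> optimized replacement; illustrative only.
SUBSTITUTIONS = {
    ("matrix_multiply", "multicore"): "blocked_parallel_gemm",
    ("bubble_sort", "multicore"): "parallel_merge_sort",
}

def infer_intent(features):
    """Return the intent of the first rule whose required features all match."""
    for rule in RULES:
        if rule.required <= features:
            return rule.intent
    return None  # no rule fired: fall back to conventional compilation

def substitute(features, target):
    """Map extracted features to an optimized algorithm for the target."""
    intent = infer_intent(features)
    return SUBSTITUTIONS.get((intent, target))
```

For example, a loop nest exhibiting `{"double_nested_loop", "adjacent_swap"}` would be classified as a bubble sort and replaced, on a multicore target, by a parallel merge sort; this mirrors the paper's point that the best sequential and parallel algorithms for a problem can differ.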
