
Comparative Study of Parallel Programming Models to Compute Complex Algorithm

International Journal of Computer Applications
© 2014 by IJCA Journal
Volume 96 - Number 19
Year of Publication: 2014
Authors:
Mukul Sharma
Pradeep Soni
DOI: 10.5120/16900-6961

Mukul Sharma and Pradeep Soni. "Comparative Study of Parallel Programming Models to Compute Complex Algorithm". International Journal of Computer Applications 96(19):9-12, June 2014. Full text available.

@article{sharma2014comparative,
	author = {Mukul Sharma and Pradeep Soni},
	title = {Comparative Study of Parallel Programming Models to Compute Complex Algorithm},
	journal = {International Journal of Computer Applications},
	year = {2014},
	volume = {96},
	number = {19},
	pages = {9-12},
	month = {June},
	note = {Full text available}
}

Abstract

The main goal of this research is to design an effective parallel matrix multiplication algorithm using OpenMP, POSIX Threads, and the Microsoft Parallel Patterns Library (PPL), and to compare the speedup each library achieves. First, a simple program is written that multiplies a predetermined matrix and reports the result after compilation and execution; at this stage only a single processor core performs the computation. Next, OpenMP, POSIX Threads, and PPL constructs are added separately to parallelize the computation, allowing it to run on multiple cores of the processor. A timer function is then added to the code to measure how long the computation takes. The program is run first without any parallel library and then with each of the OpenMP, POSIX Threads, and PPL versions, and results are collected for each input matrix size. Up to five trials are conducted for each input size, and the time taken to perform the parallel matrix multiplication is recorded. Finally, the performance of OpenMP, POSIX Threads, and PPL is compared in terms of execution time and speedup across different matrix dimensions and different numbers of processors.

References

  • Blaise Barney, "Introduction to Parallel Computing", Lawrence Livermore National Laboratory, January 2009.
  • Anshul Gupta, "Introduction to Parallel Computing", IBM T. J. Watson Research Center, Yorktown Heights, 2003.
  • George Karypis, "Parallel Algorithms and Applications", University of Minnesota, Minneapolis, March 2012.
  • George Mozdzynski, "Concepts of Parallel Computing", European Centre for Medium-Range Weather Forecasts, March 2012.
  • S. Salvini, "Unlocking the Power of OpenMP", invited lecture at the 5th European Workshop on OpenMP (EWOMP '03), September 2003.
  • Dheeraj Bhardwaj, "Parallel Computing: A Key to Performance", Department of Computer Science & Engineering, Indian Institute of Technology Delhi, August 2011.
  • R. Parikh, "Accelerating Quicksort on the Intel Pentium 4 Processor with Hyper-Threading Technology", Intel Software Community, October 2007.
  • Werner Backes and Susanne Wetzel, "A Parallel LLL using POSIX Threads", Department of Computer Science, Stevens Institute of Technology.
  • J. Balart, A. Duran, M. González, X. Martorell, E. Ayguadé, and J. Labarta, "Nanos Mercurium: A Research Compiler for OpenMP", 6th European Workshop on OpenMP (EWOMP '04), pages 103-109, September 2004.
  • D. an Mey, "Two OpenMP Programming Patterns", Proceedings of the Fifth European Workshop on OpenMP (EWOMP '03), September 2003.