
Communication Optimization for Multi GPU Implementation of Smith-Waterman Algorithm

International Journal of Computer Applications
© 2013 by IJCA Journal
Volume 80 - Number 12
Year of Publication: 2013
Authors:
Sampath Kumar N V S S P
P. K. Baruah
DOI: 10.5120/13910-1121

Sampath Kumar N V S S P and P. K. Baruah. Communication Optimization for Multi GPU Implementation of Smith-Waterman Algorithm. International Journal of Computer Applications 80(12):1-7, October 2013.

@article{key:article,
	author = {Sampath Kumar N V S S P and P. K. Baruah},
	title = {Article: Communication Optimization for Multi GPU Implementation of Smith-Waterman Algorithm},
	journal = {International Journal of Computer Applications},
	year = {2013},
	volume = {80},
	number = {12},
	pages = {1-7},
	month = {October},
	note = {Full text available}
}

Abstract

GPU parallelism can deliver enormous performance gains for real applications, but CPU-GPU communication is one of the major bottlenecks limiting those gains. Among the libraries developed to optimize this communication, DyManD (Dynamically Managed Data) provides effective optimization strategies and achieves good performance on a single GPU. Smith-Waterman is a well-known algorithm in computational biology for finding functional similarities in a protein database, and its CUDA implementation speeds up sequence matching against such databases. When the input database is large, a multi-GPU implementation outperforms a single-GPU one. Since the algorithm operates on large databases, optimizing CPU-GPU communication is essential; DyManD, however, provides efficient data management and communication optimization only for a single GPU. To extend communication optimization to multiple GPUs, we propose combining DyManD with GPUWorker, a multi-threaded master/slave framework. Our contribution is an optimized multi-GPU CUDA implementation of the Smith-Waterman algorithm, GPUWorker-DyManD, which combines DyManD's data management with GPUWorker's multi-GPU scheduling to reduce communication overhead between the CPU and multiple GPUs. GPUWorker-DyManD achieves a 3.5x performance gain over the default multi-GPU implementation.
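For readers unfamiliar with the algorithm the abstract discusses, the following is a minimal CPU reference sketch of the Smith-Waterman local-alignment recurrence (not the paper's CUDA kernel, and not using DyManD or GPUWorker). The linear gap penalty and the match/mismatch scores are illustrative assumptions, not values taken from the paper.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score of sequences a and b.

    H[i][j] holds the best score of any local alignment ending at
    a[i-1], b[j-1]; the recurrence clamps at 0, which is what makes
    the alignment local. Scoring parameters are illustrative.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(
                0,                      # start a new local alignment
                H[i - 1][j - 1] + s,    # align a[i-1] with b[j-1]
                H[i - 1][j] + gap,      # gap in b
                H[i][j - 1] + gap,      # gap in a
            )
            best = max(best, H[i][j])
    return best
```

The anti-diagonal cells of `H` are mutually independent, which is what CUDA implementations exploit for parallelism, and the quadratic-size matrix over a large protein database is the source of the CPU-GPU data traffic that DyManD-style communication optimization targets.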

References

  • D. M. Dang, C. Christara and K. Jackson. GPU pricing of exotic cross-currency interest rate derivatives with a foreign exchange volatility skew model. SSRN eLibrary, 2010.
  • Thomas B. Jablin, Prakash Prabhu, James A. Jablin, Nick P. Johnson, Stephen R. Beard, and David I. August. Automatic cpu-gpu communication management and optimization. In Mary W. Hall and David A. Padua, editors, PLDI , pages 142-151. ACM, 2011.
  • NVIDIA Corporation. CUDA C Best Practices Guide 3.2, 2010.
  • Junjie Li, Sanjay Ranka, and Sartaj Sahni. Pairwise sequence alignment for very long sequences on GPU.
  • A. Basumallik and R. Eigenmann. Optimizing irregular shared-memory applications for distributed-memory systems. Number 3, 2006.
  • Thomas B. Jablin, James A. Jablin, Prakash Prabhu, Feng Liu, and David I. August. Dynamically managed data for cpu-gpu architectures. In Proceedings of the Tenth International Symposium on Code Generation and Optimization, CGO '12, pages 165-174, New York, NY, USA, 2012. ACM.
  • GPUWorker master/slave multi-GPU approach. https://devtalk.nvidia.com/default/topic/390598/gpuworker-master-slave-multi-gpu-approach/