Research Article

High Performance Linpack (HPL) Benchmark on Raspberry Pi 4B (8GB) Beowulf Cluster

by Dimitrios Papakyriakou, Ioannis S. Barbounakis
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 185 - Number 25
Year of Publication: 2023
DOI: 10.5120/ijca2023923005

Dimitrios Papakyriakou and Ioannis S. Barbounakis. High Performance Linpack (HPL) Benchmark on Raspberry Pi 4B (8GB) Beowulf Cluster. International Journal of Computer Applications 185(25):11-19, Jul 2023. DOI=10.5120/ijca2023923005

@article{10.5120/ijca2023923005,
  author     = {Dimitrios Papakyriakou and Ioannis S. Barbounakis},
  title      = {High Performance Linpack (HPL) Benchmark on Raspberry Pi 4B (8GB) Beowulf Cluster},
  journal    = {International Journal of Computer Applications},
  issue_date = {Jul 2023},
  volume     = {185},
  number     = {25},
  month      = {Jul},
  year       = {2023},
  issn       = {0975-8887},
  pages      = {11-19},
  numpages   = {9},
  url        = {https://ijcaonline.org/archives/volume185/number25/32847-2023923005/},
  doi        = {10.5120/ijca2023923005},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
Abstract

This paper presents a High Performance Linpack (HPL) benchmarking performance analysis of a state-of-the-art Beowulf cluster built from 24 Raspberry Pi 4 Model B (8GB RAM) computers, each with a 64-bit quad-core ARMv8 Cortex-A72 CPU clocked at 1.5 GHz. In particular, it compares the HPL performance of the cluster when every RPi in the cluster uses the default microSD storage (SDCS2 64GB microSDXC 100R A1 C10) against a set-up in which the master node uses a Samsung 980 (1TB) PCIe 3 NVMe M.2 SSD and each slave node uses a Patriot P300P256GM28 (256GB) NVMe M.2 2280 SSD. Moreover, it presents the test results of a multithreaded C++ pi-calculation program executed on one to four cores of a single RPi 4B (8GB) using the above-mentioned microSD, as well as the results of a multithreaded C++ with MPI pi-calculation program executed across all 24 RPi 4Bs with the same microSD storage. For the HPL benchmarking of the cluster with the NVMe M.2 SSD disks, the RPi 4B's support for booting from an external disk was used, so that both the boot partition and the root partition (where HPL actually runs) are hosted on the external SSD. All nodes are connected over two Gigabit switches (TL-SG1024D) and operate in parallel so as to form a cluster-based supercomputer.

References
  1. Beowulf Computer Cluster. [Online]. Available: https://www.spacefoundation.org/space_technology_hal/beowulf-computing-cluster/.
  2. Sterling T. 2001. Beowulf clusters computing with Linux. Cambridge, Massachusetts: MIT Press.
  3. MPI. MPI Forum. [Online]. Available: http://mpi-forum.org/
  4. MPI. MPICH. [Online]. Available: https://www.mpich.org/
  5. Five Trends Shaping Next-gen, Data-intensive Supercomputing. [Online]. Available: https://www.huawei.com/en/huaweitech/publication/202202/data-intensive-supercomputing.
  6. An Analysis of System Balance and Architectural Trends Based on Top500 Supercomputers. [Online]. Available: https://dl.acm.org/doi/10.1145/3432261.3432263.
  7. High-Performance Linpack (HPL) benchmarking on UL HPC platform. [Online]. Available: https://ulhpc-tutorials.readthedocs.io/en/latest/parallel/mpi/HPL/
  8. Open Basic Linear Algebra Subprograms (OpenBLAS). [Online]. Available: http://sporadic.stanford.edu/reference/spkg/openblas.html.
  9. Raspberry Pi 4 Model B. [Online]. Available: https://www.raspberrypi.com/products/raspberry-pi-4-model-b/.
  10. Dimitrios Papakyriakou and Ioannis S Barbounakis. Benchmarking and Review of Raspberry Pi (RPi) 2B vs RPi 3B vs RPi 3B+ vs RPi 4B (8GB). International Journal of Computer Applications 185(3):37-52, April 2023.
  11. Raspberry Pi Operating System images. [Online]. Available: https://www.raspberrypi.com/software/operating-systems/#raspberry-pi-os-64-bit
  12. Message Passing Interface Chameleon MPICH. [Online]. Available: https://www.mpich.org/
  13. Netlib. HPL. [Online]. Available: http://www.netlib.org/benchmark/hpl/
  14. LU factorization. [Online]. Available: https://www.geeksforgeeks.org/l-u-decomposition-system-linear-equations/
  15. Mathematics. LU Decomposition of a System of Linear Equations. [Online]. Available: https://www.geeksforgeeks.org/l-u-decomposition-system-linear-equations/
  16. Dimitrios Papakyriakou, Dimitra Kottou and Ioannis Kostouros. Benchmarking Raspberry Pi 2 Beowulf Cluster. International Journal of Computer Applications 179(32):21-27, April 2018.
  17. Petitet, A., R. C. Whaley, J. Dongarra, and A. Cleary. “HPL – A portable Implementation of the High-Performance Linpack Benchmark for Distributed-Memory Computers.” Accessed December 15, 2016
  18. Dunlop, D., Varrette, S. and Bouvry, P. 2010. Deskilling HPL, Vol. 6068 of Lecture Notes in Computer Science, Springer, Heidelberg, Berlin, 102–114.
  19. Luszczek, P., Dongarra, J., Koester, D., Rabenseifner, R., Lucas, B., Kepner, J., McCalpin, J., Bailey, D. and Takahashi, D. 2005. Introduction to the HPC Challenge Benchmark Suite, Technical Report, ICL, University of Tennessee at Knoxville.
  20. Netlib. HPL Tuning. http://www.netlib.org/benchmark/hpl/tuning.html#tips
  21. Dunlop, D., Varrette, S. and Bouvry, P. 2008. On the use of a genetic algorithm in high performance computer benchmark tuning, Proceedings of the International Symposium on Performance Evaluation of Computer and Telecommunication Systems, SPECTS 2008, Art. No.:4667550, 105-113
  22. Mathieu Gaillard. (August 2022) How to compile HPL LINPACK on Ubuntu 22.04. [Online]. Available: https://www.mgaillard.fr/2022/08/27/benchmark-with-hpl.html
  23. HPL Frequently Asked Questions. [Online]. Available: http://www.netlib.org/benchmark/hpl/faqs.html
  24. Sindi, M. 2009. HowTo – High Performance Linpack (HPL), Technical Report, Center for Research Computing, University of Notre Dame
  25. NETTOP. Approved and Official Raspberry Pi Reseller in Greece. [Online]. Available: https://nettop.gr/
Index Terms

Computer Science
Information Sciences

Keywords

Raspberry Pi 4 cluster, Beowulf Cluster, Message Passing Interface (MPI), MPICH, BLAS, High Performance Linpack (HPL), Benchmarking, HPL, RPi clusters, Distributed Systems.