Research Article

Architecture of SIMD Type Vector Processor

by Mohammad Suaib, Abel Palaty, Kumar Sambhav Pandey
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 20 - Number 4
Year of Publication: 2011
Authors: Mohammad Suaib, Abel Palaty, Kumar Sambhav Pandey
DOI: 10.5120/2418-3233

Mohammad Suaib, Abel Palaty, Kumar Sambhav Pandey. Architecture of SIMD Type Vector Processor. International Journal of Computer Applications 20, 4 (April 2011), 42-45. DOI=10.5120/2418-3233

@article{10.5120/2418-3233,
  author     = {Mohammad Suaib and Abel Palaty and Kumar Sambhav Pandey},
  title      = {Architecture of SIMD Type Vector Processor},
  journal    = {International Journal of Computer Applications},
  issue_date = {April 2011},
  volume     = {20},
  number     = {4},
  month      = {April},
  year       = {2011},
  issn       = {0975-8887},
  pages      = {42-45},
  numpages   = {4},
  url        = {https://ijcaonline.org/archives/volume20/number4/2418-3233/},
  doi        = {10.5120/2418-3233},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
%0 Journal Article
%A Mohammad Suaib
%A Abel Palaty
%A Kumar Sambhav Pandey
%T Architecture of SIMD Type Vector Processor
%J International Journal of Computer Applications
%@ 0975-8887
%V 20
%N 4
%P 42-45
%D 2011
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Throughput and performance are major constraints in designing system-level models. A conventional vector processor uses deeply pipelined functional units, so operations on the elements of a vector overlap in the pipeline, but the elements are still issued one at a time. Vector processing can be improved by executing these operations in parallel so that they complete simultaneously. This paper presents the design and implementation of a SIMD-Vector processor that applies this parallelism to short vectors of 4 words: the operation on all four words is performed simultaneously, i.e. in a single cycle, which reduces the clock cycles per instruction (CPI). Implementing such parallelism requires parallel issue and execution of vector instructions. A vector processor operates on vectors, while a superscalar processor issues multiple instructions at a time; combining the two means building parallel pipelines and extending them to support vector data. The SIMD-Vector processor operates on a short vector, say 4 words, in a superscalar fashion: the 4 words are fetched together and executed in parallel. This requires replicated functional units, e.g. adding two vectors needs multiple adders. We have designed the architecture of such a SIMD type Vector processor, and all design parameters are explained.
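The 4-word, single-cycle execution model described above can be illustrated in software. The following is a minimal C++ sketch, not taken from the paper: VecReg, simd_add, and kVectorLength are hypothetical names, and the loop merely enumerates the four replicated adders that the hardware would drive simultaneously.

// Minimal C++ sketch (not the authors' design): models a 4-word vector register
// and a SIMD vector add that conceptually uses four parallel adders, so all four
// word results appear in one "cycle" instead of one element per cycle.
#include <array>
#include <cstdint>
#include <iostream>

constexpr int kVectorLength = 4;                          // short vector of 4 words
using VecReg = std::array<std::uint32_t, kVectorLength>;  // hypothetical vector register

// One SIMD vector-add "instruction": element i is handled by adder i.
// In hardware the four additions happen simultaneously; the loop below
// only enumerates the replicated functional units.
VecReg simd_add(const VecReg& a, const VecReg& b) {
    VecReg result{};
    for (int i = 0; i < kVectorLength; ++i) {
        result[i] = a[i] + b[i];                          // adder i, same cycle
    }
    return result;
}

int main() {
    VecReg a{1, 2, 3, 4};
    VecReg b{10, 20, 30, 40};
    VecReg c = simd_add(a, b);                            // 4 results per instruction -> lower CPI
    for (std::uint32_t w : c) std::cout << w << ' ';      // prints: 11 22 33 44
    std::cout << '\n';
    return 0;
}

Replicating the adder four times is the space-for-time trade the abstract mentions: one vector-add instruction retires four word results per cycle rather than one.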

Index Terms

Computer Science
Information Sciences

Keywords

SIMD type Vector processor, vertical and horizontal parallelism, ILP