
Multiple instruction, single data


In computing, multiple instruction, single data (MISD) is a type of parallel computing architecture where many functional units perform different operations on the same data. Pipeline architectures belong to this type, though a purist might say that the data is different after processing by each stage in the pipeline. Fault tolerance achieved by executing the same instructions redundantly in order to detect and mask errors, in a manner known as task replication, may also be considered to belong to this type. Applications for this architecture are much less common than MIMD and SIMD, as the latter two are often more appropriate for common data-parallel techniques; specifically, they allow better scaling and use of computational resources. However, one prominent example of MISD in computing is the Space Shuttle flight control computers.[2]
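
As a rough illustration of the fault-tolerance flavour of MISD, the sketch below feeds one and the same datum to several replicated functional units executing the same nominal computation and masks a disagreeing unit by majority voting. It is only a minimal model: the function names, the lambda "units" and the simple voting rule are assumptions made for illustration, not a description of the Space Shuttle's actual redundancy management.

    # Minimal sketch of MISD-style task replication: the same instruction
    # stream runs on several redundant units, all applied to the same datum,
    # and a majority vote masks a single faulty unit. Names and the voting
    # rule are illustrative assumptions only.
    from collections import Counter

    def majority_vote(results):
        """Return the value produced by most replicas, masking a minority fault."""
        value, count = Counter(results).most_common(1)[0]
        if count <= len(results) // 2:
            raise RuntimeError("no majority: too many disagreeing replicas")
        return value

    def run_replicated(datum, replicas):
        """Apply every replica (same nominal computation) to the same datum."""
        return majority_vote([replica(datum) for replica in replicas])

    # Three nominally identical units; one is deliberately faulty.
    replicas = [
        lambda x: x * 2,      # healthy unit
        lambda x: x * 2,      # healthy unit
        lambda x: x * 2 + 1,  # faulty unit, masked by the vote
    ]
    print(run_replicated(21, replicas))  # prints 42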

Systolic arrays

Systolic arrays (a specialized form of wavefront processor), first described by H. T. Kung and Charles E. Leiserson, are an example of MISD architecture. In a typical systolic array, parallel input data flows through a network of hard-wired processor nodes which, in a manner resembling the human brain, combine, process, merge or sort the input data into a derived result.

Systolic arrays are often hard-wired for a specific operation, such as "multiply and accumulate", to perform massively parallel integration, convolution, correlation, matrix multiplication or data sorting tasks. A systolic array typically consists of a large monolithic network of primitive computing nodes, which can be hardwired or software-configured for a specific application. The nodes are usually fixed and identical, while the interconnect is programmable. More general wavefront processors, by contrast, employ sophisticated and individually programmable nodes which may or may not be monolithic, depending on the array size and design parameters. Because the wave-like propagation of data through a systolic array resembles the pulse of the human circulatory system, the name systolic was coined from medical terminology.
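
The multiply-and-accumulate data flow described above can be sketched in software. The simulation below assumes one common textbook arrangement: an n × n grid of MAC cells computing C = A × B, with rows of A streaming in from the left, columns of B streaming in from the top, and each operand skewed by one cycle per row or column. The function name, feeding scheme and cycle count are assumptions for illustration; a real systolic array realises the same flow in hard-wired, clocked hardware.

    # Illustrative software model of an n x n systolic array of
    # multiply-and-accumulate (MAC) cells computing C = A * B.
    # The skewed feeding scheme is one common textbook arrangement,
    # assumed here for illustration only.
    def systolic_matmul(A, B):
        n = len(A)
        acc = [[0] * n for _ in range(n)]    # each cell's accumulator
        a_reg = [[0] * n for _ in range(n)]  # A operands moving left -> right
        b_reg = [[0] * n for _ in range(n)]  # B operands moving top -> bottom

        for t in range(3 * n - 2):           # enough cycles for all skewed operands
            for i in reversed(range(n)):     # reverse order so neighbours still hold
                for j in reversed(range(n)): # their previous-cycle register values
                    # Operand from the left: neighbour's register, or a skewed
                    # element of A on the boundary.
                    if j > 0:
                        a_in = a_reg[i][j - 1]
                    else:
                        a_in = A[i][t - i] if 0 <= t - i < n else 0
                    # Operand from above: neighbour's register, or a skewed
                    # element of B on the boundary.
                    if i > 0:
                        b_in = b_reg[i - 1][j]
                    else:
                        b_in = B[t - j][j] if 0 <= t - j < n else 0
                    acc[i][j] += a_in * b_in  # the cell's multiply-and-accumulate
                    a_reg[i][j] = a_in        # pass A operand to the right neighbour
                    b_reg[i][j] = b_in        # pass B operand to the neighbour below
        return acc

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(systolic_matmul(A, B))  # prints [[19, 22], [43, 50]]

Note that in this sketch every operand and partial result moves only between neighbouring cells' registers; nothing is written back to a shared memory during the computation, which is the locality property discussed next.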

A significant benefit of systolic arrays is that all operand data and partial results are contained within (passing through) the processor array. There is no need to access external buses, main memory, or internal caches during each operation, as with standard sequential machines. The sequential limits on parallel performance dictated by Amdahl's law also do not apply in the same way, because data dependencies are implicitly handled by the programmable node interconnect.

Therefore, systolic arrays are extremely good at artificial intelligence, image processing, pattern recognition, computer vision, and other tasks that animal brains do exceptionally well. Wavefront processors, in general, can also be very good at machine learning by implementing self-configuring neural nets in hardware.

While systolic arrays are officially classified as MISD, their classification is somewhat problematic. Because the input is typically a vector of independent values, the systolic array is not SISD. Since these input values are merged and combined into the result(s) and do not maintain their independence as they would in a SIMD vector processing unit, the array cannot be classified as such. Consequently, the array cannot be classified as MIMD either, since MIMD can be viewed as a mere collection of smaller SISD and SIMD machines.

Finally, because the data swarm is transformed as it passes through the array from node to node, the multiple nodes are not operating on the same data, which makes the MISD classification a misnomer. The other reason why a systolic array should not qualify as MISD is the same as the one which disqualifies it from the SISD category: the input data is typically a vector, not a single data value, although one could argue that any given input vector is a single data set.

The above notwithstanding, systolic arrays are often offered as a classic example of MISD architecture in textbooks on parallel computing and in engineering classes. If the array is viewed from the outside as atomic, it should perhaps be classified as SFMuDMeR (single function, multiple data, merged result(s)).[3][4][5][6]

Footnotes

  1. ^ Flynn, Michael J. (September 1972). "Some Computer Organizations and Their Effectiveness" (PDF). IEEE Transactions on Computers. C-21 (9): 948–960. doi:10.1109/TC.1972.5009071.
  2. ^ Spector, A.; Gifford, D. (September 1984). "The space shuttle primary computer system". Communications of the ACM. 27 (9): 872–900. doi:10.1145/358234.358246. S2CID 39724471.
  3. ^ Flynn, Michael J.; Rudd, Kevin W. Parallel Architectures. CRC Press, 1996.
  4. ^ Quinn, Michael J. Parallel Programming in C with MPI and OpenMP. Boston: McGraw Hill, 2004.
  5. ^ Ibaroudene, Djaffer. "Parallel Processing, EG6370G: Chapter 1, Motivation and History." St Mary's University, San Antonio, TX. Spring 2008.
  6. ^ Null, Linda; Lobur, Julia (2006). The Essentials of Computer Organization and Architecture. Jones and Bartlett. p. 468.