Block matrix

From Wikipedia, the free encyclopedia

In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices.[1][2]

Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices.[3][2] For example, the 3×4 matrix presented below is divided by horizontal and vertical lines into four blocks: the top-left 2×3 block, the top-right 2×1 block, the bottom-left 1×3 block, and the bottom-right 1×1 block.

$$\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ \hline c_1 & c_2 & c_3 & d \end{array}\right]$$

Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned.

This notion can be made more precise for an $n$ by $m$ matrix $M$ by partitioning $n$ into a collection $\text{rowgroups}$, and then partitioning $m$ into a collection $\text{colgroups}$. The original matrix is then considered as the "total" of these groups, in the sense that the $(i, j)$ entry of the original matrix corresponds in a 1-to-1 way with some $(s, t)$ offset entry of some $(x, y)$, where $x \in \text{rowgroups}$ and $y \in \text{colgroups}$.[4]

Block matrix algebra arises in general from biproducts in categories of matrices.[5]

A 168×168 element block matrix with 12×12, 12×24, 24×12, and 24×24 sub-matrices. Non-zero elements are in blue, zero elements are grayed.

Example

The matrix

$$P = \begin{bmatrix} 1 & 2 & 2 & 7 \\ 1 & 5 & 6 & 2 \\ 3 & 3 & 4 & 5 \\ 3 & 3 & 6 & 7 \end{bmatrix}$$

can be visualized as divided into four blocks, as

$$P = \left[\begin{array}{cc|cc} 1 & 2 & 2 & 7 \\ 1 & 5 & 6 & 2 \\ \hline 3 & 3 & 4 & 5 \\ 3 & 3 & 6 & 7 \end{array}\right].$$

The horizontal and vertical lines have no special mathematical meaning,[6][7] but are a common way to visualize a partition.[6][7] By this partition, $P$ is partitioned into four 2×2 blocks, as

$$P_{11} = \begin{bmatrix} 1 & 2 \\ 1 & 5 \end{bmatrix}, \quad P_{12} = \begin{bmatrix} 2 & 7 \\ 6 & 2 \end{bmatrix}, \quad P_{21} = \begin{bmatrix} 3 & 3 \\ 3 & 3 \end{bmatrix}, \quad P_{22} = \begin{bmatrix} 4 & 5 \\ 6 & 7 \end{bmatrix}.$$

The partitioned matrix can then be written as

$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}.$$[8]
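The following NumPy sketch (an illustration, not drawn from the cited sources) reproduces this partition by slicing, and reassembles the original matrix with np.block:

```python
import numpy as np

P = np.array([[1, 2, 2, 7],
              [1, 5, 6, 2],
              [3, 3, 4, 5],
              [3, 3, 6, 7]])

# Slice out the four 2x2 blocks of the partition.
P11, P12 = P[:2, :2], P[:2, 2:]
P21, P22 = P[2:, :2], P[2:, 2:]

# np.block reassembles the partitioned matrix from its blocks.
assert np.array_equal(np.block([[P11, P12], [P21, P22]]), P)
```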

Formal definition

Let $A \in \mathbb{C}^{m \times n}$. A partitioning of $A$ is a representation of $A$ in the form

$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1q} \\ A_{21} & A_{22} & \cdots & A_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1} & A_{p2} & \cdots & A_{pq} \end{bmatrix},$$

where $A_{ij} \in \mathbb{C}^{m_i \times n_j}$ are contiguous submatrices, $m_1 + m_2 + \cdots + m_p = m$, and $n_1 + n_2 + \cdots + n_q = n$.[9] The elements $A_{ij}$ of the partition are called blocks.[9]

By this definition, the blocks in any one column must all have the same number of columns.[9] Similarly, the blocks in any one row must have the same number of rows.[9]

Partitioning methods

A matrix can be partitioned in many ways.[9] For example, a matrix $A$ is said to be partitioned by columns if it is written as

$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix},$$

where $a_j$ is the $j$th column of $A$.[9] A matrix can also be partitioned by rows:

$$A = \begin{bmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_m^T \end{bmatrix},$$

where $a_i^T$ is the $i$th row of $A$.[9]
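A minimal NumPy sketch (illustrative only): partitioning by columns or by rows corresponds to slicing along one axis, and stacking the pieces recovers the matrix.

```python
import numpy as np

A = np.arange(12).reshape(3, 4)

# Partition by columns: a list of the n column vectors of A.
cols = [A[:, [j]] for j in range(A.shape[1])]
assert np.array_equal(np.hstack(cols), A)

# Partition by rows: a list of the m row vectors of A.
rows = [A[[i], :] for i in range(A.shape[0])]
assert np.array_equal(np.vstack(rows), A)
```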

Common partitions

Often,[9] we encounter the 2×2 partition

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix},$$[9]

particularly in the form where $A_{11}$ is a scalar:

$$A = \begin{bmatrix} a_{11} & a_{12}^T \\ a_{21} & A_{22} \end{bmatrix}.$$[9]

Block matrix operations

Transpose

Let

$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1q} \\ A_{21} & A_{22} & \cdots & A_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1} & A_{p2} & \cdots & A_{pq} \end{bmatrix},$$

where $A_{ij} \in \mathbb{C}^{k_i \times \ell_j}$. (This matrix $A$ will be reused in § Addition and § Multiplication.) Then its transpose is

$$A^T = \begin{bmatrix} A_{11}^T & A_{21}^T & \cdots & A_{p1}^T \\ A_{12}^T & A_{22}^T & \cdots & A_{p2}^T \\ \vdots & \vdots & \ddots & \vdots \\ A_{1q}^T & A_{2q}^T & \cdots & A_{pq}^T \end{bmatrix},$$[9][10]

and the same equation holds with the transpose replaced by the conjugate transpose.[9]
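A minimal NumPy sketch (illustrative, not from the cited sources) checking this identity on a random 2×2 block partition:

```python
import numpy as np

rng = np.random.default_rng(0)
A11, A12 = rng.random((2, 3)), rng.random((2, 1))
A21, A22 = rng.random((1, 3)), rng.random((1, 1))
A = np.block([[A11, A12], [A21, A22]])

# Blockwise transpose: swap the block grid AND transpose each block.
blockwise = np.block([[A11.T, A21.T], [A12.T, A22.T]])
assert np.allclose(A.T, blockwise)
```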

Block transpose

A special form of matrix transpose can also be defined for block matrices, where individual blocks are reordered but not transposed. Let $A = (B_{ij})$ be an $m \times n$ block matrix with $q \times s$ blocks $B_{ij}$; the block transpose of $A$ is the $n \times m$ block matrix $A^{\mathcal{B}}$ with $q \times s$ blocks $\left(A^{\mathcal{B}}\right)_{ij} = B_{ji}$.[11] As with the conventional transpose, the block transpose is a linear mapping such that $(A + C)^{\mathcal{B}} = A^{\mathcal{B}} + C^{\mathcal{B}}$.[10] However, in general the property $(AC)^{\mathcal{B}} = C^{\mathcal{B}} A^{\mathcal{B}}$ does not hold unless the blocks of $A$ and $C$ commute.
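A minimal NumPy sketch (illustrative): storing the blocks as a 4-D array makes the block transpose a reordering of the block grid that leaves each block's entries untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
# blocks[i, j] is the (i, j) block: a 2 x 3 grid of 2x2 blocks.
blocks = rng.random((2, 3, 2, 2))

A = np.block([[blocks[i, j] for j in range(3)] for i in range(2)])  # 4 x 6

# Block transpose: swap the block grid indices, keep each block as-is.
A_btrans = np.block([[blocks[i, j] for i in range(2)] for j in range(3)])  # 6 x 4

# The ordinary transpose additionally transposes each block:
assert np.allclose(A.T, np.block([[blocks[i, j].T for i in range(2)]
                                  for j in range(3)]))
```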

Addition

Let

$$B = \begin{bmatrix} B_{11} & B_{12} & \cdots & B_{1s} \\ B_{21} & B_{22} & \cdots & B_{2s} \\ \vdots & \vdots & \ddots & \vdots \\ B_{r1} & B_{r2} & \cdots & B_{rs} \end{bmatrix},$$

where $B_{ij} \in \mathbb{C}^{m_i \times n_j}$, and let $A$ be the matrix defined in § Transpose. (This matrix $B$ will be reused in § Multiplication.) If $p = r$, $q = s$, $k_i = m_i$, and $\ell_j = n_j$, then

$$A + B = \begin{bmatrix} A_{11} + B_{11} & A_{12} + B_{12} & \cdots & A_{1q} + B_{1q} \\ A_{21} + B_{21} & A_{22} + B_{22} & \cdots & A_{2q} + B_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1} + B_{p1} & A_{p2} + B_{p2} & \cdots & A_{pq} + B_{pq} \end{bmatrix}.$$[9]

Multiplication

It is possible to use a block-partitioned matrix product that involves only algebra on submatrices of the factors. The partitioning of the factors is not arbitrary, however: it requires "conformable partitions"[12] between two matrices $A$ and $B$ such that all submatrix products that will be used are defined.[13]

Two matrices $A$ and $B$ are said to be partitioned conformally for the product $AB$, when $A$ and $B$ are partitioned into submatrices and if the multiplication $AB$ is carried out treating the submatrices as if they are scalars, but keeping the order, and when all products and sums of submatrices involved are defined.

— Arak M. Mathai and Hans J. Haubold, Linear Algebra: A Course for Physicists and Engineers[14]

Let $A$ be the matrix defined in § Transpose, and let $B$ be the matrix defined in § Addition. If $q = r$ and $\ell_k = m_k$ for all $k$, then the matrix product

$$C = AB$$

can be performed blockwise, yielding $C$ as a $p \times s$ block matrix. The matrices in the resulting matrix $C$ are calculated by multiplying:

$$C_{ij} = \sum_{k=1}^{q} A_{ik} B_{kj}.$$[6]

Or, using the Einstein notation that implicitly sums over repeated indices:

$$C_{ij} = A_{ik} B_{kj}.$$

Depicting $C$ as a matrix, we have

$$C = \begin{bmatrix} \sum_{k=1}^{q} A_{1k} B_{k1} & \cdots & \sum_{k=1}^{q} A_{1k} B_{ks} \\ \vdots & \ddots & \vdots \\ \sum_{k=1}^{q} A_{pk} B_{k1} & \cdots & \sum_{k=1}^{q} A_{pk} B_{ks} \end{bmatrix}.$$[9]
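A minimal NumPy sketch (illustrative): the blockwise product, computed purely on submatrices of conformably partitioned factors, agrees with the ordinary product.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((5, 7))
B = rng.random((7, 4))

# Conformable partitions: the column cut of A (at 3) must match the
# row cut of B; the row cut of A (at 2) and column cut of B (at 1) are free.
Ablk = [[A[:2, :3], A[:2, 3:]],
        [A[2:, :3], A[2:, 3:]]]
Bblk = [[B[:3, :1], B[:3, 1:]],
        [B[3:, :1], B[3:, 1:]]]

# C_ij = sum_k A_ik B_kj, computed purely on submatrices.
Cblk = [[sum(Ablk[i][k] @ Bblk[k][j] for k in range(2))
         for j in range(2)] for i in range(2)]

assert np.allclose(np.block(Cblk), A @ B)
```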

Inversion

If a matrix is partitioned into four blocks, it can be inverted blockwise as follows:

$$P^{-1} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1}B\left(D - CA^{-1}B\right)^{-1}CA^{-1} & -A^{-1}B\left(D - CA^{-1}B\right)^{-1} \\ -\left(D - CA^{-1}B\right)^{-1}CA^{-1} & \left(D - CA^{-1}B\right)^{-1} \end{bmatrix},$$

where $A$ and $D$ are square blocks of arbitrary size, and $B$ and $C$ are conformable with them for partitioning. Furthermore, $A$ and the Schur complement of $A$ in $P$, $P/A = D - CA^{-1}B$, must be invertible.[15]

Equivalently, by permuting the blocks:

$$P^{-1} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} \left(A - BD^{-1}C\right)^{-1} & -\left(A - BD^{-1}C\right)^{-1}BD^{-1} \\ -D^{-1}C\left(A - BD^{-1}C\right)^{-1} & D^{-1} + D^{-1}C\left(A - BD^{-1}C\right)^{-1}BD^{-1} \end{bmatrix}.$$[16]

Here, $D$ and the Schur complement of $D$ in $P$, $P/D = A - BD^{-1}C$, must be invertible.

If $A$ and $D$ are both invertible, then:

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} \left(A - BD^{-1}C\right)^{-1} & 0 \\ 0 & \left(D - CA^{-1}B\right)^{-1} \end{bmatrix} \begin{bmatrix} I & -BD^{-1} \\ -CA^{-1} & I \end{bmatrix}.$$

By the Weinstein–Aronszajn identity, one of the two matrices in the block-diagonal matrix is invertible exactly when the other is.
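A minimal NumPy sketch (illustrative; the strengthened diagonals are an assumption made only to keep $A$ and the Schur complement invertible) verifying the first blockwise-inverse formula:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 2
A = rng.random((n, n)) + n * np.eye(n)   # well-conditioned pivot block
B, C = rng.random((n, m)), rng.random((m, n))
D = rng.random((m, m)) + m * np.eye(m)
P = np.block([[A, B], [C, D]])

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                      # Schur complement P/A
Sinv = np.linalg.inv(S)

Pinv = np.block([
    [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
    [-Sinv @ C @ Ainv,                   Sinv],
])
assert np.allclose(Pinv, np.linalg.inv(P))
```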

Determinant

The formula for the determinant of a $2 \times 2$ matrix above continues to hold, under appropriate further assumptions, for a matrix composed of four submatrices $A, B, C, D$ with $A$ and $D$ square. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the Schur complement, is

$$\det\begin{bmatrix} A & 0 \\ C & D \end{bmatrix} = \det(A)\det(D) = \det\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}.$$[16]

Using this formula, we can derive that the characteristic polynomials of $\begin{bmatrix} A & 0 \\ C & D \end{bmatrix}$ and $\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}$ are the same, and equal to the product of the characteristic polynomials of $A$ and $D$. Furthermore, if $\begin{bmatrix} A & 0 \\ C & D \end{bmatrix}$ or $\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}$ is diagonalizable, then $A$ and $D$ are diagonalizable too. The converse is false; simply check $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$.

If $A$ is invertible, one has

$$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(A)\det\left(D - CA^{-1}B\right),$$[16]

and if $D$ is invertible, one has

$$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(D)\det\left(A - BD^{-1}C\right).$$[17][16]

If the blocks are square matrices of the same size, further formulas hold. For example, if $C$ and $D$ commute (i.e., $CD = DC$), then

$$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(AD - BC).$$[18]

This formula has been generalized to matrices composed of more than $2 \times 2$ blocks, again under appropriate commutativity conditions among the individual blocks.[19]

For $A = D$ and $B = C$, the following formula holds (even if $A$ and $B$ do not commute):

$$\det\begin{bmatrix} A & B \\ B & A \end{bmatrix} = \det(A - B)\det(A + B).$$[16]
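A minimal NumPy sketch (illustrative; the shifted diagonal of $D$ is only there to make it safely invertible) spot-checking two of these determinant identities:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A, B = rng.random((n, n)), rng.random((n, n))
C, D = rng.random((n, n)), rng.random((n, n)) + n * np.eye(n)

M = np.block([[A, B], [C, D]])

# det M = det(D) * det(A - B D^{-1} C) when D is invertible.
lhs = np.linalg.det(M)
rhs = np.linalg.det(D) * np.linalg.det(A - B @ np.linalg.inv(D) @ C)
assert np.isclose(lhs, rhs)

# det [A B; B A] = det(A - B) * det(A + B), even without commutativity.
N = np.block([[A, B], [B, A]])
assert np.isclose(np.linalg.det(N),
                  np.linalg.det(A - B) * np.linalg.det(A + B))
```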

Special types of block matrices

Direct sums and block diagonal matrices

Direct sum

For any arbitrary matrices $A$ (of size $m \times n$) and $B$ (of size $p \times q$), we have the direct sum of $A$ and $B$, denoted by $A \oplus B$ and defined as

$$A \oplus B = \begin{bmatrix} a_{11} & \cdots & a_{1n} & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} & 0 & \cdots & 0 \\ 0 & \cdots & 0 & b_{11} & \cdots & b_{1q} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & b_{p1} & \cdots & b_{pq} \end{bmatrix}.$$[10]

For instance,

$$\begin{bmatrix} 1 & 3 & 2 \\ 2 & 3 & 1 \end{bmatrix} \oplus \begin{bmatrix} 1 & 6 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 2 & 0 & 0 \\ 2 & 3 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$

This operation generalizes naturally to arbitrarily dimensioned arrays (provided that $A$ and $B$ have the same number of dimensions).

Note that any element in the direct sum of two vector spaces of matrices can be represented as a direct sum of two matrices.
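A minimal sketch (illustrative): SciPy's scipy.linalg.block_diag implements exactly this construction; a plain NumPy equivalent is shown alongside it.

```python
import numpy as np
from scipy.linalg import block_diag

A = np.array([[1, 3, 2],
              [2, 3, 1]])
B = np.array([[1, 6],
              [0, 1]])

# Direct sum A ⊕ B: A and B on the diagonal, zero blocks elsewhere.
direct_sum = np.block([
    [A, np.zeros((A.shape[0], B.shape[1]), dtype=A.dtype)],
    [np.zeros((B.shape[0], A.shape[1]), dtype=B.dtype), B],
])
assert np.array_equal(direct_sum, block_diag(A, B))
```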

Block diagonal matrices

A block diagonal matrix is a block matrix that is a square matrix such that the main-diagonal blocks are square matrices and all off-diagonal blocks are zero matrices.[16] That is, a block diagonal matrix $A$ has the form

$$A = \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_n \end{bmatrix},$$

where $A_k$ is a square matrix for all $k = 1, \ldots, n$. In other words, matrix $A$ is the direct sum of $A_1, \ldots, A_n$.[16] It can also be indicated as $A_1 \oplus A_2 \oplus \cdots \oplus A_n$[10] or $\operatorname{diag}(A_1, A_2, \ldots, A_n)$[10] (the latter being the same formalism used for a diagonal matrix). Any square matrix can trivially be considered a block diagonal matrix with only one block.

For the determinant and trace, the following properties hold:

  • $\det A = \det A_1 \times \cdots \times \det A_n$,[20][21] and
  • $\operatorname{tr} A = \operatorname{tr} A_1 + \cdots + \operatorname{tr} A_n$.[16][21]

A block diagonal matrix is invertible if and only if each of its main-diagonal blocks is invertible, and in this case its inverse is another block diagonal matrix given by

$$\begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_n \end{bmatrix}^{-1} = \begin{bmatrix} A_1^{-1} & 0 & \cdots & 0 \\ 0 & A_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_n^{-1} \end{bmatrix}.$$[22]

The eigenvalues[23] and eigenvectors of $A$ are simply those of the $A_k$ combined.[21]
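A minimal NumPy/SciPy sketch (illustrative; the shifted diagonals are only there to guarantee invertibility) checking these properties on a block diagonal matrix:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(5)
A1 = rng.random((2, 2)) + 2 * np.eye(2)
A2 = rng.random((3, 3)) + 3 * np.eye(3)
A = block_diag(A1, A2)

# det and trace factor across the diagonal blocks.
assert np.isclose(np.linalg.det(A), np.linalg.det(A1) * np.linalg.det(A2))
assert np.isclose(np.trace(A), np.trace(A1) + np.trace(A2))

# The inverse is block diagonal with the blockwise inverses.
assert np.allclose(np.linalg.inv(A),
                   block_diag(np.linalg.inv(A1), np.linalg.inv(A2)))

# The eigenvalues are those of the blocks combined.
assert np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.concatenate([np.linalg.eigvals(A1),
                                           np.linalg.eigvals(A2)])))
```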

Block tridiagonal matrices

A block tridiagonal matrix is another special block matrix. Like the block diagonal matrix, it is a square matrix having square matrices (blocks) on the lower diagonal, main diagonal and upper diagonal, with all other blocks being zero matrices. It is essentially a tridiagonal matrix, but with submatrices in place of scalars. A block tridiagonal matrix $A$ has the form

$$A = \begin{bmatrix} B_1 & C_1 & & & \\ A_2 & B_2 & C_2 & & \\ & \ddots & \ddots & \ddots & \\ & & A_{n-1} & B_{n-1} & C_{n-1} \\ & & & A_n & B_n \end{bmatrix},$$

where $A_k$, $B_k$ and $C_k$ are square sub-matrices of the lower, main and upper diagonal respectively.[24][25]

Block tridiagonal matrices are often encountered in numerical solutions of engineering problems (e.g., computational fluid dynamics). Optimized numerical methods for LU factorization are available,[26] and hence efficient solution algorithms for equation systems with a block tridiagonal matrix as coefficient matrix. The Thomas algorithm, used for efficient solution of equation systems involving a tridiagonal matrix, can also be applied using matrix operations to block tridiagonal matrices (see also Block LU decomposition).
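A minimal sketch of the block Thomas algorithm (illustrative only: it assumes every pivot block stays invertible and performs no pivoting, so it is not a robust implementation):

```python
import numpy as np

def block_thomas(A_sub, B_diag, C_sup, d):
    """Solve M x = d for block tridiagonal M with lower blocks A_sub[0..n-2],
    diagonal blocks B_diag[0..n-1], upper blocks C_sup[0..n-2];
    d holds one right-hand-side vector per block row."""
    n = len(B_diag)
    B = [b.copy() for b in B_diag]
    g = [v.copy() for v in d]
    # Forward elimination of the sub-diagonal blocks (block LU, no pivoting).
    for k in range(1, n):
        W = A_sub[k - 1] @ np.linalg.inv(B[k - 1])
        B[k] -= W @ C_sup[k - 1]
        g[k] -= W @ g[k - 1]
    # Block back substitution.
    x = [None] * n
    x[-1] = np.linalg.solve(B[-1], g[-1])
    for k in range(n - 2, -1, -1):
        x[k] = np.linalg.solve(B[k], g[k] - C_sup[k] @ x[k + 1])
    return x

# Check against a dense solve on 4 block rows of 2x2 blocks; the strong
# diagonal and small off-diagonal blocks keep all pivot blocks invertible.
rng = np.random.default_rng(6)
m, n = 2, 4
B_diag = [rng.random((m, m)) + 2 * np.eye(m) for _ in range(n)]
A_sub = [0.1 * rng.random((m, m)) for _ in range(n - 1)]
C_sup = [0.1 * rng.random((m, m)) for _ in range(n - 1)]
d = [rng.random(m) for _ in range(n)]

M = np.zeros((m * n, m * n))
for k in range(n):
    M[k*m:(k+1)*m, k*m:(k+1)*m] = B_diag[k]
    if k < n - 1:
        M[(k+1)*m:(k+2)*m, k*m:(k+1)*m] = A_sub[k]
        M[k*m:(k+1)*m, (k+1)*m:(k+2)*m] = C_sup[k]

x = np.concatenate(block_thomas(A_sub, B_diag, C_sup, d))
assert np.allclose(M @ x, np.concatenate(d))
```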

Block triangular matrices

Upper block triangular

A matrix $A$ is upper block triangular (or block upper triangular[27]) if

$$A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1k} \\ 0 & A_{22} & \cdots & A_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{kk} \end{bmatrix},$$

where $A_{ij} = 0$ for all $i > j$.[23][27]

Lower block triangular

A matrix $A$ is lower block triangular if

$$A = \begin{bmatrix} A_{11} & 0 & \cdots & 0 \\ A_{21} & A_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A_{k1} & A_{k2} & \cdots & A_{kk} \end{bmatrix},$$

where $A_{ij} = 0$ for all $i < j$.[23]
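A minimal NumPy sketch (illustrative): the determinant of a block triangular matrix is the product of the determinants of its diagonal blocks, consistent with the formula $\det\begin{bmatrix} A & B \\ 0 & D \end{bmatrix} = \det(A)\det(D)$ above.

```python
import numpy as np

rng = np.random.default_rng(7)
A11, A22 = rng.random((2, 2)), rng.random((3, 3))
A12 = rng.random((2, 3))

# Upper block triangular: zero block below the diagonal.
A = np.block([[A11, A12],
              [np.zeros((3, 2)), A22]])

assert np.isclose(np.linalg.det(A),
                  np.linalg.det(A11) * np.linalg.det(A22))
```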

Block Toeplitz matrices

A block Toeplitz matrix is another special block matrix, which contains blocks that are repeated down the diagonals of the matrix, as a Toeplitz matrix has elements repeated down the diagonal.

A matrix $A$ is block Toeplitz if $A_{(i,j)} = A_{(k,l)}$ for all $k - i = l - j$, that is,

$$A = \begin{bmatrix} A_{(1,1)} & A_{(1,2)} & A_{(1,3)} & \cdots \\ A_{(2,1)} & A_{(1,1)} & A_{(1,2)} & \cdots \\ A_{(3,1)} & A_{(2,1)} & A_{(1,1)} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix},$$

where $A_{(i,j)} \in \mathbb{C}^{k \times \ell}$.[23]

Block Hankel matrices

A matrix $A$ is block Hankel if $A_{(i,j)} = A_{(k,l)}$ for all $i + j = k + l$, that is,

$$A = \begin{bmatrix} A_{(1,1)} & A_{(1,2)} & A_{(1,3)} & \cdots \\ A_{(1,2)} & A_{(1,3)} & A_{(1,4)} & \cdots \\ A_{(1,3)} & A_{(1,4)} & A_{(1,5)} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix},$$

where $A_{(i,j)} \in \mathbb{C}^{k \times \ell}$.[23]
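A minimal NumPy sketch (illustrative): a block Toeplitz matrix is generated by one block per block diagonal (offset $j - i$), a block Hankel matrix by one block per block anti-diagonal (sum $i + j$).

```python
import numpy as np

rng = np.random.default_rng(8)

# Block Toeplitz: the (i, j) block depends only on the offset j - i.
gen_t = {d: rng.random((2, 2)) for d in range(-2, 3)}
T = np.block([[gen_t[j - i] for j in range(3)] for i in range(3)])
assert np.array_equal(T[0:2, 2:4], T[2:4, 4:6])  # repeated down a diagonal

# Block Hankel: the (i, j) block depends only on the sum i + j.
gen_h = {s: rng.random((2, 2)) for s in range(5)}
H = np.block([[gen_h[i + j] for j in range(3)] for i in range(3)])
assert np.array_equal(H[0:2, 2:4], H[2:4, 0:2])  # repeated down an anti-diagonal
```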

See also
  • Kronecker product (matrix direct product resulting in a block matrix)
  • Jordan normal form (canonical form of a linear operator on a finite-dimensional complex vector space)
  • Strassen algorithm (algorithm for matrix multiplication that is faster than the conventional matrix multiplication algorithm)

Notes
  1. Eves, Howard (1980). Elementary Matrix Theory (reprint ed.). New York: Dover. p. 37. ISBN 0-486-63946-0. Retrieved 24 April 2013. "We shall find that it is sometimes convenient to subdivide a matrix into rectangular blocks of elements. This leads us to consider so-called partitioned, or block, matrices."
  2. Dobrushkin, Vladimir. "Partition Matrices". Linear Algebra with Mathematica. Retrieved 2024-03-24.
  3. Anton, Howard (1994). Elementary Linear Algebra (7th ed.). New York: John Wiley. p. 30. ISBN 0-471-58742-7. "A matrix can be subdivided or partitioned into smaller matrices by inserting horizontal and vertical rules between selected rows and columns."
  4. Indhumathi, D.; Sarala, S. (2014-05-16). "Fragment Analysis and Test Case Generation using F-Measure for Adaptive Random Testing and Partitioned Block based Adaptive Random Testing" (PDF). International Journal of Computer Applications. 93 (6): 13. doi:10.5120/16218-5662.
  5. Macedo, H. D.; Oliveira, J. N. (2013). "Typing linear algebra: A biproduct-oriented approach". Science of Computer Programming. 78 (11): 2160–2191. arXiv:1312.4818. doi:10.1016/j.scico.2012.07.012.
  6. Johnston, Nathaniel (2021). Introduction to Linear and Matrix Algebra. Cham, Switzerland: Springer Nature. pp. 30, 425. ISBN 978-3-030-52811-9.
  7. Johnston, Nathaniel (2021). Advanced Linear and Matrix Algebra. Cham, Switzerland: Springer Nature. p. 298. ISBN 978-3-030-52814-0.
  8. Jeffrey, Alan (2010). Matrix Operations for Engineers and Scientists: An Essential Guide in Linear Algebra. Dordrecht; New York: Springer. p. 54. ISBN 978-90-481-9273-1. OCLC 639165077.
  9. Stewart, Gilbert W. (1998). Matrix Algorithms. 1: Basic Decompositions. Philadelphia, PA: Soc. for Industrial and Applied Mathematics. pp. 18–20. ISBN 978-0-89871-414-2.
  10. Gentle, James E. (2007). Matrix Algebra: Theory, Computations, and Applications in Statistics. Springer Texts in Statistics. New York, NY: Springer. pp. 47, 487. ISBN 978-0-387-70873-7.
  11. Mackey, D. Steven (2006). Structured Linearizations for Matrix Polynomials (PDF) (Thesis). University of Manchester. ISSN 1749-9097. OCLC 930686781.
  12. Eves, Howard (1980). Elementary Matrix Theory (reprint ed.). New York: Dover. p. 37. ISBN 0-486-63946-0. Retrieved 24 April 2013. "A partitioning as in Theorem 1.9.4 is called a conformable partition of A and B."
  13. Anton, Howard (1994). Elementary Linear Algebra (7th ed.). New York: John Wiley. p. 36. ISBN 0-471-58742-7. "...provided the sizes of the submatrices of A and B are such that the indicated operations can be performed."
  14. Mathai, Arakaparampil M.; Haubold, Hans J. (2017). Linear Algebra: A Course for Physicists and Engineers. De Gruyter textbook. Berlin; Boston: De Gruyter. p. 162. ISBN 978-3-11-056259-0.
  15. Bernstein, Dennis (2005). Matrix Mathematics. Princeton University Press. p. 44. ISBN 0-691-11802-7.
  16. Abadir, Karim M.; Magnus, Jan R. (2005). Matrix Algebra. Cambridge University Press. pp. 97, 100, 106, 111, 114, 118. ISBN 9781139443647.
  17. Taboga, Marco (2021). "Determinant of a block matrix", Lectures on Matrix Algebra.
  18. Silvester, J. R. (2000). "Determinants of Block Matrices" (PDF). Math. Gaz. 84 (501): 460–467. doi:10.2307/3620776. JSTOR 3620776. Archived from the original (PDF) on 2015-03-18. Retrieved 2021-06-25.
  19. Sothanaphan, Nat (January 2017). "Determinants of block matrices with noncommuting blocks". Linear Algebra and Its Applications. 512: 202–218. arXiv:1805.06027. doi:10.1016/j.laa.2016.10.004. S2CID 119272194.
  20. Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2000). Numerical Mathematics. Texts in Applied Mathematics. New York: Springer. pp. 10, 13. ISBN 978-0-387-98959-4.
  21. George, Raju K.; Ajayakumar, Abhijith (2024). "A Course in Linear Algebra". University Texts in the Mathematical Sciences: 35, 407. doi:10.1007/978-981-99-8680-4. ISBN 978-981-99-8679-8. ISSN 2731-9318.
  22. Prince, Simon J. D. (2012). Computer Vision: Models, Learning, and Inference. New York: Cambridge University Press. p. 531. ISBN 978-1-107-01179-3.
  23. Bernstein, Dennis S. (2009). Matrix Mathematics: Theory, Facts, and Formulas (2nd ed.). Princeton, NJ: Princeton University Press. pp. 168, 298. ISBN 978-0-691-14039-1.
  24. Dietl, Guido K. E. (2007). Linear Estimation and Detection in Krylov Subspaces. Foundations in Signal Processing, Communications and Networking. Berlin; New York: Springer. pp. 85, 87. ISBN 978-3-540-68478-7. OCLC 85898525.
  25. Horn, Roger A.; Johnson, Charles R. (2017). Matrix Analysis (2nd ed., corrected reprint). New York, NY: Cambridge University Press. p. 36. ISBN 978-0-521-83940-2.
  26. Datta, Biswa Nath (2010). Numerical Linear Algebra and Applications (2nd ed.). Philadelphia, PA: SIAM. p. 168. ISBN 978-0-89871-685-6.
  27. Stewart, Gilbert W. (2001). Matrix Algorithms. 2: Eigensystems. Philadelphia, PA: Soc. for Industrial and Applied Mathematics. p. 5. ISBN 978-0-89871-503-3.
