The product of a matrix with its adjugate gives a diagonal matrix (entries not on the main diagonal are zero) whose diagonal entries are the determinant of the original matrix:

A adj(A) = adj(A) A = det(A) I,
where I is the identity matrix of the same size as A. Consequently, the multiplicative inverse of an invertible matrix can be found by dividing its adjugate by its determinant.
In more detail, suppose R is a unital commutative ring and A is an n × n matrix with entries from R. The (i, j) minor of A, denoted M_ij, is the determinant of the (n − 1) × (n − 1) matrix that results from deleting row i and column j of A. The cofactor matrix of A is the n × n matrix C whose (i, j) entry is the (i, j) cofactor of A, which is the (i, j) minor times a sign factor:

C_ij = (−1)^(i+j) M_ij.
The adjugate of A is the transpose of C, that is, the n × n matrix whose (i, j) entry is the (j, i) cofactor of A:

adj(A)_ij = C_ji = (−1)^(i+j) M_ji.
The above formula implies one of the fundamental results in matrix algebra, that A is invertible if and only if det(A) is an invertible element of R. When this holds, the equation above yields

A^(−1) = det(A)^(−1) adj(A).
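These definitions translate directly into code. The following pure-Python sketch (the helper names minor_matrix, det, adjugate, and matmul are ours, not from the text) computes the adjugate from cofactors and checks the identity A adj(A) = adj(A) A = det(A) I on a small integer matrix:

```python
def minor_matrix(a, i, j):
    """Submatrix of a with row i and column j removed (0-based indices)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    """Determinant by Laplace expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor_matrix(a, 0, j))
               for j in range(len(a)))

def adjugate(a):
    """Transpose of the cofactor matrix: entry (i, j) is the (j, i) cofactor."""
    n = len(a)
    return [[(-1) ** (i + j) * det(minor_matrix(a, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
d = det(A)  # this particular matrix has determinant 1
scaled_identity = [[d if i == j else 0 for j in range(3)] for i in range(3)]
assert matmul(A, adjugate(A)) == scaled_identity
assert matmul(adjugate(A), A) == scaled_identity
```

Because det(A) = 1 here, the adjugate coincides with the inverse of A.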
For example, the adjugate of the 3 × 3 matrix

A = [ −3   2  −5 ]
    [ −1   0  −2 ]
    [  3  −4   1 ]

is

adj(A) = [ −8  18  −4 ]
         [ −5  12  −1 ]
         [  4  −6   2 ]

It is easy to check that the adjugate is the inverse times the determinant, −6.
The −1 in the second row, third column of the adjugate was computed as follows. The (2, 3) entry of the adjugate is the (3, 2) cofactor of A. This cofactor is computed using the submatrix obtained by deleting the third row and second column of the original matrix A:

[ −3  −5 ]
[ −1  −2 ]
The (3, 2) cofactor is a sign times the determinant of this submatrix:

(−1)^(3+2) · ((−3)(−2) − (−5)(−1)) = −(6 − 5) = −1.
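This arithmetic is small enough to verify mechanically. A minimal pure-Python check (the helper name det2 is ours):

```python
def det2(m):
    """Determinant of a 2 x 2 matrix: ad - bc."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Submatrix obtained by deleting row 3 and column 2 of A:
sub = [[-3, -5],
       [-1, -2]]

# The (3, 2) cofactor: sign (-1)^(3+2) times the determinant of the submatrix.
cofactor_32 = (-1) ** (3 + 2) * det2(sub)
assert cofactor_32 == -1  # this is the (2, 3) entry of adj(A)
```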
The adjugate is multiplicative: for all n × n matrices A and B,

adj(AB) = adj(B) adj(A).

This can be proved in three ways. One way, valid for any commutative ring, is a direct computation using the Cauchy–Binet formula. The second way, valid for the real or complex numbers, is to first observe that for invertible matrices A and B,

adj(AB) = det(AB) (AB)^(−1) = det(B) det(A) B^(−1) A^(−1) = adj(B) adj(A).
Because every non-invertible matrix is the limit of invertible matrices, continuity of the adjugate then implies that the formula remains true when one of A or B is not invertible.
A corollary of the previous formula is that, for any non-negative integer k,

adj(A^k) = adj(A)^k.
If A is invertible, then the above formula also holds for negative k.
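Both the multiplicativity of the adjugate and the corollary for powers can be spot-checked numerically. A short pure-Python sketch (helper names minor_matrix, det, adjugate, and matmul are ours):

```python
def minor_matrix(a, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor_matrix(a, 0, j))
               for j in range(len(a)))

def adjugate(a):
    n = len(a)
    return [[(-1) ** (i + j) * det(minor_matrix(a, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]

# Multiplicativity: adj(AB) = adj(B) adj(A); note the reversed order.
assert adjugate(matmul(A, B)) == matmul(adjugate(B), adjugate(A))

# Corollary for powers, checked for k = 2: adj(A^2) = adj(A)^2.
assert adjugate(matmul(A, A)) == matmul(adjugate(A), adjugate(A))
```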
From the identity

(A + B) adj(A + B) B = det(A + B) B = B adj(A + B) (A + B),

we deduce

A adj(A + B) B = B adj(A + B) A.
Suppose that A commutes with B. Multiplying the identity AB = BA on the left and right by adj(A) proves that

det(A) adj(A) B = det(A) B adj(A).
If A is invertible, this implies that adj(A) also commutes with B. Over the real or complex numbers, continuity implies that adj(A) commutes with B even when A is not invertible.
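A quick numerical illustration of this commuting property, using B = A² + I (a polynomial in A, which therefore commutes with A); the pure-Python helpers below are ours:

```python
def minor_matrix(a, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor_matrix(a, 0, j))
               for j in range(len(a)))

def adjugate(a):
    n = len(a)
    return [[(-1) ** (i + j) * det(minor_matrix(a, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2], [3, 4]]
AA = matmul(A, A)
# B = A^2 + I commutes with A because it is a polynomial in A.
B = [[AA[i][j] + (1 if i == j else 0) for j in range(2)] for i in range(2)]

assert matmul(A, B) == matmul(B, A)                      # AB = BA
assert matmul(adjugate(A), B) == matmul(B, adjugate(A))  # adj(A) B = B adj(A)
```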
Finally, there is a more general proof than the second proof, which only requires that an n × n matrix have entries over a field with at least 2n + 1 elements (e.g. a 5 × 5 matrix over the integers modulo 11). det(A + tI) is a polynomial in t with degree at most n, so it has at most n roots. Note that the (i, j)th entry of adj((A + tI)(B)) is a polynomial of degree at most n, and likewise for adj(A + tI) adj(B). These two polynomials at the (i, j)th entry agree on at least n + 1 points, as we have at least n + 1 elements of the field where A + tI is invertible, and we have proven the identity for invertible matrices. Polynomials of degree n which agree on n + 1 points must be identical (subtract them from each other and you have n + 1 roots for a polynomial of degree at most n; this is a contradiction unless their difference is identically zero). As the two polynomials are identical, they take the same value for every value of t. Thus, they take the same value when t = 0.
Using the above properties and other elementary computations, it is straightforward to show that if A has one of the following properties, then adj A does as well: upper triangular, lower triangular, diagonal, orthogonal, unitary, symmetric, Hermitian, or normal.
If A is skew-symmetric, then adj(A) is skew-symmetric for even n and symmetric for odd n. Similarly, if A is skew-Hermitian, then adj(A) is skew-Hermitian for even n and Hermitian for odd n.
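The parity dependence for skew-symmetric matrices is easy to observe numerically. A pure-Python sketch (helper names ours) checks both an odd-size and an even-size case:

```python
def minor_matrix(a, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor_matrix(a, 0, j))
               for j in range(len(a)))

def adjugate(a):
    n = len(a)
    return [[(-1) ** (i + j) * det(minor_matrix(a, j, i)) for j in range(n)]
            for i in range(n)]

def transpose(m):
    return [list(row) for row in zip(*m)]

# n = 3 (odd): a skew-symmetric matrix has a symmetric adjugate.
A_odd = [[0, 1, 2], [-1, 0, 3], [-2, -3, 0]]
adj_odd = adjugate(A_odd)
assert adj_odd == transpose(adj_odd)

# n = 2 (even): the adjugate is again skew-symmetric.
A_even = [[0, 5], [-5, 0]]
adj_even = adjugate(A_even)
assert transpose(adj_even) == [[-x for x in row] for row in adj_even]
```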
If A is invertible, then, as noted above, there is a formula for adj(A) in terms of the determinant and inverse of A. When A is not invertible, the adjugate satisfies different but closely related formulas.
If rk(A) ≤ n − 2, then adj(A) = 0.
If rk(A) = n − 1, then rk(adj(A)) = 1. (Some minor is non-zero, so adj(A) is non-zero and hence has rank at least one; the identity adj(A) A = 0 implies that the dimension of the null space of adj(A) is at least n − 1, so its rank is at most one.) It follows that adj(A) = α x y^T, where α is a scalar and x and y are vectors such that Ax = 0 and A^T y = 0.
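Both rank cases can be seen on small examples. In the pure-Python sketch below (helper names ours), the first matrix has rank 1 ≤ n − 2, so its adjugate vanishes; the second has rank n − 1 = 2, so its adjugate is non-zero and annihilates A:

```python
def minor_matrix(a, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor_matrix(a, 0, j))
               for j in range(len(a)))

def adjugate(a):
    n = len(a)
    return [[(-1) ** (i + j) * det(minor_matrix(a, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

ZERO = [[0] * 3 for _ in range(3)]

# rank(A) <= n - 2: every (n-1) x (n-1) minor vanishes, so adj(A) = 0.
A_rank1 = [[1, 2, 3], [2, 4, 6], [3, 6, 9]]
assert adjugate(A_rank1) == ZERO

# rank(A) = n - 1: adj(A) is non-zero of rank one, and adj(A) A = det(A) I = 0.
A_rank2 = [[1, 2, 3], [2, 4, 6], [1, 1, 1]]
adjA = adjugate(A_rank2)
assert adjA != ZERO
assert matmul(adjA, A_rank2) == ZERO
```

In the second case the rows (and columns) of adj(A) are proportional, as the rank-one factorization α x yᵀ predicts.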
Let b be a column vector of size n. Fix 1 ≤ i ≤ n and consider the matrix (A ←_i b) formed by replacing column i of A by b.
Laplace expand the determinant of this matrix along column i. The result is entry i of the product adj(A) b. Collecting these determinants for the different possible i yields an equality of column vectors

(det(A ←_i b))_(i=1)^n = adj(A) b.
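This column-substitution identity can be checked entry by entry. A pure-Python sketch (helper names and the sample matrix are ours):

```python
def minor_matrix(a, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor_matrix(a, 0, j))
               for j in range(len(a)))

def adjugate(a):
    n = len(a)
    return [[(-1) ** (i + j) * det(minor_matrix(a, j, i)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
b = [1, 2, 3]
adjA = adjugate(A)
adjA_b = [sum(adjA[i][j] * b[j] for j in range(3)) for i in range(3)]

for i in range(3):
    # (A <-_i b): replace column i of A by b, then take the determinant.
    A_i = [[b[r] if c == i else A[r][c] for c in range(3)] for r in range(3)]
    assert det(A_i) == adjA_b[i]
```

When det(A) is invertible, dividing each entry of adj(A) b by det(A) recovers Cramer's rule for the solution of Ax = b.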
Separating the constant term and multiplying the equation by adj(A) gives an expression for the adjugate that depends only on A and the coefficients of p_A(t). These coefficients can be explicitly represented in terms of traces of powers of A using complete exponential Bell polynomials. The resulting formula is
where n is the dimension of A, and the sum is taken over s and all sequences of k_l ≥ 0 satisfying the linear Diophantine equation

s + Σ_(l=1)^(n−1) l·k_l = n − 1.
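In practice, an adjugate expression built from A and the coefficients of p_A(t) is usually evaluated with the Faddeev–LeVerrier recursion, which generates those coefficients from traces of powers of A, rather than with the explicit Bell-polynomial sum. A sketch under that substitution (the function name adjugate_via_traces is ours):

```python
from fractions import Fraction

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][p] * b[p][j] for p in range(n)) for j in range(n)]
            for i in range(n)]

def adjugate_via_traces(a):
    """Faddeev-LeVerrier recursion: builds adj(a) and det(a) using only
    matrix products and traces, i.e. from the coefficients of p_A(t)."""
    n = len(a)
    a = [[Fraction(x) for x in row] for row in a]
    m = [[Fraction(0)] * n for _ in range(n)]  # M_0 = 0
    c = Fraction(1)                            # c_n = 1
    for k in range(1, n + 1):
        # M_k = A M_{k-1} + c_{n-k+1} I
        m = [[sum(a[i][p] * m[p][j] for p in range(n)) + (c if i == j else 0)
              for j in range(n)] for i in range(n)]
        am = matmul(a, m)
        c = -sum(am[i][i] for i in range(n)) / k  # c_{n-k} = -tr(A M_k) / k
    det_a = (-1) ** n * c
    adj_a = [[(-1) ** (n - 1) * x for x in row] for row in m]
    return adj_a, det_a

adj_a, det_a = adjugate_via_traces([[1, 2, 3], [0, 1, 4], [5, 6, 0]])
assert det_a == 1
assert adj_a == [[-24, 18, 5], [20, -15, -4], [-5, 4, 1]]
```

Exact rational arithmetic (Fraction) keeps the division by k exact; over the integers the final results are always integral.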
Abstractly, Λ^n V is isomorphic to R, and under any such isomorphism the exterior product is a perfect pairing

Λ^(n−1) V × V → Λ^n V.

Therefore, it yields an isomorphism

φ : V → Hom(Λ^(n−1) V, Λ^n V).
Explicitly, this pairing sends v ∈ V to φ_v, where

φ_v(α) = α ∧ v.
Suppose that T : V → V is a linear transformation. Pullback by the (n − 1)st exterior power of T induces a morphism of Hom spaces. The adjugate of T is the composite

V → Hom(Λ^(n−1) V, Λ^n V) → Hom(Λ^(n−1) V, Λ^n V) → V,

where the first map is φ, the second is pullback by Λ^(n−1) T, and the third is φ^(−1).
If V = R^n is endowed with its canonical basis e_1, …, e_n, and if the matrix of T in this basis is A, then the adjugate of T is the adjugate of A. To see why, give Λ^(n−1) R^n the basis

{e_1 ∧ ⋯ ∧ ê_k ∧ ⋯ ∧ e_n : 1 ≤ k ≤ n},

where ê_k indicates that the factor e_k is omitted.
Fix a basis vector e_i of R^n. The image of e_i under φ is determined by where it sends basis vectors:

φ_(e_i)(e_1 ∧ ⋯ ∧ ê_k ∧ ⋯ ∧ e_n) = (−1)^(n−i) e_1 ∧ ⋯ ∧ e_n if k = i, and 0 otherwise.
On basis vectors, the (n − 1)st exterior power of T is

e_1 ∧ ⋯ ∧ ê_k ∧ ⋯ ∧ e_n ↦ Σ_(j=1)^n M_jk · e_1 ∧ ⋯ ∧ ê_j ∧ ⋯ ∧ e_n,

where M_jk is the (j, k) minor of A.
Each of these terms maps to zero under φ_(e_i) except the k = i term. Therefore, the pullback of φ_(e_i) is the linear transformation for which

e_1 ∧ ⋯ ∧ ê_k ∧ ⋯ ∧ e_n ↦ (−1)^(n−i) M_ik · e_1 ∧ ⋯ ∧ e_n,
that is, it equals

Σ_(j=1)^n (−1)^(i+j) M_ij φ_(e_j).
Applying the inverse of φ shows that the adjugate of T is the linear transformation for which

e_i ↦ Σ_(j=1)^n (−1)^(i+j) M_ij e_j.
Consequently, its matrix representation is the adjugate of A.
If V is endowed with an inner product and a volume form, then the map φ can be decomposed further. In this case, φ can be understood as the composite of the Hodge star operator and dualization. Specifically, if ω is the volume form, then it, together with the inner product, determines an isomorphism

ω^∨ : Λ^n V → R.
This induces an isomorphism

Hom(Λ^(n−1) R^n, Λ^n R^n) ≅ Λ^(n−1) (R^n)^∨.
A vector v in R^n corresponds to the linear functional

α ↦ ω^∨(α ∧ v).
By the definition of the Hodge star operator, this linear functional is dual to *v. That is, ω^∨ ∘ φ equals v ↦ (*v)^∨.
Let A be an n × n matrix, and fix r ≥ 0. The r-th higher adjugate of A is an (n choose r) × (n choose r) matrix, denoted adj_r A, whose entries are indexed by size-r subsets I and J of {1, ..., n}. Let I^c and J^c denote the complements of I and J, respectively. Also let A_(J^c, I^c) denote the submatrix of A containing those rows and columns whose indices are in J^c and I^c, respectively. Then the (I, J) entry of adj_r A is

(−1)^(σ(I)+σ(J)) det A_(J^c, I^c),
where σ(I) and σ(J) are the sums of the elements of I and J, respectively.
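This entry formula can be implemented directly by enumerating the size-r subsets in lexicographic order. A pure-Python sketch (the function name higher_adjugate and the ordering convention are ours):

```python
from itertools import combinations

def minor_matrix(a, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    if len(a) == 0:
        return 1  # determinant of the empty (0 x 0) matrix
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor_matrix(a, 0, j))
               for j in range(len(a)))

def higher_adjugate(a, r):
    """adj_r(a): rows and columns indexed by the size-r subsets of {1,...,n}
    in lexicographic order; entry (I, J) is (-1)^(sigma(I)+sigma(J)) det a[J^c, I^c]."""
    n = len(a)
    subsets = list(combinations(range(1, n + 1), r))
    def entry(I, J):
        Ic = [i for i in range(1, n + 1) if i not in I]
        Jc = [j for j in range(1, n + 1) if j not in J]
        sub = [[a[row - 1][col - 1] for col in Ic] for row in Jc]  # rows J^c, cols I^c
        return (-1) ** (sum(I) + sum(J)) * det(sub)
    return [[entry(I, J) for J in subsets] for I in subsets]

A = [[1, 2], [3, 4]]
assert higher_adjugate(A, 0) == [[det(A)]]          # adj_0 A = det A
assert higher_adjugate(A, 1) == [[4, -2], [-3, 1]]  # adj_1 A = adj A
assert higher_adjugate(A, 2) == [[1]]               # adj_n A = 1
```

For r = 1 the formula reduces to the ordinary adjugate: the ({i}, {j}) entry is (−1)^(i+j) times the determinant of A with row j and column i deleted.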
Basic properties of higher adjugates include:

- adj_0(A) = det A.
- adj_1(A) = adj A.
- adj_n(A) = 1.
- adj_r(AB) = adj_r(B) adj_r(A).
- adj_r(A) C_r(A) = C_r(A) adj_r(A) = det(A) I, where C_r(A) denotes the r-th compound matrix.