
Covariance matrix

From Wikipedia, the free encyclopedia

Figure: A bivariate Gaussian probability density function centered at (0, 0), with a given covariance matrix.
Figure: Sample points from a bivariate Gaussian distribution with a standard deviation of 3 in roughly the lower left–upper right direction and of 1 in the orthogonal direction. Because the $x$ and $y$ components co-vary, the variances of $x$ and $y$ do not fully describe the distribution. A $2 \times 2$ covariance matrix is needed; the directions of the arrows correspond to the eigenvectors of this covariance matrix and their lengths to the square roots of the eigenvalues.

In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector.

Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the $x$ and $y$ directions contain all of the necessary information; a $2 \times 2$ matrix would be necessary to fully characterize the two-dimensional variation.

Any covariance matrix is symmetric and positive semi-definite and its main diagonal contains variances (i.e., the covariance of each element with itself).

The covariance matrix of a random vector $\mathbf{X}$ is typically denoted by $\operatorname{K}_{\mathbf{X}\mathbf{X}}$, $\Sigma$ or $\operatorname{S}$.

Definition

Throughout this article, boldfaced unsubscripted $\mathbf{X}$ and $\mathbf{Y}$ are used to refer to random vectors, and Roman subscripted $X_i$ and $Y_i$ are used to refer to scalar random variables.

If the entries in the column vector $\mathbf{X} = (X_1, X_2, \dots, X_n)^{\mathsf T}$ are random variables, each with finite variance and expected value, then the covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ is the matrix whose $(i,j)$ entry is the covariance[1]: 177 

$\operatorname{K}_{X_i X_j} = \operatorname{cov}[X_i, X_j] = \operatorname{E}\big[(X_i - \operatorname{E}[X_i])(X_j - \operatorname{E}[X_j])\big],$

where the operator $\operatorname{E}$ denotes the expected value (mean) of its argument.
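As an illustration not taken from the article, the following sketch (assuming NumPy; the example covariance values are arbitrary) computes each entry directly from this definition and compares the result against numpy.cov:

```python
import numpy as np

rng = np.random.default_rng(0)
# 1,000 observations of a 3-dimensional random vector; rows are variables
X = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[2.0, 0.5, 0.0],
         [0.5, 1.0, 0.3],
         [0.0, 0.3, 1.5]],
    size=1000).T

n_vars = X.shape[0]
means = X.mean(axis=1)
K = np.empty((n_vars, n_vars))
for i in range(n_vars):
    for j in range(n_vars):
        # K_ij = E[(X_i - E[X_i]) (X_j - E[X_j])]
        K[i, j] = np.mean((X[i] - means[i]) * (X[j] - means[j]))

# numpy.cov uses the n-1 denominator by default; bias=True matches the plain mean above
print(np.allclose(K, np.cov(X, bias=True)))
```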

Conflicting nomenclatures and notations

Nomenclatures differ. Some statisticians, following the probabilist William Feller in his two-volume book An Introduction to Probability Theory and Its Applications,[2] call the matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ the variance of the random vector $\mathbf{X}$, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector.

Both forms are quite standard, and there is no ambiguity between them. The matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ is also often called the variance-covariance matrix, since the diagonal terms are in fact variances.

By comparison, the notation for the cross-covariance matrix between two vectors is

$\operatorname{cov}(\mathbf{X},\mathbf{Y}) = \operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{E}\big[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^{\mathsf T}\big].$

Properties

Relation to the autocorrelation matrix

The auto-covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ is related to the autocorrelation matrix $\operatorname{R}_{\mathbf{X}\mathbf{X}}$ by

$\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{E}\big[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{\mathsf T}\big] = \operatorname{R}_{\mathbf{X}\mathbf{X}} - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathsf T},$

where the autocorrelation matrix is defined as $\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{E}[\mathbf{X}\mathbf{X}^{\mathsf T}]$.

Relation to the correlation matrix

An entity closely related to the covariance matrix is the matrix of Pearson product-moment correlation coefficients between each of the random variables in the random vector $\mathbf{X}$, which can be written as

$\operatorname{corr}(\mathbf{X}) = \big(\operatorname{diag}(\operatorname{K}_{\mathbf{X}\mathbf{X}})\big)^{-\frac{1}{2}}\,\operatorname{K}_{\mathbf{X}\mathbf{X}}\,\big(\operatorname{diag}(\operatorname{K}_{\mathbf{X}\mathbf{X}})\big)^{-\frac{1}{2}},$

where $\operatorname{diag}(\operatorname{K}_{\mathbf{X}\mathbf{X}})$ is the matrix of the diagonal elements of $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ (i.e., a diagonal matrix of the variances of $X_i$ for $i = 1, \dots, n$).

Equivalently, the correlation matrix can be seen as the covariance matrix of the standardized random variables $X_i/\sigma(X_i)$ for $i = 1, \dots, n$.

Each element on the principal diagonal of a correlation matrix is the correlation of a random variable with itself, which always equals 1. Each off-diagonal element is between −1 and +1 inclusive.
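As a brief sketch (assuming NumPy; the covariance values are hypothetical), the correlation matrix can be obtained from a covariance matrix exactly as in the formula above:

```python
import numpy as np

# A hypothetical covariance matrix
K = np.array([[4.0, 2.0, 0.4],
              [2.0, 9.0, 1.2],
              [0.4, 1.2, 1.0]])

# corr(X) = diag(K)^(-1/2)  K  diag(K)^(-1/2)
d = np.diag(1.0 / np.sqrt(np.diag(K)))
corr = d @ K @ d

print(np.diag(corr))              # ones on the principal diagonal
print(corr.min(), corr.max())     # off-diagonal entries lie in [-1, 1]
```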

Inverse of the covariance matrix

The inverse of this matrix, $\operatorname{K}_{\mathbf{X}\mathbf{X}}^{-1}$, if it exists, is the inverse covariance matrix (or inverse concentration matrix), also known as the precision matrix (or concentration matrix).[3]

Just as the covariance matrix can be written as the rescaling of a correlation matrix by the marginal variances:

$\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{diag}(\sigma_{X_1},\dots,\sigma_{X_n})\;\operatorname{corr}(\mathbf{X})\;\operatorname{diag}(\sigma_{X_1},\dots,\sigma_{X_n}),$

so, using the idea of partial correlation and partial variance, the inverse covariance matrix can be expressed analogously:

$\operatorname{K}_{\mathbf{X}\mathbf{X}}^{-1} = \operatorname{diag}\!\left(\tfrac{1}{\sigma_{X_1\mid\text{rest}}},\dots,\tfrac{1}{\sigma_{X_n\mid\text{rest}}}\right) P\;\operatorname{diag}\!\left(\tfrac{1}{\sigma_{X_1\mid\text{rest}}},\dots,\tfrac{1}{\sigma_{X_n\mid\text{rest}}}\right),$

where $P$ has ones on the diagonal and the negated partial correlations $-\rho_{X_i X_j\mid\text{rest}}$ off the diagonal, each conditioned on all remaining variables. This duality motivates a number of other dualities between marginalizing and conditioning for Gaussian random variables.
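The following sketch (assuming NumPy; the numbers are illustrative only) reads partial correlations off the precision matrix by the analogous rescaling, with the off-diagonal entries negated:

```python
import numpy as np

K = np.array([[4.0, 2.0, 0.4],
              [2.0, 9.0, 1.2],
              [0.4, 1.2, 1.0]])

P = np.linalg.inv(K)              # precision (inverse covariance) matrix

# Rescaling the precision matrix by its diagonal and negating the off-diagonal
# entries gives the partial correlations of each pair given all remaining variables.
d = np.diag(1.0 / np.sqrt(np.diag(P)))
partial_corr = -(d @ P @ d)
np.fill_diagonal(partial_corr, 1.0)
print(partial_corr)
```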

Basic properties

For $\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{var}(\mathbf{X}) = \operatorname{E}\big[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{\mathsf T}\big]$ and $\boldsymbol{\mu}_{\mathbf{X}} = \operatorname{E}[\mathbf{X}]$, where $\mathbf{X} = (X_1,\dots,X_n)^{\mathsf T}$ is an $n$-dimensional random variable, the following basic properties apply:[4]

  1. $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ is positive-semidefinite, i.e. $\mathbf{a}^{\mathsf T}\operatorname{K}_{\mathbf{X}\mathbf{X}}\mathbf{a} \ge 0$ for every $\mathbf{a} \in \mathbb{R}^n$
  2. $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ is symmetric, i.e. $\operatorname{K}_{\mathbf{X}\mathbf{X}}^{\mathsf T} = \operatorname{K}_{\mathbf{X}\mathbf{X}}$
  3. For any constant (i.e. non-random) $m \times n$ matrix $\mathbf{A}$ and constant $m \times 1$ vector $\mathbf{a}$, one has $\operatorname{var}(\mathbf{A}\mathbf{X}+\mathbf{a}) = \mathbf{A}\operatorname{var}(\mathbf{X})\mathbf{A}^{\mathsf T}$ (illustrated in the sketch after this list)
  4. If $\mathbf{Y}$ is another random vector with the same dimension as $\mathbf{X}$, then $\operatorname{var}(\mathbf{X}+\mathbf{Y}) = \operatorname{var}(\mathbf{X}) + \operatorname{cov}(\mathbf{X},\mathbf{Y}) + \operatorname{cov}(\mathbf{Y},\mathbf{X}) + \operatorname{var}(\mathbf{Y})$, where $\operatorname{cov}(\mathbf{X},\mathbf{Y})$ is the cross-covariance matrix of $\mathbf{X}$ and $\mathbf{Y}$.
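A minimal numerical check of property 3, assuming NumPy and an arbitrary choice of $\mathbf{A}$ and $\mathbf{a}$:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0], [[2.0, 0.7], [0.7, 1.0]], size=200_000)

A = np.array([[1.0, 2.0],
              [0.0, 3.0],
              [1.0, -1.0]])          # constant 3x2 matrix
a = np.array([5.0, -2.0, 0.5])       # constant shift vector

Y = X @ A.T + a                      # samples of A X + a

var_X = np.cov(X, rowvar=False)
var_Y = np.cov(Y, rowvar=False)

# var(A X + a) = A var(X) A^T, up to sampling error
print(np.allclose(var_Y, A @ var_X @ A.T, atol=0.05))
```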

Block matrices

The joint mean $\boldsymbol{\mu}$ and joint covariance matrix $\boldsymbol{\Sigma}$ of $\mathbf{X}$ and $\mathbf{Y}$ can be written in block form

$\boldsymbol{\mu} = \begin{bmatrix}\boldsymbol{\mu}_{\mathbf{X}} \\ \boldsymbol{\mu}_{\mathbf{Y}}\end{bmatrix}, \qquad \boldsymbol{\Sigma} = \begin{bmatrix}\operatorname{K}_{\mathbf{X}\mathbf{X}} & \operatorname{K}_{\mathbf{X}\mathbf{Y}} \\ \operatorname{K}_{\mathbf{Y}\mathbf{X}} & \operatorname{K}_{\mathbf{Y}\mathbf{Y}}\end{bmatrix},$

where $\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{var}(\mathbf{X})$, $\operatorname{K}_{\mathbf{Y}\mathbf{Y}} = \operatorname{var}(\mathbf{Y})$ and $\operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{K}_{\mathbf{Y}\mathbf{X}}^{\mathsf T} = \operatorname{cov}(\mathbf{X},\mathbf{Y})$.

$\operatorname{K}_{\mathbf{X}\mathbf{X}}$ and $\operatorname{K}_{\mathbf{Y}\mathbf{Y}}$ can be identified as the variance matrices of the marginal distributions for $\mathbf{X}$ and $\mathbf{Y}$ respectively.

If $\mathbf{X}$ and $\mathbf{Y}$ are jointly normally distributed, then the conditional distribution for $\mathbf{Y}$ given $\mathbf{X}$ is given by[5]

$\mathbf{Y} \mid \mathbf{X} \sim \mathcal{N}\!\left(\boldsymbol{\mu}_{\mathbf{Y}\mid\mathbf{X}},\, \operatorname{K}_{\mathbf{Y}\mid\mathbf{X}}\right),$

defined by conditional mean

$\boldsymbol{\mu}_{\mathbf{Y}\mid\mathbf{X}} = \boldsymbol{\mu}_{\mathbf{Y}} + \operatorname{K}_{\mathbf{Y}\mathbf{X}}\operatorname{K}_{\mathbf{X}\mathbf{X}}^{-1}\left(\mathbf{x} - \boldsymbol{\mu}_{\mathbf{X}}\right)$

and conditional variance

$\operatorname{K}_{\mathbf{Y}\mid\mathbf{X}} = \operatorname{K}_{\mathbf{Y}\mathbf{Y}} - \operatorname{K}_{\mathbf{Y}\mathbf{X}}\operatorname{K}_{\mathbf{X}\mathbf{X}}^{-1}\operatorname{K}_{\mathbf{X}\mathbf{Y}}.$

The matrix $\operatorname{K}_{\mathbf{Y}\mathbf{X}}\operatorname{K}_{\mathbf{X}\mathbf{X}}^{-1}$ is known as the matrix of regression coefficients, while in linear algebra $\operatorname{K}_{\mathbf{Y}\mid\mathbf{X}}$ is the Schur complement of $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ in $\boldsymbol{\Sigma}$.

The matrix of regression coefficients may often be given in transpose form, $\operatorname{K}_{\mathbf{X}\mathbf{X}}^{-1}\operatorname{K}_{\mathbf{X}\mathbf{Y}}$, suitable for post-multiplying a row vector of explanatory variables rather than pre-multiplying a column vector. In this form they correspond to the coefficients obtained by inverting the matrix of the normal equations of ordinary least squares (OLS).
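A short sketch of the conditional mean and conditional variance formulas, assuming NumPy and hypothetical block matrices:

```python
import numpy as np

# Hypothetical joint covariance blocks for X (2-dimensional) and Y (1-dimensional)
K_XX = np.array([[2.0, 0.3],
                 [0.3, 1.0]])
K_XY = np.array([[0.8],
                 [0.2]])
K_YY = np.array([[1.5]])
mu_X = np.array([1.0, -1.0])
mu_Y = np.array([0.5])

x_obs = np.array([1.5, 0.0])          # an observed value of X

B = K_XY.T @ np.linalg.inv(K_XX)      # matrix of regression coefficients K_YX K_XX^{-1}

cond_mean = mu_Y + B @ (x_obs - mu_X)
cond_var = K_YY - B @ K_XY            # Schur complement of K_XX in the joint covariance
print(cond_mean, cond_var)
```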

Partial covariance matrix

A covariance matrix with all non-zero elements tells us that all the individual random variables are interrelated. This means that the variables are not only directly correlated, but also correlated indirectly via other variables. Often such indirect, common-mode correlations are trivial and uninteresting. They can be suppressed by calculating the partial covariance matrix, that is, the part of the covariance matrix that shows only the interesting part of the correlations.

If two vectors of random variables $\mathbf{X}$ and $\mathbf{Y}$ are correlated via another vector $\mathbf{I}$, the latter correlations are suppressed in a matrix[6]

$\operatorname{K}_{\mathbf{X}\mathbf{Y}\mid\mathbf{I}} = \operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I}) = \operatorname{cov}(\mathbf{X},\mathbf{Y}) - \operatorname{cov}(\mathbf{X},\mathbf{I})\operatorname{cov}(\mathbf{I},\mathbf{I})^{-1}\operatorname{cov}(\mathbf{I},\mathbf{Y}).$

The partial covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{Y}\mid\mathbf{I}}$ is effectively the simple covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{Y}}$ as if the uninteresting random variables $\mathbf{I}$ were held constant.
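A sketch of this partial-covariance computation, assuming NumPy; the data-generating model (a single common-mode variable driving both X and Y) is invented for illustration:

```python
import numpy as np

def sample_cov(A, B):
    """Sample cross-covariance of two data matrices (observations in rows)."""
    A_c = A - A.mean(axis=0)
    B_c = B - B.mean(axis=0)
    return A_c.T @ B_c / (A.shape[0] - 1)

def partial_cov(X, Y, I):
    """pcov(X, Y | I) = cov(X, Y) - cov(X, I) cov(I, I)^{-1} cov(I, Y)."""
    return sample_cov(X, Y) - sample_cov(X, I) @ np.linalg.solve(sample_cov(I, I),
                                                                 sample_cov(I, Y))

# Invented data: a single common-mode variable I drives both X and Y
rng = np.random.default_rng(2)
I = rng.normal(size=(5000, 1))
X = 2.0 * I + 0.5 * rng.normal(size=(5000, 2))
Y = -1.0 * I + 0.5 * rng.normal(size=(5000, 2))

print(sample_cov(X, Y))       # clearly non-zero: correlation induced via I
print(partial_cov(X, Y, I))   # close to zero once I is held constant
```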

Covariance matrix as a parameter of a distribution

If a column vector $\mathbf{X}$ of $n$ possibly correlated random variables is jointly normally distributed, or more generally elliptically distributed, then its probability density function $f(\mathbf{x})$ can be expressed in terms of the covariance matrix $\boldsymbol{\Sigma}$ as follows (shown here for the jointly normal case)[6]

$f(\mathbf{x}) = (2\pi)^{-n/2}\,|\boldsymbol{\Sigma}|^{-1/2}\exp\!\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\mathsf T}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right),$

where $\boldsymbol{\mu} = \operatorname{E}[\mathbf{X}]$ and $|\boldsymbol{\Sigma}|$ is the determinant of $\boldsymbol{\Sigma}$.
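As a check of this density formula in the jointly normal case (assuming NumPy and SciPy; the mean and covariance are arbitrary):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
x = np.array([0.5, 0.5])

n = len(mu)
diff = x - mu
# Density written directly in terms of the covariance matrix Sigma
pdf_manual = ((2 * np.pi) ** (-n / 2)
              * np.linalg.det(Sigma) ** (-0.5)
              * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff)))

print(np.isclose(pdf_manual, multivariate_normal(mean=mu, cov=Sigma).pdf(x)))
```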

Covariance matrix as a linear operator

Applied to one vector, the covariance matrix maps a linear combination $\mathbf{c}$ of the random variables $\mathbf{X}$ onto a vector of covariances with those variables: $\boldsymbol{\Sigma}\mathbf{c} = \operatorname{cov}(\mathbf{X}, \mathbf{c}^{\mathsf T}\mathbf{X})$. Treated as a bilinear form, it yields the covariance between the two linear combinations: $\mathbf{d}^{\mathsf T}\boldsymbol{\Sigma}\mathbf{c} = \operatorname{cov}(\mathbf{d}^{\mathsf T}\mathbf{X}, \mathbf{c}^{\mathsf T}\mathbf{X})$. The variance of a linear combination is then $\mathbf{c}^{\mathsf T}\boldsymbol{\Sigma}\mathbf{c}$, its covariance with itself.

Similarly, the (pseudo-)inverse covariance matrix provides an inner product $\langle \mathbf{c}, \mathbf{d}\rangle = \mathbf{c}^{\mathsf T}\boldsymbol{\Sigma}^{+}\mathbf{d}$, which induces the Mahalanobis distance, a measure of the "unlikelihood" of $\mathbf{c}$.[citation needed]
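A brief sketch of these uses of the covariance matrix as an operator, assuming NumPy with arbitrary $\boldsymbol{\Sigma}$, $\mathbf{c}$ and $\mathbf{d}$:

```python
import numpy as np

Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
c = np.array([1.0, -2.0])
d = np.array([0.5, 1.0])

Sigma_c = Sigma @ c                 # vector of covariances of X with c^T X
cov_cd = d @ Sigma @ c              # covariance between d^T X and c^T X
var_c = c @ Sigma @ c               # variance of the linear combination c^T X

# Mahalanobis length of c induced by the (pseudo-)inverse covariance matrix
mahalanobis = np.sqrt(c @ np.linalg.pinv(Sigma) @ c)
print(Sigma_c, cov_cd, var_c, mahalanobis)
```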

Which matrices are covariance matrices?

From the identity just above, let $\mathbf{b}$ be a $(p \times 1)$ real-valued vector; then

$\operatorname{var}(\mathbf{b}^{\mathsf T}\mathbf{X}) = \mathbf{b}^{\mathsf T}\operatorname{var}(\mathbf{X})\,\mathbf{b},$

which must always be nonnegative, since it is the variance of a real-valued random variable, so a covariance matrix is always a positive-semidefinite matrix.

The above argument can be expanded as follows:

$\mathbf{b}^{\mathsf T}\operatorname{E}\big[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{\mathsf T}\big]\mathbf{b} = \operatorname{E}\big[\mathbf{b}^{\mathsf T}(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{\mathsf T}\mathbf{b}\big] = \operatorname{E}\Big[\big(\mathbf{b}^{\mathsf T}(\mathbf{X}-\operatorname{E}[\mathbf{X}])\big)^{2}\Big] \ge 0,$

where the last inequality follows from the observation that $\mathbf{b}^{\mathsf T}(\mathbf{X}-\operatorname{E}[\mathbf{X}])$ is a scalar.

Conversely, every symmetric positive semi-definite matrix is a covariance matrix. To see this, suppose $M$ is a $p \times p$ symmetric positive-semidefinite matrix. From the finite-dimensional case of the spectral theorem, it follows that $M$ has a nonnegative symmetric square root, which can be denoted by $M^{1/2}$. Let $\mathbf{X}$ be any $p \times 1$ column vector-valued random variable whose covariance matrix is the $p \times p$ identity matrix. Then

$\operatorname{var}(M^{1/2}\mathbf{X}) = M^{1/2}\operatorname{var}(\mathbf{X})\,(M^{1/2})^{\mathsf T} = M.$
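The construction can be sketched numerically (assuming NumPy; the matrix $M$ is arbitrary), computing the symmetric square root from the spectral decomposition used in the argument above:

```python
import numpy as np

# An arbitrary symmetric positive semi-definite matrix to realise as a covariance matrix
M = np.array([[3.0, 1.0, 0.5],
              [1.0, 2.0, 0.2],
              [0.5, 0.2, 1.0]])

# Symmetric square root M^{1/2} from the spectral decomposition M = V diag(w) V^T
w, V = np.linalg.eigh(M)
M_half = V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(3)
X = rng.standard_normal((3, 100_000))   # var(X) is (approximately) the identity matrix
Y = M_half @ X                          # then var(M^{1/2} X) = M

print(np.allclose(np.cov(Y), M, atol=0.05))
```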

Complex random vectors

The variance of a complex scalar-valued random variable $Z$ with expected value $\mu_Z$ is conventionally defined using complex conjugation:

$\operatorname{var}(Z) = \operatorname{E}\big[(Z - \mu_Z)\overline{(Z - \mu_Z)}\big],$

where the complex conjugate of a complex number $z$ is denoted $\overline{z}$; thus the variance of a complex random variable is a real number.

If $\mathbf{Z} = (Z_1,\dots,Z_n)^{\mathsf T}$ is a column vector of complex-valued random variables, then the conjugate transpose $\mathbf{Z}^{\mathsf H}$ is formed by both transposing and conjugating. In the following expression, the product of a vector with its conjugate transpose results in a square matrix called the covariance matrix, as its expectation:[7]: 293 

$\operatorname{K}_{\mathbf{Z}\mathbf{Z}} = \operatorname{cov}[\mathbf{Z},\mathbf{Z}] = \operatorname{E}\big[(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}})(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}})^{\mathsf H}\big].$

The matrix so obtained will be Hermitian positive-semidefinite,[8] with real numbers in the main diagonal and complex numbers off-diagonal.

Properties
  • The covariance matrix is a Hermitian matrix, i.e. $\operatorname{K}_{\mathbf{Z}\mathbf{Z}}^{\mathsf H} = \operatorname{K}_{\mathbf{Z}\mathbf{Z}}$.[1]: 179 
  • The diagonal elements of the covariance matrix are real.[1]: 179 

Pseudo-covariance matrix

For complex random vectors, another kind of second central moment, the pseudo-covariance matrix (also called relation matrix) is defined as follows:

$\operatorname{J}_{\mathbf{Z}\mathbf{Z}} = \operatorname{cov}[\mathbf{Z},\overline{\mathbf{Z}}] = \operatorname{E}\big[(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}})(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}})^{\mathsf T}\big]$

In contrast to the covariance matrix defined above, Hermitian transposition gets replaced by transposition in the definition. Its diagonal elements may be complex valued; it is a complex symmetric matrix.
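A small sketch contrasting the two matrices, assuming NumPy and an invented complex random vector:

```python
import numpy as np

rng = np.random.default_rng(4)
# An invented complex random vector with correlated components
Z = rng.standard_normal((2, 50_000)) + 1j * rng.standard_normal((2, 50_000))
Z[1] += 0.5 * Z[0]

Zc = Z - Z.mean(axis=1, keepdims=True)
K = Zc @ Zc.conj().T / Z.shape[1]    # covariance matrix E[(Z-mu)(Z-mu)^H]
J = Zc @ Zc.T / Z.shape[1]           # pseudo-covariance matrix E[(Z-mu)(Z-mu)^T]

print(np.allclose(K, K.conj().T))    # True: Hermitian, with a real main diagonal
print(np.allclose(J, J.T))           # True: complex symmetric
```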

Estimation

If $\mathbf{X}$ and $\mathbf{Y}$ are centered data matrices of dimension $p \times n$ and $q \times n$ respectively, i.e. with $n$ columns of observations of $p$ and $q$ rows of variables, from which the row means have been subtracted, then, if the row means were estimated from the data, sample covariance matrices $\mathbf{Q}_{\mathbf{X}\mathbf{X}}$ and $\mathbf{Q}_{\mathbf{X}\mathbf{Y}}$ can be defined to be

$\mathbf{Q}_{\mathbf{X}\mathbf{X}} = \frac{1}{n-1}\mathbf{X}\mathbf{X}^{\mathsf T}, \qquad \mathbf{Q}_{\mathbf{X}\mathbf{Y}} = \frac{1}{n-1}\mathbf{X}\mathbf{Y}^{\mathsf T}$

or, if the row means were known a priori,

$\mathbf{Q}_{\mathbf{X}\mathbf{X}} = \frac{1}{n}\mathbf{X}\mathbf{X}^{\mathsf T}, \qquad \mathbf{Q}_{\mathbf{X}\mathbf{Y}} = \frac{1}{n}\mathbf{X}\mathbf{Y}^{\mathsf T}.$
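A sketch of these estimators, assuming NumPy and synthetic data matrices with variables in rows:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
X = rng.standard_normal((3, n))          # p = 3 variables, n observations in columns
Y = rng.standard_normal((2, n))          # q = 2 variables

# Centre the data matrices by subtracting the row means estimated from the data
Xc = X - X.mean(axis=1, keepdims=True)
Yc = Y - Y.mean(axis=1, keepdims=True)

Q_XX = Xc @ Xc.T / (n - 1)               # sample covariance matrix of X
Q_XY = Xc @ Yc.T / (n - 1)               # sample cross-covariance matrix of X and Y

print(np.allclose(Q_XX, np.cov(X)))      # matches numpy's default estimator
```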

These empirical sample covariance matrices are the most straightforward and most often used estimators for the covariance matrices, but other estimators also exist, including regularised or shrinkage estimators, which may have better properties.

Applications

The covariance matrix is a useful tool in many different areas. From it a transformation matrix can be derived, called a whitening transformation, that allows one to completely decorrelate the data[citation needed] or, from a different point of view, to find an optimal basis for representing the data in a compact way[citation needed] (see Rayleigh quotient for a formal proof and additional properties of covariance matrices). This is called principal component analysis (PCA) and the Karhunen–Loève transform (KL-transform).
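One possible whitening transformation built from the eigendecomposition of the sample covariance matrix (a sketch assuming NumPy; Cholesky-based whitening would work equally well):

```python
import numpy as np

rng = np.random.default_rng(6)
# Correlated two-dimensional data, observations in columns
X = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.8], [1.8, 1.0]], size=10_000).T
Xc = X - X.mean(axis=1, keepdims=True)

K = np.cov(Xc)                          # sample covariance matrix
w, V = np.linalg.eigh(K)                # eigenvalues and eigenvectors (principal axes)

W = np.diag(1.0 / np.sqrt(w)) @ V.T     # whitening transformation W = Lambda^{-1/2} V^T
Z = W @ Xc                              # decorrelated data with unit variances

print(np.allclose(np.cov(Z), np.eye(2), atol=0.05))
```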

The covariance matrix plays a key role in financial economics, especially in portfolio theory and its mutual fund separation theorem and in the capital asset pricing model. The matrix of covariances among various assets' returns is used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification.

Use in optimization

The evolution strategy, a particular family of Randomized Search Heuristics, fundamentally relies on a covariance matrix in its mechanism. The characteristic mutation operator draws the update step from a multivariate normal distribution using an evolving covariance matrix. There is a formal proof that the evolution strategy's covariance matrix adapts to the inverse of the Hessian matrix of the search landscape, up to a scalar factor and small random fluctuations (proven for a single-parent strategy and a static model, as the population size increases, relying on the quadratic approximation).[9] Intuitively, this result is supported by the rationale that the optimal covariance distribution can offer mutation steps whose equidensity probability contours match the level sets of the landscape, and so they maximize the progress rate.

Covariance mapping

In covariance mapping the values of the $\operatorname{cov}(\mathbf{X},\mathbf{Y})$ or $\operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I})$ matrix are plotted as a 2-dimensional map. When vectors $\mathbf{X}$ and $\mathbf{Y}$ are discrete random functions, the map shows statistical relations between different regions of the random functions. Statistically independent regions of the functions show up on the map as zero-level flatland, while positive or negative correlations show up, respectively, as hills or valleys.

In practice the column vectors $\mathbf{X}$, $\mathbf{Y}$ and $\mathbf{I}$ are acquired experimentally as rows of $n$ samples, e.g.

$\mathbf{X}_j = \big[X_j(t_1),\, X_j(t_2),\, \dots\big],$

where $X_j(t_i)$ is the $i$-th discrete value in sample $j$ of the random function $X(t)$. The expected values needed in the covariance formula are estimated using the sample mean, e.g.

$\langle \mathbf{X}\rangle = \frac{1}{n}\sum_{j=1}^{n}\mathbf{X}_j,$

and the covariance matrix is estimated by the sample covariance matrix

$\operatorname{cov}(\mathbf{X},\mathbf{Y}) \approx \langle \mathbf{X}\mathbf{Y}^{\mathsf T}\rangle - \langle\mathbf{X}\rangle\langle\mathbf{Y}^{\mathsf T}\rangle,$

where the angular brackets denote sample averaging as before, except that Bessel's correction should be made to avoid bias. Using this estimation the partial covariance matrix can be calculated as

$\operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I}) = \operatorname{cov}(\mathbf{X},\mathbf{Y}) - \operatorname{cov}(\mathbf{X},\mathbf{I})\big(\operatorname{cov}(\mathbf{I},\mathbf{I})\backslash\operatorname{cov}(\mathbf{I},\mathbf{Y})\big),$

where the backslash denotes the left matrix division operator, which bypasses the requirement to invert a matrix and is available in some computational packages such as Matlab.[10]

Figure 1: Construction of a partial covariance map of N2 molecules undergoing Coulomb explosion induced by a free-electron laser.[11] Panels a and b map the two terms of the covariance matrix, which is shown in panel c. Panel d maps common-mode correlations via intensity fluctuations of the laser. Panel e maps the partial covariance matrix that is corrected for the intensity fluctuations. Panel f shows that 10% overcorrection improves the map and makes ion-ion correlations clearly visible. Owing to momentum conservation these correlations appear as lines approximately perpendicular to the autocorrelation line (and to the periodic modulations which are caused by detector ringing).

Fig. 1 illustrates how a partial covariance map is constructed on an example of an experiment performed at the FLASH free-electron laser in Hamburg.[11] The random function $X(t)$ is the time-of-flight spectrum of ions from a Coulomb explosion of nitrogen molecules multiply ionised by a laser pulse. Since only a few hundreds of molecules are ionised at each laser pulse, the single-shot spectra are highly fluctuating. However, collecting a large number of such spectra, $\mathbf{X}_j(t)$, and averaging them over $j$ produces a smooth spectrum $\langle \mathbf{X}(t)\rangle$, which is shown in red at the bottom of Fig. 1. The average spectrum $\langle \mathbf{X}\rangle$ reveals several nitrogen ions in a form of peaks broadened by their kinetic energy, but to find the correlations between the ionisation stages and the ion momenta requires calculating a covariance map.

In the example of Fig. 1 spectra $\mathbf{X}_j$ and $\mathbf{Y}_j$ are the same, except that the range of the time-of-flight $t$ differs. Panel a shows $\langle \mathbf{X}\mathbf{Y}^{\mathsf T}\rangle$, panel b shows $\langle\mathbf{X}\rangle\langle\mathbf{Y}^{\mathsf T}\rangle$ and panel c shows their difference, which is $\operatorname{cov}(\mathbf{X},\mathbf{Y})$ (note a change in the colour scale). Unfortunately, this map is overwhelmed by uninteresting, common-mode correlations induced by laser intensity fluctuating from shot to shot. To suppress such correlations the laser intensity $I_j$ is recorded at every shot, put into $\mathbf{I}$ and $\operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I})$ is calculated as panels d and e show. The suppression of the uninteresting correlations is, however, imperfect because there are other sources of common-mode fluctuations than the laser intensity and in principle all these sources should be monitored in vector $\mathbf{I}$. Yet in practice it is often sufficient to overcompensate the partial covariance correction as panel f shows, where interesting correlations of ion momenta are now clearly visible as straight lines centred on ionisation stages of atomic nitrogen.

Two-dimensional infrared spectroscopy

Two-dimensional infrared spectroscopy employs correlation analysis to obtain 2D spectra of the condensed phase. There are two versions of this analysis: synchronous and asynchronous. Mathematically, the former is expressed in terms of the sample covariance matrix and the technique is equivalent to covariance mapping.[12]

See also

References

  1. Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
  2. William Feller (1971). An Introduction to Probability Theory and Its Applications. Wiley. ISBN 978-0-471-25709-7. Retrieved 10 August 2012.
  3. Wasserman, Larry (2004). All of Statistics: A Concise Course in Statistical Inference. Springer. ISBN 0-387-40272-1.
  4. Taboga, Marco (2010). "Lectures on probability theory and mathematical statistics".
  5. Eaton, Morris L. (1983). Multivariate Statistics: A Vector Space Approach. John Wiley and Sons. pp. 116–117. ISBN 0-471-02776-6.
  6. W J Krzanowski, Principles of Multivariate Analysis (Oxford University Press, New York, 1988), Chap. 14.4; K V Mardia, J T Kent and J M Bibby, Multivariate Analysis (Academic Press, London, 1997), Chap. 6.5.3; T W Anderson, An Introduction to Multivariate Statistical Analysis (Wiley, New York, 2003), 3rd ed., Chaps. 2.5.1 and 4.3.1.
  7. Lapidoth, Amos (2009). A Foundation in Digital Communication. Cambridge University Press. ISBN 978-0-521-19395-5.
  8. Brookes, Mike. "The Matrix Reference Manual".
  9. Shir, O. M.; Yehudayoff, A. (2020). "On the covariance-Hessian relation in evolution strategies". Theoretical Computer Science. 801. Elsevier: 157–174. arXiv:1806.03674. doi:10.1016/j.tcs.2019.09.002.
  10. L J Frasinski, "Covariance mapping techniques", J. Phys. B: At. Mol. Opt. Phys. 49 152004 (2016), open access.
  11. O Kornilov, M Eckstein, M Rosenblatt, C P Schulz, K Motomura, A Rouzée, J Klei, L Foucar, M Siano, A Lübcke, F. Schapper, P Johnsson, D M P Holland, T Schlatholter, T Marchenko, S Düsterer, K Ueda, M J J Vrakking and L J Frasinski, "Coulomb explosion of diatomic molecules in intense XUV fields mapped by partial covariance", J. Phys. B: At. Mol. Opt. Phys. 46 164028 (2013), open access.
  12. I Noda, "Generalized two-dimensional correlation method applicable to infrared, Raman, and other types of spectroscopy", Appl. Spectrosc. 47 1329–36 (1993).

Further reading