In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects related to a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix.

The second-order Cauchy stress tensor T describes the stress experienced by a material at a given point. For any unit vector v, the product T(v) is a vector that quantifies the force per area along the plane perpendicular to v. The accompanying figure shows, for cube faces perpendicular to the basis directions, the corresponding stress vectors along those faces. Because the stress tensor takes one vector as input and gives one vector as output, it is a second-order tensor.

Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, quantum mechanics, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), and general relativity (stress–energy tensor, curvature tensor, ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors".

Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.[1]

Definition


Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction.

As multidimensional arrays


A tensor may be represented as a (potentially multidimensional) array. Just as a vector in an n-dimensional space is represented by a one-dimensional array with n components with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the multidimensional array are known as the components of the tensor. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor T could be denoted $T_{ij}$, where i and j are indices running from 1 to n, or also by $T^i{}_j$. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while $T_{ij}$ and $T^i{}_j$ can both be expressed as n-by-n matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together.

The total number of indices (m) required to identify each component uniquely is equal to the dimension or the number of ways of an array, which is why a tensor is sometimes referred to as an m-dimensional array or an m-way array. The total number of indices is also called the order, degree or rank of a tensor,[2][3][4] although the term "rank" generally has another meaning in the context of matrices and tensors.
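The correspondence between a tensor's order and the shape of its component array can be sketched numerically. A minimal illustration using NumPy (the variable names are ours, for illustration only): the number of axes of the array equals the number of indices, i.e. the order.

```python
import numpy as np

# A tensor's components in a fixed basis form a multidimensional array;
# the number of indices (the array's number of axes) is the tensor's order.
scalar = np.array(5.0)               # order 0: no indices
vector = np.array([1.0, 2.0, 3.0])   # order 1: one index, v_i
matrix = np.eye(3)                   # order 2: two indices, T_ij
cube = np.zeros((3, 3, 3))           # order 3: three indices, T_ijk

for t, order in [(scalar, 0), (vector, 1), (matrix, 2), (cube, 3)]:
    assert t.ndim == order
```

Note that `ndim` here is the order (number of ways) of the array, not the "rank" in the matrix-rank sense mentioned above.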

Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see Covariance and contravariance of vectors), where the new basis vectors $\hat{\mathbf{e}}_i$ are expressed in terms of the old basis vectors $\mathbf{e}_j$ as

$$\hat{\mathbf{e}}_i = \sum_{j=1}^{n} \mathbf{e}_j R^j_i = \mathbf{e}_j R^j_i .$$

Here $R^j_i$ are the entries of the change of basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article.[Note 1] The components $v^i$ of a column vector $\mathbf{v}$ transform with the inverse of the matrix $R$,

$$\hat{v}^i = (R^{-1})^i_j \, v^j ,$$

where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector components transform by the inverse of the change of basis. In contrast, the components $w_i$ of a covector (or row vector) $\mathbf{w}$ transform with the matrix $R$ itself,

$$\hat{w}_i = w_j \, R^j_i .$$

This is called a covariant transformation law, because the covector components transform by the same matrix as the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript).
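The two transformation laws can be checked numerically. In this sketch (names and the random basis change are ours), a vector's components transform by the inverse of the change-of-basis matrix and a covector's by the matrix itself, so their pairing, a scalar, is basis-independent:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(3, 3))      # change-of-basis matrix (assumed invertible)
R_inv = np.linalg.inv(R)

v = np.array([1.0, 2.0, 3.0])    # components of a vector (contravariant)
w = np.array([4.0, 5.0, 6.0])    # components of a covector (covariant)

v_hat = R_inv @ v                # contravariant law: transform by the inverse
w_hat = w @ R                    # covariant law: transform by R itself

# The pairing w_i v^i is a scalar, so it must not depend on the basis.
assert np.isclose(w @ v, w_hat @ v_hat)
```

The cancellation $(wR)(R^{-1}v) = wv$ is exactly the index cancellation described in the invariance discussion below.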

As a simple example, the matrix of a linear operator with respect to a basis is a rectangular array $A$ that transforms under a change of basis matrix $R$ by $\hat{A} = R^{-1} A R$. For the individual matrix entries, this transformation law has the form $\hat{A}^{i'}_{j'} = (R^{-1})^{i'}_i \, A^i_j \, R^j_{j'}$, so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1, 1).

Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:

$$\mathbf{v} = \hat{v}^i \, \hat{\mathbf{e}}_i = \left[ (R^{-1})^i_j \, v^j \right] \left[ \mathbf{e}_k \, R^k_i \right] = \delta^k_j \, v^j \, \mathbf{e}_k = v^k \, \mathbf{e}_k = v^i \, \mathbf{e}_i ,$$

where $\delta^k_j$ is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices (j into k in this example). This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like $v^i \mathbf{e}_i$ can immediately be seen to be geometrically identical in all coordinate systems.

Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components $(A\mathbf{v})^i$ are given by $(A\mathbf{v})^i = A^i_j v^j$. These components transform contravariantly, since

$$\widehat{(A\mathbf{v})}^{i'} = \hat{A}^{i'}_{j'} \, \hat{v}^{j'} = \left[ (R^{-1})^{i'}_i \, A^i_j \, R^j_{j'} \right] \left[ (R^{-1})^{j'}_k \, v^k \right] = (R^{-1})^{i'}_i \, (A\mathbf{v})^i .$$
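This consistency can be verified numerically. In the sketch below (the matrices are arbitrary illustrative data), the operator's matrix is transformed by the type-(1, 1) law and the vector by the contravariant law; applying the transformed operator to the transformed vector gives the transformed output vector:

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(size=(3, 3))    # change-of-basis matrix
R_inv = np.linalg.inv(R)

A = rng.normal(size=(3, 3))    # matrix of a linear operator: type (1, 1)
v = rng.normal(size=3)         # contravariant vector components

# Transformation laws: one contravariant index, one covariant index for A.
A_hat = R_inv @ A @ R
v_hat = R_inv @ v

# The output components A v transform contravariantly, so either basis
# describes the same geometric vector.
assert np.allclose(A_hat @ v_hat, R_inv @ (A @ v))
```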

The transformation law for an order p + q tensor with p contravariant indices and q covariant indices is thus given as

$$\hat{T}^{i'_1, \ldots, i'_p}_{j'_1, \ldots, j'_q} = (R^{-1})^{i'_1}_{i_1} \cdots (R^{-1})^{i'_p}_{i_p} \; T^{i_1, \ldots, i_p}_{j_1, \ldots, j_q} \; R^{j_1}_{j'_1} \cdots R^{j_q}_{j'_q} .$$

Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type (p, q). The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions), p + q in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type (p, q) is also called a (p, q)-tensor for short.
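The general law is conveniently written as a single `einsum` contraction: one factor of the inverse matrix per upper index, one factor of the matrix per lower index. A sketch for a type (1, 2) tensor (the index letters in the subscript string are our own convention):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
R = rng.normal(size=(n, n))
R_inv = np.linalg.inv(R)

# A type (1, 2) tensor: one contravariant index i, two covariant indices j, k.
T = rng.normal(size=(n, n, n))

# General law: R_inv acts on each upper index, R on each lower index.
T_hat = np.einsum('ai,ijk,jb,kc->abc', R_inv, T, R, R)
assert T_hat.shape == (n, n, n)

# Consistency check for type (1, 1): the same recipe reduces to R^-1 A R.
A = rng.normal(size=(n, n))
A_hat = np.einsum('ai,ij,jb->ab', R_inv, A, R)
assert np.allclose(A_hat, R_inv @ A @ R)
```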

This discussion motivates the following formal definition:[5][6]

Definition. A tensor of type (p, q) is an assignment of a multidimensional array

$$T^{i_1 \ldots i_p}_{j_1 \ldots j_q}[\mathbf{f}]$$

to each basis f = (e1, ..., en) of an n-dimensional vector space such that, if we apply the change of basis

$$\mathbf{f} \mapsto \mathbf{f} \cdot R = \left( \mathbf{e}_i R^i_1, \ldots, \mathbf{e}_i R^i_n \right),$$

then the multidimensional array obeys the transformation law

$$T^{i'_1 \ldots i'_p}_{j'_1 \ldots j'_q}[\mathbf{f} \cdot R] = (R^{-1})^{i'_1}_{i_1} \cdots (R^{-1})^{i'_p}_{i_p} \; T^{i_1 \ldots i_p}_{j_1 \ldots j_q}[\mathbf{f}] \; R^{j_1}_{j'_1} \cdots R^{j_q}_{j'_q} .$$

The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.[1]

An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an n-dimensional vector space. If $\mathbf{f} = (\mathbf{f}_1, \ldots, \mathbf{f}_n)$ is an ordered basis, and $R = (R^i_j)$ is an invertible $n \times n$ matrix, then the action is given by

$$\mathbf{f} R = \left( \mathbf{f}_i R^i_1, \ldots, \mathbf{f}_i R^i_n \right).$$

Let F be the set of all ordered bases. Then F is a principal homogeneous space for GL(n). Let W be a vector space and let $\rho$ be a representation of GL(n) on W (that is, a group homomorphism $\rho : \mathrm{GL}(n) \to \mathrm{GL}(W)$). Then a tensor of type $\rho$ is an equivariant map $T : F \to W$. Equivariance here means that

$$T(\mathbf{f} R) = \rho(R^{-1}) \, T(\mathbf{f}).$$

When $\rho$ is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds,[7] and readily generalizes to other groups.[5]

As multilinear maps


A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space V, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold.[8] In this approach, a type (p, q) tensor T is defined as a multilinear map,

$$T : \underbrace{V^* \times \cdots \times V^*}_{p \text{ copies}} \times \underbrace{V \times \cdots \times V}_{q \text{ copies}} \to \mathbb{R},$$

where V∗ is the corresponding dual space of covectors, which is linear in each of its arguments. The above assumes V is a vector space over the real numbers, ℝ. More generally, V can be taken over any field F (e.g. the complex numbers), with F replacing ℝ as the codomain of the multilinear maps.

By applying a multilinear map T of type (p, q) to a basis {ej} for V and a canonical cobasis {εi} for V∗,

$$T^{i_1 \ldots i_p}_{j_1 \ldots j_q} \equiv T\left( \boldsymbol{\varepsilon}^{i_1}, \ldots, \boldsymbol{\varepsilon}^{i_p}, \mathbf{e}_{j_1}, \ldots, \mathbf{e}_{j_q} \right),$$

a (p + q)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus form a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
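Evaluating a multilinear map on basis vectors to obtain components, and checking that those components obey the transformation law, can be sketched for a bilinear form, i.e. a (0, 2) tensor (the matrix `g` and the random basis are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
g = rng.normal(size=(n, n))      # a bilinear form T(u, v) = u^T g v: a (0, 2) tensor
T = lambda u, v: u @ g @ v

# Evaluating the multilinear map on basis vectors recovers its component array.
e = np.eye(n)                    # standard basis: columns e[:, j]
components = np.array([[T(e[:, i], e[:, j]) for j in range(n)] for i in range(n)])
assert np.allclose(components, g)

# Components with respect to a new basis f_j = sum_i R[i, j] e_i ...
R = rng.normal(size=(n, n))
new_components = np.array([[T(R[:, i], R[:, j]) for j in range(n)] for i in range(n)])
# ... satisfy the covariant transformation law in both indices.
assert np.allclose(new_components, R.T @ g @ R)
```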

In viewing a tensor as a multilinear map, it is conventional to identify the double dual V∗∗ of the vector space V, i.e., the space of linear functionals on the dual vector space V∗, with the vector space V. There is always a natural linear map from V to its double dual, given by evaluating a linear form in V∗ against a vector in V. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify V with its double dual.

Using tensor products


For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property.

A type (p, q) tensor is defined in this context as an element of the tensor product of vector spaces,[9][10]

$$T \in \underbrace{V \otimes \cdots \otimes V}_{p \text{ copies}} \otimes \underbrace{V^* \otimes \cdots \otimes V^*}_{q \text{ copies}} .$$

A basis $v_i$ of V and basis $w_j$ of W naturally induce a basis $v_i \otimes w_j$ of the tensor product V ⊗ W. The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {ei} for V and its dual basis {εj}, i.e.

$$T = T^{i_1 \ldots i_p}_{j_1 \ldots j_q} \; \mathbf{e}_{i_1} \otimes \cdots \otimes \mathbf{e}_{i_p} \otimes \boldsymbol{\varepsilon}^{j_1} \otimes \cdots \otimes \boldsymbol{\varepsilon}^{j_q} .$$

Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (p, q) tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps.

This one-to-one correspondence can be achieved in the following way, because in the finite-dimensional case there exists a canonical isomorphism between a vector space and its double dual:

$$\underbrace{V \otimes \cdots \otimes V}_{p} \otimes \underbrace{V^* \otimes \cdots \otimes V^*}_{q} \cong \operatorname{Hom}\!\Bigl( \underbrace{V^* \otimes \cdots \otimes V^*}_{p} \otimes \underbrace{V \otimes \cdots \otimes V}_{q}, \; \mathbb{R} \Bigr).$$

The last step uses the universal property of the tensor product: there is a one-to-one correspondence between linear maps from a tensor product of spaces and multilinear maps from the Cartesian product of those same spaces.[11]

Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual, as above.

Tensors in infinite dimensions


This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic.[Note 2] Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves.[12] For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product). In some applications, it is the tensor product of Hilbert spaces that is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as a symmetric monoidal category that encodes their most important properties, rather than the specific models of those categories.[13]

Tensor fields


In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor.[1]

In this context, a coordinate basis is often chosen for the tangent vector space. The transformation law may then be expressed in terms of partial derivatives of the coordinate functions

$$\bar{x}^i \left( x^1, \ldots, x^n \right),$$

defining a coordinate transformation,[1]

$$\hat{T}^{i'_1 \ldots i'_p}_{j'_1 \ldots j'_q}\left( \bar{x}^1, \ldots, \bar{x}^n \right) = \frac{\partial \bar{x}^{i'_1}}{\partial x^{i_1}} \cdots \frac{\partial \bar{x}^{i'_p}}{\partial x^{i_p}} \; \frac{\partial x^{j_1}}{\partial \bar{x}^{j'_1}} \cdots \frac{\partial x^{j_q}}{\partial \bar{x}^{j'_q}} \; T^{i_1 \ldots i_p}_{j_1 \ldots j_q}\left( x^1, \ldots, x^n \right).$$

History


The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century.[14] The word "tensor" itself was introduced in 1846 by William Rowan Hamilton[15] to describe something different from what is now meant by a tensor.[Note 3] Gibbs introduced dyadics and polyadic algebra, which are also tensors in the modern sense.[16] The contemporary usage was introduced by Woldemar Voigt in 1898.[17]

Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented in 1892.[18] It was made accessible to many mathematicians by the publication of Ricci-Curbastro and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications).[19] In Ricci's notation, he refers to "systems" with covariant and contravariant components, which are known as tensor fields in the modern sense.[16]

In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Albert Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann.[20] Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect:

I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot.

— Albert Einstein[21]

Tensors and tensor fields were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics, and Hassler Whitney popularized the tensor product.[16]

From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem).[22] Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic.[23] Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s.[24]

Examples


An elementary example of a mapping describable as a tensor is the dot product, which maps two vectors to a scalar. A more complex example is the Cauchy stress tensor T, which takes a directional unit vector v as input and maps it to the stress vector T(v), which is the force (per unit area) exerted by material on the negative side of the plane orthogonal to v against the material on the positive side of the plane, thus expressing a relationship between these two vectors. The cross product, where two vectors are mapped to a third one, is strictly speaking not a tensor because it changes its sign under those transformations that change the orientation of the coordinate system. The totally anti-symmetric symbol $\varepsilon_{ijk}$ nevertheless allows a convenient handling of the cross product in equally oriented three-dimensional coordinate systems.

This table shows important examples of tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of the table, M denotes the dimensionality of the underlying vector space or manifold because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor.

Example tensors on vector spaces and tensor fields on manifolds, by type (n, m):

| n \ m | 0 | 1 | 2 | 3 | ⋯ | M |
|---|---|---|---|---|---|---|
| 0 | Scalar, e.g. scalar curvature | Covector, linear functional, 1-form, e.g. dipole moment, gradient of a scalar field | Bilinear form, e.g. inner product, quadrupole moment, metric tensor, Ricci curvature, 2-form, symplectic form | 3-form, e.g. octupole moment | ⋯ | M-form, i.e. volume form |
| 1 | Euclidean vector | Linear transformation,[25] Kronecker delta | E.g. cross product in three dimensions | E.g. Riemann curvature tensor | | |
| 2 | Inverse metric tensor, bivector, e.g. Poisson structure | | E.g. elasticity tensor | | | |
| N | Multivector | | | | | |

Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; this corresponds to moving diagonally down and to the left on the table. Symmetrically, lowering an index corresponds to moving diagonally up and to the right on the table. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor; this corresponds to moving diagonally up and to the left on the table.

Figure: orientation defined by an ordered set of vectors; reversed orientation corresponds to negating the exterior product. Geometric interpretation of grade-n elements in a real exterior algebra for n = 0 (signed point), 1 (directed line segment, or vector), 2 (oriented plane element), 3 (oriented volume). The exterior product of n vectors can be visualized as any n-dimensional shape (e.g. n-parallelotope, n-ellipsoid), with magnitude (hypervolume) and orientation defined by that on its (n − 1)-dimensional boundary and on which side the interior is.[26][27]

Properties


Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of numerical values with respect to this specific basis. Changing the basis transforms the values in the array in a characteristic way that allows one to define tensors as objects adhering to this transformational behavior. For example, there are invariants of tensors that must be preserved under any change of the basis, thereby making only certain multidimensional arrays of numbers a tensor. Compare this to the array representing $\varepsilon_{ijk}$ not being a tensor, for the sign change under transformations changing the orientation.

Because the components of vectors and their duals transform differently under the change of their dual bases, there is a covariant and/or contravariant transformation law that relates the arrays, which represent the tensor with respect to one basis and that with respect to the other one. The numbers of, respectively, vectors: n (contravariant indices) and dual vectors: m (covariant indices) in the input and output of a tensor determine the type (or valence) of the tensor, a pair of natural numbers (n, m), which determine the precise form of the transformation law. The order of a tensor is the sum of these two numbers.

The order (also degree or rank) of a tensor is thus the sum of the orders of its arguments plus the order of the resulting tensor. This is also the dimensionality of the array of numbers needed to represent the tensor with respect to a specific basis, or equivalently, the number of indices needed to label each component in that array. For example, in a fixed basis, a standard linear map that maps a vector to a vector is represented by a matrix (a 2-dimensional array), and therefore is a 2nd-order tensor. A simple vector can be represented as a 1-dimensional array, and is therefore a 1st-order tensor. Scalars are simple numbers and are thus 0th-order tensors. This way the tensor representing the scalar product, taking two vectors and resulting in a scalar, has order 2 + 0 = 2, the same as the stress tensor, taking one vector and returning another, 1 + 1 = 2. The $\varepsilon$-symbol, mapping two vectors to one vector, would have order 2 + 1 = 3.

The collection of tensors on a vector space and its dual forms a tensor algebra, which allows products of arbitrary tensors. Simple applications of tensors of order 2, which can be represented as a square matrix, can be solved by clever arrangement of transposed vectors and by applying the rules of matrix multiplication, but the tensor product should not be confused with this.

Notation


There are several notational systems that are used to describe tensors and perform calculations involving them.

Ricci calculus


Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives.

Einstein summation convention


The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way.
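NumPy's `einsum` implements exactly this convention: each repeated index letter in the subscript string is summed over. A brief sketch with illustrative arrays:

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)
v = np.array([1.0, 2.0, 3.0])

# "A_ij v_j": the repeated index j is summed -- a matrix-vector product.
assert np.allclose(np.einsum('ij,j->i', A, v), A @ v)

# "A_ii": a repeated index on a single tensor sums the diagonal -- the trace.
assert np.isclose(np.einsum('ii->', A), np.trace(A))

# Several distinct summed pairs at once, "A_ij B_jk C_ki": a scalar.
B, C = A + 1.0, A + 2.0
assert np.isclose(np.einsum('ij,jk,ki->', A, B, C), np.trace(A @ B @ C))
```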

Penrose graphical notation


Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices.

Abstract index notation


The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation.

Component-free notation


A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces.

Operations


There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of different type.

Tensor product


The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.,

$$(S \otimes T)(v_1, \ldots, v_n, v_{n+1}, \ldots, v_{n+m}) = S(v_1, \ldots, v_n) \, T(v_{n+1}, \ldots, v_{n+m}),$$

which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e.,

$$(S \otimes T)^{i_1 \ldots i_l \, i_{l+1} \ldots i_{l+n}}_{j_1 \ldots j_k \, j_{k+1} \ldots j_{k+m}} = S^{i_1 \ldots i_l}_{j_1 \ldots j_k} \, T^{i_{l+1} \ldots i_{l+n}}_{j_{k+1} \ldots j_{k+m}} .$$

If S is of type (l, k) and T is of type (n, m), then the tensor product S ⊗ T has type (l + n, k + m).
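On component arrays, the tensor product is the outer product: every component of one array multiplies every component of the other, and the orders add. A sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
S = rng.normal(size=(3, 3))        # an order-2 tensor
T = rng.normal(size=3)             # an order-1 tensor

# Outer (tensor) product: components multiply pairwise, orders add.
ST = np.tensordot(S, T, axes=0)    # (S ⊗ T)_ijk = S_ij T_k
assert ST.shape == (3, 3, 3)
assert ST.ndim == S.ndim + T.ndim
assert np.isclose(ST[1, 2, 0], S[1, 2] * T[0])
```

Here `np.tensordot(..., axes=0)` performs no contraction at all, which is precisely the outer product.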

Contraction


Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor $T^i_j$ can be contracted to a scalar through $T^i_i$, where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace.
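Contraction is a sum over a matched upper/lower index pair, which `einsum` expresses by repeating an index letter. A sketch (arrays are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(3, 3))      # components of a (1, 1)-tensor T^i_j

# Contracting the upper index with the lower index sums T^i_i: the trace.
assert np.isclose(np.einsum('ii->', A), np.trace(A))

# Contracting a (1, 2)-tensor T^i_jk on i and j leaves one free index:
# the order drops by two, from 3 to 1.
T = rng.normal(size=(3, 3, 3))
c = np.einsum('iik->k', T)
assert c.shape == (3,)
assert np.isclose(c[0], sum(T[i, i, 0] for i in range(3)))
```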

The contraction is often used in conjunction with the tensor product to contract an index from each tensor.

The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space V with the space V∗ by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V∗ to a factor from V. For example, a tensor $T \in V \otimes V \otimes V^*$ can be written as a linear combination

$$T = v_1 \otimes w_1 \otimes \alpha_1 + v_2 \otimes w_2 \otimes \alpha_2 + \cdots + v_N \otimes w_N \otimes \alpha_N .$$

The contraction of T on the first and last slots is then the vector

$$\alpha_1(v_1) \, w_1 + \alpha_2(v_2) \, w_2 + \cdots + \alpha_N(v_N) \, w_N .$$

In a vector space with an inner product (also known as a metric) g, the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (2, 0)-tensor $X^{ij}$ can be contracted to a scalar through $X^{ij} g_{ij}$ (yet again assuming the summation convention).

Raising or lowering an index


When a vector space is equipped with a nondegenerate bilinear form (or metric tensor as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with lower index generally shown in the same position of the contracted upper index. This operation is quite graphically known as lowering an index.

Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor.
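That raising undoes lowering follows from the metric and its inverse being matrix inverses; a numerical sketch (the metric `g` here is an arbitrary symmetric positive-definite example):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
M = rng.normal(size=(n, n))
g = M @ M.T + n * np.eye(n)      # a symmetric, positive-definite metric g_ij
g_inv = np.linalg.inv(g)         # inverse metric g^ij

v = rng.normal(size=n)           # contravariant components v^i

# Lowering: v_i = g_ij v^j.  Raising with g^ij inverts the operation.
v_low = np.einsum('ij,j->i', g, v)
v_up = np.einsum('ij,j->i', g_inv, v_low)
assert np.allclose(v_up, v)
```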

Applications


Continuum mechanics


Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid[28] are described by a tensor field. The stress tensor and strain tensor are both second-order tensor fields, and are related in a general linear elastic material by a fourth-order elasticity tensor field. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed.

If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point.

Other examples from physics


Common applications include tensors already mentioned above: the electromagnetic tensor and Maxwell stress tensor in electrodynamics, permittivity and magnetic susceptibility in materials, the moment of inertia tensor in mechanics, and the stress–energy and curvature tensors in general relativity.

Computer vision and optics


The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do however capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix.

The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:

$$\frac{P_i}{\varepsilon_0} = \chi^{(1)}_{ij} E_j + \chi^{(2)}_{ijk} E_j E_k + \chi^{(3)}_{ijk\ell} E_j E_k E_\ell + \cdots .$$

Here $\chi^{(1)}$ is the linear susceptibility, $\chi^{(2)}$ gives the Pockels effect and second harmonic generation, and $\chi^{(3)}$ gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter.

Machine learning


The properties of tensors, especially tensor decomposition, have enabled their use in machine learning to embed higher dimensional data in artificial neural networks. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is usually regarded as a numerical quantity in a fixed basis, and the dimension of the spaces along the different axes of the tensor need not be the same.

Generalizations


Tensor products of vector spaces


The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called "tensors". For example, an element of the tensor product space V ⊗ W is a second-order "tensor" in this more general sense,[29] and an order-d tensor may likewise be defined as an element of a tensor product of d different vector spaces.[30] A type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring.

Tensors in infinite dimensions


The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces.[31] Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual.[32] Tensors thus live naturally on Banach manifolds[33] and Fréchet manifolds.

Tensor densities


Suppose that a homogeneous medium fills R3, so that the density of the medium is described by a single scalar value ρ in kg⋅m−3. The mass, in kg, of a region Ω is obtained by multiplying ρ by the volume of the region Ω, or equivalently integrating the constant ρ over the region:

$$m = \int_\Omega \rho \, dx \, dy \, dz,$$

where the Cartesian coordinates x, y, z are measured in m. If the units of length are changed into cm, then the numerical values of the coordinate functions must be rescaled by a factor of 100:

$$\bar{x} = 100x, \quad \bar{y} = 100y, \quad \bar{z} = 100z.$$

The numerical value of the density $\bar{\rho}$ must then also transform by 100−3 m3/cm3 to compensate, so that the numerical value of the mass in kg is still given by the integral of $\bar{\rho} \, d\bar{x} \, d\bar{y} \, d\bar{z}$. Thus $\bar{\rho} = 100^{-3}\rho$ (in units of kg⋅cm−3).

More generally, if the Cartesian coordinates x, y, z undergo a linear transformation, then the numerical value of the density ρ must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, ρ is a function of the variables x, y, z (a scalar field), and under a curvilinear change of coordinates, it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold.
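The scalar-density transformation can be checked numerically. This sketch (a constant hypothetical density over a unit-cube region) verifies that rescaling the coordinates by a linear map A while dividing the density by |det A| leaves the mass integral unchanged:

```python
import numpy as np

RHO = 7.0                               # density in kg/m^3 (hypothetical value)
A = np.diag([100.0, 100.0, 100.0])      # m -> cm linear change of coordinates
det_a = abs(np.linalg.det(A))           # volume scale factor: 100**3

mass_m = RHO * 1.0                      # integral of RHO over the unit cube (1 m^3)
rho_bar = RHO / det_a                   # density transforms by 1/|det A|
mass_cm = rho_bar * det_a * 1.0         # same region, now of volume |det A| in cm^3

# mass_m == mass_cm: the mass integral is invariant under the coordinate change.
```

The same cancellation between the Jacobian determinant of the volume element and the reciprocal factor in the density is what the change of variables formula guarantees in general.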

A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition:[34]

$$\bar{T}^{i_1\cdots i_p}_{j_1\cdots j_q} = \left|\det\frac{\partial \bar{x}}{\partial x}\right|^{-w} \frac{\partial \bar{x}^{i_1}}{\partial x^{k_1}} \cdots \frac{\partial x^{\ell_q}}{\partial \bar{x}^{j_q}} \, T^{k_1\cdots k_p}_{\ell_1\cdots \ell_q}.$$

Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor.[35][36] An example of a tensor density is the current density of electromagnetism.

Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations come from the logarithmic representation of the general linear group, a reducible but not semisimple representation,[37] consisting of an (x, y) ∈ R2 with the transformation law

$$(x, y) \mapsto (x + y \log\left|\det A\right|,\; y).$$

Geometric objects


The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or other transformations within some class, such as local diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes.[38] Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles.[39][40]

Spinors


When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1.[41] A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.[42][43]
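The sign can be seen concretely in the spin-1/2 representation of rotations: a rotation by 2π acts as the identity on ordinary vectors but as −1 on spinors. A small numerical sketch:

```python
import numpy as np

theta = 2 * np.pi                       # one full rotation about the z-axis
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli matrix

# Spinor (spin-1/2) representation: U = exp(-i * theta * sigma_z / 2).
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_z

# Vector representation: the ordinary 3x3 rotation matrix about z.
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

# Rz is the identity, but U is minus the identity: the extra +/-1 invariant.
```

Only after a rotation by 4π does the spinor representation return to the identity, mirroring the plate trick.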

Spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.

See also


Foundational


Applications


Explanatory notes

  1. ^ The Einstein summation convention, in brief, requires the sum to be taken over all values of the index whenever the same symbol appears as a subscript and superscript in the same term. For example, under this convention $B_i C^i = B_1 C^1 + B_2 C^2 + \cdots + B_n C^n$.
  2. ^ The double duality isomorphism, for instance, is used to identify V with the double dual space V∗∗, which consists of multilinear forms of degree one on V∗. It is typical in linear algebra to identify spaces that are naturally isomorphic, treating them as the same space.
  3. ^ Namely, the norm operation in a vector space.

References


Specific

  1. ^ Kline, Morris (1990). Mathematical Thought From Ancient to Modern Times. Vol. 3. Oxford University Press. ISBN 978-0-19-506137-6.
  2. ^ De Lathauwer, Lieven; De Moor, Bart; Vandewalle, Joos (2000). "A Multilinear Singular Value Decomposition" (PDF). SIAM J. Matrix Anal. Appl. 21 (4): 1253–1278. doi:10.1137/S0895479896305696. S2CID 14344372.
  3. ^ Vasilescu, M.A.O.; Terzopoulos, D. (2002). "Multilinear Analysis of Image Ensembles: TensorFaces" (PDF). Computer Vision — ECCV 2002. Lecture Notes in Computer Science. Vol. 2350. pp. 447–460. doi:10.1007/3-540-47969-4_30. ISBN 978-3-540-43745-1. S2CID 12793247. Archived from the original (PDF) on 2022-12-29. Retrieved 2022-12-29.
  4. ^ Kolda, Tamara; Bader, Brett (2009). "Tensor Decompositions and Applications" (PDF). SIAM Review. 51 (3): 455–500. Bibcode:2009SIAMR..51..455K. doi:10.1137/07070111X. S2CID 16074195.
  5. ^ Sharpe, R.W. (2000). Differential Geometry: Cartan's Generalization of Klein's Erlangen Program. Springer. p. 194. ISBN 978-0-387-94732-7.
  6. ^ Schouten, Jan Arnoldus (1954), "Chapter II", Tensor analysis for physicists, Courier Corporation, ISBN 978-0-486-65582-6
  7. ^ Kobayashi, Shoshichi; Nomizu, Katsumi (1996), Foundations of Differential Geometry, vol. 1 (New ed.), Wiley Interscience, ISBN 978-0-471-15733-5
  8. ^ Lee, John (2000), Introduction to smooth manifolds, Springer, p. 173, ISBN 978-0-387-95495-0
  9. ^ Dodson, C.T.J.; Poston, T. (2013) [1991]. Tensor geometry: The Geometric Viewpoint and Its Uses. Graduate Texts in Mathematics. Vol. 130 (2nd ed.). Springer. p. 105. ISBN 9783642105142.
  10. ^ "Affine tensor", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
  11. ^ "Why are Tensors (Vectors of the form a⊗b...⊗z) multilinear maps?". Mathematics Stackexchange. June 5, 2021.
  12. ^ Bourbaki, N. (1998). "3". Algebra I: Chapters 1-3. Springer. ISBN 978-3-540-64243-5. where the case of finitely generated projective modules is treated. The global sections of a vector bundle over a compact space form a projective module over the ring of smooth functions. All statements for coherent sheaves are true locally.
  13. ^ Joyal, André; Street, Ross (1993), "Braided tensor categories", Advances in Mathematics, 102: 20–78, doi:10.1006/aima.1993.1055
  14. ^ Reich, Karin (1994). Die Entwicklung des Tensorkalküls [The development of tensor calculus]. Science networks historical studies. Vol. 11. Birkhäuser. ISBN 978-3-7643-2814-6. OCLC 31468174.
  15. ^ Hamilton, William Rowan (1854–1855). Wilkins, David R. (ed.). "On some Extensions of Quaternions" (PDF). Philosophical Magazine (7–9): 492–9, 125–137, 261–9, 46–51, 280–290. ISSN 0302-7597. From p. 498: "And if we agree to call the square root (taken with a suitable sign) of this scalar product of two conjugate polynomes, P and KP, the common TENSOR of each, ..."
  16. ^ Guo, Hongyu (2021-06-16). What Are Tensors Exactly?. World Scientific. ISBN 978-981-12-4103-1.
  17. ^ Voigt, Woldemar (1898). Die fundamentalen physikalischen Eigenschaften der Krystalle in elementarer Darstellung [The fundamental physical properties of crystals in an elementary presentation]. Von Veit. pp. 20–. Wir wollen uns deshalb nur darauf stützen, dass Zustände der geschilderten Art bei Spannungen und Dehnungen nicht starrer Körper auftreten, und sie deshalb tensorielle, die für sie charakteristischen physikalischen Grössen aber Tensoren nennen. [We therefore want [our presentation] to be based only on [the assumption that] conditions of the type described occur during stresses and strains of non-rigid bodies, and therefore call them "tensorial" but call the characteristic physical quantities for them "tensors".]
  18. ^ Ricci Curbastro, G. (1892). "Résumé de quelques travaux sur les systèmes variables de fonctions associés à une forme différentielle quadratique". Bulletin des Sciences Mathématiques. 2 (16): 167–189.
  19. ^ Ricci & Levi-Civita 1900.
  20. ^ Pais, Abraham (2005). Subtle Is the Lord: The Science and the Life of Albert Einstein. Oxford University Press. ISBN 978-0-19-280672-7.
  21. ^ Goodstein, Judith R. (1982). "The Italian Mathematicians of Relativity". Centaurus. 26 (3): 241–261. Bibcode:1982Cent...26..241G. doi:10.1111/j.1600-0498.1982.tb00665.x.
  22. ^ Spanier, Edwin H. (2012). Algebraic Topology. Springer. p. 227. ISBN 978-1-4684-9322-1. the Künneth formula expressing the homology of the tensor product...
  23. ^ Hungerford, Thomas W. (2003). Algebra. Springer. p. 168. ISBN 978-0-387-90518-1. ...the classification (up to isomorphism) of modules over an arbitrary ring is quite difficult...
  24. ^ MacLane, Saunders (2013). Categories for the Working Mathematician. Springer. p. 4. ISBN 978-1-4612-9839-7. ...for example the monoid M... in the category of abelian groups, × is replaced by the usual tensor product...
  25. ^ Bamberg, Paul; Sternberg, Shlomo (1991). A Course in Mathematics for Students of Physics. Vol. 2. Cambridge University Press. p. 669. ISBN 978-0-521-40650-5.
  26. ^ Penrose, R. (2007). The Road to Reality. Vintage. ISBN 978-0-679-77631-4.
  27. ^ Wheeler, J.A.; Misner, C.; Thorne, K.S. (1973). Gravitation. W.H. Freeman. p. 83. ISBN 978-0-7167-0344-0.
  28. ^ Schobeiri, Meinhard T. (2021). "Vector and Tensor Analysis, Applications to Fluid Mechanics". Fluid Mechanics for Engineers. Springer. pp. 11–29.
  29. ^ Maia, M. D. (2011). Geometry of the Fundamental Interactions: On Riemann's Legacy to High Energy Physics and Cosmology. Springer. p. 48. ISBN 978-1-4419-8273-5.
  30. ^ Hogben, Leslie, ed. (2013). Handbook of Linear Algebra (2nd ed.). CRC Press. pp. 15–7. ISBN 978-1-4665-0729-6.
  31. ^ Segal, I. E. (January 1956). "Tensor Algebras Over Hilbert Spaces. I". Transactions of the American Mathematical Society. 81 (1): 106–134. doi:10.2307/1992855. JSTOR 1992855.
  32. ^ Abraham, Ralph; Marsden, Jerrold E.; Ratiu, Tudor S. (February 1988). "5. Tensors". Manifolds, Tensor Analysis and Applications. Applied Mathematical Sciences. Vol. 75 (2nd ed.). Springer. pp. 338–9. ISBN 978-0-387-96790-5. OCLC 18562688. Elements of T^r_s are called tensors on E, [...].
  33. ^ Lang, Serge (1972). Differential manifolds. Addison-Wesley. ISBN 978-0-201-04166-8.
  34. ^ Schouten, Jan Arnoldus, "§II.8: Densities", Tensor analysis for physicists
  35. ^ McConnell, A.J. (2014) [1957]. Applications of tensor analysis. Dover. p. 28. ISBN 9780486145020.
  36. ^ Kay 1988, p. 27.
  37. ^ Olver, Peter (1995), Equivalence, invariants, and symmetry, Cambridge University Press, p. 77, ISBN 9780521478113
  38. ^ Haantjes, J.; Laman, G. (1953). "On the definition of geometric objects. I". Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen: Series A: Mathematical Sciences. 56 (3): 208–215.
  39. ^ Nijenhuis, Albert (1960), "Geometric aspects of formal differential operations on tensor fields" (PDF), Proc. Internat. Congress Math. (Edinburgh, 1958), Cambridge University Press, pp. 463–9, archived from the original (PDF) on 2017-10-27, retrieved 2017-10-26.
  40. ^ Salviori, Sarah (1972), "On the theory of geometric objects", Journal of Differential Geometry, 7 (1–2): 257–278, doi:10.4310/jdg/1214430830.
  41. ^ Penrose, Roger (2005). The road to reality: a complete guide to the laws of our universe. Knopf. pp. 203–206.
  42. ^ Meinrenken, E. (2013). "The spin representation". Clifford Algebras and Lie Theory. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge / A Series of Modern Surveys in Mathematics. Vol. 58. Springer. pp. 49–85. doi:10.1007/978-3-642-36216-3_3. ISBN 978-3-642-36215-6.
  43. ^ Dong, S. H. (2011), "2. Special Orthogonal Group SO(N)", Wave Equations in Higher Dimensions, Springer, pp. 13–38

General
