Tensor

A tensor is a mathematical object from algebra and differential geometry. The term was originally introduced in physics and only later given a precise mathematical meaning. Tensor analysis remains an important tool in the physical sciences and in engineering. A tensor is a multilinear map, that is, a map that is linear in each of its arguments.

Intuitively, although this is mathematically imprecise, one can picture a tensor as a multidimensional array: unlike the single column of components of a vector, or the rows and columns of a matrix, the components of a tensor may carry more than one or two indices.

  • A number (scalar) is a tensor of order 0.
  • A column or row vector is a tensor of order 1 (first rank).
  • A matrix is a tensor of order 2.
  • The Levi-Civita symbol in n dimensions is an example of a tensor of order n.

For example, the mechanical stress tensor in physics is a tensor of second order: a single number (the magnitude of the stress) or a single vector (a principal stress direction) is not always sufficient to describe the stress state of a body. Since a matrix can be viewed as a linear map, the stress tensor can be regarded as a matrix that, for a given direction, yields the stress acting in that direction.
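
A minimal illustration of this last remark, using the standard Cauchy stress relation (not spelled out in the text above): the traction vector t on a surface with unit normal n is obtained by applying the stress tensor to n,

$$t_i \;=\; \sigma_{ij}\, n_j .$$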


Introduction

History of the word and the concept

The word tensor (from Latin tendo, "I stretch") was introduced into mathematics in the 1840s by William Rowan Hamilton; he used it for the absolute value of his quaternions, which is not a tensor in the modern sense.

James Clerk Maxwell, who carried the stress tensor over from elasticity theory into electrodynamics, does not yet seem to have called it by that name himself.

In its modern sense, as a generalization of scalar, vector, and matrix, the word tensor was first introduced by Woldemar Voigt in his book on the fundamental physical properties of crystals in an elementary presentation (Leipzig, 1898).

Under the title of absolute differential geometry, Gregorio Ricci-Curbastro and his student Tullio Levi-Civita developed the tensor calculus on Riemannian manifolds around 1890; they made their results accessible to a wider professional audience in 1900 with the book Calcolo differenziale assoluto, which was soon translated into other languages and from which Albert Einstein acquired the mathematical foundations he needed to formulate the general theory of relativity. Einstein himself coined the term tensor analysis in 1916, and his theory contributed significantly to making the tensor calculus widely known; he also introduced the Einstein summation convention, according to which repeated indices are summed over with the summation sign omitted.

Different points of view

The term tensor is used both in physics and in mathematics. In mathematics, the object is studied mainly in algebra and in differential geometry, where a coordinate-free notation is preferred; in applications such as physics, the index notation for tensors is mostly used instead. Moreover, physics frequently deals with tensor fields, which are often simply called tensors. A tensor field is a map that assigns a tensor to each point of space; many physical field theories are formulated in terms of tensor fields. The most prominent example is the general theory of relativity.

Einstein summation convention

In tensor analysis (a branch of differential geometry) and in physics in particular, the Einstein summation convention is popular. It shortens the notation for tensors: the convention states that the summation sign may be omitted, and that summation is carried out automatically over any index that appears once as an upper and once as a lower index. A simple example is matrix multiplication; the component representation of the product of two matrices is sketched below.

With the Einstein summation convention, one simply omits the summation sign in this formula.
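
A minimal sketch of the two formulas, with component symbols a, b, c for the two factors and their product chosen here for illustration:

$$c^i{}_k \;=\; \sum_{j} a^i{}_j\, b^j{}_k, \qquad\text{and, with the summation convention,}\qquad c^i{}_k \;=\; a^i{}_j\, b^j{}_k .$$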

Co- and contravariance

In the context of tensors, the terms covariant and contravariant refer to the coordinate representations of vectors, linear forms and higher-order tensors. They describe how such coordinate representations behave under a change of basis in the underlying vector space.

If one fixes a basis in an n-dimensional vector space, every vector of this space can be represented by a tuple of numbers, its coordinates. (Here and in the following the Einstein summation convention is used.) If one passes to another basis, the vector itself does not change, but its coordinates with respect to the new basis will be different. Concretely: if the new basis is expressed in terms of the old one, the new coordinates are obtained by comparing the two expansions of the same vector, as sketched below.
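
A minimal sketch of that comparison, assuming the notation e_i for the old basis, a for the basis-change matrix, and v for the coordinates (all chosen here for illustration):

$$v \;=\; v^i e_i \;=\; \tilde v^{\,j}\, \tilde e_j \;=\; \tilde v^{\,j}\, a^i{}_j\, e_i \quad\Longrightarrow\quad v^i \;=\; a^i{}_j\, \tilde v^{\,j}, \quad\text{i.e.}\quad \tilde v^{\,j} \;=\; (a^{-1})^j{}_i\, v^i, \qquad\text{where } \tilde e_j = a^i{}_j\, e_i .$$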

If, for example, an orthogonal basis of three-dimensional Euclidean space is rotated about the z-axis, then the coordinate vectors in coordinate space also rotate about the z-axis, but in the opposite direction.

This transformation behavior, opposite to that of the basis, is called contravariant. For brevity of notation, vectors are often identified with their coordinate vectors, so vectors themselves are commonly referred to as contravariant.

A linear form or covector, on the other hand, is a scalar-valued linear map on the vector space. As its coordinates one can take its values on the basis vectors. These coordinate vectors then transform like the basis tuple itself, as sketched below.
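
A minimal sketch of this transformation, writing α for a linear form with coordinates α_i = α(e_i) and using the basis change from the sketch above:

$$\tilde\alpha_j \;=\; \alpha(\tilde e_j) \;=\; a^i{}_j\, \alpha_i ,$$

so the coordinates of a linear form transform with the same matrix as the basis tuple.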

This transformation behavior is therefore called covariant. If one again identifies linear forms with their coordinate vectors, linear forms too are generally referred to as covariant. As with vectors, the underlying basis is understood from the context; in this connection one also speaks of dual vectors.

These names are carried over to tensor products: tensor factors that are vector spaces are called contravariant, and factors that are dual spaces are called covariant.

Invariants of tensors of first and second order

Invariants are scalars formed from the tensor coordinates that do not change under coordinate transformations. For first-order tensors (vectors), forming the magnitude yields an invariant.

For general second-order tensors, six irreducible invariants (i.e. invariants that cannot be expressed in terms of other invariants) can be found.

In the case of symmetric second-order tensors (e.g. the strain tensor), some of these invariants coincide, and further ones can be expressed in terms of the remaining three (and are thus no longer irreducible). The determinant is also an invariant; it can itself be expressed in terms of the irreducible invariants.

For antisymmetric tensors, certain of these invariants vanish or can be reduced to the others. Symmetric second-order tensors thus have three irreducible invariants, and antisymmetric second-order tensors have one irreducible invariant; standard examples are sketched below.
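
A minimal illustration with well-known examples (not necessarily the exact list intended above): for a vector v the magnitude is invariant, and for a symmetric second-order tensor T the three principal invariants are commonly taken as

$$|v| \;=\; \sqrt{v_i\, v_i}, \qquad I_1 \;=\; \operatorname{tr} T, \qquad I_2 \;=\; \tfrac{1}{2}\left[(\operatorname{tr} T)^2 - \operatorname{tr}(T^2)\right], \qquad I_3 \;=\; \det T .$$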

Definition

(r, s)-tensor

In the following, all vector spaces are finite-dimensional. The set of all linear forms from a vector space V into the field K is denoted by V* and called the dual space of V. For vector spaces V1, ..., Vk over K, the vector space of multilinear forms on them, i.e. of maps into K that are linear in each argument, gives a realization of the tensor product of the dual spaces V1* ⊗ ... ⊗ Vk*.

Now fix a vector space V with dual space V* and consider the multilinear forms with r arguments from V* and s arguments from V. This vector space realizes the tensor product of r copies of V with s copies of V*, as sketched below.
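
A minimal sketch of the definition in symbols, with the notation T^r_s(V) chosen here for illustration:

$$T^r_s(V) \;:=\; \{\, t:\underbrace{V^*\times\cdots\times V^*}_{r}\times\underbrace{V\times\cdots\times V}_{s}\to K \mid t \text{ multilinear} \,\} \;\cong\; \underbrace{V\otimes\cdots\otimes V}_{r}\;\otimes\;\underbrace{V^*\otimes\cdots\otimes V^*}_{s} .$$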

Elements of this set are called tensors, contravariant of level r and covariant of level s. For short, one speaks of tensors of type (r, s). The sum r + s is called the order (or rank) of the tensor.

Outer tensor product

The (outer) tensor product, or tensor multiplication, is an operation that combines two tensors. Let V be a vector space and let two tensors of type (r1, s1) and (r2, s2) over V be given. Their (outer) tensor product is the tensor of type (r1 + r2, s1 + s2) defined by evaluating each factor on its own share of the arguments, as sketched below; the first group of arguments are covectors and the second group are vectors.
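
A minimal sketch of the defining formula, with the factors written t_1 and t_2 (notation chosen here for illustration), β^i in V* and v_j in V:

$$(t_1\otimes t_2)(\beta^1,\dots,\beta^{r_1+r_2},\,v_1,\dots,v_{s_1+s_2}) \;:=\; t_1(\beta^1,\dots,\beta^{r_1},\,v_1,\dots,v_{s_1})\; t_2(\beta^{r_1+1},\dots,\beta^{r_1+r_2},\,v_{s_1+1},\dots,v_{s_1+s_2}) .$$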

Examples

In the following, let V and W be finite-dimensional vector spaces.

  • The set of (0,0)-tensors is isomorphic to the underlying field K. They assign a field element to no linear form and no vector at all; hence the designation (0,0)-tensor.
  • (0,1)-tensors assign a number to no linear form and one vector; they thus correspond to the linear forms on V.
  • (1,0)-tensors assign a number to one linear form and no vector. They are thus elements of the bidual vector space. For finite-dimensional initial vector spaces they correspond to the spaces themselves, since in that case a space is canonically isomorphic to its bidual (see the bidual isomorphism).
  • A linear map between finite-dimensional vector spaces can be regarded as an element of the tensor product of the target space with the dual of the source space, and is then a (1,1)-tensor.
  • A bilinear form on V can be regarded as an element of V* ⊗ V*, i.e. as a (0,2)-tensor. In particular, inner products can therefore be interpreted as (0,2)-tensors.
  • The determinant of n × n matrices, construed as an alternating multilinear form of the columns, is a (0, n)-tensor.
  • The Kronecker delta is again a (0,2)-tensor. It is a multilinear map, and multilinear maps are uniquely determined by their action on the basis vectors; thus the Kronecker delta is uniquely determined by its values on pairs of basis vectors (see the sketch after this list).
  • The Levi-Civita symbol, or epsilon tensor, with which the cross product of vectors can be computed (see the sketch after this list), is a tensor of third order. It is written with three indices; since in Euclidean space the space and its dual space are naturally isomorphic, the dualization star is often omitted. The Levi-Civita symbol can also be defined for n dimensions. Both the Kronecker delta and the Levi-Civita symbol are often used to study the symmetry properties of tensors: the Kronecker delta is symmetric under permutations of its indices, the Levi-Civita symbol is antisymmetric, so that with their help tensors can be decomposed into symmetric and antisymmetric parts.
  • Another example of a covariant second-order tensor is the inertia tensor.
  • In the theory of elasticity, Hooke's law about the relationship between forces and the associated strains and distortions in an elastic medium is likewise generalized with the help of tensor calculus by introducing the strain tensor, which describes distortions and deformations, and the stress tensor, which describes the forces causing the deformation. See also continuum mechanics.
  • Let V be a vector space equipped with a metric g. The metric assigns to two vectors v and w of the vector space V a real number. If the metric is linear in both arguments, it is a tensor; more precisely, g is a doubly covariant, i.e. (0,2), tensor, and one therefore also speaks of the metric tensor. Its coordinates with respect to a basis of V, together with the coordinates of the vectors v and w with respect to the same basis, yield the value of g(v, w) as in the sketch after this list.
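
A minimal sketch of the formulas referred to in the list above, with index conventions chosen here for illustration:

$$\delta(e_i, e_j) \;=\; \delta_{ij} \;=\; \begin{cases} 1, & i = j,\\ 0, & i \neq j,\end{cases} \qquad (v\times w)_i \;=\; \epsilon_{ijk}\, v^j w^k, \qquad g(v,w) \;=\; g_{ij}\, v^i w^j .$$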

Tensor algebra

Let V be a vector space over a field K. Then the so-called tensor algebra of V is defined as the direct sum of all tensor powers of V, as sketched below. With the multiplication given on the homogeneous components by the tensor product, the tensor algebra becomes a unital associative algebra.
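
A minimal sketch of the standard definition:

$$T(V) \;:=\; \bigoplus_{n\ge 0} V^{\otimes n} \;=\; K \oplus V \oplus (V\otimes V) \oplus \cdots, \qquad (v_1\otimes\cdots\otimes v_k)\cdot(w_1\otimes\cdots\otimes w_l) \;:=\; v_1\otimes\cdots\otimes v_k\otimes w_1\otimes\cdots\otimes w_l .$$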

Basis

Basis and dimension

Let V be a vector space as above. Then the spaces of (r, s)-tensors over V are themselves vector spaces. Furthermore, let V now be finite-dimensional with a chosen basis, and denote the dual basis accordingly. The space of (r, s)-tensors is then also finite-dimensional, and the tensor products of r basis vectors with s dual basis vectors form a basis of this space. That is, every element can be represented as a linear combination of these basis tensors. The dimension of this vector space is n^(r+s), where n is the dimension of V. As in any finite-dimensional vector space, it suffices in the space of tensors, too, to specify how a function acts on the basis.

Since the sum representation just described involves a lot of writing, the Einstein summation convention is often used; in that case the expansion is written without the summation signs, as sketched below.
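
A minimal sketch in symbols, with basis e_1, ..., e_n of V and dual basis e^1, ..., e^n (names chosen here for illustration): the set

$$\{\, e_{i_1}\otimes\cdots\otimes e_{i_r}\otimes e^{j_1}\otimes\cdots\otimes e^{j_s} \mid 1\le i_1,\dots,i_r,\,j_1,\dots,j_s\le n \,\}$$

is a basis, every tensor t has the representation

$$t \;=\; \sum_{i_1,\dots,i_r,\,j_1,\dots,j_s} t^{i_1\cdots i_r}{}_{j_1\cdots j_s}\; e_{i_1}\otimes\cdots\otimes e_{i_r}\otimes e^{j_1}\otimes\cdots\otimes e^{j_s},$$

the dimension is n^{r+s}, and with the summation convention one simply writes $t = t^{i_1\cdots i_r}{}_{j_1\cdots j_s}\, e_{i_1}\otimes\cdots\otimes e_{i_r}\otimes e^{j_1}\otimes\cdots\otimes e^{j_s}$.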

Often the components of the tensor are identified with the tensor itself; for this point of view, see the treatment of tensors in physics.

Change of basis and coordinate transformation

Let two different bases of the vector space be given. Every vector, and in particular every basis vector of the second basis, can be represented as a linear combination of the basis vectors of the first basis. The coefficients of these linear combinations thus determine the transformation between the two bases, and this holds for every basis vector. This process is called a change of basis.

Furthermore, consider the coordinates of the tensor with respect to the old basis. The transformation behavior of the tensor coordinates is then given by the equation sketched below.
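
A minimal sketch of the transformation law, writing the change of basis as before as $\tilde e_j = a^i{}_j\, e_i$ and the old and new tensor coordinates as t and t̃ (notation chosen here for illustration):

$$\tilde t^{\,i_1\cdots i_r}{}_{j_1\cdots j_s} \;=\; (a^{-1})^{i_1}{}_{k_1}\cdots (a^{-1})^{i_r}{}_{k_r}\; a^{l_1}{}_{j_1}\cdots a^{l_s}{}_{j_s}\; t^{\,k_1\cdots k_r}{}_{l_1\cdots l_s},$$

i.e. each contravariant (upper) index transforms with the inverse basis-change matrix and each covariant (lower) index with the matrix itself.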

One usually distinguishes between the coordinate representation of the tensor and the transformation matrix describing the change of basis. The transformation matrix is an indexed quantity, but not a tensor. In Euclidean space these transformation matrices are rotation matrices, and in special relativity they are, for example, Lorentz transformations, which can also be regarded as "rotations" of a four-dimensional Minkowski space. In this case one also speaks of four-tensors and four-vectors.

Operations on tensors

Besides the tensor product, there are other important operations for (r, s)-tensors.

Inner Product

The inner product of a vector (or of a covector) with an (r, s)-tensor is the (r, s−1)-tensor (or the (r−1, s)-tensor, respectively) obtained by inserting the vector (or the covector) into one argument slot of the tensor, as sketched below. This means that the tensor is evaluated at a fixed vector or a fixed covector.
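
A minimal sketch of the two definitions, for v in V, β in V* and an (r, s)-tensor t in the multilinear-form picture used above:

$$(\iota_v t)(\beta^1,\dots,\beta^r,\,v_1,\dots,v_{s-1}) \;:=\; t(\beta^1,\dots,\beta^r,\,v,\,v_1,\dots,v_{s-1}),$$
$$(\iota_\beta t)(\beta^1,\dots,\beta^{r-1},\,v_1,\dots,v_s) \;:=\; t(\beta,\,\beta^1,\dots,\beta^{r-1},\,v_1,\dots,v_s).$$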

Tensor contraction

Given an (r, s)-tensor together with a choice of one of its contravariant slots and one of its covariant slots, the tensor contraction maps the space of (r, s)-tensors to the space of (r−1, s−1)-tensors by pairing the chosen slots, as sketched below. This process is called tensor contraction (Tensorverjüngung) or trace formation. In the case of (1,1)-tensors, the tensor contraction corresponds, under the identification of (1,1)-tensors with endomorphisms, to the trace of an endomorphism.
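
A minimal sketch of the contraction on elementary tensors, choosing the k-th contravariant and l-th covariant slot (a hat marks an omitted factor; names chosen here for illustration):

$$C^k_l\big(v_1\otimes\cdots\otimes v_r\otimes\beta^1\otimes\cdots\otimes\beta^s\big) \;:=\; \beta^l(v_k)\;\, v_1\otimes\cdots\otimes\hat v_k\otimes\cdots\otimes v_r\otimes\beta^1\otimes\cdots\otimes\hat\beta^l\otimes\cdots\otimes\beta^s,$$

extended linearly. In coordinates this amounts to setting one upper index equal to one lower index and summing, e.g. $(Ct)^i{}_j = t^{ik}{}_{jk}$ for a (2,2)-tensor.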

Using the Einstein summation convention, the tensor contraction can be written very compactly. For instance, let t^i_j be the coefficients (or coordinates) of a (1,1)-tensor T with respect to a chosen basis. To contract this (1,1)-tensor, one often writes just t^i_i instead of the explicit sum over the diagonal coefficients: the Einstein summation convention states that all indices appearing once above and once below are summed over, so t^i_i is a scalar, and it coincides with the trace of the corresponding endomorphism. An expression with two equal upper (or two equal lower) indices, by contrast, is not defined, because summation takes place only over repeated indices of which one is above and one below. A quantity in which one free index remains, on the other hand, is a first-order tensor.

Pull-back (transport back)

Let a linear map between two vector spaces be given; it need not be an isomorphism. The pull-back (transport back) along this map carries a purely covariant tensor on the target space to one on the source space by first applying the map to all vector arguments, as sketched below.
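
A minimal sketch, assuming the names φ: V → W for the linear map and t for a (0, s)-tensor on W:

$$(\phi^* t)(v_1,\dots,v_s) \;:=\; t\big(\phi(v_1),\dots,\phi(v_s)\big), \qquad \phi^* t \in T^0_s(V).$$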

Push-forward

Let a vector space isomorphism between two vector spaces be given. The push-forward along this isomorphism carries an (r, s)-tensor on the source space to one on the target space, by pulling the covector arguments back along the isomorphism and applying its inverse to the vector arguments, as sketched below.
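
A minimal sketch, assuming the names φ: V → W for the isomorphism and t for an (r, s)-tensor on V, with β^i in W* and w_j in W:

$$(\phi_* t)(\beta^1,\dots,\beta^r,\,w_1,\dots,w_s) \;:=\; t\big(\beta^1\circ\phi,\dots,\beta^r\circ\phi,\;\phi^{-1}(w_1),\dots,\phi^{-1}(w_s)\big), \qquad \phi_* t \in T^r_s(W).$$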

Tensor product space

In this section tensor product spaces are defined; these are typically considered in algebra. This definition is more general than that of the (r, s)-tensors above, since here tensor spaces can be constructed from different vector spaces.

The universal property

Let V and W be vector spaces over the field K. If one composes a bilinear map from V × W into some further K-vector space with a linear map out of that space, the result is again a bilinear map. Thus, once a single bilinear map is given, arbitrarily many further bilinear maps can be constructed from it. The question that arises is whether there is a bilinear map from which, in this way, by composing with linear maps, all bilinear maps on V × W can be constructed (in a unique way). Such a universal object, i.e. this bilinear map together with its target space, is called the tensor product of V and W.

Definition: A tensor product of the vector spaces V and W is any K-vector space, together with a bilinear map from V × W into it, that satisfies the following universal property (sketched below):
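
A minimal restatement of the standard universal property, with the bilinear map written φ: V × W → T (names chosen here for illustration): for every K-vector space X and every bilinear map b: V × W → X there exists exactly one linear map b̃: T → X such that

$$b(v, w) \;=\; \tilde b\big(\phi(v, w)\big) \qquad\text{for all } v \in V,\ w \in W.$$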

If such a vector space exists, it is unique up to isomorphism. One writes V ⊗ W for it and v ⊗ w for the image of the pair (v, w) under the associated bilinear map; the universal property can then be written as b(v, w) = b̃(v ⊗ w). For the construction of such product spaces, we refer to the article on the tensor product.

Tensor as an element of the tensor product

In mathematics tensors are elements of tensor products.

Let K be a field and let V1, ..., Vs be vector spaces over K.

The tensor product V1 ⊗ ... ⊗ Vs is a vector space whose elements are sums of symbols of the form v1 ⊗ ... ⊗ vs, with each vi taken from Vi. For these symbols the calculation rules sketched below apply.
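
A minimal sketch of the standard calculation rules (multilinearity of the symbol in each factor), shown for the first factor; the analogous rules hold in every factor:

$$(v_1 + v_1')\otimes v_2\otimes\cdots\otimes v_s \;=\; v_1\otimes v_2\otimes\cdots\otimes v_s \;+\; v_1'\otimes v_2\otimes\cdots\otimes v_s,$$
$$(\lambda v_1)\otimes v_2\otimes\cdots\otimes v_s \;=\; \lambda\,(v_1\otimes v_2\otimes\cdots\otimes v_s), \qquad \lambda\in K.$$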

Tensors of the form v1 ⊗ ... ⊗ vs are called elementary. Every tensor can be written as a sum of elementary tensors, but this representation is not unique except in trivial cases, as can be seen from the first of the calculation rules.

If a basis of each of the spaces Vi is chosen (for i = 1, ..., s), then the elementary tensors formed by taking one basis vector from each factor constitute a basis of V1 ⊗ ... ⊗ Vs. The dimension of V1 ⊗ ... ⊗ Vs is therefore the product of the dimensions of the individual vector spaces; a sketch in symbols follows below.
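
A minimal sketch, writing the chosen basis of $V_i$ as $e^{(i)}_1,\dots,e^{(i)}_{n_i}$ (notation chosen here for illustration):

$$\{\, e^{(1)}_{j_1}\otimes e^{(2)}_{j_2}\otimes\cdots\otimes e^{(s)}_{j_s} \mid 1\le j_i\le n_i \,\} \ \text{is a basis of } V_1\otimes\cdots\otimes V_s, \qquad \dim(V_1\otimes\cdots\otimes V_s) \;=\; n_1 n_2\cdots n_s .$$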

Tensor products and multilinear forms

The dual space of V1 ⊗ ... ⊗ Vs can be identified with the space of multilinear forms on V1 × ... × Vs:

  • A linear form on V1 ⊗ ... ⊗ Vs corresponds to the multilinear form obtained by evaluating it on elementary tensors (see the sketch after this passage).
  • Conversely, a multilinear form on V1 × ... × Vs corresponds to the linear form on V1 ⊗ ... ⊗ Vs that agrees with it on elementary tensors and is extended linearly (see the same sketch).

If all the vector spaces under consideration are finite-dimensional, then the tensor product of the dual spaces and the dual space of the tensor product can be identified with each other; that is, elements of V1* ⊗ ... ⊗ Vs* correspond to multilinear forms on V1 × ... × Vs.
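
A minimal sketch of these identifications, with the names λ and μ chosen here for illustration:

$$(V_1\otimes\cdots\otimes V_s)^* \;\cong\; \{\text{multilinear forms on } V_1\times\cdots\times V_s\},$$
$$\lambda \;\longmapsto\; \big((v_1,\dots,v_s)\mapsto \lambda(v_1\otimes\cdots\otimes v_s)\big), \qquad \mu \;\longmapsto\; \big(v_1\otimes\cdots\otimes v_s \mapsto \mu(v_1,\dots,v_s)\big),$$

and, if all spaces are finite-dimensional, additionally $V_1^*\otimes\cdots\otimes V_s^* \cong (V_1\otimes\cdots\otimes V_s)^*$.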

Tensor products of a vector space and symmetry

One can form the tensor product V ⊗ V of a vector space V with itself. Without any further knowledge of the vector space, an automorphism of the tensor product can be defined that interchanges the factors in the elementary products (see the sketch after the list below).

The square of this map is the identity; it follows that there are eigenvectors with eigenvalue 1 and eigenvectors with eigenvalue −1.

  • An element that is fixed by this automorphism is called symmetric. Examples are the elements given in the sketch after this list.
  • An element that the automorphism maps to its negative is called antisymmetric or alternating. Examples are likewise given in the sketch after this list.
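
A minimal sketch, writing τ for the factor-swapping automorphism (name chosen here for illustration):

$$\tau(v\otimes w) \;:=\; w\otimes v, \qquad \tau^2 = \mathrm{id};$$

symmetric examples are elements of the form $v\otimes w + w\otimes v$ (and all $v\otimes v$); antisymmetric examples are elements of the form $v\otimes w - w\otimes v$.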

By means of higher tensor powers of V, tensors of any order can be formed. Correspondingly, further pairwise interchanges can be defined; these, however, are not independent of one another: every interchange of the positions j and k can be reduced to interchanges involving the first position.

Injective and projective tensor product

If the vector spaces to be tensored carry a topology, it is desirable that their tensor product also carry a topology. There are of course many ways to define such a topology; the injective and the projective tensor product, however, are natural choices for this.

Tensor analysis

Originally, the tensor was not studied in the modern algebraic framework presented here. The tensor calculus arose from considerations in differential geometry; it was developed in particular by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita, and is therefore also called the Ricci calculus. Albert Einstein took up this calculus in his theory of relativity, which made it widely known in the professional world. The tensors of that time are nowadays called tensor fields, and they still play an important role in differential geometry. In contrast to tensors, tensor fields are differentiable maps that assign a tensor to each point of the underlying (often curved) space.
