\chapter{Tensors \& Einstein notation}
\section{Tensors}
\subsection{High dimensional matrices}
A natural question in matrix algebra is: ``What about 3D matrices?''
High-dimensional matrices are easy to write down
(just take a multi-dimensional array),
but how do we interpret them?
Actually, even 2D matrices admit multiple interpretations (operator,
bilinear form),
and this points to the general notion: tensors,
which we interpret by pairing against components of vectors and covectors.
\subsection{Tensors abstract from maps}
Tensors
abstract away from any specific space of maps
(partly sloppy, partly abstract);
more precisely, they identify them all as elements of $V^{\otimes k} \otimes (V^*)^{\otimes l}$,
where multiplication is tensoring
and summing is contraction/trace.
(Physicists often use the opposite convention,
viewing them as multilinear forms.)
Other times a more specific interpretation is useful:
e.g., curvature as an $\End V$-valued 2-form
(infinitesimal holonomy in a plane)
instead of as a (1,3) tensor.
\section{Einstein notation}
Einstein and Dirac notations are very useful when working with combinations
of vectors and covectors, and encode real mathematical information.
Einstein notation is particularly useful for working with tensors,
hence extremely useful in differential geometry and physics.
- Write coordinates on a vector space with upper indices,
and coordinates on the dual space with lower indices.
- If you repeat an index (necessarily pairing upper and lower),
that means ``sum across it''.
This is a ``good notation'', in that it reflects underlying mathematics:
- the distinction between vectors and covectors (upper and lower indices)
- the natural pairing of vectors and covectors (summation)
Even if you don't change coordinates,
it's a very useful middle ground between pure abstraction
and identifying everything with coordinates:
you remember the distinction between vectors and covectors.
Summation is the coordinate version of *natural* maps.
Mnemonics:
- "Upper indices go up to down; lower indices go left to right"
- vectors are vertical (column matrices)
- you can stack vectors side by side,
  $\begin{bmatrix} v_1 & \cdots & v_k \end{bmatrix}$,
  hence the lower index says which column you're in
- you can stack covectors top to bottom,
  $\begin{bmatrix} w^1 \\ \vdots \\ w^k \end{bmatrix}$,
  hence the upper index says which row you're in
- read ``row, column'' for ``upper, lower''
Indeed, a linear combination of vectors is $a^i v_i$.
Graphically,
\[ \begin{bmatrix} v_1 & \cdots & v_k \end{bmatrix}
   \begin{bmatrix} a^1 \\ \vdots \\ a^k \end{bmatrix}. \]
It's a map $K^k \to V$;
this is a partial coordinatization
(viewing a matrix by columns).
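This matrix-by-columns picture is literally what matrix multiplication computes; a small numpy sketch (the arrays are my own illustrative choices), checking $a^i v_i$ against \texttt{einsum}:

```python
import numpy as np

# Illustrative vectors v_1, v_2, v_3 in R^2, stacked side by side:
# the lower index of v_i selects the column.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # shape (2, 3): column i is v_i

a = np.array([2.0, 3.0, -1.0])    # coefficients a^i, written as a column

# The linear combination a^i v_i is exactly matrix-times-column:
combo = V @ a

# The same sum written index-by-index: repeat i, sum over it.
combo_einsum = np.einsum('ji,i->j', V, a)

assert np.allclose(combo, combo_einsum)
assert np.allclose(combo, [1.0, 2.0])
```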
Some people write the indices asymmetrically as $a^i{}_j$,
but I find it more accurate and easier to write them symmetrically as $a^i_j$.
EGs:
- $a^i b_i$ : the natural pairing of a vector with a covector
- $a^i_j b^j$ : apply a matrix to a vector
- $a^i_j b^j_k$ : multiply two matrices
- $a^i_i$ : trace!
- $a_i b^j$ : outer product
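Each of these contractions can be checked mechanically with \texttt{numpy.einsum}, whose subscript strings are Einstein notation verbatim; a sketch with arbitrary example arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(4)        # vector components a^i (illustrative)
w = rng.standard_normal(4)        # covector components b_i
A = rng.standard_normal((4, 4))   # matrix components a^i_j
B = rng.standard_normal((4, 4))   # matrix components b^j_k

# a^i b_i : pair a vector with a covector
assert np.isclose(np.einsum('i,i->', v, w), v @ w)

# a^i_j b^j : apply a matrix to a vector
assert np.allclose(np.einsum('ij,j->i', A, v), A @ v)

# a^i_j b^j_k : multiply two matrices
assert np.allclose(np.einsum('ij,jk->ik', A, B), A @ B)

# a^i_i : trace
assert np.isclose(np.einsum('ii->', A), np.trace(A))

# a_i b^j : outer product
assert np.allclose(np.einsum('i,j->ij', w, v), np.outer(w, v))
```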
$e^i \otimes e_j$ is a basis for $V^* \otimes V$,
and $e^i(e_j) = \delta^i_j$.
Use Einstein notation for coordinates on tensor products:
$e^i_{jk} = e^i \otimes e_j \otimes e_k$.
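A quick illustrative sketch (numpy, names mine): a tensor-product basis element is an outer product of basis (co)vectors, and pairing a component array against it picks out one coordinate:

```python
import numpy as np

n = 3
I = np.eye(n)  # rows double as the standard basis e_i and the dual basis e^i

# The basis tensor e^i ⊗ e_j ⊗ e_k, realized as a 3D array via outer product:
def basis_tensor(i, j, k):
    return np.einsum('a,b,c->abc', I[i], I[j], I[k])

# Pairing a tensor's component array against a basis tensor picks out
# the single coordinate with those indices:
T = np.arange(27.0).reshape(n, n, n)
comp = np.einsum('abc,abc->', T, basis_tensor(1, 2, 0))
assert comp == T[1, 2, 0]
```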
%%%%%%%%%%%%%%%%%%%%%%
\section{Kronecker delta}
The Kronecker delta $\delta^i_j$ is not just a convenient notation!
It is a (1,1) tensor (it's the identity, duh).
Depending on how you group its slots, it's:
- $V \to V$ : the identity map
- $K \to \End V$ : the inclusion of scalars
- $V^* \times V \to K$ : evaluation
- $\End V \to K$ : trace
You can use it to *expand* tensors;
if you expand and then contract (along one of the expanded indices), you get the identity.
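An illustrative numpy sketch of these roles of the identity tensor $\delta^i_j$ (the arrays are my own examples):

```python
import numpy as np

n = 4
delta = np.eye(n)                 # the (1,1) identity tensor, delta^i_j
v = np.arange(1.0, n + 1)
w = np.linspace(0.0, 1.0, n)
A = np.random.default_rng(1).standard_normal((n, n))

# As the identity V -> V: delta^i_j v^j = v^i
assert np.allclose(np.einsum('ij,j->i', delta, v), v)

# As evaluation V^* x V -> K: w_i delta^i_j v^j = w_j v^j
assert np.isclose(np.einsum('i,ij,j->', w, delta, v), w @ v)

# As trace End V -> K: delta^j_i a^i_j = tr(a)
assert np.isclose(np.einsum('ij,ji->', delta, A), np.trace(A))

# Expand then contract: delta^i_k delta^k_j = delta^i_j, the identity again
assert np.allclose(np.einsum('ik,kj->ij', delta, delta), delta)
```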
%%%%%%%%%%%%%%%%%%%%%%
\section{Identification of tensors with maps}
This is a subtle point.
In Einstein notation, you don't differentiate between
elements of $V^* \otimes V^*$,
bilinear maps $V \times V \to K$,
and maps $V \to V^*$:
they're all (0,2) tensors.
This is part of the power of the notation:
it abstracts away from maps!
(In this way it's *more* abstract than functions,
even though it is in coordinates.)
However, when you *multiply*, that's when you implicitly identify
the tensor with an actual map
(by choosing to evaluate or tensor or whatever).
%%%%%%%%%%%%%%%%%%%%%%
NB: if $e_i$ are basis vectors,
and a vector $v$ decomposes as $v = a^i e_i$,
note that the coefficients are listed as $a^i$,
because *coefficients/coordinates are duals*;
they are $a^i = e^i(v)$.
$a^i_j$ is an operator;
$a_{ij}$ is a form
(aka a form, bilinear form, functional, scalar function, regular function, etc.).
BTW, vectors are (1,0) tensors and covectors are (0,1) tensors
(hence the coefficient conventions above);
think of a vector as a map $K \to V$, or as a map $V^* \to K$.
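A small numpy sketch of *coefficients are duals* (the basis below is an arbitrary example of mine): the dual basis sits in the rows of the inverse of the basis matrix, and pairing it against $v$ recovers the coordinates $a^i = e^i(v)$:

```python
import numpy as np

# An illustrative non-standard basis of R^2 (columns are the e_i):
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The dual basis e^i lives in the rows of the inverse: e^i(e_j) = delta^i_j.
E_dual = np.linalg.inv(E)
assert np.allclose(E_dual @ E, np.eye(2))

v = np.array([3.0, 2.0])

# Coordinates are duals: a^i = e^i(v), pairing each dual-basis row with v.
a = np.einsum('ij,j->i', E_dual, v)

# Expanding back: v = a^i e_i, i.e. columns of E weighted by the a^i.
assert np.allclose(np.einsum('ji,i->j', E, a), v)
```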