\subsubsection{Eigenvector magnitudes}
If $MC = dC$ then clearly $M(\alpha C) = d(\alpha C)$, and so any scalar multiple of an eigenvector is also an eigenvector of the same eigenvalue. This choice of factor may be made independently for each of the columns, but once we have chosen the column factors, the row factors are determined by the relationship $RC=I$. This is the simplest example of a linear combination within an eigenspace.
The usual convention is to normalise so that $\delta_{jk}C^jC^k=1$, but this is only a convention. In subdivision there is one important situation where a better convention exists.
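The freedom of scale, and the way the row factors follow from the column factors, can be checked numerically. This is an illustrative sketch (the matrix and the factor $\alpha$ are arbitrary, not taken from the text):

```python
import numpy as np

# Any scalar multiple of a column eigenvector is again an eigenvector
# of the same eigenvalue; rescaling a column of C forces the matching
# row of R to be rescaled inversely so that R C = I still holds.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])

d, C = np.linalg.eig(M)        # columns of C are column eigenvectors
R = np.linalg.inv(C)           # rows of R are the row eigenvectors
assert np.allclose(R @ C, np.eye(2))

alpha = 5.0                    # arbitrary scalar factor
C2 = C.copy()
C2[:, 0] *= alpha              # rescale the first column eigenvector...
assert np.allclose(M @ C2[:, 0], d[0] * C2[:, 0])   # still an eigenvector

R2 = R.copy()
R2[0, :] /= alpha              # ...so the first row must shrink to match
assert np.allclose(R2 @ C2, np.eye(2))
```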
\subsection{Eigencomponents of block matrices}
Suppose that $M$ has an interesting pattern of zero values above its diagonal; in particular, that it can be viewed as a lower-triangular block matrix.
It is then the case that the equation
$$|M-\lambda I|=0$$
can itself be factored into the product of simpler similar expressions, one for each block.
The eigenvalues can be associated with the blocks, depending on which factors they are roots of.
Any block of size 1 gives one of the eigenvalues immediately as the value of the single element which the block contains.
An eigenvalue from the first block will have a row eigenvector which has only a few non-zeroes (at most as many as the size of the block), and a large number of trailing zeroes. An eigenvalue from the last block will have a column eigenvector which has a large number of leading zeroes, and only a few non-zeroes. The eigenvectors echo the block zero structure of the row and column in which the block appears, so that a block which has $m$ zeroes above it and $n$ zeroes to its right will have eigenvectors which share those properties.
Suppose that two different blocks have a common eigenvalue. That eigenvalue will have an eigenvector from each of the blocks, and these two eigenvectors will span an eigenspace. Almost all of the eigenvectors in the space will have fewer zeroes than either of the two that we got from the blocks, and so there is something special about the original pair. We use the partitioning into blocks to break the symmetry of the eigenspace and pick out a particular choice of eigenvectors which may well have useful properties as a basis for the space.
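The leading and trailing zero patterns can be seen in a small numerical sketch. Here is a hypothetical lower-triangular matrix viewed as three $1\times1$ blocks, so the eigenvalues are simply the diagonal entries:

```python
import numpy as np

# Hypothetical lower-triangular matrix: three 1x1 blocks,
# eigenvalues 2, 3 and 6 read straight off the diagonal.
M = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])

# Row eigenvector of the first block: one non-zero, then trailing zeroes.
r = np.array([1.0, 0.0, 0.0])
assert np.allclose(r @ M, 2.0 * r)

# Column eigenvector of the last block: leading zeroes, one non-zero.
c = np.array([0.0, 0.0, 1.0])
assert np.allclose(M @ c, 6.0 * c)
```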
\section{Partitioning}
Given a matrix $M$, we can premultiply it by some non-singular matrix $F$ and postmultiply the product by $F^{-1}$ to get a new matrix $N$.
$$\eqalign{
N &= FMF^{-1}\cr
&= FCDRF^{-1}\cr
&= [FC]D[C^{-1}F^{-1}]\cr
&= C' D C'^{-1}\cr
}$$
$N$ has the same eigenvalues as $M$ and its eigenvectors are simply related to those of $M$.
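The relationship $C' = FC$ in the derivation above can be verified directly; this sketch uses arbitrary $M$ and $F$, chosen only so that $F$ is non-singular:

```python
import numpy as np

# Conjugating M by a non-singular F preserves the eigenvalues,
# and the column eigenvectors of N = F M F^{-1} are F times those of M.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
F = np.array([[1.0, 2.0],
              [1.0, 3.0]])          # any non-singular matrix will do

N = F @ M @ np.linalg.inv(F)

dM = np.sort(np.linalg.eigvals(M))
dN = np.sort(np.linalg.eigvals(N))
assert np.allclose(dM, dN)          # same eigenvalues

# If C holds column eigenvectors of M, then F C holds those of N.
d, C = np.linalg.eig(M)
for j in range(2):
    v = F @ C[:, j]
    assert np.allclose(N @ v, d[j] * v)
```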
Particular $F$s which turn out to be useful are:-
\pop {\bf permutation}, which can migrate zeroes to the top right corner of $N$. In some cases this can enable a block partitioning.
\pop {\bf odd-even}, which replaces two rows by their sum and difference. $F$ is a block diagonal matrix where the blocks on the diagonal are either single unit values, or else little $2\times2$ blocks of the form
$$\left[\matrix{1&1\cr-1&1}\right]$$
This is used to exploit mirror symmetries.
\pop {\bf Fourier}, in which the entries are complex exponentials. This is used to exploit rotational symmetries.
\pop combinations of the three above.
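The odd-even case can be illustrated on the smallest matrix with a mirror symmetry (the numbers here are hypothetical): conjugating by the sum-and-difference block splits $M$ into two $1\times1$ blocks.

```python
import numpy as np

# A 2x2 matrix unchanged by reversing both its rows and its columns.
M = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# The odd-even F: the two rows become their sum and their difference.
F = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])

# Conjugation block-diagonalises M: eigenvalues 3+1 and 3-1.
N = F @ M @ np.linalg.inv(F)
assert np.allclose(N, np.diag([4.0, 2.0]))
```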
In all these cases we are using a well-chosen $F$ to partition the original matrix $M$ into blocks which can have their eigencomponents determined more easily. This is not just a case of trying to make the calculations easier, although that is a valuable side-effect. The main purpose is that we can identify particular properties of the eigenvectors based either on their symmetries or on their leading/trailing zeroes, and thus acquire insight into the structure of the system that we are analysing.
The bulk of the remainder of this chapter deals with examples of the way that partitioning is applied to some real subdivision schemes.