More algebra fun

I recently posted on algebraic rules for rewriting linear and quadratic forms and showed how Kronecker products come into play when expanding products of matrices.

Likewise, we might run into a situation where parameters of interest sit inside a Kronecker product and we want to “pull them out”. Since the operation is still linear, this should be an easy step. Strangely, at the time I needed it I could not find the algebraic rules for it anywhere. I found one by trial and error for a special case, and my colleague Martin Weis came up with the much more general case. Here it is for your convenience:

Let $\ma{A} \in \compl^{M \times N}$ and $\ma{B} \in \compl^{P \times Q}$. Then $\ma{A} \otimes \ma{B}$ is a matrix that depends linearly on the elements of $\ma{A}$ as well as on the elements of $\ma{B}$. Consequently, ${\rm vec}(\ma{A} \otimes \ma{B})$ must be linear in ${\rm vec}(\ma{A})$ and in ${\rm vec}(\ma{B})$. Indeed it is, and with $\ma{a}_n$ denoting the $n$-th column of $\ma{A}$ and $\ma{b}_q$ the $q$-th column of $\ma{B}$, the coefficient matrices are found as:

$$ {\rm vec}(\ma{A} \otimes \ma{B}) =
\left( \ma{I}_N \otimes \left[ \begin{array}{c}
\ma{I}_M \otimes \ma{b}_1 \\
\ma{I}_M \otimes \ma{b}_2 \\
\vdots \\
\ma{I}_M \otimes \ma{b}_Q
\end{array} \right] \right) \cdot {\rm vec}(\ma{A})
=
\left[ \begin{array}{c}
\ma{I}_Q \otimes \ma{a}_1 \otimes \ma{I}_P \\
\ma{I}_Q \otimes \ma{a}_2 \otimes \ma{I}_P \\
\vdots \\
\ma{I}_Q \otimes \ma{a}_N \otimes \ma{I}_P
\end{array} \right] \cdot {\rm vec}(\ma{B}).
$$
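As a quick sanity check, here is a minimal NumPy sketch that verifies both forms of the identity numerically (the dimensions are arbitrary; note that ${\rm vec}$ corresponds to column-major, i.e., Fortran-order, flattening):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, P, Q = 3, 4, 2, 5
A = rng.standard_normal((M, N))
B = rng.standard_normal((P, Q))

def vec(X):
    # column-major (Fortran-order) vectorization
    return X.flatten(order='F')

# coefficient matrix acting on vec(A): I_N kron [I_M kron b_1; ...; I_M kron b_Q]
G_A = np.kron(np.eye(N),
              np.vstack([np.kron(np.eye(M), B[:, [q]]) for q in range(Q)]))

# coefficient matrix acting on vec(B): [I_Q kron a_1 kron I_P; ...; I_Q kron a_N kron I_P]
G_B = np.vstack([np.kron(np.eye(Q), np.kron(A[:, [n]], np.eye(P)))
                 for n in range(N)])

v = vec(np.kron(A, B))
print(np.allclose(G_A @ vec(A), v))  # True
print(np.allclose(G_B @ vec(B), v))  # True
```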

Special cases where any of $M, N, P, Q$ is equal to one (i.e., one or both matrices are row or column vectors) are easily found from this, e.g.,
${\rm vec}(\ma{a} \otimes \ma{B}) = \left[\ma{I}_Q \otimes \ma{a} \otimes \ma{I}_P\right] \cdot {\rm vec}(\ma{B})$ or ${\rm vec}(\ma{A} \otimes \ma{b}) = \left[\ma{I}_{MN} \otimes \ma{b} \right] \cdot {\rm vec}(\ma{A})$.

A related important observation is that the matrix $\ma{A} \otimes \ma{B}$ and the matrix ${\rm vec}(\ma{B}) \cdot {\rm vec}(\ma{A})^{\rm T}$ contain exactly the same elements. Therefore, you can permute the elements of a given Kronecker product so that it becomes a rank-one matrix, where each column is a scaled version of ${\rm vec}(\ma{B})$ and each row is a scaled version of ${\rm vec}(\ma{A})^{\rm T}$. This is particularly relevant when we have an estimate of a matrix that should be a Kronecker product and want to find a “Least-Squares Kronecker factorization”. By performing the above-mentioned rearrangement, this problem is transformed into finding a Least-Squares rank-one approximation, which is readily solved by a truncated Singular Value Decomposition (SVD) via the Eckart-Young theorem.
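Here is a minimal NumPy sketch of this rearrangement and of the resulting Least-Squares Kronecker factorization (the `rearrange` helper is my own, chosen to match the ${\rm vec}(\ma{B}) \cdot {\rm vec}(\ma{A})^{\rm T}$ layout above):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, P, Q = 3, 4, 2, 5
A = rng.standard_normal((M, N))
B = rng.standard_normal((P, Q))

def vec(X):
    return X.flatten(order='F')

def rearrange(X):
    # permute the MP x NQ matrix X into a PQ x MN matrix R such that
    # rearrange(np.kron(A, B)) == np.outer(vec(B), vec(A)):
    # column n*M + m of R is the vectorized (m, n) block of X
    return np.column_stack([vec(X[m*P:(m+1)*P, n*Q:(n+1)*Q])
                            for n in range(N) for m in range(M)])

print(np.allclose(rearrange(np.kron(A, B)), np.outer(vec(B), vec(A))))  # True

# Least-Squares Kronecker factorization of a noisy observation:
# rank-one truncated SVD of the rearranged matrix (Eckart-Young)
X = np.kron(A, B) + 0.01 * rng.standard_normal((M * P, N * Q))
U, s, Vt = np.linalg.svd(rearrange(X))
B_hat = np.sqrt(s[0]) * U[:, 0].reshape(P, Q, order='F')
A_hat = np.sqrt(s[0]) * Vt[0, :].reshape(M, N, order='F')
# np.kron(A_hat, B_hat) is now the closest Kronecker product to X
# in the Frobenius norm
```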

Starting from this, I wondered what happens if we consider a rank higher than one. This allows us to decompose a given matrix into a sum of Kronecker products and to find exact as well as approximate low-rank decompositions with a Least-Squares optimality guarantee. Unfortunately, I found out later that others had this idea before: such a decomposition is known as the Kronecker Product SVD. Here is a slide set by C. Van Loan on it.
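For illustration, here is a sketch of that idea (reusing the hypothetical `rearrange` helper and dimensions from the previous snippet): keeping $r$ SVD terms of the rearranged matrix gives the Least-Squares-optimal sum of $r$ Kronecker products, and keeping all terms reconstructs the matrix exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, P, Q = 3, 4, 2, 5

def vec(X):
    return X.flatten(order='F')

def rearrange(X):
    # same permutation as in the previous snippet
    return np.column_stack([vec(X[m*P:(m+1)*P, n*Q:(n+1)*Q])
                            for n in range(N) for m in range(M)])

# a generic MP x NQ matrix, not necessarily Kronecker-structured
X = rng.standard_normal((M * P, N * Q))
U, s, Vt = np.linalg.svd(rearrange(X), full_matrices=False)

def kron_sum(r):
    # sum of r Kronecker products built from the leading SVD terms
    return sum(np.kron((s[k] * Vt[k, :]).reshape(M, N, order='F'),
                       U[:, k].reshape(P, Q, order='F'))
               for k in range(r))

print(np.linalg.norm(X - kron_sum(2)))   # Frobenius error of the best 2-term sum
print(np.allclose(X, kron_sum(len(s))))  # True: the full sum is exact
```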
