
Thursday, February 23, 2012

SO(3) What



There's a bunch of facts about SO(3) in the last post that were used but not proven.  I threw a proof in, but it ended up being longer and less coherent than I had hoped.

So I dumped it here instead.  I'll probably re-tool that entire post in the coming week...



Theorem (Euler's rotation theorem): Every element of SO(3) can be written as a single rotation about some axis \vec{v}.
Remember: every A \in SO(3) satisfies \det(A) = 1 and ||A(\vec{j})|| = ||\vec{j}|| \, \forall \, \vec{j} \in \mathbb{R}^3
Because A preserves the norms of vectors, it must be fully diagonalizable over \mathbb{C}, and its eigenvalues must all have modulus 1.  (Consider the Jordan canonical form of A: a nontrivial Jordan block, or an eigenvalue with |\lambda| \neq 1, would let repeated applications of A grow or shrink some vector's norm.)
Complex roots of real polynomials (such as the characteristic polynomial of A) come in conjugate pairs, and each such pair contributes \lambda\overline{\lambda} = |\lambda|^2 = 1 to the product of the eigenvalues.  Because \mathbb{R}^3 has odd dimension, A must have at least one real eigenvalue, necessarily \pm 1; and since the eigenvalues multiply to \det(A) = 1, the eigenvalue 1 must occur.
The eigenvector corresponding to the eigenvalue 1 will be our axis of rotation.
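This is easy to check numerically. Below is a minimal NumPy sketch; the specific rotation (a z-axis rotation through 0.9 radians, conjugated by an arbitrary orthogonal change of basis so the axis isn't obvious) is just an illustrative choice:

```python
import numpy as np

# Build a "disguised" rotation: rotate by 0.9 radians about the z-axis,
# then conjugate by an orthogonal change of basis to hide the axis.
theta = 0.9
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # Q is orthogonal
A = Q @ Rz @ Q.T  # conjugation by Q keeps det = 1, so A stays in SO(3)

vals, vecs = np.linalg.eig(A)
assert np.allclose(np.abs(vals), 1.0)  # every eigenvalue lies on the unit circle

# The axis of rotation is the eigenvector whose eigenvalue is 1.
axis = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
assert np.allclose(A @ axis, axis)  # A fixes its axis pointwise
```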
So we have one eigenvector, whose eigenvalue is 1, and the other two eigenvalues form a conjugate pair e^{\pm i\theta} on the unit circle.  We can clearly express this as a rotation in some basis.  But is this rotation really occurring in the plane orthogonal to our "axis" eigenvector?
In order for this to be true, we must have an orthonormal eigenbasis of A.
How do we find such an eigenbasis? We normalize each eigenvector with a unique eigenvalue, and we Gram-Schmidt any eigenvectors e_1, e_2 whenever \lambda_1 = \lambda_2.

How do we know this technique will work?  To preserve the fact that we're working with an eigenbasis, we can only 'orthogonalize' pairs of vectors with the same eigenvalue.  We're in the clear if and only if \lambda_1 \neq \lambda_2 implies e_1 is orthogonal to e_2.
Suppose \langle e_1, e_2 \rangle = r and \lambda_1 \neq \lambda_2, where \langle \cdot, \cdot \rangle is the Hermitian inner product on \mathbb{C}^3 (conjugate-linear in its second argument), since the eigenvectors may be complex.
||a_1e_1 + a_2e_2||^2 = ||a_1e_1||^2 + ||a_2e_2||^2 + 2\,\mathrm{Re}\langle a_1e_1, a_2e_2\rangle
||A(a_1e_1 + a_2e_2)||^2 = ||a_1\lambda_1e_1||^2 + ||a_2\lambda_2e_2||^2 + 2\,\mathrm{Re}\langle a_1\lambda_1e_1, a_2\lambda_2e_2\rangle
A preserves the norm, so ||A(a_1e_1 + a_2e_2)||^2 - ||a_1e_1 + a_2e_2||^2 = 0; since |\lambda_1| = |\lambda_2| = 1, the squared-norm terms cancel and only the cross terms survive:
2\,\mathrm{Re}\langle a_1\lambda_1e_1, a_2\lambda_2e_2\rangle - 2\,\mathrm{Re}\langle a_1e_1, a_2e_2\rangle = 2\,\mathrm{Re}\left(ra_1\overline{a_2}(\lambda_1\overline{\lambda_2} - 1)\right) = 0
Since |\lambda_1| = |\lambda_2| = 1 and \lambda_1 \neq \lambda_2, we have \lambda_1\overline{\lambda_2} \neq 1.
This must hold for all a_1 and a_2: taking a_1 = a_2 = 1 forces the real part of r(\lambda_1\overline{\lambda_2} - 1) to vanish, and taking a_1 = 1, a_2 = i forces the imaginary part to vanish, so r(\lambda_1\overline{\lambda_2} - 1) = 0 and therefore r = 0.
\therefore \lambda_1 \neq \lambda_2 \Rightarrow \langle e_1, e_2 \rangle = r = 0; e_1 and e_2 are orthogonal
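As a numerical sanity check, here's a small NumPy sketch using an explicit z-axis rotation (an illustrative choice; its three eigenvalues 1, e^{0.9i}, e^{-0.9i} are pairwise distinct, so every pair of eigenvectors should be orthogonal):

```python
import numpy as np

# Rotation by 0.9 radians about the z-axis; its eigenvalues 1, e^{±0.9i}
# are pairwise distinct, so the argument predicts orthogonal eigenvectors.
theta = 0.9
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

vals, vecs = np.linalg.eig(A)
for i in range(3):
    for j in range(i + 1, 3):
        if not np.isclose(vals[i], vals[j]):
            # np.vdot conjugates its first argument: the Hermitian inner product.
            assert np.isclose(np.vdot(vecs[:, i], vecs[:, j]), 0.0)
```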
Thus each A \in SO(3) can be written as a rotation about some axis \vec{v} \in \mathbb{R}^3
\Box
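One practical corollary of the eigenvalue picture: since the eigenvalues of A are \{1, e^{i\theta}, e^{-i\theta}\}, the trace satisfies \mathrm{tr}(A) = 1 + 2\cos\theta, so the rotation angle can be read off the trace. A minimal sketch (again using an arbitrary conjugated z-axis rotation as the example):

```python
import numpy as np

# The eigenvalues of A in SO(3) are {1, e^{iθ}, e^{-iθ}}, so
# tr(A) = 1 + 2 cos θ, which recovers the rotation angle.
theta = 0.9
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ Rz @ Q.T  # similar matrices share a trace, so the angle survives

# Clip guards against tiny floating-point excursions outside [-1, 1].
recovered = np.arccos(np.clip((np.trace(A) - 1.0) / 2.0, -1.0, 1.0))
assert np.isclose(recovered, theta)
```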
