* green/doc : New documentation directory.
* green/doc/formulae : New specific directory.
* green/doc/formulae/formulae.tex : New recipe of 3d formulae.
---
trunk/milena/sandbox/ChangeLog | 8 +
.../milena/sandbox/green/doc/formulae/formulae.tex | 1157 ++++++++++++++++++++
2 files changed, 1165 insertions(+), 0 deletions(-)
create mode 100644 trunk/milena/sandbox/green/doc/formulae/formulae.tex
diff --git a/trunk/milena/sandbox/ChangeLog b/trunk/milena/sandbox/ChangeLog
index 5a96357..b50e807 100644
--- a/trunk/milena/sandbox/ChangeLog
+++ b/trunk/milena/sandbox/ChangeLog
@@ -1,3 +1,11 @@
+2009-09-10 Yann Jacquelet <jacquelet(a)lrde.epita.fr>
+
+ Write down the 3d formulae currently in use.
+
+ * green/doc : New documentation directory.
+ * green/doc/formulae : New specific directory.
+ * green/doc/formulae/formulae.tex : New recipe of 3d formulae.
+
2009-09-09 Yann Jacquelet <jacquelet(a)lrde.epita.fr>
Fix bugs an compilation problem on histo3d_rgb source code.
diff --git a/trunk/milena/sandbox/green/doc/formulae/formulae.tex
b/trunk/milena/sandbox/green/doc/formulae/formulae.tex
new file mode 100644
index 0000000..52af7a7
--- /dev/null
+++ b/trunk/milena/sandbox/green/doc/formulae/formulae.tex
@@ -0,0 +1,1157 @@
+%% Copyright (C) 2009 EPITA Research and Development Laboratory (LRDE)
+%%
+%% This file is part of Olena.
+%%
+%% Olena is free software: you can redistribute it and/or modify it under
+%% the terms of the GNU General Public License as published by the Free
+%% Software Foundation, version 2 of the License.
+%%
+%% Olena is distributed in the hope that it will be useful,
+%% but WITHOUT ANY WARRANTY; without even the implied warranty of
+%% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+%% General Public License for more details.
+%%
+%% You should have received a copy of the GNU General Public License
+%% along with Olena. If not, see <http://www.gnu.org/licenses/>.
+
+\documentclass{article}
+
+%\usepackage{hevea}
+
+\usepackage{html}
+\usepackage{hyperref}
+\usepackage{graphicx}
+\usepackage{makeidx}
+\usepackage{xcolor}
+\usepackage{color}
+\usepackage{amsfonts}
+\usepackage{amsmath}
+\usepackage{amsthm}
+\usepackage{amssymb}
+
+
+%% \bm
+%% input : the mathematical text to put in bold
+%%
+\newcommand{\bm}[1]{\mbox{\boldmath$#1$}}
+
+%% \fm
+%% input : the mathematical text to put in mathematic font set
+%%
+\newcommand{\fm}[1]{\mathbb{#1}}
+
+%% \df
+%% input : the set whose cardinality (dimension) we are taking
+%%
+\newcommand{\df}[1]{|{\mathbb{#1}}|}
+
+\title{Recipe in statistics and linear algebra\\
+ \large{A few formulas in $\fm{R}^3$ space} }
+\author{LRDE}
+\date{}
+\makeindex
+
+\begin{document}
+
+\maketitle
+
+%#################################################################
+\section{Introduction}
+The goal of this document is to keep on record some statistical formulas whose
+matrix forms are not obvious. They preserve the mathematical background behind
+the development.
+
+%#################################################################
+\section{Notations}
+
+\begin{itemize}
+ \item Use lowercase and normal font for the scalar variables.
+ \item Use lowercase and bold font for the vector variables.
+ \item Use uppercase and normal font for the matrix variables.
+ \item Use uppercase and double font for the set variables.
+\end{itemize}
+
+%=================================================================
+\subsection{Sets}
+
+There are three particular sets that we use throughout:
+\begin{itemize}
+ \item The color space $\fm{C}$ in which the pixels take their value.
+  \item The dataset $\fm{P}$ which contains every pixel we care about.
+  \item The group set $\fm{G}$ that defines a splitting of the dataset.
+\end{itemize}
+
+\begin{tabular}{|c|l|l|c|c|}
+ \hline
+ Sets & Contents & (in sample) &
+ Dimension & (in sample) \\
+ \hline
+ $\fm{C}$ & $\fm{R}^q$ & $\fm{R}^3$ &
+ $\df{C} = q$ & $q = 3$ \\
+ $\fm{P}$ & $\{\bm{p_i}\}$ &$\{\bm{a},\bm{b},\bm{c},\bm{d}\}$ &
+ $\df{P} = r$ & $r = 4$ \\
+ $\fm{G}$ & $\{\fm{G}_i\}$ &$\{\fm{F}, \fm{H}\}$ &
+ $\df{G} = k$ & $k = 2$ \\
+ \hline
+\end{tabular}
+
+%=================================================================
+\subsection{Color space}
+
+We use the Euclidean distance.
+
+$$
+d(a,b) =
+\sqrt{\sum_{i=1}^q (a_i - b_i)^2} =
+\sqrt{(a_x - b_x)^2 + (a_y - b_y)^2 + (a_z - b_z)^2}
+$$
+
+$$
+d(a,b)^2 =
+\left[\begin{array}{ccc}
+ a_x - b_x & a_y - b_y & a_z - b_z
+\end{array}\right]
+\times
+\left[\begin{array}{c}
+ a_x - b_x \\
+ a_y - b_y \\
+  a_z - b_z
+\end{array}\right]
+$$
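The distance and its squared row-by-column form can be checked numerically. A minimal Python sketch with hypothetical sample colors (not part of the patch):

```python
import math

# Two hypothetical colors in R^3.
a = (1.0, 2.0, 3.0)
b = (4.0, 6.0, 3.0)

# Euclidean distance: square root of the summed squared component differences.
def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Squared distance as the dot product of (a - b) with itself.
d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
```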
+
+%=================================================================
+\subsection{Data points in $\mathbb{R}^3$}
+We present the four points of the dataset with their vector representation.
+
+$$
+\bm{a} =
+\left[\begin{array}{c}
+ a_x \\
+ a_y \\
+ a_z
+\end{array}\right]
+=
+\left[\begin{array}{c}
+ p_{1x} \\
+ p_{1y} \\
+ p_{1z}
+\end{array}\right]
+=
+\left[\begin{array}{c}
+ p_{11} \\
+ p_{12} \\
+ p_{13}
+\end{array}\right]
+= \bm{p}_1
+$$
+
+$$
+\bm{b} =
+\left[\begin{array}{c}
+ b_x \\
+ b_y \\
+ b_z
+\end{array}\right]
+=
+\left[\begin{array}{c}
+ p_{2x} \\
+ p_{2y} \\
+ p_{2z}
+\end{array}\right]
+=
+\left[\begin{array}{c}
+ p_{21} \\
+ p_{22} \\
+ p_{23}
+\end{array}\right]
+= \bm{p}_2
+$$
+
+$$
+\bm{c} =
+\left[\begin{array}{c}
+ c_x \\
+ c_y \\
+ c_z
+\end{array}\right]
+=
+\left[\begin{array}{c}
+ p_{3x} \\
+ p_{3y} \\
+ p_{3z}
+\end{array}\right]
+=
+\left[\begin{array}{c}
+ p_{31} \\
+ p_{32} \\
+ p_{33}
+\end{array}\right]
+= \bm{p}_3
+$$
+
+$$
+\mbox{\boldmath$d$} =
+\left[\begin{array}{c}
+ d_x \\
+ d_y \\
+ d_z
+\end{array}\right]
+=
+\left[\begin{array}{c}
+ p_{4x} \\
+ p_{4y} \\
+ p_{4z}
+\end{array}\right]
+=
+\left[\begin{array}{c}
+ p_{41} \\
+ p_{42} \\
+ p_{43}
+\end{array}\right]
+= \bm{p}_4
+$$
+
+One may group the four points into one matrix $P$:
+
+$$
+P =
+\left[\begin{array}{c}
+ \bm{a}^t \\
+ \bm{b}^t \\
+ \bm{c}^t \\
+ \bm{d}^t
+\end{array}\right]
+=
+\left[\begin{array}{ccc}
+ a_x & a_y & a_z \\
+ b_x & b_y & b_z \\
+ c_x & c_y & c_z \\
+ d_x & d_y & d_z
+\end{array}\right]
+=
+\left[\begin{array}{ccc}
+ p_{1x} & p_{1y} & p_{1z} \\
+ p_{2x} & p_{2y} & p_{2z} \\
+ p_{3x} & p_{3y} & p_{3z} \\
+ p_{4x} & p_{4y} & p_{4z}
+\end{array}\right]
+=
+\left[\begin{array}{ccc}
+  p_{11} & p_{12} & p_{13} \\
+ p_{21} & p_{22} & p_{23} \\
+ p_{31} & p_{32} & p_{33} \\
+ p_{41} & p_{42} & p_{43}
+\end{array}\right]
+=
+\left[\begin{array}{c}
+ \bm{p}_1^t \\
+ \bm{p}_2^t \\
+ \bm{p}_3^t \\
+ \bm{p}_4^t
+\end{array}\right]
+$$
+
+%=================================================================
+\subsection{The group}
+
+We can define the group set $\fm{G}$ in two contexts:
+\begin{itemize}
+ \item First, the group set is a partition.
+  \item Second, the group set is a fuzzy set.
+\end{itemize}
+
+$\df{P} = \sum_{i=1}^{k} |\fm{G}_i| = \df{F} + \df{H} = 4$
+
+$\dim \bm{f} = \dim \bm{h} = \df{P} = 4$
+
+$
+\bm{f} =
+\left[\begin{array}{c}
+ f_a \\
+ f_b \\
+ f_c \\
+ f_d
+\end{array}\right]
+=
+\frac{1}{2}
+\left[\begin{array}{c}
+ 1 \\
+ 1 \\
+ 0 \\
+ 0
+\end{array}\right]
+$
+
+$\sum_{i=1}^{r} f_i = 1$
+
+$
+\bm{h} =
+\left[\begin{array}{c}
+ h_a \\
+ h_b \\
+ h_c \\
+ h_d
+\end{array}\right]
+=
+\frac{1}{2}
+\left[\begin{array}{c}
+ 0 \\
+ 0 \\
+ 1 \\
+ 1
+\end{array}\right]
+$
+
+$\sum_{i=1}^{r} h_i = 1$
+
+The membership degree of $\bm{a}$ in the group $\fm{H}$ is $h_a$ such that:
+
+$ h_a \in
+\begin{cases}
+  \{0,1\} & \text{if $\fm{G}$ is a partition,} \\
+  [0,1]   & \text{if $\fm{G}$ is a fuzzy set.}
+\end{cases}
+$
+
+%#################################################################
+\section{Moments}
+
+Let's have a look at the first two moments.
+
+%=================================================================
+\subsection{The mean}
+
+$$
+ \mbox{\boldmath$m$} =
+\left[\begin{array}{c}
+ m_x \\
+ m_y \\
+ m_z
+\end{array}\right]
+=
+\left[\begin{array}{c}
+ m_1 \\
+ m_2 \\
+ m_3
+\end{array}\right]
+=
+\frac{1}{4}
+\left[\begin{array}{c}
+ a_x + b_x + c_x + d_x \\
+ a_y + b_y + c_y + d_y \\
+ a_z + b_z + c_z + d_z
+\end{array}\right]
+=
+\frac{1}{4}
+\left[\begin{array}{c}
+ \sum_{i=1}^{4}p_{ix} \\
+ \sum_{i=1}^{4}p_{iy} \\
+ \sum_{i=1}^{4}p_{iz}
+\end{array}\right]
+=
+\frac{1}{4}(\mbox{\boldmath$a$} + \mbox{\boldmath$b$} + \mbox{\boldmath$c$} +
+ \mbox{\boldmath$d$})
+=
+\frac{1}{4}\sum_{i=1}^{4}\mbox{\boldmath$p$}_i
+$$
+$$
+ \mbox{\boldmath$m$} =
+\frac{1}{4}
+\left[\begin{array}{cccc}
+ a_x & b_x & c_x & d_x \\
+ a_y & b_y & c_y & d_y \\
+ a_z & b_z & c_z & d_z
+\end{array}\right]
+\left[\begin{array}{c}
+ 1 \\
+ 1 \\
+ 1 \\
+ 1
+\end{array}\right]
+=
+\frac{1}{4}
+\left[\begin{array}{cccc}
+ \mbox{\boldmath$p$}_1^t & \mbox{\boldmath$p$}_2^t &
+ \mbox{\boldmath$p$}_3^t & \mbox{\boldmath$p$}_4^t
+\end{array}\right]
+\left[\begin{array}{c}
+ 1 \\
+ 1 \\
+ 1 \\
+ 1
+\end{array}\right]
+=
+\frac{1}{4} P^t \mbox{\boldmath$ 1$}
+$$
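The identity $\bm{m} = \frac{1}{4} P^t \bm{1}$ can be checked numerically. A small Python sketch with hypothetical points (numpy assumed available):

```python
import numpy as np

# Hypothetical sample points a, b, c, d as the rows of P.
P = np.array([[1.0, 2.0, 3.0],
              [3.0, 2.0, 1.0],
              [0.0, 4.0, 2.0],
              [4.0, 0.0, 2.0]])

# m = (1/4) P^t 1 : the matrix form of the component-wise mean.
m = P.T @ np.ones(4) / 4
```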
+The mean matrix:
+
+$$
+M =
+\left[\begin{array}{c}
+ \mbox{\boldmath$m$}^t \\
+ \mbox{\boldmath$m$}^t \\
+ \mbox{\boldmath$m$}^t \\
+ \mbox{\boldmath$m$}^t
+\end{array}\right]
+=
+\left[\begin{array}{ccc}
+ m_x & m_y & m_z \\
+ m_x & m_y & m_z \\
+ m_x & m_y & m_z \\
+ m_x & m_y & m_z
+\end{array}\right]
+=
+\left[\begin{array}{ccc}
+ m_1 & m_2 & m_3 \\
+ m_1 & m_2 & m_3 \\
+ m_1 & m_2 & m_3 \\
+ m_1 & m_2 & m_3
+\end{array}\right]
+$$
+
+We define the difference between a point $\mbox{\boldmath$p$}_i$ and the mean:
+$$
+(\mbox{\boldmath$p$}_i - \mbox{\boldmath$m$}) =
+\left[\begin{array}{c}
+ p_{ix} - m_x \\
+ p_{iy} - m_y \\
+ p_{iz} - m_z
+\end{array}\right]
+=
+\left[\begin{array}{c}
+ p_{i1} - m_1 \\
+ p_{i2} - m_2 \\
+ p_{i3} - m_3
+\end{array}\right]
+$$
+
+And for all the dataset:
+
+$$
+(P - M) =
+\left[\begin{array}{ccc}
+ a_x & a_y & a_z \\
+ b_x & b_y & b_z \\
+ c_x & c_y & c_z \\
+ d_x & d_y & d_z
+\end{array}\right]
+-
+\left[\begin{array}{ccc}
+ m_x & m_y & m_z \\
+ m_x & m_y & m_z \\
+ m_x & m_y & m_z \\
+ m_x & m_y & m_z
+\end{array}\right]
+=
+\left[\begin{array}{ccc}
+ a_x - m_x & a_y - m_y & a_z - m_z \\
+ b_x - m_x & b_y - m_y & b_z - m_z \\
+ c_x - m_x & c_y - m_y & c_z - m_z \\
+ d_x - m_x & d_y - m_y & d_z - m_z
+\end{array}\right]
+$$
+
+%=================================================================
+\subsection{The variance}
+
+$$
+\begin{array}{lcl}
+V & = &
+\left[\begin{array}{ccc}
+ v_{xx} & v_{xy} & v_{xz} \\
+ v_{yx} & v_{yy} & v_{yz} \\
+ v_{zx} & v_{zy} & v_{zz}
+\end{array}\right]
+=
+\left[\begin{array}{ccc}
+ v_{11} & v_{12} & v_{13} \\
+ v_{21} & v_{22} & v_{23} \\
+ v_{31} & v_{32} & v_{33}
+\end{array}\right]
+\\
+V & = &
+\frac{1}{4}
+\left[\begin{array}{ccc}
+ \sum_{i=1}^4 (p_{ix} - m_x)(p_{ix} - m_x) &
+ \sum_{i=1}^4 (p_{ix} - m_x)(p_{iy} - m_y) &
+ \sum_{i=1}^4 (p_{ix} - m_x)(p_{iz} - m_z) \\
+ \sum_{i=1}^4 (p_{iy} - m_y)(p_{ix} - m_x) &
+ \sum_{i=1}^4 (p_{iy} - m_y)(p_{iy} - m_y) &
+ \sum_{i=1}^4 (p_{iy} - m_y)(p_{iz} - m_z) \\
+ \sum_{i=1}^4 (p_{iz} - m_z)(p_{ix} - m_x) &
+ \sum_{i=1}^4 (p_{iz} - m_z)(p_{iy} - m_y) &
+ \sum_{i=1}^4 (p_{iz} - m_z)(p_{iz} - m_z)
+\end{array}\right]
+\\
+V & = &
+\frac{1}{4}
+\left[\begin{array}{cccc}
+ a_x - m_x & b_x - m_x & c_x - m_x & d_x - m_x \\
+ a_y - m_y & b_y - m_y & c_y - m_y & d_y - m_y \\
+ a_z - m_z & b_z - m_z & c_z - m_z & d_z - m_z
+\end{array}\right]
+\left[\begin{array}{ccc}
+ a_x - m_x & a_y - m_y & a_z - m_z \\
+ b_x - m_x & b_y - m_y & b_z - m_z \\
+ c_x - m_x & c_y - m_y & c_z - m_z \\
+ d_x - m_x & d_y - m_y & d_z - m_z
+\end{array}\right]
+\\
+V & = &
+\frac{1}{4}
+(P - M)^t (P - M)
+\end{array}
+$$
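The matrix form $V = \frac{1}{4}(P - M)^t (P - M)$ coincides with the biased (divide-by-$n$) variance/covariance matrix. A numpy sketch with hypothetical points:

```python
import numpy as np

# Hypothetical sample points as the rows of P.
P = np.array([[1.0, 2.0, 3.0],
              [3.0, 2.0, 1.0],
              [0.0, 4.0, 2.0],
              [4.0, 0.0, 2.0]])

M = np.tile(P.mean(axis=0), (4, 1))  # the mean matrix: m^t stacked four times
V = (P - M).T @ (P - M) / 4          # V = (1/4)(P - M)^t (P - M)
```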
+
+%#################################################################
+\section{Splitting into groups}
+When we study a mixed population, the total variance can be split into the
+variance between the groups and the variance within each group.
+
+We define two groups in the population. Each group owns its moments of the
+second order.
+
+%=================================================================
+\subsection{Decomposing the count}
+
+$$
+\begin{array}{lcl}
+n_t & = & n_1 + n_2 \\
+ & = & \sum_{i=1}^{2} n_i
+\end{array}
+$$
+
+
+%=================================================================
+\subsection{Decomposing the mean}
+
+$$
+\begin{array}{lcl}
+\mbox{\boldmath$m_t$} & = &
+ \frac{1}{n_t}(n_1 \mbox{\boldmath$m_1$} + n_2 \mbox{\boldmath$m_2$}) \\
+ & = &
+ \frac{1}{n_t}\sum_{i=1}^{2} n_i \mbox{\boldmath$m_i$}
+\end{array}
+$$
+
+
+%=================================================================
+\subsection{Decomposing the variance}
+When we study a mixed population, the total variance can be split into the
+variance between the groups and the variance within each group.
+
+$$
+V_t = V_w + V_b
+$$
+
+$$
+\begin{array}{lcl}
+V_w & = & \frac{1}{n_t}(n_1 V_1 + n_2 V_2) \\
+    & = & \frac{1}{n_t}\sum_{i=1}^2 n_i V_i
+\end{array}
+$$
+
+$$
+\begin{array}{lcl}
+V_b & = & \frac{1}{n_t}(n_1 (\mbox{\boldmath$m_1$} - \mbox{\boldmath$m_t$})^2 +
+          n_2 (\mbox{\boldmath$m_2$} - \mbox{\boldmath$m_t$})^2) \\
+ & = & \frac{1}{n_t} \sum_{i=1}^2 n_i
+ (\mbox{\boldmath$m_i$} - \mbox{\boldmath$m_t$})^2
+\end{array}
+$$
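The decomposition of the total variance into within-group and between-group parts (reading the squared mean differences as outer products) can be checked on two hypothetical groups:

```python
import numpy as np

# Two hypothetical groups of 3-d points.
G1 = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
G2 = np.array([[4.0, 2.0, 2.0], [6.0, 2.0, 2.0]])
Pt = np.vstack([G1, G2])
n1, n2, nt = len(G1), len(G2), len(Pt)

def cov(X):
    # Biased (1/n) variance/covariance matrix.
    D = X - X.mean(axis=0)
    return D.T @ D / len(X)

m1, m2, mt = G1.mean(axis=0), G2.mean(axis=0), Pt.mean(axis=0)

Vt = cov(Pt)                                       # total variance
Vw = (n1 * cov(G1) + n2 * cov(G2)) / nt            # within-group part
Vb = (n1 * np.outer(m1 - mt, m1 - mt)
      + n2 * np.outer(m2 - mt, m2 - mt)) / nt      # between-group part
```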
+
+%#################################################################
+\section{Basis}
+
+%%=================================================================
+\subsection{Determinant of a square matrix 3x3}
+
+$$
+\det{V} =
+\left|\begin{array}{ccc}
+ v_{11} & v_{12} & v_{13} \\
+ v_{21} & v_{22} & v_{23} \\
+ v_{31} & v_{32} & v_{33}
+\end{array}\right|
+=
+v_{11}(v_{22}v_{33} - v_{32}v_{23})
+- v_{12}(v_{21}v_{33} - v_{31}v_{23})
++ v_{13}(v_{21}v_{32} - v_{31}v_{22})
+$$
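A numeric check of the cofactor expansion along the first row, on a hypothetical matrix (0-based indices in the code):

```python
import numpy as np

# Hypothetical symmetric 3x3 matrix.
v = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Cofactor expansion along the first row, term by term as in the formula.
det = (v[0, 0] * (v[1, 1] * v[2, 2] - v[2, 1] * v[1, 2])
       - v[0, 1] * (v[1, 0] * v[2, 2] - v[2, 0] * v[1, 2])
       + v[0, 2] * (v[1, 0] * v[2, 1] - v[2, 0] * v[1, 1]))
```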
+
+%%=================================================================
+\subsection{Transpose}
+
+$
+V^t =
+\left[\begin{array}{ccc}
+ v_{11} & v_{12} & v_{13} \\
+ v_{21} & v_{22} & v_{23} \\
+ v_{31} & v_{32} & v_{33}
+\end{array}\right]^t
+=
+\left[\begin{array}{ccc}
+ v_{11} & v_{21} & v_{31} \\
+ v_{12} & v_{22} & v_{32} \\
+ v_{13} & v_{23} & v_{33}
+\end{array}\right]
+$
+
+%%=================================================================
+\subsection{Inverse of a square matrix 3x3}
+
+\begin{tabular}{c c}
+$
+minor(v_{11})
+=
+\left|\begin{array}{cc}
+ v_{22} & v_{23} \\
+ v_{32} & v_{33}
+\end{array}\right|
+=
+v_{22}v_{33} - v_{32}v_{23}
+$
+&
+$
+minor(v_{21})
+=
+\left|\begin{array}{cc}
+  v_{12} & v_{13} \\
+  v_{32} & v_{33}
+\end{array}\right|
+=
+v_{12}v_{33} - v_{32}v_{13}
+$
+\\
+\\
+$
+minor(v_{31})
+=
+\left|\begin{array}{cc}
+ v_{12} & v_{13} \\
+ v_{22} & v_{23}
+\end{array}\right|
+=
+v_{12}v_{23} - v_{22}v_{13}
+$
+&
+$
+minor(v_{12})
+=
+\left|\begin{array}{cc}
+ v_{21} & v_{23} \\
+ v_{31} & v_{33}
+\end{array}\right|
+=
+v_{21}v_{33} - v_{31}v_{23}
+$
+\\
+\\
+$
+minor(v_{22})
+=
+\left|\begin{array}{cc}
+ v_{11} & v_{13} \\
+ v_{31} & v_{33}
+\end{array}\right|
+=
+v_{11}v_{33} - v_{31}v_{13}
+$
+&
+$
+minor(v_{32})
+=
+\left|\begin{array}{cc}
+ v_{11} & v_{13} \\
+ v_{21} & v_{23}
+\end{array}\right|
+=
+v_{11}v_{23} - v_{21}v_{13}
+$
+\\
+\\
+$
+minor(v_{13})
+=
+\left|\begin{array}{cc}
+ v_{21} & v_{22} \\
+ v_{31} & v_{32}
+\end{array}\right|
+=
+v_{21}v_{32} - v_{31}v_{22}
+$
+&
+$
+minor(v_{23})
+=
+\left|\begin{array}{cc}
+ v_{11} & v_{12} \\
+ v_{31} & v_{32}
+\end{array}\right|
+=
+v_{11}v_{32} - v_{31}v_{12}
+$
+\\
+\\
+$
+minor(v_{33})
+=
+\left|\begin{array}{cc}
+ v_{11} & v_{12} \\
+ v_{21} & v_{22}
+\end{array}\right|
+=
+v_{11}v_{22} - v_{21}v_{12}
+$
+\end{tabular}
+
+
+$$
+minor(V) =
+minor(
+\left[\begin{array}{ccc}
+ v_{11} & v_{12} & v_{13} \\
+ v_{21} & v_{22} & v_{23} \\
+ v_{31} & v_{32} & v_{33}
+\end{array}\right])
+=
+\left[\begin{array}{ccc}
+ minor(v_{11}) & minor(v_{12}) & minor(v_{13}) \\
+ minor(v_{21}) & minor(v_{22}) & minor(v_{23}) \\
+ minor(v_{31}) & minor(v_{32}) & minor(v_{33})
+\end{array}\right]
+$$
+
+$$
+cofactor(V)
+=
+\left[\begin{array}{ccc}
+ (+)minor(v_{11}) & (-)minor(v_{12}) & (+)minor(v_{13}) \\
+ (-)minor(v_{21}) & (+)minor(v_{22}) & (-)minor(v_{23}) \\
+ (+)minor(v_{31}) & (-)minor(v_{32}) & (+)minor(v_{33})
+\end{array}\right]
+$$
+
+
+$$
+adj(V) = cofactor(V)^t
+=
+\left[\begin{array}{ccc}
+ v_{22}v_{33} - v_{32}v_{23} &
+  v_{32}v_{13} - v_{12}v_{33} &
+ v_{12}v_{23} - v_{22}v_{13} \\
+ v_{31}v_{23} - v_{21}v_{33} &
+ v_{11}v_{33} - v_{31}v_{13} &
+ v_{21}v_{13} - v_{11}v_{23} \\
+ v_{21}v_{32} - v_{31}v_{22} &
+  v_{31}v_{12} - v_{11}v_{32} &
+ v_{11}v_{22} - v_{21}v_{12}
+\end{array}\right]
+$$
+
+$$
+V^{-1}
+=
+\frac{adj(V)}{\det V}
+$$
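The adjugate route to the inverse can be replayed numerically. A sketch where `minor` is a hypothetical helper that deletes one row and one column:

```python
import numpy as np

# Hypothetical invertible 3x3 matrix.
V = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

def minor(A, i, j):
    # Determinant of A with row i and column j removed.
    s = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return s[0, 0] * s[1, 1] - s[0, 1] * s[1, 0]

cof = np.array([[(-1) ** (i + j) * minor(V, i, j) for j in range(3)]
                for i in range(3)])
V_inv = cof.T / np.linalg.det(V)   # adj(V) = cofactor(V)^t, then divide by det
```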
+
+
+%%=================================================================
+\subsection{Eigenvalues and eigenvectors}
+
+We assume that we work on a variance/covariance matrix, which is real and
+symmetric. In this case, all three eigenvalues are real.
+
+$$
+V \bm{x} = \lambda \bm{x}
+$$
+
+$$
+\det (V - \lambda I) = 0
+$$
+
+\begin{tabular}{lcl}
+$\det (V - \lambda I)$ &
+$=$ &
+$\left|\begin{array}{ccc}
+  v_{11} - \lambda & v_{12} & v_{13} \\
+  v_{21} & v_{22} - \lambda & v_{23} \\
+  v_{31} & v_{32} & v_{33} - \lambda
+\end{array}\right|$
+\\
+\\
+&
+$=$ &
+$(v_{11} - \lambda)((v_{22} - \lambda)(v_{33} - \lambda) - v_{32}v_{23})$
+\\
+&
+&
+$- v_{12}(v_{21}(v_{33} - \lambda) - v_{31}v_{23})$
+\\
+&
+&
+$+ v_{13}(v_{21}v_{32} - v_{31}(v_{22} - \lambda))$
+\\
+\\
+&
+= &
+$- \lambda^3 + (v_{11} + v_{22} + v_{33})\lambda^2$
+\\
+&
+&
+$ +(v_{12}v_{21} + v_{13}v_{31} + v_{23}v_{32}
+ - v_{11}v_{22} - v_{11}v_{33} - v_{22}v_{33})\lambda$
+\\
+&
+&
+$+ v_{11}(v_{22}v_{33} - v_{32}v_{23}) -
+v_{12}(v_{21}v_{33} - v_{31}v_{23}) +
+v_{13}(v_{21}v_{32} - v_{31}v_{22})$
+\\
+\\
+& $=$ &
+$-\lambda^3 + tr(V) \lambda^2 + \frac{1}{2}[tr(V^2) - tr(V)^2]\lambda + \det V$
+\\
+\\
+\end{tabular}
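The closed trace form of the characteristic polynomial must vanish at every eigenvalue. A numeric sketch on a hypothetical symmetric matrix:

```python
import numpy as np

# Hypothetical symmetric matrix.
V = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

def charpoly(V, lam):
    # -l^3 + tr(V) l^2 + (1/2)[tr(V^2) - tr(V)^2] l + det(V)
    t, t2 = np.trace(V), np.trace(V @ V)
    return -lam**3 + t * lam**2 + 0.5 * (t2 - t**2) * lam + np.linalg.det(V)

# Every eigenvalue is a root of the characteristic polynomial.
residues = [charpoly(V, lam) for lam in np.linalg.eigvalsh(V)]
```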
+
+
+\begin{tabular}{ccccccc}
+\\
+$A$
+& $=$ &
+$
+\left[\begin{array}{ccc}
+ a & d & e \\
+ d & b & f \\
+ e & f & c
+\end{array}\right]
+$
+\\
+\\
+& $=$ &
+$P$ & $\times$ & $D$ & $\times$ & $P^t$
+\\
+\\
+& $=$ &
+$
+\left[\begin{array}{ccc}
+ u_1 & v_1 & w_1 \\
+ u_2 & v_2 & w_2 \\
+ u_3 & v_3 & w_3
+\end{array}\right]
+$
+& $\times$ &
+$
+\left[\begin{array}{ccc}
+ \lambda_1 & 0 & 0 \\
+ 0 & \lambda_2 & 0 \\
+ 0 & 0 & \lambda_3
+\end{array}\right]
+$
+& $\times$ &
+$
+\left[\begin{array}{ccc}
+ u_1 & u_2 & u_3 \\
+ v_1 & v_2 & v_3 \\
+ w_1 & w_2 & w_3
+\end{array}\right]
+$
+\\
+\\
+& $=$ &
+$
+\lambda_1
+\left[\begin{array}{ccc}
+ u_1^2 & u_1u_2 & u_1u_3 \\
+ u_2u_1 & u_2^2 & u_2u_3 \\
+ u_3u_1 & u_3u_2 & u_3^2 \\
+\end{array}\right]
+$
+& $+$ &
+$
+\lambda_2
+\left[\begin{array}{ccc}
+ v_1^2 & v_1v_2 & v_1v_3 \\
+ v_2v_1 & v_2^2 & v_2v_3 \\
+ v_3v_1 & v_3v_2 & v_3^2 \\
+\end{array}\right]
+$
+& $+$ &
+$
+\lambda_3
+\left[\begin{array}{ccc}
+ w_1^2 & w_1w_2 & w_1w_3 \\
+ w_2w_1 & w_2^2 & w_2w_3 \\
+ w_3w_1 & w_3w_2 & w_3^2 \\
+\end{array}\right]
+$
+\\
+\\
+& $=$ &
+$\lambda_1\bm{uu^t}$
+& $+$ &
+$\lambda_2\bm{vv^t}$
+& $+$ &
+$\lambda_3\bm{ww^t}$
+\\
+\\
+$I$
+& $=$ &
+$PP^t$
+& $=$ &
+$P^tP$
+\\
+\\
+$0$
+& $=$ &
+$A\bm{u}-\lambda_1\bm{u}$
+& $=$ &
+$A\bm{v}-\lambda_2\bm{v}$
+& $=$ &
+$A\bm{w}-\lambda_3\bm{w}$
+\\
+\\
+$|P|$
+& $=$ &
+$
+\left|\begin{array}{ccc}
+ u_1 & u_2 & u_3 \\
+ v_1 & v_2 & v_3 \\
+ w_1 & w_2 & w_3
+\end{array}\right|
+$
+& $=$ &
+1
+\end{tabular}
+
+
+%% sample
+
+\begin{tabular}{ccccccc}
+\\
+$
+\left[\begin{array}{ccc}
+ \frac{3}{2} & \frac{1}{2} & 0 \\
+\\
+ \frac{1}{2} & \frac{3}{2} & 0 \\
+\\
+ 0 & 0 & 2
+\end{array}\right]
+$
+& $=$ &
+$
+\left[\begin{array}{ccc}
+  \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{2}} \\
+\\
+ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{2}} \\
+\\
+ \frac{1}{\sqrt{3}} & -\frac{2}{\sqrt{6}} & 0
+\end{array}\right]
+$
+& $\times$ &
+$
+\left[\begin{array}{ccc}
+ 2 & 0 & 0 \\
+\\
+ 0 & 2 & 0 \\
+\\
+ 0 & 0 & 1
+\end{array}\right]
+$
+& $\times$ &
+$
+\left[\begin{array}{ccc}
+ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\
+\\
+ \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & -\frac{2}{\sqrt{6}} \\
+\\
+ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0
+\end{array}\right]
+$
+\\
+\\
+& $=$ &
+$
+2
+\left[\begin{array}{ccc}
+ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\
+\\
+ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\
+\\
+ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\
+\end{array}\right]
+$
+& $+$ &
+$
+2
+\left[\begin{array}{ccc}
+ \frac{1}{6} & \frac{1}{6} & -\frac{2}{6} \\
+\\
+ \frac{1}{6} & \frac{1}{6} & -\frac{2}{6} \\
+\\
+  -\frac{2}{6} & -\frac{2}{6} & \frac{4}{6} \\
+\end{array}\right]
+$
+& $+$ &
+$
+1
+\left[\begin{array}{ccc}
+ \frac{1}{2} & -\frac{1}{2} & 0 \\
+\\
+ -\frac{1}{2} & \frac{1}{2} & 0 \\
+\\
+ 0 & 0 & 0 \\
+\end{array}\right]
+$
+\\
+\\
+$
+\left[\begin{array}{ccc}
+ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\
+\\
+ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\
+\\
+ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\
+\end{array}\right]
+$
+& $=$ &
+$
+\left[\begin{array}{c}
+ \frac{1}{\sqrt{3}} \\
+ \frac{1}{\sqrt{3}} \\
+ \frac{1}{\sqrt{3}} \\
+\end{array}\right]
+$
+& $\times$ &
+$
+\left[\begin{array}{ccc}
+ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\
+\end{array}\right]
+$
+\\
+\\
+$
+\left[\begin{array}{ccc}
+ \frac{1}{6} & \frac{1}{6} & -\frac{2}{6} \\
+\\
+ \frac{1}{6} & \frac{1}{6} & -\frac{2}{6} \\
+\\
+  -\frac{2}{6} & -\frac{2}{6} & \frac{4}{6} \\
+\end{array}\right]
+$
+& $=$ &
+$
+\left[\begin{array}{c}
+ \frac{1}{\sqrt{6}} \\
+ \frac{1}{\sqrt{6}} \\
+ -\frac{2}{\sqrt{6}} \\
+\end{array}\right]
+$
+& $\times$ &
+$
+\left[\begin{array}{ccc}
+ \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & -\frac{2}{\sqrt{6}} \\
+\end{array}\right]
+$
+\\
+\\
+$
+\left[\begin{array}{ccc}
+ \frac{1}{2} & -\frac{1}{2} & 0 \\
+\\
+ -\frac{1}{2} & \frac{1}{2} & 0 \\
+\\
+ 0 & 0 & 0 \\
+\end{array}\right]
+$
+& $=$ &
+$
+\left[\begin{array}{c}
+ -\frac{1}{\sqrt{2}} \\
+ \frac{1}{\sqrt{2}} \\
+ 0 \\
+\end{array}\right]
+$
+& $\times$ &
+$
+\left[\begin{array}{ccc}
+ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \\
+\end{array}\right]
+$
+\end{tabular}
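The spectral reconstruction worked out above can be replayed numerically with the same sample matrix (eigenvalues 2, 2, 1):

```python
import numpy as np

# The sample matrix of the worked example.
A = np.array([[1.5, 0.5, 0.0],
              [0.5, 1.5, 0.0],
              [0.0, 0.0, 2.0]])

lam, Pm = np.linalg.eigh(A)   # columns of Pm are orthonormal eigenvectors

# A = lambda_1 uu^t + lambda_2 vv^t + lambda_3 ww^t
A_rebuilt = sum(l * np.outer(x, x) for l, x in zip(lam, Pm.T))
```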
+
+%%=================================================================
+\subsection{Least squares (OLS) for eigenvalues}
+
+In 3d, it is difficult to extract the cubic roots of the characteristic
+polynomial. The difficulty disappears once we find one of the three roots.
+A planar regression gives us the equation of the major inertia plane. From
+that equation, we can determine its normal vector $\bm{w}$, which satisfies
+$A\bm{w} = \lambda_3\bm{w}$. Thus we know $\lambda_3$. Moreover, since
+$trace(A) = \lambda_1 + \lambda_2 + \lambda_3$ and
+$det(A) = \lambda_1 \lambda_2 \lambda_3$, we can obtain the other two
+eigenvalues by solving $\lambda^2 - (trace(A) - \lambda_3)
+\lambda + \frac{det(A)}{\lambda_3} = 0$. Knowing $\lambda_1$ and
+$\lambda_2$, we can find $\bm{u}$ and $\bm{v}$ from $A\bm{u} =
+\lambda_1\bm{u}$ and $A\bm{v} = \lambda_2\bm{v}$.
+
+Let us center the points by subtracting their center of mass. Then we have
+three equivalent ways to estimate the coefficients of the plane:
+\begin{itemize}
+  \item if $c \neq 0$, then $\frac{a}{c}x + \frac{b}{c}y + z = 0$,
+  \item if $b \neq 0$, then $\frac{a}{b}x + y + \frac{c}{b}z = 0$,
+  \item if $a \neq 0$, then $x + \frac{b}{a}y + \frac{c}{a}z = 0$.
+\end{itemize}
+As we cannot decide beforehand which way is best, we may have to test all
+three.
+
+Let us choose the linear model of the major inertia plane
+($ax + by + cz + d = 0$), assuming $c \neq 0$.
+
+\begin{tabular}{lcl}
+$\bm{y}$ & $=$ &
+$\left[\begin{array}{c}
+ z_1 \\
+ \vdots \\
+ z_r \\
+\end{array}\right]$
+\\
+\\
+$\bm{\theta}$ & $=$ &
+$\left[\begin{array}{c}
+ a \\
+ b \\
+ d \\
+\end{array}\right]$
+\\
+\\
+$J$ & $=$ &
+$\left[\begin{array}{ccc}
+ x_1 & y_1 & 1 \\
+ \vdots & \vdots & \vdots\\
+ x_r & y_r & 1\\
+\end{array}\right]$
+\\
+\\
+$W$ & $=$ &
+$\left[\begin{array}{ccc}
+ 1 & \ldots & 0 \\
+ \vdots & \ddots & \vdots\\
+ 0 & \ldots & 1\\
+\end{array}\right]$
+\\
+\\
+$\chi(\bm{\theta})^2$ & $=$ &
+$(J\bm{\theta} - \bm{y})^t W (J\bm{\theta} - \bm{y})$
+\\
+\\
+$\nabla \chi(\bm{\theta})^2$ & $=$ &
+$2J^tWJ\bm{\theta} - 2J^tW\bm{y}$
+\\
+\\
+$\bm{\theta_{min}}$ & $=$ &
+$(J^tWJ)^{-1} J^tW\bm{y}$
+\end{tabular}
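The normal-equation solution $\bm{\theta_{min}} = (J^tWJ)^{-1} J^tW\bm{y}$ can be sketched with numpy on hypothetical centred points lying exactly on a plane ($z = 2x - y$, so the recovered coefficients are known):

```python
import numpy as np

# Hypothetical centred points lying exactly on the plane z = 2x - y.
x = np.array([1.0, -1.0, 2.0, -2.0, 0.0])
y = np.array([0.0, 1.0, -1.0, 2.0, -2.0])
z = 2.0 * x - y

J = np.column_stack([x, y, np.ones_like(x)])  # design matrix rows [x_i, y_i, 1]
W = np.eye(len(x))                            # identity weights

# theta_min = (J^t W J)^{-1} J^t W y  (here the observation vector holds the z-values)
theta = np.linalg.solve(J.T @ W @ J, J.T @ W @ z)
```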
+
+\end{document}
--
1.5.6.5