Tuesday, January 10, 2017

Deep Learning reading, through chapter 4.

Deep Learning, by Ian Goodfellow et al.

AI -> knowledge bases -> machine learning -> representation learning (shallow learning) -> deep learning (e.g., the multilayer perceptron, MLP).

Representation learning refers to machine learning that can discover not only the mapping from representation to output but also the representation itself.

When designing features or algorithms for learning features, our goal is to separate the factors of variation that explain the observed data.

When obtaining a representation is nearly as difficult as solving the original problem, representation learning alone does not help. This is the central motivation for deep learning, which builds complex representations out of simpler ones.

The transpose of a matrix is its mirror image across the main diagonal, which runs from the upper left to the lower right.
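A quick numpy sketch to make this concrete (my own example, not from the book):

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2 x 3
print(A.T)                  # 3 x 2; (A.T)[i, j] == A[j, i]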

Eigenvalue decomposition:   A = V diag(\lambda) V^{-1}, where the columns of V are the eigenvectors of A and \lambda is the vector of eigenvalues.
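A minimal numpy check of this, assuming a matrix with a full set of real eigenvectors (my own example):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, V = np.linalg.eig(A)           # lam: eigenvalues; columns of V: eigenvectors
A_rebuilt = V @ np.diag(lam) @ np.linalg.inv(V)
print(np.allclose(A, A_rebuilt))    # True: A = V diag(lambda) V^-1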

Every real symmetric matrix can be decomposed into an expression using only real-valued eigenvectors and eigenvalues:   A = Q \Lambda Q^T, where Q is an orthogonal matrix whose columns are eigenvectors of A, and \Lambda is a diagonal matrix of the corresponding eigenvalues.  Hence, every real symmetric matrix is guaranteed to have an eigendecomposition.  (Qin: what does this mean for an adjacency matrix?)
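(My note on the question above: the adjacency matrix of an undirected graph is real symmetric, so it always has such a decomposition.) A small numpy sketch of the symmetric case; eigh is the routine specialized for symmetric/Hermitian matrices:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                      # real symmetric
lam, Q = np.linalg.eigh(A)
print(np.allclose(A, Q @ np.diag(lam) @ Q.T))   # True: A = Q Lambda Q^T
print(np.allclose(Q.T @ Q, np.eye(2)))          # True: Q is orthogonal, so Q^-1 = Q^T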

Singular value decomposition (SVD) is more generally applicable than eigenvalue decomposition.
SVD:   A = U D V^T
A: m x n, U: m x m, V: n x n, D: m x n; U and V are both orthogonal, and D is a (rectangular) diagonal matrix.
The columns of U are the eigenvectors of A A^T, and the columns of V are the eigenvectors of A^T A.  The non-zero singular values of A are the square roots of the eigenvalues of A^T A (equivalently, of A A^T).
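A numpy sketch checking both the shapes and the square-root relation (my own example):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # m=2, n=3
U, s, Vt = np.linalg.svd(A)              # U: 2x2, Vt: 3x3, s: singular values (descending)
D = np.zeros((2, 3))
D[:2, :2] = np.diag(s)                   # rebuild the rectangular diagonal D
print(np.allclose(A, U @ D @ Vt))        # True: A = U D V^T

evals = np.linalg.eigvalsh(A.T @ A)      # eigenvalues of A^T A, ascending; smallest ~0 here
print(np.allclose(s[::-1] ** 2, evals[1:]))   # non-zero singular values = sqrt of eigenvalues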

SVD enables us to partially invert a matrix when solving a linear system, via the so-called Moore-Penrose pseudoinverse:   A^+ = V D^+ U^T, where D^+ is obtained by taking the reciprocal of the non-zero elements of D and then transposing.
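A sketch of using the pseudoinverse on an overdetermined system (numpy's pinv computes it via the SVD; my own example):

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])        # 3 equations, 2 unknowns: no exact solution in general
b = np.array([1.0, 2.0, 2.9])
x = np.linalg.pinv(A) @ b         # x = A^+ b minimizes ||A x - b||_2
print(x)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # matches least squares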

The determinant of a matrix, det(A), is equal to the product of all the eigenvalues of the matrix.
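Quick numpy check (my own example):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = np.linalg.eigvals(A)                          # eigenvalues: 5 and 2
print(np.isclose(np.linalg.det(A), np.prod(lam)))   # True: det(A) = 10 = 5 * 2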
