Sunday, November 2, 2014

SVM, reading notes

 See http://hongqinlab.blogspot.com/2014/11/elements-of-statistical-learning-video.html


SVM kernel trick

The kernel trick implicitly maps the data into a high-dimensional space where the classes may become linearly separable; choosing the kernel and its parameters is often a matter of trial and error.
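
A minimal sketch of the idea (assumes scikit-learn): an RBF-kernel SVM separates ring-shaped data that no linear boundary in the original 2-D space can.

# RBF-kernel SVM vs. linear SVM on data that is not linearly
# separable in the original 2-D space (assumes scikit-learn).
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(kernel, clf.score(X_test, y_test))
# The RBF kernel implicitly maps the points into a higher-dimensional
# space where the two rings become (nearly) linearly separable.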

Cross-validation is used to choose the kernel and the tuning parameters (e.g., the cost C).
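
A sketch of tuning SVM parameters by cross-validation (assumes scikit-learn; the data set and grid values here are arbitrary choices for illustration):

from sklearn.datasets import make_circles
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# 5-fold cross-validation over a small grid of cost and kernel-width values
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)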

How well does the classifier predict true negatives?

Matthews correlation coefficient (MCC), for binary classification.
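
For reference, the MCC is computed from the four confusion-matrix counts (in LaTeX):

\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}

It ranges from -1 to +1: +1 for perfect prediction, 0 for random guessing, -1 for total disagreement. Unlike plain accuracy, it stays informative on imbalanced classes.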

In general, the equation for a hyperplane has the form \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p = 0.


The SVM maximizes a soft margin: observations are allowed to fall on the wrong side of the margin (or even of the hyperplane), at a cost controlled by a tuning parameter.
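
A sketch of one standard formulation (following An Introduction to Statistical Learning): maximize the margin M while letting slack variables \epsilon_i absorb violations, subject to a total budget C:

\max_{\beta_0, \ldots, \beta_p,\ \epsilon_1, \ldots, \epsilon_n} M
\quad \text{subject to} \quad \sum_{j=1}^{p} \beta_j^2 = 1,
\quad y_i \left( \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} \right) \ge M (1 - \epsilon_i),
\quad \epsilon_i \ge 0, \quad \sum_{i=1}^{n} \epsilon_i \le C.

Observations with \epsilon_i > 0 violate the margin; a larger budget C tolerates more violations, giving a wider, "softer" margin.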

Data should be standardized for SVM analysis, because the SVM treats every column the same; features on larger scales would otherwise dominate the fit.
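
A sketch of standardizing columns before fitting an SVM (assumes scikit-learn; the breast-cancer data is just a convenient example whose features sit on very different scales):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # features on very different scales
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Z-score each column inside the pipeline so the scaler is fit on
# training data only, then pass the scaled features to the SVM.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))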

On ResearchGate, someone argues that one should try different normalizations, such as Z-score or min-max, before using PCA; Z-score normalization before PCA can be beneficial.

For combining principal component analysis (PCA) and SVM, see:
http://www.softcomputing.net/isda2010_2.pdf
On ResearchGate: principal components are linear combinations of the original variables x1, x2, etc., so when you run an SVM on a PCA decomposition, you work with these combinations instead of the original variables.
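
A sketch of running an SVM on PCA scores (assumes scikit-learn): the classifier sees linear combinations of the original variables rather than the variables themselves, with Z-score scaling applied first, as argued above.

from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
# Z-score -> project onto the top 5 principal components -> SVM
pca_svm = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
pca_svm.fit(X, y)
print(pca_svm.score(X, y))  # training accuracy, for illustration only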


A support vector classifier fit in the enlarged feature space solves the separation problem; its decision boundary is nonlinear in the original, lower-dimensional space.





Question: a kernel is used to compute inner products of vectors. Why are there different types of kernels for computing the same thing (inner products)? One answer: each kernel computes the inner product in a different implicit feature space, so different kernels yield different families of decision boundaries.
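
A minimal sketch of that answer in plain NumPy (the vectors are hypothetical): the degree-2 polynomial kernel equals an ordinary inner product after an explicit feature expansion.

import numpy as np

x = np.array([1.0, 2.0])
z = np.array([0.5, -1.0])

# Degree-2 polynomial kernel: K(x, z) = (1 + x.z)^2
k = (1.0 + x @ z) ** 2

# Explicit feature map whose inner product equals that kernel in 2-D:
# phi(u) = (1, sqrt(2)u1, sqrt(2)u2, u1^2, u2^2, sqrt(2)u1u2)
def phi(u):
    return np.array([1.0,
                     np.sqrt(2) * u[0], np.sqrt(2) * u[1],
                     u[0] ** 2, u[1] ** 2,
                     np.sqrt(2) * u[0] * u[1]])

print(k, phi(x) @ phi(z))  # both print 0.25: same inner product

The RBF kernel corresponds to an infinite-dimensional feature space, which is why it can only be used through the kernel trick.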


SVM for more than 2 classes: two common approaches are one-versus-one (fit an SVM for each pair of classes and classify by vote) and one-versus-all (fit one SVM per class against the rest); see the sketch below.
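
A minimal sketch with scikit-learn (the iris data is just a convenient 3-class example): SVC fits one-versus-one classifiers internally and classifies by vote.

from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # 3 classes
# libsvm-based SVC trains one classifier per pair of classes (one-versus-one)
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
print(clf.decision_function(X[:1]).shape)  # (1, 3): one score per class pair
print(clf.predict(X[:5]))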

