
Support Vector Machine

tags
ML Algorithms

Description

SVM treats every example as a point in a high-dimensional space and draws an imaginary hyperplane that separates the examples with positive labels from the examples with negative ones.

It requires positive labels to have the value +1 and negative labels to have the value -1.
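
For example, a dataset with 0/1 labels has to be re-encoded before training. A minimal sketch (my own illustration, not from the note):

```python
import numpy as np

raw_labels = np.array([0, 1, 1, 0, 1])    # hypothetical 0/1 labels
y = np.where(raw_labels == 1, 1, -1)      # re-encoded as [-1, +1, +1, -1, +1]
print(y)
```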

Equation

The equation of the hyperplane is given by two parameters: a real-valued vector w of the same dimensionality as our input feature vector x, and a real number b, like this:

wx - b = 0

where wx means w1x1 + w2x2 + … + wDxD, D being the dimensionality of the input feature vector:

\begin{equation} wx = \sum_{j=1}^{D} w_{j} x_{j} \end{equation}
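
As a quick sanity check (my own example, not part of the note), the expanded sum is just the dot product of w and x:

```python
import numpy as np

w = np.array([0.4, -1.2, 2.0])   # hypothetical parameter vector (D = 3)
x = np.array([1.0, 0.5, -0.3])   # hypothetical input feature vector
b = 0.1                          # hypothetical bias term

explicit_sum = sum(w[j] * x[j] for j in range(len(w)))  # w1*x1 + ... + wD*xD
assert np.isclose(explicit_sum, np.dot(w, x))           # same as the dot product wx
print(np.dot(w, x) - b)                                 # value of wx - b at this x
```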

The predicted label for some input feature vector x would be:

\begin{equation} y = \mathrm{sign}(wx - b) \end{equation}

Here sign is the function that returns -1 if its argument is negative and +1 if it is positive.

The goal of SVM is to find the optimal values w* and b* for the parameters w and b. Once those optimal values are found, the model is defined as:

\begin{equation} f(x) = \mathrm{sign}(w^{*}x - b^{*}) \end{equation}
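
A minimal sketch of that model in code, assuming some made-up optimal values w_star and b_star (names and values are illustrative, not from the note):

```python
import numpy as np

# Made-up "optimal" parameters; in practice they come out of the training step.
w_star = np.array([0.4, -1.2, 2.0])
b_star = 0.1

def f(x):
    """The SVM model: returns +1 or -1 depending on the side of the hyperplane."""
    return 1 if np.dot(w_star, x) - b_star >= 0 else -1

print(f(np.array([1.0, 0.5, -0.3])))   # -1 for this particular x
```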

Optimization

The optimal values w* and b* are found by solving an optimization problem: minimize the norm of w subject to the constraint yi(wxi - b) ≥ 1 for every training pair (xi, yi). The constraints keep every example on the correct side of the hyperplane, and minimizing the norm of w maximizes the margin between the two classes.
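
The note leaves the method open, so as an illustration only, here is a toy sketch of one common route: the soft-margin hinge-loss formulation minimized by subgradient descent (the function name fit_svm, the toy data, and all hyperparameters are made up):

```python
import numpy as np

def fit_svm(X, y, C=1.0, lr=0.01, epochs=1000):
    """X: (n, D) feature matrix, y: (n,) labels in {-1, +1}."""
    n, D = X.shape
    w = np.zeros(D)
    b = 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (np.dot(w, X[i]) - b)
            if margin >= 1:
                # Correct side with enough margin: only the regularizer acts.
                w -= lr * w
            else:
                # Inside the margin or misclassified: hinge-loss subgradient step.
                w -= lr * (w - C * y[i] * X[i])
                b -= lr * C * y[i]
    return w, b

# Tiny linearly separable toy data, made up for illustration.
X = np.array([[2.0, 2.0], [1.5, 1.8], [-1.0, -1.2], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])
w_star, b_star = fit_svm(X, y)
print(np.sign(np.dot(X, w_star) - b_star))   # predicted labels for the toy points
```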
