# SVMs: A Geometric Interpretation

Consider a set of positive and negative samples from some dataset, as shown above. How can we approach the problem of classifying these samples - and, more importantly, unseen ones - as either positive or negative? The most intuitive way to do this is to draw a line / hyperplane between the positive and negative samples.

However, which line should we draw? We could draw this one:

or this one:

However, neither of the above seems like the best fit. Perhaps a line such that the gap between the two classes is maximal is the optimal line?

This line is such that the margin is maximized. This is the line an SVM attempts to find - an SVM attempts to find the **maximum-margin separating hyperplane** between the two classes. However, we need to construct a decision rule to classify examples. To do this, consider a vector $\vec{w}$ perpendicular to the margin. Further, consider some unknown vector $\vec{u}$ representing some example we want to classify:

We want to know which side of the decision boundary $\vec{u}$ lies on in order to classify it. To do this, we project it onto $\vec{w}$ by computing $\vec{w} \cdot \vec{u}$. This gives us a value proportional to the distance of $\vec{u}$, *in the direction of* $\vec{w}$. We can then determine which side of the boundary $\vec{u}$ lies on using the following decision rule:

$$\vec{w} \cdot \vec{u} \geq c \implies \text{positive}$$

for some constant $c$. This is basically telling us that if we are far *enough* away, we can classify $\vec{u}$ as a positive example. We can rewrite the above decision rule as follows:

$$\vec{w} \cdot \vec{u} + b \geq 0 \implies \text{positive}$$

where $b = -c$.
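As a quick sketch, this decision rule is trivial to apply once $\vec{w}$ and $b$ are known. The weight vector and bias below are made-up values for illustration; in practice they come from training the SVM as derived in the rest of this post:

```python
import numpy as np

# Sketch of the decision rule w . u + b >= 0. The weight vector and bias
# are made-up values for illustration; training produces the real ones.
w = np.array([1.0, 2.0])
b = -3.0

def classify(u):
    """Return +1 if w . u + b >= 0 (positive side), else -1."""
    return 1 if np.dot(w, u) + b >= 0 else -1

print(classify(np.array([2.0, 2.0])))   # 1:  1*2 + 2*2 - 3 =  3 >= 0
print(classify(np.array([0.0, 1.0])))   # -1: 1*0 + 2*1 - 3 = -1 <  0
```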

But, what $\vec{w}$ and $b$ should we choose? We don't have enough constraints in the problem to fix a particular $\vec{w}$ or $b$. Therefore, we introduce additional constraints:

$$\vec{w} \cdot \vec{x}_+ + b \geq 1$$

and

$$\vec{w} \cdot \vec{x}_- + b \leq -1$$

These constraints basically force the function that defines our decision rule to produce a value of 1 or greater for positive examples, and -1 or less for negative examples.

Now, instead of dealing with two inequalities, we introduce a new variable, $y_i$, for mathematical convenience. It is defined as:

$$y_i = \begin{cases} +1 & \text{for positive examples} \\ -1 & \text{for negative examples} \end{cases}$$

This variable essentially encodes the target of each example. We multiply both inequalities from above by $y_i$. For the positive example constraint ($y_i = +1$) we get:

$$y_i (\vec{w} \cdot \vec{x}_i + b) \geq 1$$

and for the negative example constraint ($y_i = -1$, which flips the direction of the inequality) we get:

$$y_i (\vec{w} \cdot \vec{x}_i + b) \geq 1$$

which is the same constraint! The introduction of $y_i$ has simplified the problem. We can rewrite this constraint as:

$$y_i (\vec{w} \cdot \vec{x}_i + b) - 1 \geq 0$$

However, we go a step further by making the above inequality even more stringent for certain points:

$$y_i (\vec{w} \cdot \vec{x}_i + b) - 1 = 0 \quad \text{for } \vec{x}_i \text{ lying on the margin}$$

The above equation constrains examples lying on the margins (known as *support vectors*) to make the left-hand side exactly 0. We do this because if a training point lies exactly on the margin, we don't want it strictly inside either region - it sits exactly on the edge. We instead want such points to define our decision boundary. It is also clearly the equation of a hyperplane, which is what we want!
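The combined constraint is easy to check numerically. Below, the hyperplane $(\vec{w}, b)$ and the toy data are assumed values chosen purely for illustration:

```python
import numpy as np

# Check the combined constraint y_i (w . x_i + b) >= 1 for a hypothetical
# separating hyperplane and toy data (both made up for illustration).
w, b = np.array([1.0, 1.0]), -3.0
X = np.array([[3.0, 2.0],   # positive examples
              [2.0, 3.0],
              [0.0, 1.0],   # negative examples
              [1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

margins = y * (X @ w + b)
print(margins)                      # [2. 2. 2. 2.]
print(bool(np.all(margins >= 1)))   # True - every example satisfies the constraint
```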

Keep in mind that our goal is to make the margin separating positive and negative examples as large as possible. This means that we will need to know the width of our margin so that we can maximize it. The following picture shows how we can calculate this width.

To calculate the width of the margin, we need a unit normal. Then we can just project the difference vector $\vec{x}_+ - \vec{x}_-$ between a positive and a negative support vector onto this unit normal, and this would exactly be the width of the margin. Luckily, vector $\vec{w}$ was defined to be normal! Thus, we can compute the width as follows:

$$\text{width} = (\vec{x}_+ - \vec{x}_-) \cdot \frac{\vec{w}}{\lVert \vec{w} \rVert}$$

where dividing by the norm ensures that $\vec{w}$ becomes a unit normal. From earlier, we know $y_i (\vec{w} \cdot \vec{x}_i + b) - 1 = 0$ for support vectors. Using this, simple algebra yields:

$$\vec{w} \cdot \vec{x}_+ = 1 - b$$

and

$$\vec{w} \cdot \vec{x}_- = -1 - b$$

Thus, substituting into the expression for the width yields:

$$\text{width} = \frac{(1 - b) - (-1 - b)}{\lVert \vec{w} \rVert} = \frac{2}{\lVert \vec{w} \rVert}$$

which is interesting! The width of our margin for such a problem depends only on $\vec{w}$. Since we want to maximize the margin, we want:

$$\max \frac{2}{\lVert \vec{w} \rVert}$$

which is the same as

$$\max \frac{1}{\lVert \vec{w} \rVert}$$

which is the same as

$$\min \lVert \vec{w} \rVert$$

which is the same as

$$\min \frac{1}{2} \lVert \vec{w} \rVert^2$$

where we write it like this for mathematical convenience reasons that will become apparent shortly.
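We can sanity-check that the projection and the closed form $2 / \lVert \vec{w} \rVert$ agree. The hyperplane and the two support vectors below are assumed values, chosen so that the points lie exactly on the margins:

```python
import numpy as np

# Sanity check of the width formula. (w, b) and the support vectors are
# assumed values, chosen so x_plus / x_minus lie exactly on the margins.
w, b = np.array([1.0, 1.0]), -3.0
x_plus  = np.array([2.0, 2.0])   # w . x_plus  + b = +1
x_minus = np.array([1.0, 1.0])   # w . x_minus + b = -1

# Width via projection of (x_plus - x_minus) onto the unit normal w / ||w||
width_proj = (x_plus - x_minus) @ (w / np.linalg.norm(w))

# Width via the closed form 2 / ||w||
width_formula = 2.0 / np.linalg.norm(w)

print(width_proj, width_formula)   # both ~1.41421356 (= sqrt(2))
```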

One easy approach to solve such an optimisation problem is using Lagrange multipliers. We first formulate our Lagrangian:

$$L = \frac{1}{2} \lVert \vec{w} \rVert^2 - \sum_i \alpha_i \left[ y_i (\vec{w} \cdot \vec{x}_i + b) - 1 \right]$$

We find the optimal settings for $\vec{w}$ and $b$ by computing the respective partial derivatives and setting them to zero. First, for $\vec{w}$:

$$\frac{\partial L}{\partial \vec{w}} = \vec{w} - \sum_i \alpha_i y_i \vec{x}_i = 0$$

which implies that $\vec{w} = \sum_i \alpha_i y_i \vec{x}_i$. This means that $\vec{w}$ is simply a linear combination of the samples! Now, for $b$:

$$\frac{\partial L}{\partial b} = -\sum_i \alpha_i y_i = 0$$

which implies that $\sum_i \alpha_i y_i = 0$.

We could just stop here. We can solve the optimisation problem as is. However, we shall not do that! At least not yet. Let's plug our expressions for $\vec{w}$ and $\sum_i \alpha_i y_i$ back into the Lagrangian:

$$L = \frac{1}{2} \left( \sum_i \alpha_i y_i \vec{x}_i \right) \cdot \left( \sum_j \alpha_j y_j \vec{x}_j \right) - \sum_i \alpha_i y_i \vec{x}_i \cdot \left( \sum_j \alpha_j y_j \vec{x}_j \right) - b \sum_i \alpha_i y_i + \sum_i \alpha_i$$

which, after some algebra (the $b$ term vanishes because $\sum_i \alpha_i y_i = 0$), results in:

$$L = \sum_i \alpha_i - \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j \, (\vec{x}_i \cdot \vec{x}_j)$$

What the above equation tells us is that the optimisation depends **only** on dot products of pairs of samples! This observation will prove key later on. Also, we should note that training examples that are not support vectors will have $\alpha_i = 0$, as these examples do not affect or define the decision boundary.
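As a minimal sketch of this, consider a toy 1-D problem with a positive example at $x = +1$ and a negative example at $x = -1$; by symmetry the optimal multipliers work out to $\alpha = [0.5, 0.5]$ (an assumed, hand-derived solution for this toy case). The dual objective touches the data only through the Gram matrix of dot products:

```python
import numpy as np

# Toy 1-D problem: positive example at x = +1, negative at x = -1.
# alpha = [0.5, 0.5] is the hand-derived optimum for this symmetric case.
X = np.array([[1.0], [-1.0]])
y = np.array([1.0, -1.0])
alpha = np.array([0.5, 0.5])

# The dual objective uses the data only via the Gram matrix G[i, j] = x_i . x_j
G = X @ X.T
dual = alpha.sum() - 0.5 * (alpha * y) @ G @ (alpha * y)
print(dual)              # 0.5, which equals 1/2 ||w||^2 at the optimum

# The derived conditions hold: sum_i alpha_i y_i = 0, and
# w = sum_i alpha_i y_i x_i is a linear combination of the samples.
print(alpha @ y)         # 0.0
print((alpha * y) @ X)   # [1.]
```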

Putting the expressions for $\vec{w}$ and $b$ back into our decision rule yields:

$$\sum_i \alpha_i y_i \, (\vec{x}_i \cdot \vec{u}) + b \geq 0 \implies \text{positive}$$

which means the decision rule also depends **only** on dot products between the samples and the unknown vector! Another great benefit is that it is provable that this optimisation problem is convex - meaning we are guaranteed to always find the global optimum.
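The dual form of the rule never needs $\vec{w}$ explicitly. Continuing the toy 1-D problem (the multipliers and $b = 0$ below are assumed values for that symmetric case):

```python
import numpy as np

# Dual-form decision rule on a toy 1-D problem. alpha and b are assumed
# values for this symmetric case (b = 0 by symmetry).
X = np.array([[1.0], [-1.0]])
y = np.array([1.0, -1.0])
alpha = np.array([0.5, 0.5])
b = 0.0

def classify(u):
    # sum_i alpha_i y_i (x_i . u) + b >= 0  ->  positive
    return 1 if (alpha * y) @ (X @ u) + b >= 0 else -1

print(classify(np.array([2.0])))    # 1  (right of the boundary at x = 0)
print(classify(np.array([-0.5])))   # -1 (left of the boundary)
```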

However, now a problem arises! The above optimisation problem assumes the data is linearly separable in the input vector space. However, in most real-life scenarios, this assumption is simply untrue. We therefore have to adapt the SVM to accommodate this, and to allow for non-linear decision boundaries. To do this, we introduce a transformation $\phi$ which will transform the input vector into a (higher-dimensional) vector space. It is in this vector space that we will attempt to find the maximum-margin line / hyperplane.

In this case, we would simply need to swap the dot product $\vec{x}_i \cdot \vec{x}_j$ in the optimisation problem with $\phi(\vec{x}_i) \cdot \phi(\vec{x}_j)$. We can do this solely because, as shown above, both the optimisation and the decision rule depend only on dot products between pairs of samples. This is known as the *kernel trick*. Thus, if we have a function $K$ such that:

$$K(\vec{x}_i, \vec{x}_j) = \phi(\vec{x}_i) \cdot \phi(\vec{x}_j)$$

then we don't actually need to know the transformation $\phi$ itself! We only need the function $K$, which is known as a kernel function. This is why we can use kernels that transform the data into an infinite-dimensional space (such as the RBF kernel), because we are not computing the transformations directly. Instead, we simply use a special function (i.e. kernel function) to compute dot products in this space without needing to compute the transformations.

This kernel trick allows the SVM to learn non-linear decision boundaries, and the problem still clearly remains convex. However, even with the kernel trick, the SVM with such a formulation still assumes that the data is linearly separable in this transformed space. Such SVMs are known as *hard-margin* SVMs. This assumption does not hold most of the time for real-world data. Therefore, we arrive at the most common form of the SVM nowadays - the *soft-margin* SVM. Essentially, so-called *slack* variables are introduced into the optimisation problem to control the amount of misclassification the SVM is allowed to make. For more information on soft-margin SVMs, see my blog post on the subject.
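As a minimal sketch of the slack idea (the hyperplane and data below are made-up values for illustration): each example's slack $\xi_i = \max(0, 1 - y_i(\vec{w} \cdot \vec{x}_i + b))$ measures how badly it violates the margin, and the soft-margin objective $\frac{1}{2}\lVert \vec{w} \rVert^2 + C \sum_i \xi_i$ trades margin width against total violation via $C$:

```python
import numpy as np

# Sketch of soft-margin slack variables. (w, b) and the data are made-up
# values; the third point is labelled negative but sits on the positive side.
w, b = np.array([1.0, 1.0]), -3.0
X = np.array([[2.5, 2.0],   # comfortably positive: slack 0
              [0.5, 0.5],   # comfortably negative: slack 0
              [2.0, 2.0]])  # margin violator
y = np.array([1.0, -1.0, -1.0])

# xi_i = max(0, 1 - y_i (w . x_i + b))
slack = np.maximum(0.0, 1.0 - y * (X @ w + b))
C = 1.0
objective = 0.5 * (w @ w) + C * slack.sum()
print(slack)       # [0. 0. 2.] - only the misclassified point pays a penalty
print(objective)   # 3.0
```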