Support Vector Machine Visualization

[Interactive demo: two data classes (Class 0 / Class 1), controls for data generation, SVM parameters (C: 0.1–10.0, gamma: 0.01–1.0), and playback speed; status readouts for iteration, objective value, support vector count, and accuracy; and a plot of the objective value over iterations.]

How Support Vector Machines Work

Step 1: Mapping - For non-linear problems, SVM maps the data into a higher-dimensional space, via a kernel function, where it becomes linearly separable.
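As an illustrative sketch (not part of the demo above), the degree-2 polynomial kernel k(x, z) = (x·z)² corresponds to an explicit map φ into 3-D. Computing the kernel gives the same inner product as mapping first, without ever forming φ(x):

```python
import math

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel in 2-D:
    phi(x1, x2) = (x1^2, sqrt(2)*x1*x2, x2^2)."""
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def poly2_kernel(x, z):
    """k(x, z) = (x . z)^2 -- same inner product, computed in the original space."""
    return (x[0] * z[0] + x[1] * z[1]) ** 2

x, z = (1.0, 2.0), (3.0, 4.0)
explicit = sum(a * b for a, b in zip(phi(x), phi(z)))  # inner product after mapping
implicit = poly2_kernel(x, z)                          # the "kernel trick"
print(explicit, implicit)  # both 121.0
```

This is why kernel SVMs scale to high-dimensional (even infinite-dimensional) feature spaces: only the kernel value is ever computed.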

Step 2: Maximum Margin - SVM finds the optimal hyperplane that maximizes the margin between classes. The margin is the distance between the hyperplane and the nearest data points (support vectors).
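Under the canonical scaling where the closest points satisfy |w·x + b| = 1, the margin width works out to 2/||w||. A minimal sketch (the hyperplane values here are made up for illustration):

```python
import math

def margin_width(w):
    """Width of the margin for a canonical hyperplane w.x + b = 0: 2 / ||w||."""
    return 2.0 / math.sqrt(sum(wi * wi for wi in w))

def distance_to_hyperplane(w, b, x):
    """Perpendicular distance from point x to the hyperplane w.x + b = 0."""
    wx = sum(wi * xi for wi, xi in zip(w, x))
    return abs(wx + b) / math.sqrt(sum(wi * wi for wi in w))

w, b = (3.0, 4.0), -1.0
print(margin_width(w))                           # 2 / 5 = 0.4
print(distance_to_hyperplane(w, b, (1.0, 2.0))) # |3 + 8 - 1| / 5 = 2.0
```

Maximizing the margin is therefore equivalent to minimizing ||w||, which is the form the optimizer actually works with.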

Step 3: Support Vectors - These are the data points closest to the decision boundary and most difficult to classify. They define the position of the hyperplane.
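Concretely, the support vectors are the points whose functional margin y·(w·x + b) equals 1 under the canonical scaling. A sketch with a hypothetical solved hyperplane and made-up points:

```python
def functional_margin(w, b, x, y):
    """y * (w.x + b); equals 1 for support vectors under canonical scaling."""
    return y * (sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical solved hyperplane x1 = 0, i.e. w = (1, 0), b = 0.
w, b = (1.0, 0.0), 0.0
points = [((1.0, 0.5), +1), ((2.0, 0.0), +1), ((-1.0, 0.3), -1), ((-3.0, 1.0), -1)]

support_vectors = [x for x, y in points
                   if abs(functional_margin(w, b, x, y) - 1.0) < 1e-9]
print(support_vectors)  # [(1.0, 0.5), (-1.0, 0.3)] -- the points on the margin
```

Only these points constrain the solution; moving any of the others (without crossing the margin) leaves the hyperplane unchanged.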

Step 4: Regularization (C parameter) - Controls the trade-off between maximizing the margin and minimizing classification error. Lower C allows more misclassifications but with a wider margin.
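The trade-off can be seen directly in the soft-margin primal objective, 0.5·||w||² + C·Σᵢ max(0, 1 − yᵢ(w·xᵢ + b)). A sketch that evaluates it for two values of C on made-up data, where one point sits inside the margin:

```python
def objective(w, b, C, data):
    """Soft-margin primal objective: 0.5 * ||w||^2 + C * sum of hinge losses."""
    reg = 0.5 * sum(wi * wi for wi in w)
    hinge = sum(max(0.0, 1.0 - y * (sum(wi * xi for wi, xi in zip(w, x)) + b))
                for x, y in data)
    return reg + C * hinge

w, b = (1.0, 0.0), 0.0
data = [((2.0, 0.0), +1), ((0.5, 0.0), +1)]  # second point violates the margin
print(objective(w, b, C=0.1, data=data))   # 0.5 + 0.1 * 0.5 = 0.55
print(objective(w, b, C=10.0, data=data))  # 0.5 + 10  * 0.5 = 5.5
```

With small C the violation barely affects the objective, so the optimizer prefers a wide margin; with large C the same violation dominates, pushing the solution toward fitting every point.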

Step 5: Kernel Functions - Enable SVMs to handle non-linear data by implicitly mapping to higher dimensions:

  • Linear: Simple dot product for linearly separable data
  • Polynomial: For curved boundaries, controlled by degree parameter
  • RBF (Gaussian): For complex boundaries, controlled by gamma parameter
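The three kernels above can be sketched as plain functions (the degree, coef0, and gamma values here are arbitrary illustrations, not the demo's defaults):

```python
import math

def linear_kernel(x, z):
    """Simple dot product: suits linearly separable data."""
    return sum(a * b for a, b in zip(x, z))

def poly_kernel(x, z, degree=2, coef0=1.0):
    """Polynomial kernel: curved boundaries, shape controlled by degree."""
    return (linear_kernel(x, z) + coef0) ** degree

def rbf_kernel(x, z, gamma=0.5):
    """RBF (Gaussian) kernel: locality controlled by gamma."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

x, z = (1.0, 2.0), (3.0, 4.0)
print(linear_kernel(x, z))  # 11.0
print(poly_kernel(x, z))    # (11 + 1)^2 = 144.0
print(rbf_kernel(x, z))     # exp(-0.5 * 8)
```

Note how gamma shrinks the RBF value as points move apart: large gamma makes the kernel very local, which tends toward complex, tightly-fit boundaries.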