K-Nearest Neighbors Visualization

(Interactive demo: a two-class scatter plot with controls for the training data, the K value (1 to 15), the test point coordinates, and the animation speed.)

How K-Nearest Neighbors Works

Step 1: Proximity Calculation - For a new data point, calculate its distance to all training points using the chosen distance metric (Euclidean, Manhattan, etc.).
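
A rough sketch of Step 1 (illustrative only, not the demo's own code): assuming NumPy and a training matrix X_train, the distances from a single test point to every training point can be computed for each of the metrics mentioned above. The function and variable names are placeholders.

    import numpy as np

    def point_distances(X_train, x_test, metric="euclidean"):
        # Distance from one test point to every training point.
        diff = X_train - x_test                      # shape (n_train, n_features)
        if metric == "euclidean":                    # straight-line distance
            return np.sqrt((diff ** 2).sum(axis=1))
        if metric == "manhattan":                    # city-block distance
            return np.abs(diff).sum(axis=1)
        if metric == "chebyshev":                    # maximum coordinate difference
            return np.abs(diff).max(axis=1)
        raise ValueError(f"unknown metric: {metric}")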

Step 2: Neighbor Selection - Select the K training points closest to the new data point as its "neighbors".

Step 3: Voting - The new point is classified according to the majority class among its K neighbors. Voting can be uniform (each neighbor gets an equal vote) or weighted by distance (closer neighbors have more influence).
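
Steps 2 and 3 together amount to sorting by distance, taking the labels of the first K points, and counting votes. A minimal sketch, assuming NumPy, non-negative integer class labels, and illustrative names:

    import numpy as np

    def knn_predict(X_train, y_train, x_test, k=5, weights="uniform"):
        # Classify one test point by majority (or distance-weighted) vote.
        dists = np.sqrt(((X_train - x_test) ** 2).sum(axis=1))  # Step 1: Euclidean distances
        nearest = np.argsort(dists)[:k]                          # Step 2: indices of the k closest points
        labels = y_train[nearest]
        if weights == "uniform":                                 # Step 3: each neighbor gets an equal vote
            votes = np.bincount(labels)
        else:                                                    # Step 3 (weighted): closer neighbors count more
            votes = np.bincount(labels, weights=1.0 / (dists[nearest] + 1e-9))
        return votes.argmax()

    # Illustrative data and the demo's test point (5.0, 5.0)
    X_train = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.5], [9.0, 9.5]])
    y_train = np.array([0, 0, 1, 1])
    print(knn_predict(X_train, y_train, np.array([5.0, 5.0]), k=3))  # -> 0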

Key Parameters (illustrated in the sketch after this list):

  • K value: The number of neighbors to consider. Larger values smooth decision boundaries and reduce noise sensitivity, but may miss local patterns.
  • Distance metric: How similarity between points is measured. Common options include Euclidean (straight-line), Manhattan (city block), and Chebyshev (maximum coordinate difference).
  • Weighting: Whether all neighbors have equal influence (uniform) or closer neighbors matter more (distance-weighted).
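
These parameters map directly onto the options of a typical off-the-shelf KNN classifier. As an illustration only (scikit-learn is a common choice, not part of this demo), the configuration might look like this:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Illustrative two-class training data
    X_train = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.0],
                        [8.0, 8.5], [9.0, 9.5], [8.5, 7.5]])
    y_train = np.array([0, 0, 0, 1, 1, 1])

    # n_neighbors = K value, metric = distance metric, weights = voting scheme
    clf = KNeighborsClassifier(n_neighbors=3, metric="manhattan", weights="distance")
    clf.fit(X_train, y_train)          # "training" only stores the data
    print(clf.predict([[5.0, 5.0]]))   # classify the demo's test point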

KNN is a "lazy learning" algorithm - it doesn't build a model during training, but stores all training examples and classifies new points at prediction time.
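
To make the "lazy" aspect concrete, here is a minimal classifier written as a sketch (not the demo's implementation): fit only memorizes the training data, and every distance computation is deferred to predict.

    import numpy as np

    class LazyKNN:
        def __init__(self, k=5):
            self.k = k

        def fit(self, X, y):
            # "Training" is just memorization -- no model is built here.
            self.X = np.asarray(X, dtype=float)
            self.y = np.asarray(y)
            return self

        def predict(self, X_test):
            # All the real work happens at prediction time.
            preds = []
            for x in np.asarray(X_test, dtype=float):
                dists = np.sqrt(((self.X - x) ** 2).sum(axis=1))
                nearest = np.argsort(dists)[:self.k]
                preds.append(np.bincount(self.y[nearest]).argmax())
            return np.array(preds)

    # fit memorizes; predict does the distance work
    model = LazyKNN(k=3).fit([[1, 1], [2, 1], [8, 9], [9, 8]], [0, 0, 1, 1])
    print(model.predict([[5.0, 5.0]]))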