Step 1: Proximity Calculation - For a new data point, calculate its distance to all training points using the chosen distance metric (Euclidean, Manhattan, etc.).
Step 2: Neighbor Selection - Select the K training points closest to the new data point as its "neighbors".
Step 3: Voting - The new point is classified based on the majority class among its K neighbors. This can be uniform (each neighbor has equal vote) or weighted by distance (closer neighbors have more influence).
Key Parameters:
- K: the number of neighbors consulted. Small K is sensitive to noise; large K smooths the decision boundary (odd K helps avoid ties in binary classification).
- Distance metric: Euclidean, Manhattan, etc., as chosen in Step 1.
- Vote weighting: uniform or distance-weighted, as described in Step 3.
KNN is a "lazy learning" algorithm: it doesn't build a model during training. Instead, it stores all training examples and defers computation until prediction time, when it classifies each new point on the spot.
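The three steps above, the key parameters, and the lazy-learning behavior can be sketched in a minimal pure-Python classifier. The class and parameter names (`KNNClassifier`, `k`, `weighted`, `metric`) are illustrative choices, not a standard API; note that `fit` does nothing but store the training set:

```python
import math
from collections import Counter


def euclidean(a, b):
    """Euclidean distance between two equal-length points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


class KNNClassifier:
    """Lazy learner: fit() only stores the data; all work happens in predict()."""

    def __init__(self, k=3, weighted=False, metric=euclidean):
        self.k = k                # number of neighbors
        self.weighted = weighted  # uniform vote vs. distance-weighted vote
        self.metric = metric      # distance function (Euclidean, Manhattan, ...)

    def fit(self, X, y):
        # "Training" is just memorizing the examples.
        self.X, self.y = list(X), list(y)
        return self

    def predict(self, point):
        # Step 1: distance from the new point to every training point.
        dists = [(self.metric(point, x), label)
                 for x, label in zip(self.X, self.y)]
        # Step 2: keep the K closest training points as neighbors.
        neighbors = sorted(dists, key=lambda d: d[0])[:self.k]
        # Step 3: majority vote, optionally weighted by inverse distance.
        votes = Counter()
        for dist, label in neighbors:
            votes[label] += 1.0 / (dist + 1e-9) if self.weighted else 1.0
        return votes.most_common(1)[0][0]
```

Example usage on two well-separated clusters:

```python
X = [(1, 1), (1, 2), (2, 1), (6, 6), (6, 7), (7, 6)]
y = ["a", "a", "a", "b", "b", "b"]
clf = KNNClassifier(k=3).fit(X, y)
clf.predict((2, 2))      # "a" — all 3 nearest neighbors are class "a"
clf.predict((6.5, 6.5))  # "b"
```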