Tuning the k-value in the KNN classifier
In the previous section, we checked only a k-value of three. In any machine learning algorithm, we need to tune the hyperparameters to find where the best performance is obtained. In the case of KNN, the main tuning parameter is the k-value, the number of nearest neighbors used to vote on a class. Hence, in the following code, we determine the best k-value with a grid search over candidate values:
# Tuning of K-value for Train & Test data
>>> dummyarray = np.empty((5,3))
>>> k_valchart = pd.DataFrame(dummyarray)
>>> k_valchart.columns = ["K_value","Train_acc","Test_acc"]
>>> k_vals = [1,2,3,4,5]
>>> for i in range(len(k_vals)):
...     knn_fit = KNeighborsClassifier(n_neighbors=k_vals[i],p=2,metric='minkowski')
...     knn_fit.fit(x_train,y_train)
...     print ("\nK-value",k_vals[i])
...     tr_accscore = round(accuracy_score(y_train,knn_fit.predict(x_train)),3)
...     print ("\nK-Nearest Neighbors - Train Confusion Matrix\n\n",pd.crosstab...
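The same search can also be carried out with scikit-learn's built-in `GridSearchCV`, which adds cross-validation on the training data instead of scoring only on a single train/test split. The following is a minimal, self-contained sketch, not the book's code: the dataset is synthetic and generated purely for illustration, while the variable names (`x_train`, `y_train`) and the KNN settings (`p=2`, `metric='minkowski'`, i.e. Euclidean distance) mirror the text.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Synthetic data for illustration only; substitute your own x/y arrays.
x, y = make_classification(n_samples=500, n_features=10, random_state=42)
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=42)

# Search over the same candidate k-values as in the text.
param_grid = {"n_neighbors": [1, 2, 3, 4, 5]}
grid = GridSearchCV(
    KNeighborsClassifier(p=2, metric="minkowski"),
    param_grid, cv=5, scoring="accuracy")
grid.fit(x_train, y_train)

print("Best k-value:", grid.best_params_["n_neighbors"])
print("Cross-validated accuracy:", round(grid.best_score_, 3))
print("Test accuracy:", round(grid.score(x_test, y_test), 3))
```

Cross-validation gives a more robust estimate of each k-value's performance than a single split, since the train accuracy alone is misleading for KNN (at k=1 it is trivially 100%).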