KNN Loss and Deep KNN
The k Nearest Neighbor (KNN) algorithm has been widely applied to various supervised learning tasks because of its simplicity and effectiveness. However, the quality of KNN decision making depends directly on the quality of the neighborhoods in the modeling space. Efforts have been made to map data to a better feature space, either implicitly with kernel functions or explicitly by learning linear or nonlinear transformations. All of these methods, however, rely on pre-determined distance or similarity functions, which may limit their learning capacity. In this paper, we present two loss functions, KNN Loss and Fuzzy KNN Loss, that quantify the quality of the neighborhoods KNN forms with respect to supervised learning, such that minimizing the loss on the training data maximizes KNN decision accuracy on that data. We further present a deep learning strategy that, by minimizing KNN Loss, learns pairwise similarities that implicitly map the data to a feature space in which the quality of KNN neighborhoods is optimized. Experimental results show that this strategy, denoted Deep KNN, outperforms state-of-the-art supervised learning methods on multiple benchmark data sets.
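For reference, a minimal sketch of plain KNN classification with a fixed Euclidean metric — the kind of pre-determined distance function the abstract argues can limit neighborhood quality. The function name and toy data are illustrative, not from the paper:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points.

    Uses a fixed Euclidean distance; the paper's point is that a learned
    similarity can produce better neighborhoods than such a fixed metric.
    """
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority label among the neighbors

# Toy example: two well-separated clusters.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([4.8, 5.2]), k=3))  # → 1
```

When classes overlap in the raw feature space, the k nearest points under this fixed metric may carry the wrong labels, which is exactly the neighborhood-quality problem the loss functions target.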
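The abstract does not define KNN Loss or Fuzzy KNN Loss, so the sketch below is a hypothetical stand-in: an NCA-style soft-neighborhood loss over a pairwise similarity matrix, which captures the stated idea that minimizing a loss over pairwise similarities should improve the class purity of KNN neighborhoods. The function name and the loss form are assumptions, not the paper's definitions:

```python
import numpy as np

def soft_neighborhood_loss(S, y):
    """Soft-neighbor loss over a pairwise similarity matrix S (hypothetical).

    Each point i 'selects' a neighbor j with probability softmax(S[i])
    (self excluded); the loss is the mean negative log-probability of
    selecting a same-class neighbor. Lower loss = purer neighborhoods.
    """
    n = S.shape[0]
    S = S.astype(float).copy()
    np.fill_diagonal(S, -np.inf)                  # a point is not its own neighbor
    P = np.exp(S - S.max(axis=1, keepdims=True))  # numerically stable row softmax
    P /= P.sum(axis=1, keepdims=True)
    same = (y[:, None] == y[None, :]) & ~np.eye(n, dtype=bool)
    p_correct = (P * same).sum(axis=1)            # prob. of a same-class neighbor
    return -np.log(p_correct + 1e-12).mean()

y = np.array([0, 0, 1, 1])
S_good = np.array([[0, 5, -5, -5],   # high within-class similarity
                   [5, 0, -5, -5],
                   [-5, -5, 0, 5],
                   [-5, -5, 5, 0]], dtype=float)
S_bad = -S_good                       # neighbors dominated by the other class
print(soft_neighborhood_loss(S_good, y) < soft_neighborhood_loss(S_bad, y))  # → True
```

Because this loss is differentiable in the entries of S, a deep network producing pairwise similarities could be trained on it by gradient descent, which matches the deep-learning strategy the abstract describes.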
Related Results
Optimising tool wear and workpiece condition monitoring via cyber-physical systems for smart manufacturing
Smart manufacturing has been developed since the introduction of Industry 4.0. It consists of resource sharing and networking, predictive engineering, and material and data analyti...
Group-Based Sample Partitioning kNN: A Computationally Efficient kNN Algorithm for Resource-Constrained Environments
The k-nearest neighbors (kNN) algorithm is widely adopted for classification due to its simplicity and effectiveness. However, its computational cost remains a significant challeng...
Comparison of modelled seismic loss against historical damage information
The increasing loss of human life and property due to earthquakes in past years has increased the demand for seismic risk analysis for people to be better prepare...
A Study on GBW-KNN Using Statistical Testing
In the 4th industrial revolution, big data and artificial intelligence are becoming more and more important. This is because value can be found by applying artificial intelligen...
Comparison between KNN, W-KNN, Wc-KNN and Wk-KNN models on a CDC heart disease dataset
One of the most popular and fundamental methods used for machine learning classification is KNN (K-nearest neighbor). Despite its simplicity, this method can achie...
Slope-units-based landslide susceptibility mapping based on graph convolutional network: A case study in Lueyang region
Landslides are the most frequent and numerous geological hazards that pose a serious threat to human safety and property. Landslide susceptibility mapping (LSM) has been focused on...
A Management Method And Engineering Philosophy For Deep Drilling Ventures
Deep wells, particularly exploratory tests, not only require detailed planning and precise execution but also other work activities to bring such wells t...