
MKEL: Multiple Kernel Ensemble Learning via Unified Ensemble Loss for Image Classification

View through CrossRef
In this article, a novel ensemble model, called Multiple Kernel Ensemble Learning (MKEL), is developed by introducing a unified ensemble loss. Unlike previous multiple kernel learning (MKL) methods, which seek a linear combination of basis kernels as a single unified kernel, our MKEL model aims to find multiple solutions in the corresponding Reproducing Kernel Hilbert Spaces (RKHSs) simultaneously. To achieve this goal, the individual kernel losses are integrated into a unified ensemble loss, so that each model learns its optimal parameters by jointly minimizing this ensemble loss across the multiple RKHSs. Furthermore, we apply the proposed ensemble loss to the deep network paradigm, treating each sub-network as a kernel mapping from the original input space into a feature space; the resulting model is named Deep-MKEL (D-MKEL). Our D-MKEL model integrates diverse individual deep sub-networks into a single unified network to improve classification performance, and with this unified loss design the network can be made much wider than traditional deep kernel networks, so that more parameters are learned and optimized. Experimental results on several medium-scale UCI classification and computer vision datasets demonstrate that our MKEL model achieves the best classification performance among competing MKL methods such as SimpleMKL, GMKL, SpicyMKL, and Matrix-Regularized MKL. Moreover, experimental results on the large-scale CIFAR-10 and SVHN datasets show the advantages and potential of the proposed D-MKEL approach compared with state-of-the-art deep kernel methods.
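To make the unified-ensemble-loss idea concrete, the following is a minimal, illustrative sketch in the D-MKEL spirit: several sub-networks act as kernel mappings, each contributes its own classification loss, and one summed ensemble loss co-optimizes all of them. This is not the paper's implementation; the branch architecture (SubNet), the unweighted sum of cross-entropy losses, and the logit-averaging prediction rule are simplifying assumptions made only for illustration.

```python
# Illustrative sketch only. Assumptions (not from the paper): each "kernel
# mapping" is a small MLP branch with its own linear head, the ensemble loss
# is an unweighted sum of per-branch cross-entropy losses, and prediction
# averages the per-branch logits.
import torch
import torch.nn as nn


class SubNet(nn.Module):
    """One branch: a feature mapping (stand-in for an RKHS embedding) plus a head."""

    def __init__(self, in_dim: int, feat_dim: int, n_classes: int):
        super().__init__()
        self.mapping = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.mapping(x))


class DMKELSketch(nn.Module):
    """Several diverse branches trained jointly under one unified ensemble loss."""

    def __init__(self, in_dim: int, feat_dims, n_classes: int):
        super().__init__()
        self.branches = nn.ModuleList(
            [SubNet(in_dim, d, n_classes) for d in feat_dims]
        )

    def forward(self, x: torch.Tensor):
        # One set of logits per branch (per "kernel mapping").
        return [branch(x) for branch in self.branches]

    def ensemble_loss(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Unified ensemble loss: sum of the individual per-branch losses, so
        # all branches are co-optimized by a single backward pass.
        criterion = nn.CrossEntropyLoss()
        return sum(criterion(logits, y) for logits in self(x))

    @torch.no_grad()
    def predict(self, x: torch.Tensor) -> torch.Tensor:
        # One possible combination rule: average the per-branch logits.
        return torch.stack(self(x), dim=0).mean(dim=0).argmax(dim=1)


if __name__ == "__main__":
    model = DMKELSketch(in_dim=20, feat_dims=[32, 64, 128], n_classes=10)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(8, 20), torch.randint(0, 10, (8,))
    loss = model.ensemble_loss(x, y)  # single scalar driving all branches
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(model.predict(x))
```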

Related Results

Physicochemical Properties of Wheat Fractionated by Wheat Kernel Thickness and Separated by Kernel Specific Density
ABSTRACT: Two wheat cultivars, soft white winter wheat Yang‐mai 11 and hard white winter wheat Zheng‐mai 9023, were fractionated by kernel thickness into five sections; the fractiona...
Genetic Variation in Potential Kernel Size Affects Kernel Growth and Yield of Sorghum
Large‐seededness can increase grain yield in sorghum [Sorghum bicolor (L.) Moench] if larger kernel size more than compensates for the associated reduction in kernel number. The ai...
Sorghum Kernel Weight
The influence of genotype and panicle position on sorghum [Sorghum bicolor (L.) Moench] kernel growth is poorly understood. In the present study, sorghum kernel weight (KW) differe...
Double Exposure
I. Happy Endings. Chaplin’s Modern Times features one of the most subtly strange endings in Hollywood history. It concludes with the Tramp (Chaplin) and the Gamin (Paulette Godda...
Optimising tool wear and workpiece condition monitoring via cyber-physical systems for smart manufacturing
Smart manufacturing has been developed since the introduction of Industry 4.0. It consists of resource sharing and networking, predictive engineering, and material and data analyti...
Enhancing Non-Formal Learning Certificate Classification with Text Augmentation: A Comparison of Character, Token, and Semantic Approaches
Aim/Purpose: The purpose of this paper is to address the gap in the recognition of prior learning (RPL) by automating the classification of non-formal learning certificates using d...
Spectral-Similarity-Based Kernel of SVM for Hyperspectral Image Classification
Spectral similarity measures can be regarded as potential metrics for kernel functions, and can be used to generate spectral-similarity-based kernels. However, spectral-similarity-...
Cost-sensitive multi-kernel ELM based on reduced expectation kernel auto-encoder
ELM (Extreme learning machine) has drawn great attention due to its high training speed and outstanding generalization performance. To solve the problem that the long training time of...