
Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks

Generative Adversarial Nets (GANs) are among the most popular architectures for image generation and have achieved significant progress in generating high-resolution, diverse image samples. Standard GANs can be viewed as minimizing the Kullback–Leibler divergence between the distributions of natural and generated images. In this paper, we propose the Alpha-divergence Generative Adversarial Net (Alpha-GAN), which adopts the alpha divergence as the minimization objective of the generator. The alpha divergence can be regarded as a generalization of the Kullback–Leibler divergence, the Pearson χ² divergence, the Hellinger divergence, and others. Our Alpha-GAN employs a power-function adversarial loss for the discriminator, parameterized by two order indices. These hyper-parameters make the model more flexible in trading off between the generated and target distributions. We further give a theoretical analysis of how to select these hyper-parameters to balance training stability against the quality of generated images. Extensive experiments on the SVHN and CelebA datasets confirm the stability of Alpha-GAN, and the generated samples are competitive with state-of-the-art approaches.
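
For reference, the Amari alpha-divergence in one common parametrization (an assumption; the paper's exact convention is not shown on this page) is

\[
D_\alpha(p \,\|\, q) \;=\; \frac{1}{\alpha(1-\alpha)}\left(1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx\right), \qquad \alpha \notin \{0,1\},
\]

and it recovers the divergences named above as special cases: \(D_\alpha \to \mathrm{KL}(p\,\|\,q)\) as \(\alpha \to 1\), \(D_\alpha \to \mathrm{KL}(q\,\|\,p)\) as \(\alpha \to 0\), \(D_{1/2}(p\,\|\,q) = 4\,H^2(p,q)\) (four times the squared Hellinger distance), and \(D_2(p\,\|\,q) = \tfrac{1}{2}\,\chi^2(p\,\|\,q)\) (half the Pearson chi-squared divergence).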
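
To make the formula concrete, here is a minimal sketch in Python (the function name and the discrete-distribution setting are illustrative assumptions, not the paper's implementation):

import numpy as np

def amari_alpha_divergence(p, q, alpha, eps=1e-12):
    """Amari alpha-divergence D_a(p || q) between discrete distributions,
    using D_a = (1 - sum(p**a * q**(1-a))) / (a * (1 - a)), with the
    a -> 0 and a -> 1 limits (the two KL directions) handled explicitly."""
    p = np.asarray(p, dtype=float) + eps  # smooth to avoid log(0) and division by zero
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()       # renormalize after smoothing
    if np.isclose(alpha, 1.0):            # limit a -> 1: KL(p || q)
        return float(np.sum(p * np.log(p / q)))
    if np.isclose(alpha, 0.0):            # limit a -> 0: KL(q || p)
        return float(np.sum(q * np.log(q / p)))
    return float((1.0 - np.sum(p**alpha * q**(1.0 - alpha))) / (alpha * (1.0 - alpha)))

# Sanity checks against the special cases quoted above.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
hellinger_sq = 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)
pearson_chi2 = np.sum((p - q) ** 2 / q)
assert np.isclose(amari_alpha_divergence(p, q, 0.5), 4 * hellinger_sq)
assert np.isclose(amari_alpha_divergence(p, q, 2.0), 0.5 * pearson_chi2)

Treating the limits explicitly avoids the 0/0 indeterminate form of the closed-form expression at alpha = 0 and alpha = 1.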

Related Results

L᾽«unilinguisme» officiel de Constantinople byzantine (VIIe-XIIe s.) [The official "unilingualism" of Byzantine Constantinople, 7th–12th c.]
Νίκος Οικονομίδης (Nikos Oikonomides)...
North Syrian Mortaria and Other Late Roman Personal and Utility Objects Bearing Inscriptions of Good Luck
ΠΗΛΙΝΑ ΙΓΔ...
Un manoscritto equivocato del copista santo Theophilos († 1548) [A misidentified manuscript by the copyist Saint Theophilos († 1548)]
ΕΝΑ ΛΑΝ...
Enhancing Adversarial Robustness through Stable Adversarial Training
Deep neural network models are vulnerable to attacks from adversarial methods, such as gradient attacks. Even small perturbations can cause significant differences in their pred...
Improving Adversarial Robustness via Finding Flat Minimum of the Weight Loss Landscape
Recent studies have shown that robust overfitting and the robust generalization gap are a major problem in the adversarial training of deep neural networks. These interesting prob...
Research on Style Migration Techniques Based on Generative Adversarial Networks in Chinese Painting Creation
The continuous progress and development of science and technology have brought rich and diverse artistic experiences to the current society. The image style...
Oscillatory Brain Activity in the Canonical Alpha-Band Conceals Distinct Mechanisms in Attention
Brain oscillations in the alpha-band (8–14 Hz) have been linked to specific processes in attention and perception. In particular, decreases in posterior alpha-amplitude are thought...
Immunolocalization of integrin receptors in normal lymphoid tissues
The integrin superfamily of cell adhesion receptors consists of heterodimeric glycoproteins composed of unique alpha and beta subunits. These receptors medi...
