Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks
Generative Adversarial Nets (GANs) are among the most popular architectures for image generation and have achieved significant progress in generating high-resolution, diverse image samples. Standard GANs are trained to minimize the Kullback–Leibler divergence between the distributions of natural and generated images. In this paper, we propose the Alpha-divergence Generative Adversarial Net (Alpha-GAN), which adopts the alpha divergence as the minimization objective of the generator. The alpha divergence can be regarded as a generalization of the Kullback–Leibler divergence, the Pearson χ² divergence, the Hellinger divergence, and others. Alpha-GAN employs a power-function form of adversarial loss for the discriminator with two order indexes as hyper-parameters, which make the model more flexible in trading off between the generated and target distributions. We further give a theoretical analysis of how to select these hyper-parameters to balance training stability against the quality of generated images. Extensive experiments on the SVHN and CelebA datasets demonstrate the stability of Alpha-GAN, and the generated samples are competitive with state-of-the-art approaches.
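As a minimal illustration of the generalization claim above, the sketch below computes one common parameterization of the Amari alpha-divergence between two discrete distributions (conventions for sign and scale vary across papers; this function and its exact normalization are illustrative assumptions, not the paper's loss). In this convention, the limit α→1 recovers KL(p‖q), α=1/2 gives four times the squared Hellinger distance, and α=2 gives half the Pearson χ² divergence.

```python
import math

def alpha_divergence(p, q, alpha, eps=1e-12):
    """Amari alpha-divergence between discrete distributions p and q
    (one common parameterization; other papers use shifted/scaled forms)."""
    p = [pi + eps for pi in p]  # small eps avoids log(0) and division by zero
    q = [qi + eps for qi in q]
    if abs(alpha - 1.0) < 1e-9:   # limit alpha -> 1: KL(p || q)
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    if abs(alpha) < 1e-9:         # limit alpha -> 0: KL(q || p)
        return sum(qi * math.log(qi / pi) for pi, qi in zip(p, q))
    s = sum(pi**alpha * qi**(1.0 - alpha) for pi, qi in zip(p, q))
    return (1.0 - s) / (alpha * (1.0 - alpha))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
kl    = alpha_divergence(p, q, 1.0)  # KL(p || q)
hell  = alpha_divergence(p, q, 0.5)  # 4 * squared Hellinger distance
chi2h = alpha_divergence(p, q, 2.0)  # (1/2) * Pearson chi^2, in this convention
```

Varying α interpolates between mass-covering (large α) and mode-seeking (small α) behavior, which is the flexibility the abstract refers to.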