Research Article

A Survey of DCGAN based Unsupervised Decoding and Image Generation

by Baomin Shao, Qiuling Li, Xue Jiang
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 178 - Number 23
Year of Publication: 2019
10.5120/ijca2019919099

Baomin Shao, Qiuling Li, Xue Jiang. A Survey of DCGAN based Unsupervised Decoding and Image Generation. International Journal of Computer Applications. 178, 23 (Jun 2019), 45-49. DOI=10.5120/ijca2019919099

@article{ 10.5120/ijca2019919099,
author = { Baomin Shao, Qiuling Li, Xue Jiang },
title = { A Survey of DCGAN based Unsupervised Decoding and Image Generation },
journal = { International Journal of Computer Applications },
issue_date = { Jun 2019 },
volume = { 178 },
number = { 23 },
month = { Jun },
year = { 2019 },
issn = { 0975-8887 },
pages = { 45-49 },
numpages = { 5 },
url = { https://ijcaonline.org/archives/volume178/number23/30677-2019919099/ },
doi = { 10.5120/ijca2019919099 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Baomin Shao
%A Qiuling Li
%A Xue Jiang
%T A Survey of DCGAN based Unsupervised Decoding and Image Generation
%J International Journal of Computer Applications
%@ 0975-8887
%V 178
%N 23
%P 45-49
%D 2019
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Deep learning is a fast-growing area of machine learning that extracts effective features through a cascade of nonlinear layers; many such methods use deep convolutional neural networks to process digital images. DCGAN (Deep Convolutional Generative Adversarial Network) is an unsupervised learning method consisting of two networks: a generator, which transforms a random noise vector into an image through a deconvolutional (transposed-convolution) network, and a discriminator, which distinguishes generated images from real ones. In this paper, an unsupervised learning method based on DCGAN is proposed that generates an image from a code and uses the discriminator to encode the image back. During training, in addition to the minimax cost function of the GAN, the distance between the generator's input code and the code output by the discriminator is minimized. On the MNIST dataset this method achieves good experimental results: the architecture learns the code of an image, which shows that the method can understand a set of images without extra knowledge and can reconstruct an image from the generated code.
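The training objective described above combines the standard GAN minimax loss with a penalty on the distance between the generator's input code and the code the discriminator recovers from the generated image. The sketch below illustrates that combined loss with toy linear stand-ins for the networks in NumPy; it is not the paper's actual DCGAN implementation, and all names, shapes, and the weighting term `lam` are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
CODE_DIM, IMG_DIM = 8, 64

# Toy stand-ins for the DCGAN networks: a linear "generator" mapping a
# noise code to an image, and a "discriminator" that both scores realism
# and recovers a code from the image.
W_g = rng.normal(scale=0.1, size=(IMG_DIM, CODE_DIM))  # generator weights
W_d = rng.normal(scale=0.1, size=(CODE_DIM, IMG_DIM))  # code-recovery weights
w_s = rng.normal(scale=0.1, size=IMG_DIM)              # realism-score weights

def generator(z):
    """Map a code vector to a (flattened) image."""
    return W_g @ z

def discriminator(x):
    """Return (probability the image is real, recovered code)."""
    score = 1.0 / (1.0 + np.exp(-(w_s @ x)))  # sigmoid realism score
    code = W_d @ x                            # encoder head
    return score, code

def combined_loss(z, real_image, lam=1.0):
    """Minimax GAN loss plus the code distance ||z - D_code(G(z))||^2
    described in the abstract, weighted by lam."""
    fake = generator(z)
    d_real, _ = discriminator(real_image)
    d_fake, recovered = discriminator(fake)
    gan_loss = -np.log(d_real + 1e-8) - np.log(1.0 - d_fake + 1e-8)
    code_loss = np.sum((z - recovered) ** 2)
    return gan_loss + lam * code_loss

z = rng.normal(size=CODE_DIM)
real = rng.normal(size=IMG_DIM)
loss = combined_loss(z, real)
```

In a real DCGAN the two linear maps would be a transposed-convolution generator and a convolutional discriminator with an extra code-output head, and the loss would be minimized by alternating gradient updates on the two networks.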

References
  1. OpenAI. Generative Models. https://blog.openai.com/generative-models/, 2016. [Online; accessed 9-January-2018].
  2. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  3. A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  4. J. Zbontar and Y. LeCun. Stereo matching by training a convolutional neural network to compare image patches. arXiv preprint arXiv:1510.05970, 2015.
  5. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842, 2014.
  6. C. A. Aguilera, F. J. Aguilera, A. D. Sappa, C. Aguilera, and R. Toledo. Learning cross-spectral similarity measures with deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1–9, 2016.
  7. T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pages 2226–2234, 2016.
  8. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
  9. J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
  10. E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, pages 1486–1494, 2015.
  11. A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  12. T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.
  13. M. F. Mathieu, J. Zhao, A. Ramesh, P. Sprechmann, and Y. LeCun. Disentangling factors of variation in deep representation using adversarial training. In NIPS, pages 5040–5048, 2016.
  14. S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.
  15. D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. CVPR, 2016.
  16. M. Mathieu, C. Couprie, and Y. LeCun. Deep multiscale video prediction beyond mean square error. ICLR, 2016.
  17. C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. In NIPS, pages 613–621, 2016.
  18. J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In NIPS, pages 82–90, 2016.
  19. A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
Index Terms

Computer Science
Information Sciences

Keywords

DCGAN, image reconstruction, auto decoding