Research Article

Image to Image Translation using Deep Learning Techniques

by S. Ramya, S. Anchana, A.M. Bavidhraa Shrrei, R. Devanand
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 175 - Number 22
Year of Publication: 2020
Authors: S. Ramya, S. Anchana, A.M. Bavidhraa Shrrei, R. Devanand
10.5120/ijca2020920745

S. Ramya, S. Anchana, A.M. Bavidhraa Shrrei, R. Devanand. Image to Image Translation using Deep Learning Techniques. International Journal of Computer Applications 175, 22 (Oct 2020), 40-42. DOI=10.5120/ijca2020920745

@article{ 10.5120/ijca2020920745,
author = { S. Ramya, S. Anchana, A.M. Bavidhraa Shrrei, R. Devanand },
title = { Image to Image Translation using Deep Learning Techniques },
journal = { International Journal of Computer Applications },
issue_date = { Oct 2020 },
volume = { 175 },
number = { 22 },
month = { Oct },
year = { 2020 },
issn = { 0975-8887 },
pages = { 40-42 },
numpages = { 3 },
url = { https://ijcaonline.org/archives/volume175/number22/31586-2020920745/ },
doi = { 10.5120/ijca2020920745 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%1 2024-02-07T00:25:51.092640+05:30
%A S. Ramya
%A S. Anchana
%A A.M. Bavidhraa Shrrei
%A R. Devanand
%T Image to Image Translation using Deep Learning Techniques
%J International Journal of Computer Applications
%@ 0975-8887
%V 175
%N 22
%P 40-42
%D 2020
%I Foundation of Computer Science (FCS), NY, USA
Abstract

This project performs image-to-image translation for different tasks using deep learning techniques. Using a dataset assembled from a combination of online images, we generate translated images for several tasks. Our contribution is to train and test cycle-consistent adversarial networks on this dataset, fine-tuning hyperparameters such as batch size, learning rate, and the lambda weight in the loss function, and experimenting with adding dropout to the network. The experimental results show that our method successfully transfers across disparate tasks while preserving the original content.
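The abstract mentions tuning the lambda weight in the loss function of a cycle-consistent adversarial network (CycleGAN, reference [2]). As a hedged sketch of how that objective is typically composed, the snippet below combines a least-squares adversarial term with an L1 cycle-consistency term weighted by lambda; the function names and the least-squares variant are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def adversarial_loss(d_fake):
    # Least-squares GAN loss for the generator: push D(G(x)) toward 1.
    # (An illustrative choice; the paper does not specify the GAN variant.)
    return float(np.mean((d_fake - 1.0) ** 2))

def cycle_consistency_loss(x, x_reconstructed):
    # L1 distance between an image batch and its round-trip
    # reconstruction G_BA(G_AB(x)) ~ x.
    return float(np.mean(np.abs(x - x_reconstructed)))

def total_generator_loss(d_fake_a, d_fake_b, a, a_rec, b, b_rec, lam=10.0):
    # lam is the lambda hyperparameter the abstract reports tuning:
    # it trades off realism (adversarial terms) against content
    # preservation (cycle terms).
    adv = adversarial_loss(d_fake_a) + adversarial_loss(d_fake_b)
    cyc = cycle_consistency_loss(a, a_rec) + cycle_consistency_loss(b, b_rec)
    return adv + lam * cyc
```

With perfect reconstructions the cycle terms vanish, so only the adversarial terms contribute; raising `lam` makes the network favor preserving the input's content over fooling the discriminators.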

References
  1. L. Gatys, A. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576, 2015.
  2. J. Zhu, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
  3. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
  4. Z. Kalal, K. Mikolajczyk, and J. Matas. Forward-backward error: Automatic detection of tracking failures. In ICPR, 2010.
  5. J. Long, E. Shelhamer, and T. Darrell, ‘‘Fully convolutional networks for semantic segmentation,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Aug. 2015, pp. 3431–3440.
  6. P. Isola, J. Y. Zhu, T. Zhou, and A. Efros, ‘‘Image-to-image translation with conditional adversarial networks,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Aug. 2017, pp. 1125–1134.
  7. T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim, ‘‘Learning to discover cross-domain relations with generative adversarial networks,’’ 2017. [Online]. Available: https://arxiv.org/abs/1703.05192
  8. Z. Yi, R. Hao, P. Tan, and M. Gong, ‘‘DualGAN: Unsupervised dual learning for image-to-image translation,’’ in Proc. ICCV, Aug. 2017, pp. 2868–2876.
  9. J. Y. Zhu, T. Park, P. Isola, and A. A. Efros, ‘‘Unpaired image-to-image translation using cycle-consistent adversarial networks,’’ 2017. [Online]. Available: https://arxiv.org/abs/1703.10593
  10. M. Y. Liu, T. Breuel, and J. Kautz, ‘‘Unsupervised image-to-image translation networks,’’ in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 700–708.
  11. X. Huang, M. Y. Liu, S. Belongie, and J. Kautz, ‘‘Multimodal unsupervised image-to-image translation,’’ 2018. [Online]. Available: https://arxiv.org/abs/1804.04732
  12. Y. Choi, M. Choi, and M. Kim, ‘‘StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jul. 2018, pp. 3662–3670.
  13. H. Y. Lee, H. Y. Tseng, J. B. Huang, M. Singh, and M. H. Yang, ‘‘Diverse image-to-image translation via disentangled representations,’’ in Proc. ECCV, Sep. 2018, pp. 35–51.
  14. N. Liu, J. Han, and M. H. Yang, ‘‘PiCANet: Learning pixel-wise contextual attention for saliency detection,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Aug. 2018, pp. 3089–3098.
Index Terms

Computer Science
Information Sciences

Keywords

CycleGAN, CNN, Adversarial losses