Research Article

Motion Image Deblurring using AS-Cycle Generative Adversarial Network

by Xiaoming Zhu, Lijun Yao, Fan Luo, Kejun Wang, Zhou Che, Jing Yan, Min Zhou, Yongchang Cai, Lingling Wang, Zelong Cao, Lan Peng, Fengqing Bai, Zifang You, Hongqiu Xiao, Haocheng Qi
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 183 - Number 39
Year of Publication: 2021
DOI: 10.5120/ijca2021921768

Xiaoming Zhu, Lijun Yao, Fan Luo, Kejun Wang, Zhou Che, Jing Yan, Min Zhou, Yongchang Cai, Lingling Wang, Zelong Cao, Lan Peng, Fengqing Bai, Zifang You, Hongqiu Xiao, Haocheng Qi. Motion Image Deblurring using AS-Cycle Generative Adversarial Network. International Journal of Computer Applications 183, 39 (Nov 2021), 32-37. DOI=10.5120/ijca2021921768

@article{ 10.5120/ijca2021921768,
author = { Xiaoming Zhu, Lijun Yao, Fan Luo, Kejun Wang, Zhou Che, Jing Yan, Min Zhou, Yongchang Cai, Lingling Wang, Zelong Cao, Lan Peng, Fengqing Bai, Zifang You, Hongqiu Xiao, Haocheng Qi },
title = { Motion Image Deblurring using AS-Cycle Generative Adversarial Network },
journal = { International Journal of Computer Applications },
issue_date = { Nov 2021 },
volume = { 183 },
number = { 39 },
month = { Nov },
year = { 2021 },
issn = { 0975-8887 },
pages = { 32-37 },
numpages = { 6 },
url = { https://ijcaonline.org/archives/volume183/number39/32190-2021921768/ },
doi = { 10.5120/ijca2021921768 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Xiaoming Zhu
%A Lijun Yao
%A Fan Luo
%A Kejun Wang
%A Zhou Che
%A Jing Yan
%A Min Zhou
%A Yongchang Cai
%A Lingling Wang
%A Zelong Cao
%A Lan Peng
%A Fengqing Bai
%A Zifang You
%A Hongqiu Xiao
%A Haocheng Qi
%T Motion Image Deblurring using AS-Cycle Generative Adversarial Network
%J International Journal of Computer Applications
%@ 0975-8887
%V 183
%N 39
%P 32-37
%D 2021
%I Foundation of Computer Science (FCS), NY, USA
Abstract

To address the poor generalization of image deblurring models in real scenes, this paper proposes a model named AS-CycleGAN (Cycle Generative Adversarial Network based on Asymmetric Samples). The model trains on unpaired images using two “dual form” Conditional Generative Adversarial Networks, adopting a global residual connection and ResNet-v2 residual modules. To enhance texture, an SFT layer is integrated. Experimental results on the GoPro dataset show that the SSIM and PSNR values of the proposed algorithm are 15.97% and 0.75% higher, respectively, than those of the baseline CycleGAN model, and improving the residual structure and adding the SFT layer further improves the results. AS-CycleGAN thus offers practical support for solving the motion blur problem in real scenes.
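The abstract's two key architectural ideas, a Spatial Feature Transform (SFT) layer that modulates feature maps with condition-derived scale and shift, and a pre-activation (ResNet-v2-style) residual block, can be sketched as follows. This is an illustrative NumPy sketch under assumed shapes, with a toy 1x1 channel-mixing layer standing in for real convolutions; it is not the authors' implementation. The PSNR helper shows how one of the reported metrics is computed.

```python
import numpy as np

def sft_modulate(features, gamma, beta):
    """SFT layer core: affine modulation of feature maps.

    features, gamma, beta: arrays of shape (C, H, W). In the full model,
    a condition branch predicts gamma (scale) and beta (shift); here they
    are assumed precomputed.
    """
    return gamma * features + beta

def preact_residual_block(x, weight, gamma, beta):
    """Pre-activation (ResNet-v2-style) residual block with SFT.

    A toy 1x1 'convolution' (per-channel linear mix via einsum) stands in
    for the real conv layers; the identity skip connection is the
    defining part of the residual structure.
    """
    h = np.maximum(x, 0.0)                    # activation first (pre-activation)
    h = sft_modulate(h, gamma, beta)          # condition-dependent modulation
    h = np.einsum('oc,chw->ohw', weight, h)   # 1x1 conv as channel mixing
    return x + h                              # identity skip connection

def psnr(reference, restored, peak=1.0):
    """Peak signal-to-noise ratio in dB, one of the paper's metrics."""
    mse = np.mean((reference - restored) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))          # (C, H, W) feature map
gamma = np.ones_like(x)                       # identity scale
beta = np.zeros_like(x)                       # zero shift
w = np.eye(8) * 0.1                           # toy channel-mixing weights
y = preact_residual_block(x, w, gamma, beta)
print(y.shape)                                # (8, 16, 16)
```

With identity gamma/beta the SFT layer is a no-op, so the block reduces to a plain pre-activation residual block; in the model the condition branch makes the modulation content-dependent, which is what the texture enhancement relies on.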

References
  1. Whyte, O., et al. "Non-uniform Deblurring for Shaken Images." International Journal of Computer Vision, vol. 98, no. 2, pp. 168-186, 2012.
  2. Chakrabarti, A. "A Neural Approach to Blind Motion Deblurring." European Conference on Computer Vision (ECCV), Springer, Cham, pp. 221-235, 2016.
  3. Kupyn, O., et al. "DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8183-8192, 2018.
  4. Kupyn, O., et al. "DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better." IEEE/CVF International Conference on Computer Vision (ICCV), pp. 8877-8886, 2019.
  5. Tao, X., et al. "Scale-recurrent Network for Deep Image Deblurring." IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8174-8182, 2018.
  6. Zhang, H., et al. "Deep Stacked Hierarchical Multi-patch Network for Image Deblurring." IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5971-5979, 2019.
  7. Mirza, M., and Osindero, S. "Conditional Generative Adversarial Nets." arXiv preprint arXiv:1411.1784, 2014.
  8. Isola, P., et al. "Image-to-Image Translation with Conditional Adversarial Networks." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1125-1134, 2017.
  9. Zhu, J. Y., et al. "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks." IEEE International Conference on Computer Vision (ICCV), pp. 2223-2232, 2017.
  10. Cao, Y., et al. "Review of computer vision based on generative adversarial networks." Journal of Image and Graphics, pp. 1433-1449, 2018.
  11. Wang, K. F., et al. "Generative Adversarial Networks: The State of the Art and Beyond." Acta Automatica Sinica, pp. 321-332, 2017.
  12. Arjovsky, M., Chintala, S., and Bottou, L. "Wasserstein Generative Adversarial Networks." International Conference on Machine Learning (ICML), pp. 214-223, 2017.
  13. Gulrajani, I., et al. "Improved Training of Wasserstein GANs." Advances in Neural Information Processing Systems, pp. 5767-5777, 2017.
  14. He, K., et al. "Deep Residual Learning for Image Recognition." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
  15. Wang, X., et al. "Recovering Realistic Texture in Image Super-Resolution by Deep Spatial Feature Transform." IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 606-615, 2018.
  16. Chang, Y. L., et al. "Free-Form Video Inpainting With 3D Gated Convolution and Temporal PatchGAN." IEEE/CVF International Conference on Computer Vision (ICCV), pp. 9065-9074, 2019.
  17. Taigman, Y., et al. "Unsupervised Cross-Domain Image Generation." International Conference on Learning Representations (ICLR), 2017.
  18. Nah, S., et al. "Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 257-265, 2017.
Index Terms

Computer Science
Information Sciences

Keywords

Motion image deblurring, cycle generative adversarial networks, unpaired data sets, residual network