A comprehensive Analysis on Image Colorization using Generative Adversarial Networks (GANs)

Authors

  • Hazel Mahajan

DOI:

https://doi.org/10.17762/msea.v71i4.1779

Abstract

Using a training set of paired images, image-to-image translation learns the mapping between an input image and an output image. For many problems, however, paired training data is not available. We propose a method for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. We wish to train a mapping A: X → Y using an adversarial loss such that the distribution of images A(X) is indistinguishable from the distribution of Y. Because this objective is significantly under-constrained, we couple it with an inverse mapping B: Y → X and apply a cycle consistency loss that pushes B(A(X)) ≈ X and A(B(Y)) ≈ Y. Qualitative results are presented on several tasks for which paired training data is unavailable, such as collection style transfer, object transfiguration, season transfer, and photo enhancement. Quantitative comparisons against previous approaches show that our method is superior.
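The cycle consistency idea described in the abstract can be made concrete with a short sketch. The following is a minimal illustration, assuming PyTorch; the toy generator architecture and all names (gen_A, gen_B, lambda_cyc) are illustrative assumptions, not details taken from the paper, and the adversarial term is omitted for brevity.

```python
# Minimal sketch of the cycle-consistency objective: A: X -> Y, B: Y -> X,
# with B(A(x)) pushed toward x and A(B(y)) pushed toward y.
# Assumes PyTorch; architectures and weights are illustrative only.
import torch
import torch.nn as nn

class SmallGenerator(nn.Module):
    """Toy convolutional mapping between two 3-channel image domains."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

gen_A, gen_B = SmallGenerator(), SmallGenerator()  # A: X -> Y, B: Y -> X
l1 = nn.L1Loss()
lambda_cyc = 10.0  # assumed weighting of the cycle term

x = torch.randn(4, 3, 64, 64)  # batch of images from domain X
y = torch.randn(4, 3, 64, 64)  # batch of images from domain Y

fake_y = gen_A(x)      # A(x): should resemble domain Y
fake_x = gen_B(y)      # B(y): should resemble domain X
rec_x = gen_B(fake_y)  # B(A(x)) ~ x  (forward cycle)
rec_y = gen_A(fake_x)  # A(B(y)) ~ y  (backward cycle)

cycle_loss = lambda_cyc * (l1(rec_x, x) + l1(rec_y, y))
print(float(cycle_loss))
```

In a full training loop this cycle term would be added to the adversarial losses on A(x) and B(y) before backpropagating through both generators.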

Published

2022-12-31

How to Cite

Hazel Mahajan. (2022). A comprehensive Analysis on Image Colorization using Generative Adversarial Networks (GANs). Mathematical Statistician and Engineering Applications, 71(4), 9748–9755. https://doi.org/10.17762/msea.v71i4.1779

Issue

Section

Articles