A Comprehensive Analysis of Image Colorization Using Generative Adversarial Networks (GANs)
DOI: https://doi.org/10.17762/msea.v71i4.1779

Abstract
Image-to-image translation learns the mapping between an input image and an output image from a training set of matched image pairs. For many tasks, however, matched training data is not available. We propose a method for learning to translate an image from a source domain X to a target domain Y in the absence of matched examples. We wish to train a mapping A: X → Y using an adversarial loss such that the distribution of images from A(X) is indistinguishable from the distribution Y. Because this mapping is significantly under-constrained, we couple it with an inverse mapping B: Y → X and apply a cycle consistency loss to push B(A(X)) ≈ X and A(B(Y)) ≈ Y. Qualitative results are presented on several tasks where matched training data is unavailable, such as collection style transfer, season transfer, and photo enhancement. Quantitative comparisons with previous approaches show that our method is superior.
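The cycle consistency idea above can be illustrated with a minimal sketch. The mappings A and B below are hypothetical stand-ins (simple invertible linear functions, not the neural generators the abstract describes), chosen only to make the loss easy to inspect; the loss itself follows the usual L1 formulation of cycle consistency.

```python
import numpy as np

# Hypothetical stand-ins for the learned mappings A: X -> Y and B: Y -> X.
# In practice these would be neural network generators; here B is the exact
# inverse of A so the cycle loss comes out to zero.
def A(x):
    return 2.0 * x + 1.0    # assumed toy generator X -> Y

def B(y):
    return (y - 1.0) / 2.0  # assumed toy generator Y -> X

def cycle_consistency_loss(x_batch, y_batch):
    # L_cyc = E[ ||B(A(x)) - x||_1 ] + E[ ||A(B(y)) - y||_1 ]
    forward = np.mean(np.abs(B(A(x_batch)) - x_batch))
    backward = np.mean(np.abs(A(B(y_batch)) - y_batch))
    return forward + backward

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])
print(cycle_consistency_loss(x, y))  # 0.0, since B inverts A exactly
```

During training, this term would be added to the adversarial losses of both generators, penalizing translations that cannot be mapped back to the original image.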