- Researchers have developed a framework for translating images from one domain to another
- The algorithm can perform many-to-many mappings, unlike previous approaches, which were limited to one-to-one mappings
- Check out the video below showing the results of the experiment
Deep Learning keeps producing remarkably realistic results. 10 years ago, could you have imagined taking an image of a dog, running an algorithm, and watching it transform completely into an image of a cat, without any loss of quality or realism? It was the stuff of movies and dreams!
A team of researchers from Cornell University has proposed a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework for translating images from one domain to another. The aim is to take an image and generate a new image from it that belongs to a different category (for instance, transforming an image of a dog into one of a cat). According to the researchers, for a given image in the source domain, the proposed model learns the conditional distribution of corresponding images in the target domain.
Previously existing approaches can perform only a one-to-one mapping of a given image and thus fail to produce diverse outputs from it. MUNIT, on the other hand, can produce more than one output. It decomposes the image representation into a domain-invariant content code and a domain-specific style code, and then recombines the content code with a random style code sampled from the style space of the target domain.
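To make the idea concrete, here is a minimal toy sketch of the decompose-and-recombine step. The linear "encoders" and "decoder" and all dimensions are stand-in assumptions purely for illustration; MUNIT itself uses deep convolutional networks trained with adversarial and reconstruction losses. The point it shows is how pairing one content code with several randomly sampled style codes yields several distinct outputs from a single source image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not from the paper): a flattened "image",
# a content code, and a low-dimensional style code.
IMG_DIM, CONTENT_DIM, STYLE_DIM = 64, 16, 8

# Stand-in linear maps playing the role of MUNIT's content encoder,
# style encoder, and decoder (which are deep networks in the real model).
W_content = rng.normal(size=(CONTENT_DIM, IMG_DIM))
W_style = rng.normal(size=(STYLE_DIM, IMG_DIM))
W_decode = rng.normal(size=(IMG_DIM, CONTENT_DIM + STYLE_DIM))

def encode(image):
    """Split an image into a domain-invariant content code and a style code."""
    return W_content @ image, W_style @ image

def decode(content, style):
    """Recombine a content code with a (possibly sampled) style code."""
    return W_decode @ np.concatenate([content, style])

# One source image -> many diverse translations, by pairing its content
# code with different style codes drawn from a Gaussian prior.
source = rng.normal(size=IMG_DIM)
content, _ = encode(source)
translations = [decode(content, rng.normal(size=STYLE_DIM)) for _ in range(3)]
print(len(translations), translations[0].shape)
```

Because only the style code changes between samples, all three outputs share the same underlying content while differing in appearance, which is exactly the many-to-many behaviour described above.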
Below is a video that demonstrates this technique, applying image-to-image translation to a variety of images:
Our take on this
This is not the first attempt at image-to-image translation. UNIT and CycleGAN were previously used for translation, but they could only perform one-to-one mapping, that is, translate one dog image into one cat image. MUNIT overcomes this limitation by performing many-to-many mappings, in other words, turning one dog image into many kinds of cat images!
This is undoubtedly an impressive algorithm, but a few limitations still need to be addressed. For example, a scene in winter could take on many different appearances in summer, and a technique that assumes a unimodal mapping will fail to capture that diversity of appearances. Try it out and let us know your experience in the comments below!
Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!