Multi-View Image Generation from a Single-View

Title: Multi-View Image Generation from a Single-View
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Zhao, Bo, Wu, Xiao, Cheng, Zhi-Qi, Liu, Hao, Jie, Zequn, Feng, Jiashi
Conference Name: Proceedings of the 26th ACM International Conference on Multimedia
ISBN Number: 978-1-4503-5665-7
Keywords: Deep Learning, Generative Adversarial Learning, generative adversarial networks, image generation, Metrics, pubcrawl, Resiliency, Scalability

Generating multi-view images with realistic-looking appearance from only a single input view is a challenging problem. In this paper, we attack this problem by proposing a novel image generation model, termed VariGANs, which combines the merits of variational inference and Generative Adversarial Networks (GANs). Rather than generating the target image in a single pass, which tends to suffer from severe artifacts, it proceeds in a coarse-to-fine manner. It first performs variational inference to model the global appearance of the object (e.g., shape and color) and produces coarse images of different views. Conditioned on these coarse images, it then performs adversarial learning to fill in details consistent with the input and generate the fine images. Extensive experiments on two clothing datasets, MVC and DeepFashion, demonstrate that images generated by the proposed VariGANs are more plausible than those of existing approaches, exhibiting more consistent global appearance as well as richer and sharper details.
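The two-stage pipeline the abstract describes can be sketched in a few lines. The snippet below is purely illustrative: `coarse_generator` and `refinement_generator` are hypothetical stand-ins (a blur plus latent noise, and a residual detail pass on NumPy arrays), not the learned conditional-VAE decoder and GAN generator of the actual VariGANs model; only the coarse-then-refine control flow mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_generator(image, z):
    """Stage 1 stand-in (variational): a low-frequency 'coarse' view.

    Modelled here as a blurred, noise-perturbed copy of the input,
    standing in for a learned conditional decoder.
    """
    blurred = 0.5 * (image + np.roll(image, 1, axis=1))  # crude blur
    return np.clip(blurred + 0.05 * z, 0.0, 1.0)

def refinement_generator(coarse, image):
    """Stage 2 stand-in (adversarial): add high-frequency detail
    consistent with the conditioning input."""
    detail = image - 0.5 * (image + np.roll(image, 1, axis=1))
    return np.clip(coarse + detail, 0.0, 1.0)

def sample_view(image):
    """Coarse-to-fine sampling: draw a latent, build the coarse view,
    then refine it conditioned on the input image."""
    z = rng.standard_normal(image.shape)  # latent sample
    coarse = coarse_generator(image, z)
    fine = refinement_generator(coarse, image)
    return coarse, fine

image = rng.random((8, 8))           # toy single-channel 'input view'
coarse, fine = sample_view(image)
print(coarse.shape, fine.shape)      # (8, 8) (8, 8)
```

The point of the structure is that the coarse stage only has to capture global appearance, so the refinement stage can focus adversarial capacity on local detail.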

Citation Key: zhao_multi-view_2018