GuidedStyle: Attribute knowledge guided style manipulation for semantic face editing

Xianxu Hou, Xiaokang Zhang, Hanbang Liang, Linlin Shen, Zhihui Lai, Jun Wan

Abstract

Although significant progress has been made in synthesizing high-quality and visually realistic face images with unconditional Generative Adversarial Networks (GANs), the generation process still lacks the control required for semantic face editing. In this paper, we propose a novel learning framework, called GuidedStyle, that achieves semantic face editing on a pretrained StyleGAN by guiding the image generation process with a knowledge network. Furthermore, we introduce an attention mechanism into the StyleGAN generator so that it adaptively selects a single layer for style manipulation. As a result, our method is able to perform disentangled and controllable edits along various attributes, including smiling, eyeglasses, gender, mustache, hair color, and attractiveness. Both qualitative and quantitative results demonstrate the superiority of our method over competing approaches for semantic face editing. Moreover, we show that our model can also be applied to different types of real and artistic face editing, demonstrating strong generalization ability.

Overview
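
As a rough illustration of the framework described in the abstract, the following is a minimal PyTorch sketch, not the authors' released implementation. It assumes an 18-layer, 512-dimensional W+ latent space; the names (SingleLayerStyleEditor, guidance_step, the stand-in generator and knowledge_net modules) and the regularization weight are illustrative placeholders. A learnable direction per style layer, combined with a low-temperature softmax attention over layers, approximates the selection of a single layer for manipulation, while a frozen attribute classifier (the knowledge network) supplies the training signal and the pretrained generator stays fixed.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    NUM_LAYERS, STYLE_DIM = 18, 512   # typical W+ shape for a 1024x1024 StyleGAN

    class SingleLayerStyleEditor(nn.Module):
        """One candidate direction per style layer plus an attention vector that,
        with a low temperature, selects roughly a single layer to edit."""

        def __init__(self, num_layers=NUM_LAYERS, style_dim=STYLE_DIM):
            super().__init__()
            self.directions = nn.Parameter(0.01 * torch.randn(num_layers, style_dim))
            self.layer_logits = nn.Parameter(torch.zeros(num_layers))

        def forward(self, w_plus, strength=1.0, temperature=0.1):
            # w_plus: (batch, num_layers, style_dim) layer-wise latent codes.
            attn = F.softmax(self.layer_logits / temperature, dim=0)   # (L,)
            delta = attn.unsqueeze(-1) * self.directions               # (L, D)
            return w_plus + strength * delta.unsqueeze(0)

    def guidance_step(editor, generator, knowledge_net, w_plus, target_label):
        """One training step: only the editor receives gradients; the generator
        and the attribute classifier (knowledge network) stay frozen."""
        edited = editor(w_plus)
        image = generator(edited)                    # frozen, pretrained synthesis
        logits = knowledge_net(image)                # frozen attribute classifier
        attr_loss = F.binary_cross_entropy_with_logits(
            logits, torch.full_like(logits, float(target_label)))
        reg_loss = (edited - w_plus).pow(2).mean()   # keep the edit small / local
        return attr_loss + 0.1 * reg_loss            # 0.1 is an assumed weight

    if __name__ == "__main__":
        # Tiny stand-ins so the sketch runs end to end; in practice these are the
        # pretrained StyleGAN generator and attribute classifier checkpoints.
        generator = nn.Sequential(nn.Flatten(),
                                  nn.Linear(NUM_LAYERS * STYLE_DIM, 3 * 64 * 64),
                                  nn.Unflatten(1, (3, 64, 64)))
        knowledge_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
        for p in list(generator.parameters()) + list(knowledge_net.parameters()):
            p.requires_grad_(False)

        editor = SingleLayerStyleEditor()
        opt = torch.optim.Adam(editor.parameters(), lr=1e-3)

        w_plus = torch.randn(4, NUM_LAYERS, STYLE_DIM)  # codes from the mapping network
        loss = guidance_step(editor, generator, knowledge_net, w_plus, target_label=1)
        loss.backward()
        opt.step()

In practice the stand-in modules would be replaced by the pretrained StyleGAN generator and attribute classifier, and the editor would be optimized over many sampled latent codes rather than a single step.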

Results

  • Different attribute manipulation
  • Visual results of continuous editing
  • A video demo

Citation

    @article{hou2022guidedstyle,
      title={Guidedstyle: Attribute knowledge guided style manipulation for semantic face editing},
      author={Hou, Xianxu and Zhang, Xiaokang and Liang, Hanbang and Shen, Linlin and Lai, Zhihui and Wan, Jun},
      journal={Neural Networks},
      volume={145},
      pages={209--220},
      year={2022},
      publisher={Elsevier}
    }