Face attribute editing, an important research direction in face image synthesis and processing, aims to photorealistically edit single or multiple attributes of face images on demand using editing and generation models. Most existing methods are based on generative adversarial networks, using target attribute vectors to control the editing region or Gaussian noise as conditional input to capture texture details. However, these approaches struggle to preserve the consistency of attributes in irrelevant regions, and the fidelity of the generated images is also limited. In this paper, we propose a method that fuses attribute feature maps into an optimized latent space while making full use of conditional information as additional constraints. In the image generation phase, we then use a progressive architecture for controlled editing of face attributes at different granularities. Finally, we conduct an ablation study on the selected training scheme to further demonstrate the stability and accuracy of our method. Experiments show that our proposed end-to-end progressive image translation network achieves strong quantitative face image editing results on both the FID and LPIPS metrics.
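The latent-space fusion step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, the concatenate-and-project fusion, and the `fuse_latent` helper are all assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper).
LATENT_DIM = 128  # size of the latent code z
ATTR_DIM = 16     # number of face attributes (e.g. smiling, glasses)
FUSED_DIM = 128   # size of the fused code fed to the generator

# A single projection standing in for the learned fusion module.
W = rng.standard_normal((LATENT_DIM + ATTR_DIM, FUSED_DIM)) * 0.02

def fuse_latent(z, attrs):
    """Fuse latent codes with target attribute vectors.

    z     : (batch, LATENT_DIM) latent codes.
    attrs : (batch, ATTR_DIM) target attribute vectors in {-1, 0, +1},
            where 0 leaves the corresponding attribute unchanged.
    Returns a bounded fused code of shape (batch, FUSED_DIM).
    """
    x = np.concatenate([z, attrs], axis=1)  # condition on attributes
    return np.tanh(x @ W)                   # keep the fused code bounded

z = rng.standard_normal((4, LATENT_DIM))
attrs = np.zeros((4, ATTR_DIM))
attrs[:, 0] = 1.0  # request editing of only the first attribute

fused = fuse_latent(z, attrs)
print(fused.shape)  # (4, 128)
```

In a full system the fused code would be passed through a progressive (coarse-to-fine) generator, with an adversarial loss plus attribute-consistency constraints keeping irrelevant regions unchanged.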