
    MU-GAN: Facial Attribute Editing Based on Multi-Attention Mechanism

    IEEE/CAA Journal of Automatica Sinica, 2021, Issue 9

    Ke Zhang, Yukun Su, Xiwang Guo, Liang Qi, and Zhenbing Zhao

    Abstract—Facial attribute editing has two main objectives: 1) translating an image from a source domain to a target one, and 2) changing only the facial regions related to a target attribute while preserving the attribute-excluding details. In this work, we propose a multi-attention U-Net-based generative adversarial network (MU-GAN). First, we replace a classic convolutional encoder-decoder with a symmetric U-Net-like structure in the generator, and then apply an additive attention mechanism to build attention-based U-Net connections that adaptively transfer encoder representations to complement the decoder with attribute-excluding detail and enhance attribute editing ability. Second, a self-attention (SA) mechanism is incorporated into the convolutional layers to model long-range and multi-level dependencies across image regions. Experimental results indicate that our method is capable of balancing attribute editing ability and detail preservation ability, and can decouple the correlation among attributes. It outperforms the state-of-the-art methods in terms of attribute manipulation accuracy and image quality. Our code is available at https://github.com/SuSir1996/MU-GAN.

    I. INTRODUCTION

    FACIAL attribute editing aims to replace some attributes of a source facial image with target attributes, such as changing a subject's hair color, gender, or expression. Facial attribute editing plays an important role in human-robot interaction and bionic agents, and has extensive applications in such fields as face reconstruction [1], privacy preservation [2], and intelligent photography [3].

    The difficulty in facial attribute editing lies in accurately manipulating a given image from a source attribute domain to a target one while keeping attribute-independent details well preserved. Facial images need to satisfy strict geometric constraints, and there are correlations among facial attributes. Besides, it is difficult to achieve both attribute manipulation ability and detail retention ability. These factors make facial attribute editing a difficult task. Recently, significant breakthroughs have been made with the development of generative adversarial networks (GANs) [4]–[9]. Some previous studies [3], [10], [11] are based on an encoder-decoder architecture, which is adopted to extract a source image representation and reconstruct it under the guidance of target attribute vectors.

    Although it is widely used in image-to-image translation, an encoder-decoder architecture has some limitations, especially in high-quality attribute editing. Facial attributes have different levels of abstraction and can be divided into local attributes such as beard and facial aging texture, global attributes such as bald and hair color, and more abstract attributes such as gender. Convolutional downsampling or spatial pooling can be used to obtain these different levels of abstraction. The generator in [3] uses an encoder-decoder with residual layers [12], [13]. However, the introduction of residual bottleneck layers means that latent representations are highly compressed, so image details are lost during repeated down- and up-sampling. The innermost latent representation, with minimal spatial size, cannot contain all the useful details, which leads to blurry attribute-editing results and serious content-missing problems. The preservation of details is the guarantee of image reality and quality. As a remedy, researchers [14] attempt to add skip-connections between an encoder and a decoder to supplement decoder representations: encoder representations are employed to complement the decoder branches with detailed information. Direct skip-connections can transfer abundant complementary details to make images more realistic, but they also transfer many details related to the original attributes, resulting in information redundancy and thereby weakening attribute manipulation ability. As shown in Fig. 1 (see top of next page), the model with direct skip-connections performs poorly on local attribute editing, e.g., beard, with limited attribute manipulation ability. In previous studies, detail retention and attribute manipulation are difficult to reconcile.

    Fig. 1. Image examples generated by AttGAN [3], STGAN [15], and MU-GAN.

    The introduction of convolutional neural networks (CNNs) [16] has promoted the development of GANs [3], [10], [11]. Researchers [17] believe that CNN-based GANs are good at editing local attributes and synthesizing images with few geometric constraints. Taking landscape images as an example, a slight deformation of mountains and rivers does not affect the realism of an image. However, such GANs have difficulty in editing images with geometric or structural patterns, as in facial attribute editing. As shown in Fig. 1, when dealing with the bald attribute, CNN-based GANs [3], [17] often simply paint the hair of the original image with skin color to create the illusion of baldness, ignoring the outline of the face and generating visually weird samples. One possible explanation is that a CNN-based GAN relies on convolution kernels to model global dependencies across long-range regions. Due to the limited receptive field of a convolution kernel, it is difficult to capture the dependencies among long-distance pixels in a picture.

    It is also known that there are complex coupling relationships among facial attributes, e.g., gender and beard. In some facial attribute editing tasks, it is unavoidable to generate samples that do not exist in the real world, such as a woman with a beard in the third row of Fig. 1. Results generated by the attribute generative adversarial network (AttGAN) [3] change hair length and produce serious artifacts. Although a sample generated by the selective transfer generative adversarial network (STGAN) [15] looks more like a woman, it still suffers from poor attribute decoupling, which leads to an undesired change in beard. Thus, a desired model needs the ability to decouple attributes in order to meet the requirements of target labels.

    To solve these problems, we construct a new generator with a novel encoder-decoder architecture and propose a multi-attention U-Net-based GAN (MU-GAN) model. First, for detail preservation, a symmetric U-Net architecture [14] is employed to replace the original asymmetric one, ensuring that the abstract semantics of latent representations on both sides of the encoder-decoder are at the same level and avoiding the information loss caused by the sharp decrease in channel count in the last decoder layer. Second, an additive attention mechanism is introduced into the U-Net skip-connections, so that attribute-excluding representations are selectively transferred under the guidance of an attention mask; this complements the decoder representations and helps us balance detail preservation and attribute manipulation abilities. Third, self-attention (SA) layers are introduced into the encoder-decoder as a supplement to the convolutional layers. The self-attention mechanism helps us model long-range dependencies among image regions; it can effectively capture multi-level representations and help the GAN enforce complicated geometric constraints on generated images. In addition, the use of a multi-attention mechanism makes the model more powerful in attribute decoupling.

    Our method is capable of generating facial images with better perceptual realism, attribute manipulation accuracy, and geometric rationality than the state-of-the-art approaches. Moreover, the new generator architecture can balance attribute manipulation and detail preservation abilities. As shown in Fig. 1, our model performs well in attribute editing tasks at different semantic levels with strong attribute decoupling capability. In summary, this work makes the following contributions:

    1) It constructs a symmetric U-Net-like architecture generator based on an additive attention mechanism, which effectively enhances our method's detail preservation and attribute manipulation abilities.

    2) It incorporates a self-attention mechanism into the existing encoder-decoder architecture, thus effectively enforcing geometric constraints on generated results.

    3) It introduces a multi-attention mechanism to help attribute decoupling, i.e., it only changes the attributes that need to be changed. Qualitative and quantitative results show that MU-GAN outperforms the state-of-the-art methods in facial attribute editing.

    The rest of the paper is organized as follows. Section II briefly reviews related work on generative models, image-to-image translation, and facial attribute editing. The proposed method is described in Section III. Experimental results and analysis are presented in Section IV. An ablation study is described in Section V, leading to conclusions in Section VI.

    II. RELATED WORK

    A. Generative Model

    Generative models are devoted to learning the real sample distribution and have attracted increasing attention in attribute editing. There are two main approaches to facial generation models: the variational auto-encoder (VAE) [1] and the GAN. The former's goal is to maximize a variational lower bound, while a GAN aims to reach a Nash equilibrium through a binary minimax game. Experimental results show that VAE training is more stable, but the results are fuzzy. GANs have better generation quality and creativity than VAEs, but lack appropriate constraints. To address these issues, Wasserstein generative adversarial networks (WGANs) [18], [19] improve the stability of the optimization process by replacing the Jensen-Shannon/Kullback–Leibler divergence [20] with the Earth-Mover distance to measure the distance between the real and generated sample distributions, thus solving the problem of vanishing gradients. Conditional image generation has also been actively studied: several methods [21]–[33] use category information such as attribute labels to generate samples. GANs [34], [35] have exhibited remarkable capability in various fields and have been used in applications such as image generation [18], [19], [34], [36], style translation [5], [6], [11], [37], [38], super-resolution, image reconstruction [1], [39], and facial attribute editing [3], [6]–[9], [15], [40]–[42].

    B. Image-to-Image Translation

    Image-to-image translation means manipulating a given image attribute from a source domain to a target one while leaving other image content untouched. Existing works [4], [10], [43]–[46] have made remarkable progress in image translation. For example, pix2pix [43] adopts the conditional generative adversarial network (CGAN) [21] for multi-domain image-to-image translation tasks with paired images. However, paired image datasets are unavailable in most scenarios. To address this issue, researchers [4], [10], [28] propose unpaired image translation methods. Unsupervised image-to-image translation networks (UNIT) [4] combine a VAE [1] and the coupled generative adversarial network (CoGAN) [37] to build a GAN architecture in which two generators share the same weights to learn the joint distribution of images across domains. The cycle-consistent generative adversarial network (CycleGAN) [10] preserves the key representation between the input and generated images by minimizing a cycle consistency loss. The idea of dual learning allows the Disco generative adversarial network (DiscoGAN) [28] and CycleGAN [10] to learn reversible mappings among different domains in unpaired image-to-image translation. However, the aforementioned methods cannot efficiently perform image manipulation over multiple domains: to learn all mappings among k domains, k×(k−1) generators have to be trained. Recent studies [5], [11], [38] focus on multi-domain conversion and propose multi-domain image translation models such as augmented CycleGAN [38], the star generative adversarial network (StarGAN) [11], and AttGAN [3].

    C. Facial Attribute Editing

    The objective of facial attribute editing is to generate a face with a target attribute while preserving the attribute-excluding facial details. Facial attribute editing has been a hot topic in computer vision. Some existing methods [6], [7] are designed for modeling the aging process. Face aging is mainly reflected by wrinkles; since such subtle texture information is more salient and robust in the frequency domain, the wavelet-domain global and local consistent age generative adversarial network (WaveletGLCA-GAN) [6] uses wavelet transforms to synthesize aging images. Several studies [8], [9], [47] address the facial expression synthesis problem. Other studies propose general facial attribute editing methods. The DNA generative adversarial network (DNA-GAN) [41] can be regarded as an extension of the gene generative adversarial network (GeneGAN) [40]; it swaps attribute-relevant latent representations between given image pairs to synthesize "hybrid" images, and can transform multiple attributes simultaneously. The Fader network (FaderNet) [5] imposes adversarial constraints to enforce the independence of latent representations; its decoder then takes the latent representation extracted by the encoder and a target attribute vector as input to generate the desired results. The invertible conditional generative adversarial network (IcGAN) [42] and FaderNet [5] impose mutual-independence constraints on the latent space so that latent representations from different classes can be independent for attribute decoupling. On the contrary, experimental results [3] show that it is too strict to impose independence constraints on the latent space. AttGAN [3] instead applies attribute classification constraints to generated images to ensure that attributes are translated correctly. The generator of AttGAN consists of five convolutional and deconvolutional layers, and it applies one skip-connection between the encoder and decoder to improve image quality. Note that AttGAN's encoder-decoder is not a symmetrical structure, and the sharp decrease in the number of channels in the last deconvolutional layer of its decoder results in detail loss. Limited by the receptive field of a convolution kernel, CNN layers cannot model long-range, multi-level dependencies across image regions, which makes it difficult to synthesize image classes with complex geometric or structural patterns. Previous work [14] adopts skip-connections to enhance detail retention at the cost of reducing attribute manipulation ability; adding direct skip-connections cannot fundamentally balance the attribute manipulation and detail retention abilities. AttGAN and its variants thus face three problems: 1) loss of image details; 2) insufficient attribute manipulation ability; and 3) poor enforcement of geometric constraints. STGAN [15], a variant of AttGAN, introduces the gated recurrent unit (GRU) [48] to build selective transfer units that selectively transmit encoder representations. However, memory-based approaches, e.g., GRU [48] and long short-term memory (LSTM) [35], [49]–[51], mainly focus on sequential processing rather than visual tasks and are limited by memory capacity and low computational efficiency.

    III. PROPOSED METHOD

    Fig. 2 shows an overview of our method. In order to solve the problems of AttGAN and STGAN, we present MU-GAN for facial attribute editing. First, instead of using an ordinary encoder-decoder [3], we use a symmetric U-Net structure to build our generator and construct MU-GAN by replacing direct skip-connections with attention U-Net connections (AUCs). Second, we adopt self-attention layers as a complement to the convolutional layers. Finally, the discriminator and objective function of MU-GAN are provided.

    A. Generator

    1) Attention U-Net Connection: Fig. 3 shows the architecture of the proposed generator and AUCs. To address detail loss and blurry images, we replace the original asymmetric CNN-based encoder-decoder with a symmetrical attention U-Net architecture. Besides, instead of directly connecting the encoder to the decoder via skip-connections, we present AUCs to selectively transfer attribute-irrelevant representations from the encoder; AUCs then concatenate encoder representations with decoder ones to improve image quality and detail preservation. With an attention mechanism, AUCs are capable of filtering out representations related to the original attributes while preserving attribute-irrelevant details. This promotes image fidelity without weakening attribute manipulation ability. By using an attention mechanism, AUCs solve the problem of information redundancy caused by direct skip-connections.

    Fig. 2. The architecture of MU-GAN. SA denotes a self-attention mechanism. MU-GAN consists of a generator G and a discriminator D. D consists of two sub-networks, i.e., a real/fake adversarial discriminator Dadv and an attribute classifier Dc, which share the weights of the same convolutional layers. AUCs bridge the encoder Genc and the decoder Gdec to selectively transfer encoder representations, making them complementary to decoder representations.

    Fig. 3. The architecture of the proposed generator. AUCs bridge the two ends of the encoder-decoder and calculate an attention coefficient α between encoder-decoder representations of the same size. Under the guidance of α, an AUC selectively transfers the encoder representation as supplementary information for the decoder. The green block represents the target vectors used to guide attribute editing. Besides, we follow SAGAN and put self-attention layers behind the convolutional layers with feature map sizes of 64 and 32, respectively.

    AUCs progressively suppress representation responses in source-attribute-related regions and retain image details that are independent of those attributes. Representations transferred by AUCs supplement the decoder representations, compensating for the irreversible information loss caused by convolutional downsampling and enriching the details of the image concerned.

    More importantly, as shown in Fig. 3, AUCs help G aggregate information from multiple image scales, which increases image fidelity and achieves better performance without weakening attribute manipulation ability.

    Note that our method adopts a symmetrical encoder-decoder to settle the issues of highly-compressed representations and loss of details caused by the sharp decrease in the number of channels. In addition, the abstraction levels of the representations at both ends of a symmetric encoder-decoder are similar; they are highly correlated with each other and contain significant reference value for attribute editing.
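    The following is a minimal PyTorch sketch of an additive attention gate on a U-Net skip-connection, in the spirit of the AUCs described above. The module and argument names (AttentionGate, enc_feat, dec_feat, inter_channels) are illustrative assumptions rather than the authors' exact implementation; the only structural assumption taken from the paper is that the encoder and decoder feature maps joined by an AUC share the same spatial size (see the Fig. 3 caption).

    ```python
    # Sketch of an additive attention gate on a U-Net skip-connection (AUC-style).
    # Names and channel choices are illustrative, not the authors' exact code.
    import torch
    import torch.nn as nn

    class AttentionGate(nn.Module):
        def __init__(self, enc_channels: int, dec_channels: int, inter_channels: int):
            super().__init__()
            # 1x1 convolutions project encoder and decoder features into a shared space
            self.w_enc = nn.Conv2d(enc_channels, inter_channels, kernel_size=1)
            self.w_dec = nn.Conv2d(dec_channels, inter_channels, kernel_size=1)
            # psi maps the joint representation to a single-channel attention mask
            self.psi = nn.Sequential(nn.Conv2d(inter_channels, 1, kernel_size=1), nn.Sigmoid())
            self.relu = nn.ReLU(inplace=True)

        def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
            # Additive attention: alpha = sigmoid(psi(ReLU(W_enc * e + W_dec * d)))
            alpha = self.psi(self.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat)))
            # Suppress attribute-related encoder responses, keep attribute-excluding detail
            gated = enc_feat * alpha
            # Concatenate the gated encoder features with the decoder features
            return torch.cat([gated, dec_feat], dim=1)
    ```

    Here the sigmoid mask plays the role of the attention coefficient α in Fig. 3: encoder responses in attribute-related regions are attenuated before concatenation, while attribute-excluding details pass through to the decoder.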

    2) Self-Attention: Most GAN-based models for facial attribute editing are built with convolutional layers. Limited by the receptive field of a convolution kernel, a convolutional layer can only process information from adjacent pixels. Therefore, many CNN-based GAN models share similar problems: their results poorly meet global geometric constraints, and the networks are not competent for image manipulation tasks with complex composition and strict geometric constraints.

    For example, the task of facial attribute editing requires a rigorous arrangement of facial features, and a tiny unreasonable deformation can cause salient visual irrationality. As shown in the 2nd row of Fig. 1, the edited results fail to meet appropriate structural and geometric constraints. Thus, we utilize a self-attention mechanism [17], [53] as a supplement to the convolutional layers in G, to efficiently model dependencies across long-range separated spatial regions. The details of the self-attention mechanism are shown in Fig. 4.

    Fig. 4. Structure of a self-attention mechanism, where ⊗ represents matrix multiplication. After the features pass through Wq, Wk, and Wv, their feature maps are reshaped. Note that N = W×H.
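    As a concrete illustration, below is a minimal SAGAN-style self-attention layer matching the structure in Fig. 4: 1×1 convolutions Wq, Wk, and Wv, feature maps flattened to N = W×H positions, and an attention map obtained by matrix multiplication. The channel-reduction factor and the learnable residual scale gamma are common SAGAN choices assumed here, not details taken from the paper.

    ```python
    # SAGAN-style self-attention over a feature map (sketch following Fig. 4).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelfAttention(nn.Module):
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.w_q = nn.Conv2d(channels, channels // reduction, kernel_size=1)
            self.w_k = nn.Conv2d(channels, channels // reduction, kernel_size=1)
            self.w_v = nn.Conv2d(channels, channels, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))  # residual weight, starts at 0

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            n = h * w                                    # N = W x H spatial positions
            q = self.w_q(x).view(b, -1, n)               # B x C' x N
            k = self.w_k(x).view(b, -1, n)               # B x C' x N
            v = self.w_v(x).view(b, c, n)                # B x C  x N
            attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)   # B x N x N attention map
            out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)   # long-range mixing
            return self.gamma * out + x                  # residual connection
    ```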

    B. Discriminator

    C. Loss Functions

    Next, we cover the adversarial, attribute classification and reconstruction losses.

    1) Adversarial Loss: In order to make the distribution of generated images close to that of real images, we introduce adversarial learning into the proposed method, thereby improving the visual realism of the generated images. WGAN uses the Earth-Mover distance as a metric between two probability distributions, which makes the training process more stable and avoids mode collapse. Following WGAN, we formulate the adversarial loss between G and Dadv as follows:
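    The equation itself does not survive this extraction; for reference, the standard WGAN adversarial objective referred to above can be written as below, using x^a for a real source image and b for a target attribute vector. This is a hedged reconstruction of the conventional WGAN form, and the exact expression in the published paper may differ.

    ```latex
    % Hedged reconstruction of the standard WGAN adversarial objective;
    % x^a is a real source image, b a target attribute vector, and G(x^a, b)
    % the edited image. D_adv maximizes L_adv while G minimizes it.
    \mathcal{L}_{adv} \;=\;
    \mathbb{E}_{x^{a}}\!\left[ D_{adv}(x^{a}) \right]
    \;-\;
    \mathbb{E}_{x^{a},\, b}\!\left[ D_{adv}\!\left( G(x^{a}, b) \right) \right],
    \qquad
    \max_{D_{adv}} \; \mathcal{L}_{adv}, \quad \min_{G} \; \mathcal{L}_{adv}.
    ```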

    IV. EXPERIMENTS

    A. Implementation Details

    To evaluate the proposed method, we compare MU-GAN with AttGAN [3] and STGAN [15] and conduct extensive experiments on the CelebA dataset. The models involved in the experiments are trained on a workstation equipped with an Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10 GHz and an NVIDIA GTX 1080 Ti GPU. All experiments are conducted in the PyTorch 0.4 environment, with CUDA 8.0.44 and cuDNN 6.0.20. The baseline models are trained under their original experimental settings. There are 100 epochs in the training phase; models are trained with the Adam optimizer (β1 = 0.5, β2 = 0.999), and the initial learning rate is 0.002, which drops to 1/10 of itself every 33 epochs. We use 5 discriminator update steps per generator update during training. The weights of the objective function are set as λ1 = 3, λ2 = 10, and λ3 = 100.

    CelebA is a large-scale facial attribute dataset including 10 177 celebrities and 202 599 facial images, each with 40 binary attribute labels. In order to compare with the previous work [3], [15], the same data preprocessing method is adopted. Thirteen attributes with intense visual impact are selected, including Bald, Bangs, Black hair, Blond hair, Brown hair, Bushy eyebrows, Eyeglasses, Male, Mouth slightly open, Mustache, No beard, Pale skin, and Young. These attributes cover the most distinctive facial attributes, contain practical information for human-computer interaction, and are also widely used in relevant work [3], [15]. In this experiment, CelebA source images with a size of 178×218 are center-cropped and resized to 128×128. According to the official division, CelebA is divided into a training set, a validation set, and a test set. The training and validation sets are used to train our method, while the test set is used in the evaluation phase.
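    The optimizer and schedule above translate directly into a short configuration sketch, shown below with standard PyTorch APIs. G and D stand for the generator and discriminator modules; the loss weights λ1–λ3 and the training-step bodies are not reproduced here.

    ```python
    # Minimal sketch of the training configuration described in the text.
    import torch

    def build_optimizers(G: torch.nn.Module, D: torch.nn.Module):
        # Adam with beta1 = 0.5, beta2 = 0.999 and an initial learning rate of 0.002
        opt_g = torch.optim.Adam(G.parameters(), lr=2e-3, betas=(0.5, 0.999))
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-3, betas=(0.5, 0.999))
        # The learning rate drops to 1/10 of itself every 33 epochs
        sch_g = torch.optim.lr_scheduler.StepLR(opt_g, step_size=33, gamma=0.1)
        sch_d = torch.optim.lr_scheduler.StepLR(opt_d, step_size=33, gamma=0.1)
        return opt_g, opt_d, sch_g, sch_d

    N_EPOCHS = 100   # total training epochs
    N_CRITIC = 5     # discriminator update steps per generator update
    ```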

    B. Qualitative Results

    Fig. 5. Comparisons with AttGAN [3] and STGAN [15] on editing specified attributes.

    Fig. 6. Facial attribute editing results of AttGAN [3], STGAN [15] and MU-GAN. Please zoom in for better observation.

    The qualitative results are shown in Figs. 5–6. Some samples generated by AttGAN and STGAN suffer from low-quality problems, i.e., artifacts and blurriness to some extent, while the results of our method are more natural and realistic. MU-GAN aims to change only the facial attributes that need to be changed. Detail preservation ability can be evaluated in two aspects. One is the preservation of details in the visual spatial regions, which is mainly reflected by whether the model can distinguish attribute-relevant/irrelevant regions. The other is the ability to disentangle attributes in abstract semantics. As we know, some attributes are highly correlated with other attributes, which may lead to undesired changes in those other attributes.

    First of all, as Fig. 7(a) shows, our method outperforms the other models. Samples generated by MU-GAN have better realism and fidelity of details, whereas the results of the competing methods appear over-smoothed and blurry, with artifacts to some extent. One possible reason is that our model adopts a symmetrical U-Net-like architecture to make encoder representations complementary to decoder ones, without reducing its attribute editing ability. Besides, in a symmetric encoder-decoder, the corresponding encoder and decoder representations are highly correlated.

    Secondly, as Fig. 7(b) shows, when editing global attributes, e.g., Black hair, Blond hair, Brown hair, and Pale skin, our method better enforces geometric constraints and is capable of distinguishing spatial regions related/unrelated to attributes, while its peers have difficulty in global attribute manipulation. For example, when the background is close to the hair color, its peers often incorrectly recognize the background as hair, resulting in severe artifacts. In contrast, benefiting from the self-attention layers, our method can better distinguish the foreground from the background and accurately edit the hair color. In the same way, when dealing with the pale skin attribute, our method can better segment the face from the background, rather than simply whitening the center region of the image, as done by its peers.

    Fig. 7. Attribute editing results at different abstraction levels, compared with the competing methods.

    Fig. 8. Attribute generation accuracy of AttGAN [3], STGAN [15] and MU-GAN.

    Thirdly, our model can effectively deal with the interference among attributes. Taking gender as an example, because of sampling bias, the Male group generally has short hair, while the Female group usually has long hair with neither beards nor mustaches. Hair length, beard, and mustache are therefore attributes highly related to gender. As a result, these attributes often change with the editing of gender, which can be observed in the generated results in the 3rd and 4th rows of Fig. 7(c). The competing models sometimes drop the beard attribute when the image changes from male to female. These changes are interesting and make the generated samples look more realistic for the gender attribute, but they can cause serious artifacts like fake long hair or make unexpected changes in other attributes. In our method, attributes are well decoupled to avoid interference among attributes and undesired changes in the generated images.

    C. Quantitative Evaluation

    In a facial attribute editing task, the quality of generated images is mainly reflected in whether they are realistic and whether the source images are accurately manipulated from the original domain to the target one. We take attribute manipulation accuracy and reconstructed image quality for quantitative evaluation. To evaluate the former, a multi-class classification method is employed to classify the generated images. First, a specific ResNet variant [12] is trained on the training set of CelebA, attaining an accuracy of 94.79% for the 13 attributes on the test set. The classification network consists of three residual groups (3, 4, 6) and a fully connected layer with an output dimension of 13. The attribute generation accuracy is shown in Fig. 8. The classification results show that our method outperforms the others in the accuracy of attribute editing. As shown in Table I, the average attribute generation accuracy of MU-GAN is 89.15%, a significant improvement over AttGAN's 83.91% and STGAN's 84.89%. Except for the gender attribute, the classification accuracies of the other attributes are better than those of its peers, especially for the beard, hair color, and eyeglasses attributes. As mentioned earlier, gender correlates with other attributes, and MU-GAN is good at attribute decoupling, which is an effective way to prevent unexpected changes when editing target attributes. For example, when an image changes from male to female, MU-GAN faithfully retains the original beard and other correlated attributes, which is easily misjudged by the classification network in the quantitative experiments. As we can see from Fig. 7(c), competing methods are more likely to make visually significant but unexpected changes, such as modifying hair length with serious artifacts and altering other gender-related attributes, while samples generated by MU-GAN change only the attributes that need to be changed. The above results also illustrate that MU-GAN not only has better attribute manipulation accuracy, but also has good attribute decoupling capabilities.
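    For illustration, the evaluation described above can be sketched as follows. The generator, classifier, and loader interfaces, the 0.5 decision threshold, and the success criterion (the classifier's prediction for the edited attribute matching its flipped target) are assumptions for this sketch, not the paper's exact protocol.

    ```python
    # Sketch: attribute generation accuracy via a pretrained multi-label attribute
    # classifier. Interfaces and the success criterion are illustrative assumptions.
    import torch

    @torch.no_grad()
    def attribute_accuracy(generator, classifier, loader, attr_idx: int, device: str = "cuda"):
        correct, total = 0, 0
        for images, attrs in loader:                     # attrs: (B, 13) binary labels
            images, attrs = images.to(device), attrs.float().to(device)
            target = attrs.clone()
            target[:, attr_idx] = 1.0 - target[:, attr_idx]           # flip the edited attribute
            edited = generator(images, target)                        # attribute-edited images
            pred = (torch.sigmoid(classifier(edited)) > 0.5).float()  # multi-label predictions
            correct += (pred[:, attr_idx] == target[:, attr_idx]).sum().item()
            total += images.size(0)
        return correct / total                           # fraction of successful edits
    ```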

    The evaluation indexes for the reconstruction results are the peak signal-to-noise ratio and structural similarity (PSNR/SSIM). PSNR is the most common and widely used evaluation index for images, but it is based on the error between corresponding pixel points and does not take human visual characteristics into account. As a full-reference image quality evaluation index, SSIM measures image similarity in brightness, contrast, and structure; SSIM is better than PSNR for image denoising and similarity evaluation. To study reconstruction ability, a reconstructed image x̂a is generated from the source image xa, conditioned on the source attribute vector. Table II lists the PSNR/SSIM results of the reconstructed images for six methods. The quantitative results are consistent with the previous qualitative results [15]. From Table II, benefiting from AUCs, a symmetrical architecture, and a self-attention mechanism, our method can retain more image information and achieves much better reconstruction results than its five peers. AUCs are capable of generating high-quality reconstruction results, which are more natural and realistic while retaining more details.
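    Both metrics are standard; a minimal sketch of computing them with scikit-image follows. The function name and the assumption that the source image xa and its reconstruction x̂a are H×W×3 uint8 arrays are illustrative choices, not taken from the paper.

    ```python
    # Sketch: PSNR and SSIM between a source image and its reconstruction.
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def reconstruction_quality(x_a: np.ndarray, x_a_hat: np.ndarray):
        # Both inputs assumed to be uint8 arrays of shape (H, W, 3)
        psnr = peak_signal_noise_ratio(x_a, x_a_hat, data_range=255)
        ssim = structural_similarity(x_a, x_a_hat, channel_axis=-1, data_range=255)
        return psnr, ssim
    ```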

    TABLE I AVERAGE ATTRIBUTE MANIPULATION ACCURACY OF THE COMPARISON METHODS ON 13 FACIAL ATTRIBUTES

    V. ABLATION STUDY

    In this section, we evaluate the effect of the two main components, i.e., the symmetric attention U-Net and the self-attention mechanism, on MU-GAN's performance. To analyze the effect of each, we try different generator structures. Several MU-GAN variants are constructed, which are trained and tested on CelebA under the same experimental settings. Ablation experiments between variants also help to reveal the contributions of the AUCs, the symmetric U-Net architecture, and the self-attention mechanism.

    We consider four variants: 1) M0: the original MU-GAN. 2) M1: M0 after removing the self-attention mechanism but retaining the symmetrical attention U-Net architecture. 3) M2: M0 after removing AUCs, but retaining a symmetric encoder-decoder and the self-attention mechanism. 4) M3: M0 with an asymmetric encoder-decoder architecture.

    A. Effect of Symmetrical Attention U-Net Structure

    First, a comparison of the editing results between the symmetric and asymmetric encoder-decoder architectures is shown in Fig. 9. Compared with M0, the generated results of M3 are, to some extent, blurrier and the image details are over-smoothed. In addition, the qualitative results in Fig. 9 illustrate that the MU-GAN variants with a symmetric encoder-decoder achieve better perceptual results on reconstructed images. One possible reason is that the symmetrical architecture avoids latent representations being highly compressed by the sharp decrease in the number of decoder channels, which effectively retains details and makes the edited results more natural and realistic.

    Second, compared with models without AUCs (e.g., M2, AttGAN, and STGAN), M0 has better attribute manipulation accuracy and higher PSNR/SSIM, as shown in Table III and Fig. 10. The additive attention mechanism selectively transfers attribute-irrelevant representations from the encoder, filtering out the original attribute information to resolve the problem of information redundancy. Therefore, AUCs fuse multi-level features and enrich image details, which guarantees the attribute manipulation ability of the AUC-based variants and changes only the attributes that need to be changed.

    In addition, we have established several variants, based on the symmetrical encoder-decoder model without the self-attention mechanism, to explore the effect of the number of AUCs on the results. AUCi means adding AUCs to the first i layers, and AUC4 is completely equivalent to M1 mentioned earlier. As can be seen from Table IV, the reconstruction quality and classification accuracy of the model improve as the number of AUCs increases. When there are four AUCs in the generator, AUC4 attains the best classification accuracy of 85.15%, and PSNR/SSIM increases from 24.07/0.841 to 28.14/0.918, which is a big improvement over the baseline. Therefore, we add AUCs to each layer of the generator.

    TABLE II RECONSTRUCTION QUALITY ON FACIAL ATTRIBUTE EDITING TASKS

    Fig. 9. Effect of different combinations of the three components.

    TABLE III RECONSTRUCTION QUALITY AND AVERAGE CLASSIFICATION ACCURACY OF THE MU-GAN VARIANTS

    Fig. 10. Attribute generation accuracy of MU-GAN variants, AttGAN, and STGAN, evaluated on CelebA.

    AttGAN's sparse encoder-decoder over-compresses image information and loses a large number of details, which leads to low image fidelity. The introduction of skip-connections is one way to increase detail retention ability, but at the cost of severely weakening attribute manipulation ability: relevant/irrelevant information is indiscriminately injected into the decoder, resulting in information redundancy. With the help of the additive attention mechanism, AUCs can obtain the detailed information needed for image reconstruction. Similar to STGAN, we are committed to selectively transferring useful representations from the encoder to the decoder. AUCs avoid information redundancy and thus achieve the goal of balancing detail retention ability and attribute manipulation ability simultaneously.

    B. Effect of a Self-Attention Mechanism

    As shown in Fig. 9, attribute-edited results generated by variants with the self-attention mechanism, e.g., M0, M2, and M3, better enforce structural constraints and can generate visually reasonable results with rigorous geometry. In particular, our model performs well in global attribute editing such as bald and pale skin. Although the model with a self-attention mechanism hardly obtains a significant improvement in the quantitative results, the generated images are perceptually more realistic.

    Benefiting from the multi-attention mechanism, M0 has strong attribute decoupling ability in abstract semantics. For example, in the gender attribute editing task, the model with a multi-attention mechanism avoids unexpected changes. This attribute decoupling ability makes gender attribute manipulation appear less significant compared with its peers, leading to misjudgment by the quantitative classifier.

    To explore the effect of the self-attention mechanism, self-attention layers are added to different layers of the generator, based on the symmetrical encoder-decoder model without AUCs. Since the effect of SA is mainly reflected in improving the quality of reconstructed images and it has little effect on classification accuracy, only the reconstruction experiment is done here. As shown in Table V, the self-attention mechanism seems to be more effective on the high-level feature maps, i.e., Feat32 and Feat64, but brings limited improvement on the low-level maps; Feat8,16's performance is even lower than that of Feat32. Theoretically, introducing the self-attention mechanism into all layers of the generator would be better; however, it would greatly increase the number of parameters of the model. Limited by hardware resources, we choose to add the self-attention layers to the third and fourth layers.

    VI. CONCLUSIONS

    In this paper, we introduce a multi-attention mechanism, i.e., AUCs and a self-attention mechanism, into a symmetrical U-Net-like architecture, resulting in MU-GAN. By using AUCs, the model can accurately edit desired facial attributes, which not only significantly improves attribute editing accuracy, but also enhances detail retention ability. Furthermore, self-attention is introduced as a supplement to the convolutional layers and helps the generated results better meet structural constraints. Experimental results show that our method balances attribute manipulation and detail retention, and has strong decoupling capabilities. It can generate high-quality facial attribute editing results and outperforms the state-of-the-art approaches in terms of reconstruction quality and attribute generation accuracy. As future work, we intend to explore more appropriate attention mechanisms for AUCs to enhance the performance of MU-GAN.

    TABLE IV THE INFLUENCE OF AUCS ON RECONSTRUCTION QUALITY AND CLASSIFICATION ACCURACY FOR MU-GAN VARIANTS WITH DIFFERENT NUMBERS OF AUCS

    TABLE V COMPARISON OF MU-GAN VARIANTS WHOSE SELF-ATTENTION LAYER IS PLACED AT DIFFERENT POSITIONS. Feati MEANS ADDING A SELF-ATTENTION LAYER TO THE i×i REPRESENTATION MAPS

    ACKNOWLEDGMENT

    The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPU used for this research.
