
PP-GAN: Style Transfer from Korean Portraits to ID Photos Using Landmark Extractor with GAN

Computers, Materials & Continua, 2023, Issue 12

    Jongwook Si and Sungyoung Kim

1Department of Computer AI Convergence Engineering, Kumoh National Institute of Technology, Gumi, 39177, Korea

2Department of Computer Engineering, Kumoh National Institute of Technology, Gumi, 39177, Korea

ABSTRACT The objective of style transfer is to maintain the content of an image while transferring the style of another image. However, conventional methods face challenges in preserving facial features, especially in Korean portraits, where elements such as the “Gat” (a traditional Korean hat) are prevalent. This paper proposes a deep learning network designed to perform style transfer that includes the “Gat” while preserving the identity of the face. Unlike traditional style transfer techniques, the proposed method aims to preserve the texture, attire, and “Gat” of the style image by employing image sharpening and face landmarks with a GAN. Color, texture, and intensity are extracted differently according to the characteristics of each block and layer of a pre-trained VGG-16, and only the elements necessary during training are preserved using a facial landmark mask. The head area is delineated using the eyebrow region so that the “Gat” can be transferred. Furthermore, the identity of the face is retained, and style correlation is considered through the Gram matrix. To evaluate performance, we introduce a metric based on PSNR and SSIM that emphasizes median values through new weightings designed for style transfer of Korean portraits. Additionally, we conducted a survey evaluating the content, style, and naturalness of the transferred results; based on this assessment, our method preserves the integrity of the content better than previous research. Our approach, enriched by landmark preservation and diverse loss functions, including those related to the “Gat”, outperforms previous research in facial identity preservation.

KEYWORDS Style transfer; style synthesis; generative adversarial network (GAN); landmark extractor; ID photos; Korean portrait

    1 Introduction

With the advent of modern technologies such as photography, capturing the appearance of people has become effortless. Before these technologies were developed, however, artists would paint portraits of individuals. With the invention of photography, the modern portrait became a new field of art in its own right, and famous figures from the past have been handed down to us only through paintings. The main purpose of such paintings was to depict politically prominent figures [1], but in modern times their subjects have expanded to the general public. Although the characteristics of portraits differ greatly by period and country, most differ considerably from the actual appearance of their subjects unless they are surrealistic works. Korean portraits also vary considerably depending on time and region. Fig. 1a shows a representative portrait from the Goryeo Dynasty, depicting Ahn Hyang, a Neo-Confucian scholar of the mid-Goryeo period. Fig. 1b is a portrait from the late Joseon Dynasty; the two differ greatly in preservation condition and drawing technique. In particular, in Fig. 1b, the “Gat” on the head is clearly visible [2].

Figure 1: The left photo (a) is a portrait of Ahn Hyang (1243~1306) from the mid-Goryeo Dynasty, and the right photo (b) is a portrait of Lee Chae (1411~1493) from the late Joseon Dynasty

Prior to the Three Kingdoms Period, Korean portrait records were absent, and only a limited quantity of portraits was preserved during the Goryeo Dynasty [3]. In contrast, the Joseon Dynasty produced numerous portraits, with different types delineated according to social status. Furthermore, works from the Joseon era exhibit a superior level of painting, in which facial features are rendered in greater detail than in earlier periods.

A portrait exhibits slight variations from the physical appearance of a person, but it uniquely distinguishes individuals, akin to a montage. Modern identification photographs serve a similar purpose and are used in identification documents such as driver’s licenses and resident registration cards. Old portraits may pique interest in how one would appear in such artwork, and style transfer technology can be used to find out. Korean portraits can provide the style for ID photos; however, the custom of wearing the “Gat” headgear makes transferring the style from Korean portraits to ID photos challenging for previous techniques. While earlier studies have transferred global or partial styles onto content images, the distinct styles of texture, attire, and “Gat” must be considered simultaneously for Korean portraits. By independently extracting several styles from the style image, it is possible to transfer the age, hairstyle, and costume of the person in a portrait onto an ID photo. Fig. 2 showcases results from the method presented in previous research [4]. The figure distinctly highlights the significant challenges encountered when attempting style transfer with multiple styles using CycleGAN [5]. In this study, we introduce a method for high-quality style transfer of Korean portraits that overcomes the limitations of previous research, accurately preserving facial landmarks and producing realistic results.

Figure 2: Results of style transfer from Korean portraits to ID photos using CycleGAN

Style transfer techniques such as GANs are commonly applied to facial datasets, but maintaining the identity of the person is crucial for achieving high-quality results. Existing face-based style transfer studies consider only facial components, such as the eyes, nose, mouth, and hair, when transferring styles onto content images. In contrast, this study aims to transfer multiple styles, including the Gat and costume, simultaneously.

To accomplish this, we propose an enhanced GAN-based network for style transfer that generates a mask using landmarks and defines new loss functions to perform style transfer based on facial data. We name the proposed method, “Style Transfer from Korean Portraits to ID Photos Using Landmark Extractor with GAN,” PP-GAN. The primary contribution of this study is the development of a novel approach to style transfer that considers multiple styles and maintains the identity of a person. The contributions are summarized as follows:

• The possibility of independent and arbitrary style transfer with a network trained on a small dataset has been demonstrated.

• This study is the first attempt at arbitrary style transfer for Korean portraits, achieved by introducing a new combination of loss functions.

• The generated landmark mask improved identity preservation and outperformed previous methods [4].

• New data on upper-body Korean portraits and ID photos were collected for this study.

In Section 2, previous studies related to style transfer are reviewed. In Section 3, the foundational techniques for the method proposed in this paper are explained. Section 4 delves into the architecture, learning strategy, and loss functions of the proposed method in detail. In Section 5, the results of the proposed method, along with experimental and analytical results based on performance metrics, are presented. Lastly, Section 6 discusses the conclusions of this research and directions for future studies.

    2 Related Work

Research on style transfer can be categorized into two main groups: methods based on Convolutional Neural Networks (CNNs) and methods based on Generative Adversarial Networks (GANs).

    2.1 CNN-Based Previous Works

AdaIN [6] suggested a method of transferring style at high speed using statistics of the feature maps of content and style images; it is one of the earlier studies on style transfer. Huang et al. [7] used the correlation between the content feature map and the scaling information of the style feature map to fuse content and style. In addition, their order-statistics method, called “Style Projection”, demonstrated the advantage of fast training speed. Zhu et al. [8] maintained structural distortion and content by presenting a style transfer network that could preserve details; by presenting a refined network that modified VGG-16 [9], the style pattern was preserved via spatial matching of hierarchical structures. Simonyan et al. [10] proposed a new style transfer algorithm that expanded texture synthesis work. It aimed to create images of similar quality and emphasized a consistent way of creating rich styles while keeping the content intact in the selected area; it was also fast and flexible enough to process any pair of content and style images. Li et al. [11] suggested a style transfer method for low-level features to express content images in a CNN. Low-level features dominate the detailed structure of new images, and a Laplacian matrix was used to detect edges and contours; the resulting stylized images preserve the details of the content image and remove artifacts. Chen et al. [12] proposed a stepwise method based on a deep neural network for synthesizing facial sketches, achieving better performance with a pyramid column feature that enriches local regions by adding texture and shading. Fast Art-CNN [13] is a structure for fast style transfer in feed-forward mode while minimizing deterioration in image quality; it can be used in real-time environments as a method for training deconvolutional neural networks to apply a specific style to content images. Liu et al. [14] proposed an architecture that includes geometric elements in the style transfer and can transfer textures onto distorted images. Because content, texture style, and geometry style can be entered as a triple, it provides much greater versatility in the output. Kaur et al. [15] proposed a framework that realistically transfers the texture of a face from the style image to the content image without changing the identity of the original content image; changes around facial landmarks are gently suppressed to preserve the facial structure so that the texture can be transferred without changing the identity of the face. Ghiasi et al. [16] presented a neural style transfer network capable of real-time inference, which learns to predict conditional instance normalization parameters from style images, enabling the generation of results for arbitrary content and style images.

    2.2 GAN-Based Previous Works

APDrawingGAN [17] improved performance by combining global and local networks, generating high-quality results by measuring the similarity between the distance transform and artist drawings. Xu et al. [18] used a generator and discriminator as conditional networks; their mask module for style adjustment, together with AdaIN [6] for style transfer, performed better than existing GANs. S3-GAN [19] introduced a style separation method in the latent vector space to separate style and content; a style-transferred vector space was created using a combination of the separated latent vectors. CycleGAN [5] proposed a method for transferring a style to an image without paired domains: while training the generator mapping X → Y, a reverse mapping Y → X is trained as well, and the cycle consistency loss is designed so that an input image and its reconstruction are identical when the transferred style is removed through the reverse mapping. Yi et al. [20] proposed a new asymmetric cycle mapping that forces reconstruction information to be shown and included only in selected facial areas. Portraits were generated with a localized discriminator for landmark and style classifiers, and, considering the style vector, portraits were generated in several styles using a single network. Their goal of transferring portrait style is similar to the purpose of our study; however, in this study, not only the portrait painting style but also the Gat and costume are transferred together.

Some attempts have been made to maintain facial landmarks in style transfer studies aimed at makeup, aging, or face editing. In SLGAN [21], a style-invariant decoder was created by the generator to preserve the identity of the content image, and a new perceptual makeup loss was introduced, resulting in high-quality conversion. BeautyGAN [22] defined instance and perceptual losses to change the makeup style while maintaining the identity of the face, thereby generating high-quality images while preserving identity. Paired-CycleGAN [23] trained two generators simultaneously to transfer the makeup styles of other people from portrait photos: Stage 1 produced training pairs through image analogy, and Stage 2 took these as input, showing excellent results when identity preservation and style consistency were computed against the Stage 1 output. Landmark-CycleGAN [24] addressed the incorrect results caused by distortion of the geometric structure when converting a face image to a cartoon image; local discriminators using landmarks were proposed to improve performance. Palsson et al. [25] suggested Group-GAN, which consists of several CycleGAN [5] models, to integrate pre-trained age prediction models and solve the face aging problem. Wang et al. [26] proposed a method for interconverting edge maps with a CycleGAN-based E2E-CycleGAN network for aging; the aged face was generated using the identity feature map and the result of converting the edge map with the E2F-pixelHD network. Face-Dancer [27] proposed a model that transfers features from the face of a source image to a target image for face swapping. This involves transferring the identity of the source image, including its expressions and poses, while preserving the target image, which differs significantly from the method proposed in this paper. Although it claims to maintain identity, the results can differ greatly from the identity of the source image because they are influenced by the facial components of the target image. The key difference in our paper is that we propose a method that guarantees the identity of the source image itself while transferring the style of the target image.

    3 Background

    3.1 VGG-16

The VGG-16 [9] network is a prominent computer vision model that attained 92.7% Top-5 accuracy in the ImageNet Challenge. It receives a 224 × 224 RGB image as input and contains 16 layers in a configuration of 13 convolution layers and three fully connected layers. The convolution filters measure 3 × 3 pixels, with stride and padding fixed at 1. The activation function is ReLU, and the pooling layers are 2 × 2 max pooling with a fixed stride of 2. The closer a feature map is to the input layer, the more low-level information it contains, such as the color and texture of the image; the closer it is to the output layer, the more high-level information it provides, such as shape. The pre-trained VGG-16 [9] is used in this study to preserve facial and upper-body content and to transfer style efficiently.
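The following is a minimal sketch, not the authors’ released code, of how such a frozen feature extractor can be built with tf.keras; the Keras layer names block2_conv2, block3_conv2, and block4_conv1 correspond to the Conv2_2, Conv3_2, and Conv4_1 layers used later in Sections 4.2 and 5.1.

```python
# Minimal sketch (assumed tf.keras API, not the authors' code): a frozen
# pre-trained VGG-16 used as a fixed multi-level feature extractor.
import tensorflow as tf

vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
vgg.trainable = False  # pre-trained and frozen; used only for loss terms

# Conv2_2 / Conv3_2 carry low-level (style) information; Conv4_1 is
# higher-level (content). Names follow Keras conventions.
layer_names = ["block2_conv2", "block3_conv2", "block4_conv1"]
feature_extractor = tf.keras.Model(
    inputs=vgg.input,
    outputs=[vgg.get_layer(name).output for name in layer_names])
```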

    3.2 Gram Matrix

The Gram matrix is a valuable tool for representing the color distribution of an image, enabling computation of the overall color and texture correlation between two images. Gatys et al. [28] demonstrated that style transfer performance can be improved using Gram matrices of feature maps from various layers. Fig. 3 illustrates the process of calculating the Gram matrix: each channel of a color image is flattened into a 1D vector of length H × W, and the matrix is obtained by multiplying the resulting matrix with its transpose. The Gram matrix is a square matrix whose dimension equals the number of channels. As the corresponding values in the Gram matrices of two images become more similar, the color distributions of the images also become more similar.

Figure 3: The process of calculating the Gram matrix for Korean portraits
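As a concrete illustration of Fig. 3, the sketch below computes the Gram matrix of a (batch, H, W, C) feature map by flattening each channel and taking inner products between channels; the normalization by the number of spatial positions is our choice, as the paper does not print its exact normalization.

```python
import tensorflow as tf

def gram_matrix(feature_map):
    """Channel-by-channel correlation matrix of a (batch, H, W, C) tensor."""
    b, h, w, c = tf.unstack(tf.shape(feature_map))
    flat = tf.reshape(feature_map, (b, h * w, c))   # one column per channel
    gram = tf.matmul(flat, flat, transpose_a=True)  # (batch, C, C)
    return gram / tf.cast(h * w, tf.float32)        # average over positions
```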

    3.3 Face Landmark

Facial landmarks, such as the eyes, nose, and mouth, play a vital role in identifying and analyzing facial structures. To detect the landmarks, this study employed the 68-point shape predictor [29], which generates 68 pairs of x- and y-coordinates for the crucial facial components, including the jaw, eyes, eyebrows, nose, and mouth, and also provides the location of the face. Subsequently, the coordinates obtained from the predictor were used to create masks for the eyes, nose, and mouth, as shown in Fig. 4.

Figure 4: Masks for the eyes, nose, and mouth created by the 68-point shape predictor
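A minimal sketch of this mask generation follows, assuming dlib’s published shape_predictor_68_face_landmarks.dat model and OpenCV; the index ranges for each component follow the standard 68-point annotation.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Standard 68-point index ranges for the components masked in Fig. 4.
REGIONS = {"eyes": range(36, 48), "nose": range(27, 36), "mouth": range(48, 68)}

def landmark_masks(image_bgr):
    """Return one binary mask per facial component, or {} if no face is found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return {}  # detection can fail on resized ID photos (see Section 3.4)
    pts = np.array([(p.x, p.y) for p in predictor(gray, faces[0]).parts()],
                   dtype=np.int32)
    masks = {}
    for name, idx in REGIONS.items():
        mask = np.zeros(gray.shape, np.uint8)
        cv2.fillConvexPoly(mask, cv2.convexHull(pts[list(idx)]), 255)
        masks[name] = mask
    return masks
```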

    3.4 Image Sharpening

Image sharpening is a high-frequency emphasis filtering technique employed to enhance image details. High frequency is characterized by local changes in brightness or color, and it is useful for identifying facial landmarks. Image sharpening can be achieved using high-boost filtering, which generates a high-pass image by subtracting a low-pass image from the input image, as shown in Eq. (1); a high-frequency emphasized image is obtained by multiplying the input image by a constant A during this process.

Mean filtering is a low-pass filtering technique, and the coefficients of the filter can be determined using Eq. (2). The sharpening strength of the input image is controlled by the value of α, where the center coefficient 9A − 1 is set to α. A high α value decreases the sharpening level owing to the high ratio of the original image in the output. Conversely, a small α value reduces contrast owing to the removal of numerous low-frequency components.
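A minimal sketch of this filter under our reading of Eqs. (1)-(2): the 3 × 3 high-boost kernel has center weight α = 9A − 1 and −1 elsewhere, i.e., the scaled form of A·f minus the 3 × 3 mean of f. The brightness normalization is our addition, not stated in the paper.

```python
import cv2
import numpy as np

def high_boost(image, alpha=9.2):
    """Sharpen with a 3x3 high-boost kernel (center = alpha = 9A - 1)."""
    kernel = -np.ones((3, 3), np.float32)
    kernel[1, 1] = alpha          # alpha must exceed 8 (i.e., A > 1)
    kernel /= kernel.sum()        # sum = 9(A - 1); keeps brightness stable
    return cv2.filter2D(image, -1, kernel)
```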

To give portrait images and ID photos similar structures, portrait images are cropped around the face, as the face occupies a relatively small area. In contrast, ID photos are resized to the same horizontal and vertical size instead of being cropped. However, this resizing can make extracting facial landmarks difficult; therefore, image sharpening is performed in the present study. This process ensures that facial landmarks are extracted reliably from ID photos, as shown in Fig. 5, which illustrates the difference in facial landmark extraction with and without image sharpening.

Figure 5: Result of landmark mask generation with and without high-boost filtering (the first and third columns are the original and the high-boost filtered image, respectively, and the second and fourth columns show the masks with the detected landmarks for each corresponding image)

    4 Proposed Method

    4.1 Network

The primary objective of the proposed method is to transfer the style of Korean portraits to ID photos. Let X and Y denote the domains of the three-dimensional color ID photos and the Korean portraits, respectively. These domains are subsets X ⊂ R^(H×W×C) and Y ⊂ R^(H×W×C), with x ∈ X and y ∈ Y.

The CycleGAN [5] network is limited in performing style transfer owing to its training over the entire domain. Therefore, the proposed method adopts the Dual I/O generator from BeautyGAN [22], which has a stable discriminator that enables mapping training between two domains and style transfer. Additionally, the proposed method incorporates VGG-16, a Gram matrix, and a landmark extractor to improve performance. Fig. 6 depicts the overall structure of the proposed method.

    4.1.1 Generator

The generator is trained to perform the (X, Y) → (Y, X) mapping, producing fake images G(x, y) = (xy, yx), where xy has the content of x and the style of y; this output is what we evaluate in this study. Symmetrically, the content of y and the style of x are used to generate the other fake image yx. This study focuses only on the xy results, even though the network can generate results in both directions. The image recovered by the Dual I/O generator performing style transfer must be identical to the input image. With input images of size (256, 256, 3), x and y each pass through three convolution layers, resulting in feature maps of size (64, 64, 128). The x and y results are concatenated to produce a size of (64, 64, 256), style transfer is applied through nine residual blocks, and the original size is restored through deconvolution layers. This result is the style-transferred fake image and represents the output of the proposed method. The generator deceives the discriminator by generating fake images that appear real, resulting in more natural, higher-quality results.
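The sketch below is our reading of this Dual I/O generator in tf.keras, not the released implementation; the layer widths follow the sizes quoted above, while the kernel sizes and activations are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def encode(img):
    # Three conv layers: (256,256,3) -> (256,256,32) -> (128,128,64) -> (64,64,128)
    h = layers.Conv2D(32, 7, 1, "same", activation="relu")(img)
    h = layers.Conv2D(64, 3, 2, "same", activation="relu")(h)
    return layers.Conv2D(128, 3, 2, "same", activation="relu")(h)

def res_block(h):
    r = layers.Conv2D(256, 3, 1, "same", activation="relu")(h)
    r = layers.Conv2D(256, 3, 1, "same")(r)
    return layers.add([h, r])

x_in, y_in = tf.keras.Input((256, 256, 3)), tf.keras.Input((256, 256, 3))
h = layers.concatenate([encode(x_in), encode(y_in)])   # (64, 64, 256)
for _ in range(9):                                      # nine residual blocks
    h = res_block(h)
h = layers.Conv2DTranspose(64, 3, 2, "same", activation="relu")(h)
h = layers.Conv2DTranspose(32, 3, 2, "same", activation="relu")(h)
xy = layers.Conv2D(3, 7, 1, "same", activation="tanh", name="xy")(h)
yx = layers.Conv2D(3, 7, 1, "same", activation="tanh", name="yx")(h)
generator = tf.keras.Model([x_in, y_in], [xy, yx])
```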

Figure 6: Overall structure of the system proposed in this study

    4.1.2 Discriminator

The network structure includes two discriminators trained to classify the styles of the fake and real images produced by the generator. Each discriminator consists of five convolution layers and aims to distinguish styles. The input image size is (256, 256, 3), and the output size is (30, 30, 1). The first four convolution layers, excluding the last, apply Spectral Normalization to improve performance and maintain a stable distribution of the discriminator in a high-dimensional space. The discriminators are defined as follows: Dx classifies xy as fake and y as real, whereas Dy classifies yx as fake and x as real. Finally, PatchGAN [30] is applied to produce the discriminator output, which is the final judgment over the result image.
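A minimal sketch of one such discriminator, assuming tensorflow_addons for SpectralNormalization and the zero-padding layout that yields a (30, 30, 1) patch map from a 256 × 256 input; the kernel sizes and channel widths are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_addons as tfa  # assumed source of SpectralNormalization

def sn_conv(filters, stride, padding):
    return tfa.layers.SpectralNormalization(
        layers.Conv2D(filters, 4, stride, padding))

inp = tf.keras.Input((256, 256, 3))
h = inp
for filters in (64, 128, 256):                      # spectral-normalized convs
    h = layers.LeakyReLU(0.2)(sn_conv(filters, 2, "same")(h))
h = layers.ZeroPadding2D(1)(h)
h = layers.LeakyReLU(0.2)(sn_conv(512, 1, "valid")(h))
h = layers.ZeroPadding2D(1)(h)
patch_scores = layers.Conv2D(1, 4, 1, "valid")(h)   # (30, 30, 1) patch map
discriminator = tf.keras.Model(inp, patch_scores)
```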

    4.2 Loss Function

In this study, we propose loss functions for transferring arbitrary Korean portrait styles to ID photos. Six loss functions, including those of the new approach, are used to generate good results.

CycleGAN introduced the concept of feeding the result back into the generator through a cycle structure, which should ideally reproduce the original image. Therefore, in this study, we define the cycle loss on the recovered result as a loss function designed to reduce the difference between the input and recovered images; that is, x ≈ G(G(x, y)) and y ≈ G(G(y, x)). This is expressed in Eq. (3).
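A sketch of this loss under our reading (an L1 penalty, as in CycleGAN), assuming the generator signature from Section 4.1.1:

```python
import tensorflow as tf

def cycle_loss(x, y, generator):
    xy, yx = generator([x, y])          # transfer styles in both directions
    x_rec, y_rec = generator([xy, yx])  # remove them again through the cycle
    return (tf.reduce_mean(tf.abs(x - x_rec)) +
            tf.reduce_mean(tf.abs(y - y_rec)))
```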

Existing style transfer methods distort the shape of the face geometrically, making the face difficult to recognize. To maintain the identity of the character, a new constraint is required. Hence, this study defines a land loss based on a facial landmark mask, which helps preserve the eyes, nose, and mouth while enhancing style transfer performance. Land loss is defined by Eq. (4).

Land loss aims to maintain the landmark features of the input images and the outputs produced by the generator. The image pairs (xy, x) and (yx, y) contain the same content in different styles, so their landmark shapes are identical. The masks MfX and MfY, generated for the eye, nose, and mouth areas as discussed in Section 3, select the regions over which the loss is computed. Using a pixel-wise operation, each eye, nose, and mouth region is processed through the face landmark mask, and the loss function is defined to minimize the difference in pixel values, as expressed in Eq. (5). The difference for each landmark is based on the L1 loss.
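A sketch of Eq. (5) under our reading: an L1 difference restricted to the masked eye, nose, and mouth regions, where the mask tensors are assumed to be binary (batch, H, W, 1) maps built as in Section 3.3.

```python
import tensorflow as tf

def land_loss(xy, x, mask_fx, yx, y, mask_fy):
    # Each output should match its own content image inside the landmark mask.
    return (tf.reduce_mean(tf.abs(mask_fx * (xy - x))) +
            tf.reduce_mean(tf.abs(mask_fy * (yx - y))))
```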

The method proposed in this study differs greatly from previous style transfer research in that it requires some content of the style image rather than ignoring it and considering only the color relationship. In particular, for Korean portraits, the style of the Gat and clothes must be considered in addition to image quality, background, and overall color. However, the form of the Gat varies widely and is difficult to detect owing to differences in wearing position, while the hair in Korean portraits and ID photos has completely different shapes. To address this, a head loss is proposed to minimize the difference between the head areas of the result and style images, with the head area divided into the Gat and hair regions, represented by the masks Mht and Mhr. Head loss exploits the fact that the Gat does not cover the eyebrows; the feature point located at the top of the eyebrow coordinates is therefore used to define the head area, whose style is then transferred to the resulting image. This is expressed in Eq. (6).
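A sketch under our reading of Eq. (6): the masked head region of each output is pulled toward the corresponding region of its style image. Which mask (Gat Mht or hair Mhr) applies to which direction is our assumption.

```python
import tensorflow as tf

def head_loss(xy, y, mask_ht, yx, x, mask_hr):
    # xy takes the Gat region from portrait y; yx takes the hair region from x.
    return (tf.reduce_mean(tf.abs(mask_ht * (xy - y))) +
            tf.reduce_mean(tf.abs(mask_hr * (yx - x))))
```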

To preserve the overall shape of the character and enhance style transfer performance, content loss and style loss are defined using specific layers of VGG-16. The pre-trained network contains both low- and high-level information, such as colors and shapes, depending on the layer location: low-level information is related to style, whereas high-level information is related to content. The content and style losses are therefore configured according to the layer characteristics. Style loss is defined using the Gram matrix, obtained by computing the inner products of the feature maps. The best set of layers found through experiments is used to define the style loss, as shown in Eq. (7), where N and M represent the spatial size and the number of channels of each layer, respectively, and g represents the Gram matrix of the feature map. By training to minimize the difference between the Gram matrices of the feature maps on both sides (xy and yx), the style of y can be transferred to x.
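A sketch of Eq. (7), reusing the gram_matrix() helper sketched in Section 3.2 and assuming a style_extractor that returns the Conv2_2 and Conv3_2 feature maps (Section 5.1); the squared-difference reduction is our choice of norm.

```python
import tensorflow as tf

def style_loss(xy, y, style_extractor):
    # style_extractor is assumed to return [Conv2_2, Conv3_2] feature maps.
    loss = 0.0
    for f_out, f_sty in zip(style_extractor(xy), style_extractor(y)):
        loss += tf.reduce_mean(
            tf.square(gram_matrix(f_out) - gram_matrix(f_sty)))
    return loss
```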

Content loss is defined to minimize the linear differences between feature maps at the pixel level. Because style transfer aims to maintain the content of an image while transferring the style, correlations need not be considered here. Content loss is given by Eq. (8). It is a critical factor in preserving the identity of a person; however, if the weight of this loss is too large, it can degrade the style transfer results. Therefore, appropriate hyperparameters must be selected to achieve the desired outcome.
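A sketch of Eq. (8) under the same assumptions, using Conv4_1 features and a plain per-element distance (the paper’s “linear difference” suggests L1):

```python
import tensorflow as tf

def content_loss(xy, x, content_extractor):
    # Pixel-level distance between Conv4_1 feature maps; no Gram matrix,
    # since correlations are not needed for content.
    return tf.reduce_mean(tf.abs(content_extractor(xy) - content_extractor(x)))
```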

The discriminator loss consists solely of the adversarial loss, following the GAN structure. The discriminator output is a (30, 30, 1) map that is evaluated under PatchGAN [30] to identify whether each image patch is authentic or fake. The loss function used to train the discriminator is given by Eq. (9); it is reduced when the patches of xy and yx are classified as fake and the patches of x and y are classified as genuine. The equation related to the discriminator used during generator training is defined in Eq. (10). It is the opposite of Eq. (9) and is designed as a metric for the generator to deceive the discriminator into judging fake patches as real.
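The paper does not print the exact GAN objective, so the sketch below uses the least-squares form as one plausible instantiation of Eqs. (9)-(10) over PatchGAN score maps:

```python
import tensorflow as tf

def discriminator_loss(d_real, d_fake):
    # Eq. (9): real patches pushed toward 1, fake patches toward 0.
    return (tf.reduce_mean(tf.square(d_real - 1.0)) +
            tf.reduce_mean(tf.square(d_fake)))

def generator_adv_loss(d_fake):
    # Eq. (10): the generator pushes fake patches to be judged real.
    return tf.reduce_mean(tf.square(d_fake - 1.0))
```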

The generator loss is composed of the cycle, land, head, style, content, and adversarial losses, as expressed in Eq. (11). Each loss is multiplied by a different hyperparameter, and the sum of the resulting values is used as the loss function of the generator.
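Combining the pieces with the weights reported in Section 4.3 (λcy = 50, λl = 0.2, λh = 0.5, λs = 1, λc = 0.1, λDG = 1) gives a sketch of Eq. (11):

```python
def generator_loss(l_cycle, l_land, l_head, l_style, l_content, l_adv):
    # Hyperparameters as reported in Section 4.3.
    return (50.0 * l_cycle + 0.2 * l_land + 0.5 * l_head +
            1.0 * l_style + 0.1 * l_content + 1.0 * l_adv)
```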

The total loss employed in this study is expressed in Eq. (12) and is composed of the generator and discriminator losses. The generator seeks to minimize the generator loss to produce style transfer outcomes, whereas the discriminator aims to minimize the discriminator loss to enhance its discriminative capability. There is a trade-off between generator and discriminator performance: improving one diminishes the other. Consequently, the total loss is optimized through the competitive relationship between the generator and discriminator, leading to superior outcomes.

    4.3 Training

The experiments in this study were conducted on a multi-GPU system with GeForce RTX 3090 GPUs running Ubuntu 18.04 LTS. As TensorFlow 1.x has a minimum CUDA version requirement, the experiments were carried out using NVIDIA-TensorFlow 1.15.4. Datasets of ID photos and Korean portraits were collected through web crawling using the Google and Bing search engines. To improve training performance, preprocessing separated the face area from the whole body in the Korean portraits, which typically depict the entire body. Data augmentation techniques, such as horizontal flipping, blur, and noise, were applied to enlarge the limited dataset. Gat preprocessing was also performed, as shown in Fig. 7, to facilitate feature mapping.

Figure 7: Examples of dataset preprocessing

Table 1 shows the resulting dataset, consisting of 1,054 ID photos and 1,736 Korean portraits divided into 96% training and 4% test sets. Owing to the limited number of portraits, a higher ratio of training data was used, and no data augmentation was applied to the test set. As the number of combinations that can be generated from the test data is substantial (XTest × YTest), evaluation was not problematic. Previous research has emphasized the importance of data preprocessing, and the results of this study further support its impact on training performance.

Table 1: Detailed datasets

The proposed network was trained for 200 epochs using the Adam optimizer. The initial learning rate was set to 0.0001 and linearly reduced to zero after 50% of the training epochs for stable learning. To balance the loss functions, λcy was set to 50, since the cycle loss yields relatively lower values than the other losses. To increase the effect of style transfer, λs and λDG were set to 1, and λh was set to 0.5, which helps the style transfer concentrate on the head area. Finally, training proceeded with λc = 0.1 and λl = 0.2. The entire training process took approximately 6 h 30 min. The results are presented in Fig. 8, which visually confirms that the proposed method achieves a greater performance improvement than previous research [4]. While previous methods focused only on style transfer, this study successfully maintains the identity of the person while transferring the style. The results show the style being transferred while the shape of the character in the content image is preserved; the identity of the person is retained, and the Gat is transferred naturally.

Figure 8: The result of the method proposed in this paper

    5 Experiments

    5.1 Feature Map

To compute the style loss, this study adopted the Conv2_2 and Conv3_2 layers of VGG-16, whereas Conv4_1 was used for the content loss. Although the early convolution layers contain low-level information, they are sensitive to change and difficult to train on because of the large differences in pattern and color between the style and content images. To overcome this problem, this study uses feature maps located in the middle of the network to extract low-level information for style transfer. The results of using feature maps not adopted for the style loss are presented in Fig. 9, with smoothing set to 0.8 in TensorBoard. Loss graphs are shown for only ten epochs because training failed when layers not used for the style loss were employed. The observations are as follows:

• The Conv2_1 layer exhibits a large loss value and unstable behavior during training, indicating that training may not be effective for this layer.

• Conv1_2 is close to zero for most of the losses, but training cannot be said to proceed because the maximum and minimum differ by more than 10^4 times owing to the very large loss on some data.

• Conv1_1 exhibits a high loss deviation and instability during training, similar to Conv2_1 and Conv1_2. Moreover, owing to its sensitivity to color, this layer presents challenges for training.

Figure 9: Result of a specific feature map experiment of VGG-16 for style loss

If Conv4_1 is utilized as the style loss layer, it transfers the style of the image content. However, because this feature map scarcely includes style-related information, the generator may produce images lacking style. Nevertheless, transferring the style of the background remains feasible, as the background corresponds to the overall style of the image, while clothing is recognized as content because its style is not a prominent feature. Therefore, high-level layers such as Conv4_1 contain only the style of the background, not of the character content. The result of using the content loss feature map for the style loss is shown in Fig. 10. In general, most of the content image is preserved, whereas the style is only marginally transferred. Hence, we proceed with the trainable layers, which yield stable training and enable styles to be transferred while content is conserved.

Figure 10: Results when the feature map used for style loss is the same layer as content loss (Column 1: input image; Columns 2 and 3: output images using Conv4_1)

    5.2 Ablation Study

An ablation study was conducted on four loss functions, excluding the cycle loss, to demonstrate the effectiveness of the loss functions proposed for the generator loss. The results are presented in Fig. 11, where each row uses the same content and style images. If Lc is excluded, the character shape is not preserved, leading to poor results because training concentrates on style; consequently, only the facial components are transferred based on Ll. When Lc and Ll are excluded simultaneously, the style transfer outcome lacks facial components. Similarly, when Ls is excluded, the style transfer result is of poor quality, with the character remaining almost unchanged. Using Lcy allows style transfer of the background without a separate style loss; however, because training then focuses mainly on the character shape, style transfer barely occurs, although the Gat is still created when Lh is used. Excluding Ll leads to unclear, blurred facial components and a brightened face color; therefore, Ll plays a crucial role in preserving the character's identity by making the facial components more distinct. If Lh is excluded, the head area becomes blurred or is not created at all, leading to unsatisfactory style transfer results. Unlike the overall style transfer, the Gat must be newly generated, so Ls alone serves a different purpose; the head area must be handled separately, and Ls can be combined with Lh to achieve this. Consequently, using all the loss functions proposed in this study yields the best performance, generating natural images without bias in any direction.

    5.3 Performance Evaluation

This study conducts a performance comparison with previous research [4] on the same subject, as well as an ablation study. Although diverse style transfer studies exist, the subject of this paper differs significantly from them, leading to a comparison with only a single previous study [4].

Figure 11: Results highlighting the importance of the various loss functions (Columns 1 to 4: excluding the loss functions Lc, Ls, Ll, and Lh, respectively; Column 5: results generated using all loss functions)

Based on CycleGAN [5], paired evaluation could not be performed because arbitrary style transfer is impossible. Thus, an evaluation survey was conducted with 59 students of different grades from the Department of Computer Engineering at Kumoh National Institute of Technology to evaluate the performance of the proposed method on three items: transfer of style (Sst), preservation of content (Scn), and generation of natural images (Snt). The survey was conducted online for 10 days using Google Forms, and respondents received and evaluated a combination of 10 results from the previous study [4] and 10 results from the proposed method. The survey results are presented in Table 2. The proposed method outperforms the previous method [4] in all three aspects, with the largest score difference in preserving character content. The previous method [4] failed to preserve the shape of the character during style transfer, blurring or removing facial landmarks, which looked unnatural. In contrast, the proposed method successfully preserved the character content and effectively transferred the style, producing relatively natural results. Thus, the proposed method showed better overall performance than the previous method [4].

Table 2: Survey results

Peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) are commonly employed to measure performance. However, when conducting style transfer while preserving content, it is crucial to ensure a natural outcome without significant bias toward either the content or the style. Consequently, to compare performance, we propose new indicators that use a weighted arithmetic mean, weighted toward the median values of the PSNR and SSIM.

    The final result is obtained by combining the results of both content and style using the proposed performance indicator.

The PSNR is commonly used to assess image quality after compression by measuring the ratio of the maximum value to the noise, calculated as in Eq. (13). The denominator inside the logarithm is the mean squared error between the original and compressed images; a lower value indicates a higher PSNR and better preservation of the original image. By contrast, the SSIM evaluates distortions in similarity between a pair of images by comparing their structural, luminance, and contrast features. Eq. (14) is used to calculate the SSIM, which involves statistics such as the mean, standard deviation, and covariance.
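Both metrics are available as TensorFlow built-ins, which a sketch of the measurement loop could use directly:

```python
import tensorflow as tf

def quality_metrics(result, reference):
    # Images in [0, 1], shape (batch, H, W, 3).
    psnr = tf.image.psnr(result, reference, max_val=1.0)   # Eq. (13)
    ssim = tf.image.ssim(result, reference, max_val=1.0)   # Eq. (14)
    return psnr, ssim
```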

To evaluate the performance, the indicators are sorted in ascending order, giving a sequence of values [x1, x2, x3, x4, x5], where x3 represents the best result. The values x1 to x5 are the scores obtained when each specific loss is excluded and when all are used. As our goal is to find an optimal combination, it is crucial to focus on the median values. The weight vector w is designed to emphasize the median: compared with an adjacent value, the median has twice the weight, and compared with a value two steps away, five times the weight. This accentuates the central point while gradually diminishing the significance of the surrounding data points, which is beneficial in analyses that prioritize the center of the data. Accordingly, the weight vector w = [10, 25, 50, 25, 10] is assigned, and the weighted arithmetic mean is calculated using Eq. (15). Performance is then evaluated using Eq. (16), which sums the squared differences between the weighted arithmetic mean (wavg) and the PSNR or SSIM values; a smaller value indicates better performance, as one-sided results produce large squared deviations. Finally, the sum of the squared errors for content and style (EPSNR, ESSIM) is presented as the final indicator of the performance evaluation.
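A minimal NumPy sketch of Eqs. (15)-(16) as described above; the variable names are ours:

```python
import numpy as np

W = np.array([10.0, 25.0, 50.0, 25.0, 10.0])  # median-emphasizing weights

def squared_error_from_weighted_mean(scores):
    """scores: the five PSNR (or SSIM) values, one per loss combination."""
    x = np.sort(np.asarray(scores, dtype=float))  # [x1, ..., x5], ascending
    w_avg = np.sum(W * x) / W.sum()               # Eq. (15): weighted mean
    return np.sum((x - w_avg) ** 2)               # Eq. (16): E_PSNR or E_SSIM
```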

Table 3 shows the results of the proposed performance indicators based on PSNR and SSIM, evaluated over 1,452 generated results. The PSNR values for the content and style images are denoted PContent and PStyle, respectively, and SContent and SStyle denote the corresponding SSIM values. EContent + EStyle, the sum of the squared errors for content and style, is used as the final metric; here the “+” symbol refers to the combined value of the content-based and style-based losses used as the performance metric in this paper. Content preservation was highest when Lh was not used, while omitting Lc or Ls resulted in a loss of content and style. Ll showed no significant difference in content, whereas its style score was relatively high. Therefore, EPSNR and ESSIM are good evaluation metrics when all loss functions are used. The distribution of the generated results, plotting content retention against style transfer performance, is shown in Fig. 12. For PSNR (Fig. 12a), the distribution of results without Lh differs from the others; the distribution without Lc lies in the first half and that without Ls in the second half. However, the distribution of LTotal is relatively close to the center with a small deviation, making it the most appropriate result. For SSIM (Fig. 12b), the distribution shape is similar to that of PSNR, although several distributions are shifted in parallel. The smaller ESSIM is, the more central the distribution and the better the performance. Therefore, LTotal performs better than Ll, and the two have similar near-central distributions with only a small difference. The other results are relatively poor because they lie away from the center.

Table 3: Ablation study analysis

The performance of the style transfer from Korean portraits to ID photos presented in this paper is highly satisfactory. However, there are three issues. First, the dissimilarity in texture between the two image domains leads to unsatisfactory results in the reverse direction. While this paper mainly focuses on transferring the Korean portrait style to ID photos, the generator structure also allows transferring the ID photo style to Korean portraits. Nevertheless, maintaining the shape of the Korean portraits, which are paintings, can pose a challenge, and owing to mapping difficulties, only the face may be preserved during style transfer. Second, if the feature maps of the Korean portrait dataset do not map well to those of the ID photo dataset, unsatisfactory results are obtained. This can be attributed either to dataset limitations or to faulty preprocessing; for instance, ID photos are front-facing, but Korean portraits may depict subjects from other angles, and improper cropping during preprocessing can also lead to mismatched feature maps and poor results. Finally, Korean portrait data are scarce, and relying on existing data leads to limited and inadequate style representation. Although data augmentation increases the number of data points, the training styles remain unchanged, limiting the results.

Figure 12: Scatter plots of two metrics (PSNR and SSIM) in terms of content and style for the test datasets and their transfer results, based on the combinations of loss functions used

    6 Conclusions

The objective of this study was to propose a generative adversarial network that utilizes facial feature points and loss functions to achieve arbitrary style transfer while maintaining the original face shape and transferring the Gat. To preserve the characteristics of the face, two loss functions, land loss and head loss, were defined using landmark masks to minimize the differences and speed up the learning process. Style loss, which uses a Gram matrix, together with content loss enables style transfer while preserving the character's shape. However, if the input images differ greatly and the feature maps have significant discrepancies, the results are not satisfactory, and color differences appear in some instances. Additionally, when hair is prominently displayed in ID photos, the chance of a ghosting effect increases. To overcome these limitations, future studies should define a loss function that considers color differences and align the feature maps through facial landmark alignment.

Appendix A. Comparison of generated results with similar related works

In this section, we perform a comparative analysis alongside the results of two similar categories: neural style transfer and face swap.

Neural style transfer transfers the style of the entire style image to the content image; therefore, it cannot produce results such as transferring the Gat or changing the texture of the clothes. Face swap can extract only the facial features of the ID photo and generate an image on the Korean portrait; however, one of its limitations is that it superimposes the features from the ID photo onto the facial landmarks of the Korean portrait itself, which leads to a loss of the original identity. The aim of this study is quite different: it seeks to preserve all aspects of the upper-body shape and facial area found in the ID photo while also transferring the texture of the clothes, the overall color of the picture, and the Gat from the Korean portrait.

Fig. 13 illustrates the visualized results of the methods proposed by Ghiasi et al. [16], Face-Dancer [27], and this study. The results of Ghiasi et al. [16] show a transfer onto the ID photos based on the overall style distribution of the Korean portrait images. Face-Dancer [27] maintains all content outside the face region of the Korean portrait, and the internal facial features of the ID photo are spatially transformed and transferred onto the face of the Korean portrait. The generated result is not a transfer of the Korean portrait style to the ID photo but rather a projection of the ID photo onto the Korean portrait itself; hence, identity is not preserved.

Figure 13: Comparison of results between neural style transfer, face swap methods, and ours (Column 2: ours; Column 3: Ghiasi et al. [16]; Column 4: Face-Dancer [27]). Adapted with permission from reference [16], Copyright © 2017, arXiv; reference [27], Copyright © 2023, IEEE

Acknowledgement: The authors thank the undergraduate students, alumni of Kumoh National Institute of Technology, and members of the IIA Lab.

Funding Statement: This work was supported by the Metaverse Lab Program funded by the Ministry of Science and ICT (MSIT) and the Korea Radio Promotion Association (RAPA).

Author Contributions: Study conception and design: J. Si, S. Kim; data collection: J. Si; analysis and interpretation of results: J. Si, S. Kim; draft manuscript preparation: J. Si, S. Kim.

Availability of Data and Materials: Data supporting this study cannot be made available due to ethical restrictions.

Ethics Approval: The authors utilized ID photos of several individuals, all of which were obtained with their explicit consent.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
