
    Image to Image Translation Based on Differential Image Pix2Pix Model

Computers, Materials & Continua, 2023, Issue 10

Xi Zhao, Haizheng Yu* and Hong Bian

1 College of Mathematics and System Sciences, Xinjiang University, Urumqi, 830017, China

2 School of Mathematical Sciences, Xinjiang Normal University, Urumqi, 830017, China

ABSTRACT In recent years, Pix2Pix, a model within the domain of GANs, has found widespread application in the field of image-to-image translation. However, traditional Pix2Pix models suffer from significant drawbacks in image generation, such as the loss of important information features during the encoding and decoding processes, as well as a lack of constraints during the training process. To address these issues and improve the quality of Pix2Pix-generated images, this paper introduces two key enhancements. Firstly, to reduce information loss during encoding and decoding, we utilize the U-Net++ network as the generator for the Pix2Pix model, incorporating denser skip connections to minimize information loss. Secondly, to enhance constraints during image generation, we introduce a specialized discriminator designed to distinguish differential images, further enhancing the quality of the generated images. We conducted experiments on the facades dataset and the sketch portrait dataset from the Chinese University of Hong Kong to validate our proposed model. The experimental results demonstrate that our improved Pix2Pix model significantly enhances image quality and outperforms other models in the selected metrics. Notably, the Pix2Pix model incorporating the differential image discriminator exhibits the most substantial improvements across all metrics. An analysis of the experimental results reveals that the use of the U-Net++ generator effectively reduces information feature loss, while the Pix2Pix model incorporating the differential image discriminator enhances the supervision of the generator during training. Both of these enhancements collectively improve the quality of Pix2Pix-generated images.

KEYWORDS Image-to-image translation; generative adversarial networks; U-Net++; differential image; Pix2Pix

    1 Introduction

Image-to-image translation [1] is one of the most prominent tasks in the field of computer vision and has been widely applied in photography and image processing. The task aims to establish a mapping function between source domain images and target domain images, enabling conversion between them, such as black-and-white photos to color photos, semantic segmentation maps to real-world images, or realistic images to oil-painting-style images. Common image processing tasks such as image denoising and image super-resolution reconstruction also belong to the image-to-image translation task. Therefore, researchers have proposed various deep learning models to accomplish the image translation task.

In 2017, Isola et al. proposed the Pix2Pix framework [1] based on conditional generative adversarial networks [2], designed from the perspective of image-to-image translation tasks. It is the first universal framework for image-to-image translation and is compatible with almost all single-domain image translation tasks. The Pix2Pix model has many advantages, such as a simpler structure compared to other GANs, strong universality, and a stable training process. However, Pix2Pix also has some significant drawbacks. For example, the traditional Pix2Pix model is prone to losing information features during the encoding and decoding stages of image generation, which results in poor-quality generated images. Moreover, during the image translation process, it is easy to generate distorted images due to a lack of strong constraints.

Therefore, in this paper, we aim to address the above-mentioned shortcomings of the Pix2Pix model and propose the following improvements:

(1) To reduce information loss, we use the more densely connected U-Net++ as the generator instead of the original U-Net generator and improve the convolutional blocks of U-Net++ to make it more suitable for image translation tasks.

(2) We add a special discriminator to the original Pix2Pix network framework to enhance the constraints during the image translation process. This discriminator judges the differential image between the source domain image and the target domain image. We name it the differential image discriminator.

    2 Related Work

    2.1 Early Image-to-Image Translation Models

Early image-to-image translation models required the establishment of different domain-specific models for different tasks. For example, in 2001, Efros proposed the Image Quilting model [3], which is suitable for generating texture details on images; Chen et al. proposed the Sketch2Photo model in 2009 [4], which can convert sketches into realistic images; and Laffont et al. proposed the Transient Attributes model in 2014 [5], which can perform seasonal transformations on outdoor images. These models are only suitable for specific tasks and specific datasets, which greatly hindered the development of image-to-image translation. However, with the introduction of generative adversarial networks (GANs) by Goodfellow et al. in 2014 [6], these obstacles were fundamentally alleviated.

    2.2 The Development of GANs in the Field of General Image-to-Image Translation

In 2017, Isola et al. proposed a general framework for image-to-image translation called Pix2Pix [1]. Pix2Pix has shown superior performance on multiple image-to-image translation tasks, and its model structure is simpler than that of other image-to-image translation models, with stronger training reliability. The original Pix2Pix model was built upon conditional GANs, and its overall framework still consists of a generator and a discriminator. The network architecture of Pix2Pix is shown in the diagram below (Fig. 1).

Fig. 1 is drawn based on the image-to-image translation task of converting sketch portraits to realistic portraits. As shown in Fig. 1, the generator is a U-Net [7], with feature fusion performed in the intermediate layers using skip connections (as in ResNet). The discriminator is a Markov discriminator, which differs from the discriminators in other generative adversarial networks: a typical discriminator outputs a single value (real or fake) for the entire input image, while the Markov discriminator segments the whole image into several patches and discriminates the authenticity of each patch, outputting multiple values, each of which evaluates the authenticity of the corresponding patch.

    Figure 1:The structure diagram of the Pix2Pix
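To make the patch-wise judgment concrete, the following is a minimal PyTorch sketch of a Markov (PatchGAN-style) discriminator; the layer widths, depth, and use of instance normalization are illustrative assumptions rather than the exact configuration used in this paper.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Markov (PatchGAN-style) discriminator: one real/fake score per image patch."""
    def __init__(self, in_channels=6):  # source + target image concatenated along channels
        super().__init__()
        def block(c_in, c_out, norm=True):
            layers = [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(c_out))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        self.model = nn.Sequential(
            *block(in_channels, 64, norm=False),
            *block(64, 128),
            *block(128, 256),
            # final 1-channel map: each spatial cell scores one receptive-field patch
            nn.Conv2d(256, 1, kernel_size=4, stride=1, padding=1),
        )

    def forward(self, source, target):
        # condition on the source image by channel-wise concatenation
        return self.model(torch.cat([source, target], dim=1))

# A 256x256 pair yields a grid of patch scores rather than a single value.
scores = PatchDiscriminator()(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))
print(scores.shape)  # torch.Size([1, 1, 31, 31])
```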

The optimization function of the Pix2Pix model is as follows:

$$G^{*}=\arg\min_{G}\max_{D}\ \mathcal{L}_{cGAN}(G,D)+\lambda\,\mathcal{L}_{L1}(G)\tag{1}$$

where $\mathcal{L}_{cGAN}(G,D)$ is the loss function of the conditional GANs, and $\mathcal{L}_{L1}(G)$ is the added L1 norm, which enhances the convergence of the objective function, improves the quality of images generated by Pix2Pix, and speeds up the convergence rate. The specific expressions are as follows:

$$\mathcal{L}_{cGAN}(G,D)=\mathbb{E}_{x,y}[\log D(x,y)]+\mathbb{E}_{x}[\log(1-D(x,G(x)))]$$

$$\mathcal{L}_{L1}(G)=\mathbb{E}_{x,y}\left[\|y-G(x)\|_{1}\right]$$
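For reference, the loss terms above translate directly into PyTorch primitives. The following is a minimal sketch that assumes the discriminator returns per-patch logits and uses lambda = 100, the weight recommended in the original Pix2Pix paper; the helper names are our own.

```python
import torch
import torch.nn.functional as F

def generator_loss(disc, x, y, fake, lam=100.0):
    """Generator side of L_cGAN(G, D) plus lambda * L_L1(G), as in Eq. (1)."""
    pred_fake = disc(x, fake)                      # patch logits for the generated pair
    adv = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    l1 = F.l1_loss(fake, y)                        # L_L1(G) = E[||y - G(x)||_1]
    return adv + lam * l1

def discriminator_loss(disc, x, y, fake):
    """Discriminator side of L_cGAN(G, D): real pairs -> 1, generated pairs -> 0."""
    pred_real = disc(x, y)
    pred_fake = disc(x, fake.detach())             # do not backprop into the generator
    real = F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
    fake_ = F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake))
    return 0.5 * (real + fake_)
```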

Although the Pix2Pix model has shown significant progress compared to previous models in the field of image-to-image translation, there is still considerable room for improvement in the quality of the generated images.

After the proposal of the Pix2Pix model, GANs quickly advanced in the domain of image translation tasks. CycleGAN, introduced by Zhu et al. in 2017 [8], is a model capable of performing image-to-image translation without paired data. CycleGAN consists of two separate generative adversarial networks that respectively translate images from one domain to the other. To enhance the accuracy of the translations, CycleGAN incorporates a cycle consistency loss into the objective function, ensuring the consistency of the transformations. The introduction of CycleGAN has significantly alleviated the challenge of acquiring paired datasets.

In 2017, Liu et al. proposed the Unsupervised Image-to-Image Translation Network (UNIT) [9]. UNIT assumes the existence of a latent factor space in which input images are encoded into latent factors and subsequently decoded into target domain images during the translation process. Multiple cycle-consistency losses are incorporated into the objective function of UNIT to improve the stability of the training process and enhance the quality of the generated images.

In 2018, Huang et al. proposed Multimodal Unsupervised Image-to-Image Translation (MUNIT) [10], which enables translation across multiple target domains, whereas traditional image-to-image translation models only cater to single-modal tasks. MUNIT assumes the existence of two distinct factor spaces: a content factor space and a style factor space. The content factor space is shared across different domains, while the style factor space is domain-specific and not shareable. By combining a content factor with different style factors and passing the result through the decoder, images in different target domains can be generated.

Starting in 2018, the development of GANs in the domain of general image-to-image translation slowed down. However, with the emergence of new models such as GPT by OpenAI and Segment Anything by Meta in 2022, researchers have intensified their efforts in the study of Artificial General Intelligence (AGI). Consequently, we believe that further innovation and improvement of GANs for general image-to-image translation remain highly meaningful in the current context.

    2.3 The Development of GANs in the Field of Specific Image-to-Image Translation

Since 2018, GANs have exhibited remarkable progress in specific image translation tasks. Among these, StarGAN [11] stands out as a model specialized in facial transformation, facilitating translation across multiple target domains. StarGAN successfully overcomes the previous limitation of facial transformation models being confined to binary domain conversions, thereby enhancing the flexibility of translation to other domains. In 2019, StarGAN-V2 [12] was introduced, incorporating the concepts of style diversity and domain diversity to further enhance the quality of the transformations.

PSGAN, introduced in 2021 [13], represents a pioneering attempt to employ GANs for generating high-quality pan-sharpened images. PSGAN accepts panchromatic and multispectral images as input and maps them to the desired high-resolution images while selecting the optimal solution from various architectures and designs.

In 2022, Amirkolaee et al. proposed a specialized GAN model for medical image translation at the image level. This model effectively combines local and global features, resulting in commendable performance [14].

    2.4 The Development of GANs in the Field of Multimodal Image-to-Image Translation

Since 2019, multimodal tasks have become a mainstream direction in the development of deep learning models [15–18]. As a result, several multimodal models [19–23] have appeared in image-to-image translation. For example, in 2019, Park et al. proposed GauGAN [19], which transforms an image by inputting a semantic mask and its corresponding real image, producing impressive results. In 2022, Yan et al. proposed MMTrans [20], which is based on the Swin Transformer and GAN for medical image translation.

Since 2022, large-scale models based on diffusion models [24–26] and CLIP [27] have gained significant attention in the fields of text-to-image generation and image translation, causing a global sensation in AI art, with generated images surpassing those produced by GANs in terms of quality. However, in practical commercial applications, generative adversarial networks have matured significantly, and it is expected that they will remain one of the main models in the field of image generation for a considerable period. Therefore, research on generative adversarial networks in the field of image translation remains of great significance now and in the future.

    3 Models and Methods

    3.1 U-Net++Generator

The original Pix2Pix uses a U-Net network as the generator to generate images. The U-Net network was proposed by Ronneberger in 2015 and plays a crucial role in medical imaging and semantic segmentation. As shown in Fig. 2, the U-Net network consists of two parts. The first part is the encoder, composed of multiple downsampling layers; the input image is encoded by the encoder to extract deep information. The second part is the decoder, composed of multiple upsampling layers; the deep information extracted by the encoder is decoded by the decoder. To reduce information loss in this process, skip connections (as in ResNet) are used.

    Figure 2:The structure diagram of the U-Net
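As a minimal illustration of the encoder-decoder structure sketched in Fig. 2, the following PyTorch snippet shows a two-level U-Net-style network in which decoder features are fused with the same-scale encoder features through a skip connection; the layer sizes are assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Two-level encoder/decoder with a concatenation skip connection."""
    def __init__(self, ch=3, base=64):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(ch, base, 4, 2, 1), nn.LeakyReLU(0.2, True))
        self.down2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2, True))
        self.up1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU(True))
        # the last decoder stage receives upsampled features concatenated with encoder features
        self.up2 = nn.Sequential(nn.ConvTranspose2d(base * 2, ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.down1(x)                              # encoder features at 1/2 resolution
        e2 = self.down2(e1)                             # deepest features at 1/4 resolution
        d1 = self.up1(e2)                               # decode back to 1/2 resolution
        return self.up2(torch.cat([d1, e1], dim=1))     # skip connection: fuse same-scale features
```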

Although the U-Net network performs remarkably well in image-to-image translation tasks such as medical image segmentation, it has some significant drawbacks. For instance, researchers cannot determine the optimal depth of the network, and the skip connections used for feature fusion in U-Net impose an unnecessary constraint by only allowing connections between feature maps of the same scale. Due to this constraint, U-Net's feature fusion significantly affects the quality of the images generated by the network, leading to some unnecessary information loss. Therefore, if Pix2Pix continues to use U-Net as its generator, the quality of the generated images will inevitably be affected for the same reasons. Fortunately, effective solutions have been proposed to address these drawbacks.

In 2020, Zhou et al. proposed U-Net++ [28], a very powerful model that improves upon U-Net and effectively overcomes the shortcomings of the original network. As shown in Fig. 3 [28], the U-Net++ network uses denser and more clever skip connections, allowing feature fusion of information extracted at different scales and depths. Multiple decoders can fully or partially share the same encoder, which largely avoids unnecessary information loss.

Our improved Pix2Pix model abandons the traditional U-Net and adopts U-Net++ as the generator. However, during initial experimental verification, it was found that a large amount of noise appeared in the generated images after adversarial training with the original U-Net++ generator. After analysis, we believe this is because U-Net++ was proposed for the specialized task of medical image segmentation, which can also be viewed as a classification problem; the original U-Net++ therefore uses the VGG block, which is suitable for classification tasks but not for other image-to-image translation tasks. To make the generator more adaptable to a wider range of image-to-image translation tasks, the block used in this paper's U-Net++ consists of a strided convolution layer, a channel normalization layer, and a Dropout layer, which performs better than the VGG block on non-segmentation tasks.
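Below is a minimal sketch of such a modified convolutional block, assuming instance normalization as the channel normalization layer, a LeakyReLU activation, and a 0.5 dropout rate; these specific choices are not spelled out in the text and are only illustrative.

```python
import torch.nn as nn

def translation_block(c_in, c_out, dropout=0.5):
    """Block used in place of the VGG block: strided conv + channel normalization + dropout."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),  # strided conv downsamples
        nn.InstanceNorm2d(c_out),                                     # per-channel normalization
        nn.LeakyReLU(0.2, inplace=True),                              # assumed activation
        nn.Dropout2d(dropout),                                        # regularizes adversarial training
    )
```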

It is evident from Fig. 4 that the faces generated by U-Net++ before the improvement not only have more blurred facial features but also show some light blue noise. The improved U-Net++ generates higher-quality images in the facial image generation task, resulting in more natural and brighter faces with no noise interference.

    Figure 3:The structure diagram of the U-Net++

Figure 4: Comparison of generated images before and after the improvement of U-Net++. (a) Images generated by the original U-Net++ generator. (b) Images generated by the improved U-Net++ generator

    3.2 Differential Image Discriminator

To further strengthen the constraints on the generated images, we have added a differential image discriminator to the traditional Pix2Pix network structure. As shown in Fig. 5, the differential image here is obtained by taking the difference between the target domain image and the source domain image. This discriminator only discriminates between the two types of differential images. If it receives the differential image between the GroundTruth in the target domain and the input of the generator, the image is judged as true; if it receives the differential image between the fake target domain image generated by the generator and its input, the image is judged as false.
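In code, the two differential images judged by this discriminator are obtained by simple element-wise subtraction; the sketch below assumes that the source, target, and generated images share the same shape and value range.

```python
import torch

def differential_images(x, y, fake):
    """Form the two differential images fed to the differential image discriminator."""
    g_real = y - x      # real differential image: ground-truth target minus source input
    g_fake = fake - x   # fake differential image: generated target minus source input
    return g_real, g_fake
# D_differ is trained to judge g_real as true and g_fake as false.
```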

    The network structure of Pix2Pix with the addition of a differential image discriminator is shown in the following Fig.6.

Figure 5: Example of differential images. (a) Real target domain images. (b) Source domain image input to the generator. (c) The differential image obtained by subtracting the source domain image from the target domain image

If we consider the image generation process of the generator in the Pix2Pix network as a simple addition model:

$$G(x)=x+\hat{g},\qquad y=x+g$$

In this context, $x$ represents the input image to the generator, $G(x)$ is the output image generated by the generator, and $\hat{g}$ is the additional data that the generator needs to simulate. The differential image discriminator proposed in this article is used to discriminate between the generated additional data $\hat{g}$ and the real additional data $g$, thereby providing stronger guidance to the generator during the training process. Correspondingly, the optimization function used during the training of the Pix2Pix network also changes. The optimization function of the Pix2Pix network in this article is:

$$G^{*}=\arg\min_{G}\max_{D,\,D_{differ}}\ \mathcal{L}_{cGAN}(G,D)+\mathcal{L}_{differ}(G,D_{differ})+\lambda\,\mathcal{L}_{L1}(G)$$

$$\mathcal{L}_{differ}(G,D_{differ})=\mathbb{E}\left[\log D_{differ}(g)\right]+\mathbb{E}\left[\log\left(1-D_{differ}(\hat{g})\right)\right]$$

In this case, $G$ represents the generator, $D$ represents the regular discriminator, and $D_{differ}$ represents the differential image discriminator proposed in this paper. $x$ is the input source domain image, $y$ is the real target domain image, $G(x)$ is the fake target domain image generated by the generator, $g$ is the differential image between the real target domain image and the source domain image, and $\hat{g}$ is the differential image between the fake target domain image and the source domain image.

From the above formula, it can be seen that the objective optimization function of Pix2Pix based on the differential image discriminator differs significantly from the original Pix2Pix objective in Eq. (1). This can be seen as adding a strong constraint to the entire Pix2Pix network on top of the original objective, which more effectively guides the training of the generator and allows it to generate the required additional data more smoothly.
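One plausible way to implement this additional constraint is sketched below, reusing the binary cross-entropy formulation of the regular discriminator; the helper names, and the assumption that the differential term is simply added to the generator loss, are ours rather than details given in the paper.

```python
import torch
import torch.nn.functional as F

def differ_discriminator_loss(d_differ, x, y, fake):
    """Train D_differ to separate real (y - x) from generated (G(x) - x) differential images."""
    pred_real = d_differ(y - x)
    pred_fake = d_differ((fake - x).detach())
    real = F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
    fake_ = F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake))
    return 0.5 * (real + fake_)

def generator_differ_term(d_differ, x, fake):
    """Extra generator term: make the generated differential image look real to D_differ."""
    pred = d_differ(fake - x)
    return F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
```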

    4 Results

    4.1 Experimental Environment and Model Training Steps

We selected the open-source facades dataset and the CUHK Sketch Portrait Dataset for model training and validation. As shown in Table 1, the former consists of 606 images of buildings and their corresponding label maps, with 400 images used for training, 100 for testing, and 106 for validation. The latter consists of 188 facial images of CUHK students and their corresponding face sketches, with 168 images used for training, 10 for testing, and 10 for validation.

The model was trained on a Linux system with a 4-core Intel(R) Xeon(R) Gold 6330 CPU and one NVIDIA GeForce RTX 3090 GPU. The programming language used was Python 3.8, and the deep learning framework was PyTorch 1.8 with CUDA 11.1. The training process consisted of 100 epochs with a batch size of 8.

The pseudo-code for training the original Pix2Pix and the Pix2Pix with the U-Net++ generator is shown below:

    Algorithm 1:Pix2Pix training algorithm
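As a concrete counterpart to Algorithm 1, the following is a minimal PyTorch sketch of the adversarial training loop, assuming the `generator_loss` and `discriminator_loss` helpers sketched earlier; the Adam hyperparameters shown are common Pix2Pix defaults rather than values reported in this paper.

```python
import torch

def train_pix2pix(gen, disc, loader, epochs=100, lr=2e-4, lam=100.0, device="cuda"):
    """Alternate discriminator and generator updates for each (source, target) batch."""
    opt_g = torch.optim.Adam(gen.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(disc.parameters(), lr=lr, betas=(0.5, 0.999))
    for epoch in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            fake = gen(x)

            # 1) update the discriminator on real and generated pairs
            opt_d.zero_grad()
            discriminator_loss(disc, x, y, fake).backward()
            opt_d.step()

            # 2) update the generator with the adversarial term plus the L1 term
            opt_g.zero_grad()
            generator_loss(disc, x, y, fake, lam).backward()
            opt_g.step()
```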

    The pseudo-code for training our joint model is presented as follows:

    Algorithm 2:Our joint model training algorithm
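Similarly, a minimal sketch of the joint training loop corresponding to Algorithm 2, assuming the loss helpers from the earlier sketches; the update order (regular discriminator, then differential image discriminator, then generator) is an assumption consistent with the description in Section 3.2.

```python
import torch

def train_joint(gen, disc, d_differ, loader, epochs=100, lr=2e-4, lam=100.0, device="cuda"):
    """Joint model: regular discriminator plus the differential image discriminator."""
    opt_g = torch.optim.Adam(gen.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(disc.parameters(), lr=lr, betas=(0.5, 0.999))
    opt_dd = torch.optim.Adam(d_differ.parameters(), lr=lr, betas=(0.5, 0.999))
    for epoch in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            fake = gen(x)

            # 1) update the regular discriminator on (source, target) pairs
            opt_d.zero_grad()
            discriminator_loss(disc, x, y, fake).backward()
            opt_d.step()

            # 2) update the differential image discriminator on (y - x) vs (G(x) - x)
            opt_dd.zero_grad()
            differ_discriminator_loss(d_differ, x, y, fake).backward()
            opt_dd.step()

            # 3) update the generator against both discriminators plus the L1 term
            opt_g.zero_grad()
            g_loss = (generator_loss(disc, x, y, fake, lam)
                      + generator_differ_term(d_differ, x, fake))
            g_loss.backward()
            opt_g.step()
```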

    4.2 Evaluation Metrics

The evaluation metrics were chosen to assess the quality of the images produced by the generative model. Apart from directly showing the generated images, there is no single comprehensive objective metric for this purpose. Metrics such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) only calculate pixel-value differences between the real image and the generated image, and cannot reasonably compare overall structure, texture, brightness, color, and other features between the images. Therefore, for images generated by generative adversarial networks, researchers generally use the Inception Score (IS) [29] and the Fréchet Inception Distance (FID) [30], which approximate subjective evaluation by the human eye. Accordingly, this paper selects the Inception Score, the Fréchet Inception Distance, and Structural Similarity (SSIM) [31] as evaluation metrics.

The formula for the Inception Score is as follows:

$$IS=\exp\left(\mathbb{E}_{x}\left[D_{KL}\left(p(y|x)\,\|\,p(y)\right)\right]\right)\tag{8}$$

where $x$ is the generated image, $p(y|x)$ is the class probability distribution obtained by inputting $x$ into the Inception V3 classification network, and $p(y)$ is the marginal distribution obtained by averaging the predicted class probabilities over all generated images. $D_{KL}$ is the KL divergence from $p(y|x)$ to $p(y)$. From Eq. (8), it can be seen that IS only measures the distance between the generated images and the ImageNet dataset.
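Assuming the Inception V3 class probabilities p(y|x) have already been collected into an (N, 1000) NumPy array, Eq. (8) can be evaluated in a few lines; splitting the images into groups, as some implementations do, is omitted here for brevity.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Compute IS from an (N, classes) matrix of Inception V3 softmax outputs p(y|x)."""
    p_y = probs.mean(axis=0, keepdims=True)                 # marginal p(y) over generated images
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))  # KL(p(y|x) || p(y)) per image
    return float(np.exp(kl.sum(axis=1).mean()))             # Eq. (8): exp of the mean KL divergence
```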

The specific formula for the Fréchet Inception Distance (FID) is as follows:

$$FID(x,y)=\|\mu_{x}-\mu_{y}\|_{2}^{2}+\mathrm{Tr}\left(\Sigma_{x}+\Sigma_{y}-2\left(\Sigma_{x}\Sigma_{y}\right)^{1/2}\right)$$

where $x$ represents the generated images, $y$ represents the real images, and, assuming that both follow high-dimensional Gaussian distributions, their statistics are denoted as $(\mu_{x},\Sigma_{x})$ and $(\mu_{y},\Sigma_{y})$. From the above formula, it can be seen that FID measures the distance between two distributions. Therefore, the smaller the value of FID, the closer the generated image data distribution is to the real image data distribution. In addition, because FID is calculated from the features that appear in the image, it cannot capture the spatial relationships between features, so FID remains somewhat controversial.
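Given the feature statistics (mu_x, Sigma_x) and (mu_y, Sigma_y) estimated from Inception features, the Fréchet distance itself is straightforward to compute; the sketch below assumes the feature extraction step has already been done.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu_x, sigma_x, mu_y, sigma_y):
    """FID between two Gaussians fitted to Inception features of generated and real images."""
    diff = mu_x - mu_y
    covmean, _ = linalg.sqrtm(sigma_x @ sigma_y, disp=False)   # matrix square root of Sigma_x Sigma_y
    if np.iscomplexobj(covmean):                               # discard tiny imaginary parts
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_x + sigma_y - 2.0 * covmean))
```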

The specific formula for Structural Similarity (SSIM) is as follows:

$$SSIM(x,y)=\frac{\left(2\mu_{x}\mu_{y}+c_{1}\right)\left(2\sigma_{xy}+c_{2}\right)}{\left(\mu_{x}^{2}+\mu_{y}^{2}+c_{1}\right)\left(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2}\right)}$$

Here, $x$ and $y$ represent individual sampled images from the generated and real images, respectively, $\mu_{x}$ and $\mu_{y}$ are the estimated pixel value means of $x$ and $y$, $\sigma_{x}^{2}$ and $\sigma_{y}^{2}$ are the estimated variances, and $\sigma_{xy}$ is the estimated covariance. $c_{1}=(k_{1}L)^{2}$ and $c_{2}=(k_{2}L)^{2}$, where $L$ is the range of pixel values, $k_{1}=0.01$, and $k_{2}=0.03$. Therefore, the SSIM metric can only measure a single sampled image pair at a time. In this paper, the SSIM was computed for 100 generated images and their corresponding real images, and the mean value was taken.
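A minimal sketch of the global (single-window) SSIM following the formula above; practical implementations usually evaluate it over sliding windows and average, which this simplified version omits. The reported score is then the mean over the 100 generated/real pairs.

```python
import numpy as np

def ssim(x, y, L=255.0, k1=0.01, k2=0.03):
    """Global SSIM for one pair of grayscale images, following the formula above."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()   # covariance term sigma_xy
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Mean SSIM over a set of generated/real image pairs (names here are hypothetical):
# mean_ssim = np.mean([ssim(g, r) for g, r in zip(generated_images, real_images)])
```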

    4.3 Experimental Analysis

    4.3.1 Optimal Depth Exploration of U-Net++Generator

U-Net++ performs well when the depth is constrained. For example, for a certain task, the optimal depth of the U-Net model may be L4, but even when the depth of the U-Net++ model exceeds 4 (such as L6 or L8), its performance can still be comparable to or even better than the U-Net model at L4. However, when the optimal depth of the U-Net model is greater than the maximum depth of the U-Net++ model, the performance of U-Net++ will be worse than that of U-Net. Therefore, before conducting empirical analysis based on the U-Net++ generator in Pix2Pix, it is necessary to explore the maximum depth of U-Net++. We conduct exploration experiments using the facades dataset.

Based on Fig. 7, it can be visually perceived that as the depth of the U-Net++ generator increases, the quality of the generated images gradually improves. It is difficult to judge between the images generated by the generator with a depth of 6 and the one with a depth of 8, as the difference between them is very small and the improvement is marginal.

Based on Table 2, it can be seen that the three selected evaluation metrics generally improve as the depth of the U-Net++ generator increases. Among them, the images generated by the U-Net++ generator with a depth of 8 achieved the best performance in the IS and SSIM metrics, while the images generated by the U-Net++ generator with a depth of 6 achieved the best performance in the FID metric.

    Table 2:Performance metrics of U-Net++at different depths

In addition, by observing the changes in the metric values in the table, it can be seen that the greatest improvement in each metric occurred when the depth increased from 2 to 6, while the improvement was very small when the depth increased from 6 to 8, essentially reaching a plateau. The U-Net++ generators with depths of 6 and 8 are very close in all three evaluation metrics. Therefore, considering all the evaluation metrics, this study set the depth of the U-Net++ generator to 8.

    4.3.2 Experimental Results

For the task of semantic label-to-building image translation on the facades dataset, we select CycleGAN and the original Pix2Pix as the base models and compare them with Pix2Pix using only U-Net++ as the generator, Pix2Pix with only the differential image discriminator added, and Pix2Pix using both the U-Net++ generator and the differential image discriminator. The comparison results are shown in Fig. 8.

Figure 8: The validation results for the facades dataset. (a) GroundTruth: real target domain images. (b) Validation results generated by CycleGAN. (c) Validation results generated by Pix2Pix. (d) UNet++P2P: validation results generated by Pix2Pix using the U-Net++ generator. (e) Validation results generated by Pix2Pix using the differential image discriminator. (f) Validation results generated by Pix2Pix using both the U-Net++ generator and the differential image discriminator

As shown in Fig. 8, compared with CycleGAN and the original Pix2Pix, the improved models in this paper generate images with significant improvements in quality and clarity, whether it is Pix2Pix using only U-Net++ as the generator (d), Pix2Pix with only the differential image discriminator added (e), or the joint model proposed in this paper (f). Among them, the results of the CycleGAN network are the worst, because CycleGAN is an unsupervised model. In terms of image details, the images generated by Pix2Pix using only U-Net++ as the generator and by the joint model have more delicate and clearer details.

In the image-to-image translation task of transforming sketches into realistic portraits on the Sketch Portrait Dataset at the Chinese University of Hong Kong, the same base models were selected as in the previous experiment and compared with the improved Pix2Pix models.

Based on the results shown in Fig. 9, on the sketch portrait dataset the unsupervised CycleGAN model performs the worst, with significant color differences and uneven color distribution compared to the real images. Meanwhile, compared to the face images generated by the original Pix2Pix, the face images generated by the proposed model are closer to real images in terms of skin color and facial pose. The generated face images are also brighter and more realistic.

    Figure 9:The validation results for the Sketch Portrait Dataset at the Chinese University of Hong Kong

Using the two datasets, validation images were generated for each model, and the corresponding evaluation metrics were computed.

According to Table 3, our improved Pix2Pix models outperformed the original model. The Pix2Pix model with only U-Net++ as the generator had the largest improvement in the FID metric, decreasing by 43 points, but smaller improvements in the IS and SSIM metrics. The Pix2Pix model with only the differential image discriminator had the largest improvements in the IS and SSIM metrics, increasing by 17.1% and 3.9%, respectively, but a smaller improvement in the FID metric. The joint model had the most evenly distributed improvement across all three metrics, with its largest gains in the FID and SSIM metrics, decreasing by 38 points and increasing by 2.5%, respectively.

In terms of computational complexity, the CycleGAN model has the highest complexity, reaching 68.73 G, due to its architecture consisting of two generators and two discriminators. The next highest complexity is found in our proposed joint model, which reaches 38.09 G; this increase is mainly attributed to the U-Net++ generator. Both the Pix2Pix model with the differential image discriminator and the original Pix2Pix model have relatively low computational complexities, with only a minor difference between them.

According to Table 4, the three improved Pix2Pix models all show improvements over the original model in both evaluation metrics. Among them, using only U-Net++ as the generator in Pix2Pix yields the greatest improvement in the IS metric, with an increase of 28.3%. In the SSIM metric, the joint model shows the greatest improvement, with an increase of 3.84%. Comparing U-Net++ Pix2Pix, Differ Pix2Pix, and the joint model, the joint model shows the most balanced improvement across both metrics.

    Table 4:The index of each model on the Sketch Portrait Dataset

    4.3.3 Experimental Comparison of Two Types of Discriminators

From Section 4.3.2, it can be seen that the differential image discriminator does indeed improve the performance of Pix2Pix. However, the empirical analysis in Section 4.3.2 alone is not sufficient to determine whether the differential image works as expected. If an ordinary discriminator is added to the original Pix2Pix network framework, can it achieve the same level of performance improvement as the differential image discriminator-based Pix2Pix?

To verify this, we set up an additional Pix2Pix model with an extra regular discriminator, called the dual-discriminator Pix2Pix, and compared it with the Pix2Pix model based on the differential image discriminator and with the original Pix2Pix.

Fig. 10 shows the results of the discriminator comparison experiments on the facades dataset and the sketch portrait dataset. It can be seen intuitively that the dual-discriminator Pix2Pix does improve the quality of the generated images to some extent compared to the original Pix2Pix, and the images are also clearer, but the improvement is limited. The Pix2Pix based on the differential image discriminator generates images of the best quality: compared with the images generated by the dual-discriminator Pix2Pix, the distribution of brightness and darkness is more uniform, and the picture is more delicate.


Figure 10: Example of validation results for the comparative experiments. (a) GroundTruth: real target domain images. (b) Validation results generated by Pix2Pix. (c) DDPix2Pix: validation results generated by the dual-discriminator Pix2Pix. (d) Validation results generated by Pix2Pix using the differential image discriminator

According to Table 5, based on the IS and SSIM metrics, the Pix2Pix model with the differential image discriminator exceeds both the original Pix2Pix and the dual-discriminator Pix2Pix, achieving the best performance among the three. While the dual-discriminator Pix2Pix also outperforms the original Pix2Pix in all aspects, its improvement is significantly smaller than that of the differential image discriminator-based Pix2Pix. For example, on the facades dataset, compared to the original Pix2Pix, the dual-discriminator Pix2Pix improves by only 3.3% on the IS metric, far less than the 9.9% improvement of the differential image discriminator-based Pix2Pix. Similarly, on the SSIM metric, the dual-discriminator Pix2Pix improves by only 5.5%, which is also smaller than the 19.8% improvement of the differential image discriminator-based Pix2Pix. Therefore, it can be concluded that the performance gain brought by adding a differential image discriminator far exceeds that brought by adding a conventional discriminator. Based on the additive model assumption in Section 3.2, differential images play a critical role in guiding the discriminator to identify the generated content in the Pix2Pix model with a differential image discriminator.

    Table 5:The index of comparative experiments

    5 Conclusion

The structure of the original Pix2Pix model was improved to address some limitations in image-to-image translation tasks. U-Net++ was adopted as the generator, and a differential image discriminator was added. Experimental results demonstrated that the proposed improvements effectively enhanced the image generation quality of the Pix2Pix model, resulting in clearer and more detailed facial features. Furthermore, the experiments confirmed the crucial role of the differential image in the performance improvement brought by the differential image discriminator. However, our proposed joint model did not outperform the models with a single improvement on certain metrics, which may be attributed to unknown interactions introduced by combining the two improvements. Therefore, investigating how to mitigate these effects can be considered in future research work.

Acknowledgement: The authors gratefully acknowledge technical and financial support from the College of Mathematics and System Sciences, Xinjiang University, and the Xinjiang Natural Science Foundation of China. We acknowledge the data resources from Kaggle Inc. (https://www.kaggle.com/).

    Funding Statement:This work is supported in part by the Xinjiang Natural Science Foundation of China(2021D01C078).

Author Contributions: Study conception and design: Xi Zhao, Haizheng Yu; data collection: Xi Zhao; analysis and interpretation of results: Xi Zhao, Haizheng Yu, Hong Bian; draft manuscript preparation: Xi Zhao, Haizheng Yu. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:The data used in this study are all open-source and can be downloaded by readers from https://www.kaggle.com/.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
