
    Perceptual Image Outpainting Assisted by Low-Level Feature Fusion and Multi-Patch Discriminator

    Computers, Materials & Continua, 2022, Issue 6

    Xiaojie Li, Yongpeng Ren, Hongping Ren, Canghong Shi, Xian Zhang, Lutao Wang, Imran Mumtaz and Xi Wu

    1College of Computer Science, Chengdu University of Information Technology, Chengdu, 610225, China

    2Xihua University, Chengdu, 610039, China

    3University of Agriculture Faisalabad, Pakistan

    Abstract: Recently, deep learning-based image outpainting has made notable progress in the computer vision field. However, because they do not fully extract image information, existing methods often generate unnatural and blurry outpainting results. To solve this issue, we propose a perceptual image outpainting method that effectively takes advantage of low-level feature fusion and a multi-patch discriminator. Specifically, we first fuse the texture information in the low-level feature maps of the encoder and reuse these aggregated features together with the semantic (or structural) information of the deep feature maps, so that more sophisticated texture information can be exploited to generate more authentic outpainting images. We then introduce a multi-patch discriminator to enhance the generated texture; it judges the generated image from different feature levels and thereby impels our network to produce more natural and clearer outpainting results. Moreover, we further introduce perceptual loss and style loss to improve the texture and style of the outpainting images. Compared with existing methods, our method produces finer outpainting results. Experimental results on the Places2 and Paris StreetView datasets illustrate the effectiveness of our method for image outpainting.

    Keywords: Deep learning; image outpainting; low-level feature fusion; multi-patch discriminator

    1 Introduction

    Nowadays, artificial intelligence (AI) has ushered in a new big-data era. The advancement of AI has also promoted deep learning technology, which is now widely employed in many fields [1–4], especially in image processing. By combining deep learning with image processing methods, an AI system can acquire more of the available environmental information and make correct decisions. For example, applying deep learning-based image processing to pattern recognition and automatic control for efficient analysis and real-time response is considered a promising prospect.

    Recently, deep learning-based methods [5–8] have been widely applied to the image inpainting task and have made remarkable achievements. Image inpainting, a common image editing task, aims to restore damaged images and remove objects. Existing image inpainting methods can mainly be divided into two groups: non-learning methods and learning-based methods. The former group is composed of diffusion-based [9,10] and distribution-based approaches [11,12]. Concretely, the diffusion-based approaches use texture synthesis to fill the unknown parts, searching or collecting suitable pixels from the known regions and diffusing them into the unknown regions. These methods can generate meaningful textures for the missing regions. However, they often produce blurry and distorted contents when confronted with a large hole or sophisticated textures, because they fail to capture the semantic information of images. On the other hand, the distribution-based approaches utilize the whole dataset to obtain the data distribution information and finally generate inpainting images. Similarly, because they only extract low-level pixel information, they cannot produce fine textures. By contrast, learning-based methods [13–15] generally use convolutional neural networks to extract the semantic information of images, so they can achieve natural, realistic and plausible inpainting results.

    Compared with image inpainting, image outpainting has been studied relatively little in the image processing field. It uses the known parts of an image to recursively extrapolate a complete picture. Moreover, image outpainting faces a greater challenge because less neighboring pixel information is available, and the outpainting model must produce plausible contents and vivid textures for the missing regions. In practice, image outpainting can be applied to panorama synthesis, texture synthesis and so on. The generative adversarial network (GAN) [16,17] is commonly employed in image outpainting, and it is suitable for unsupervised learning on complicated distributions. GAN, as a generative model, trains its generator and discriminator jointly in an adversarial fashion: the generator minimizes the loss function, and the discriminator maximizes it. Since adversarial training drives the generator to capture the real data distribution, the network can generate fine and reasonable images.

    Existing image outpainting methods generally fail to effectively extract image information (such as structure and texture information), resulting in unclear and unnatural outpainting results. To generate more semantically reasonable and visually natural outpainting results, we present a perceptual image outpainting method assisted by low-level feature fusion and a multi-patch discriminator (LM). It is known that low-level feature maps with higher resolution capture plentiful detail information (such as location and texture information) but contain less semantic information, whereas high-level feature maps capture more semantic information but perceive less detail. Therefore, we first fuse the texture information in the low-level feature maps of the encoder, and simultaneously incorporate these aggregated, reusable features with the semantic (structural) information of the deep feature maps by element-wise addition, so that we can utilize more sophisticated texture information to generate more authentic outpainting images. Moreover, we introduce a multi-patch discriminator to enhance the generated texture information and comprehensively judge the reality of outpainting images. Its outputs are designed as an n × n tensor, which is equivalent to judging n × n patches of an image and thereby perceives a relatively larger receptive field. Our multi-patch discriminator therefore effectively judges the generated image from different feature levels and indirectly promotes the generator to grasp the real distribution of the input data. This impels our network to produce more natural and clearer outpainting images.

    Furthermore, we employ perceptual loss [18] to extract high-level feature information from both the generated images and the ground truths, so that our network can constrain the texture generation of the outpainting regions. Meanwhile, style loss [19] is employed to estimate the relevance of different features extracted by the pre-trained Visual Geometry Group 19 (VGG19) network [20], and we further compute a Gram matrix to capture the global style of the outpainting images. In this way, our model can generate realistic and consistent outpainting results.

    In general,our contributions are as follows:

    (1) We effectively fuse and reuse the texture information of the low-level feature maps of the encoder and simultaneously incorporate these aggregated, reusable features with the semantic (structural) information of the deep feature maps in the decoder, which allows more sophisticated texture information to be used to generate more authentic outpainting results with finer texture.

    (2) We propose two multi-patch discriminators to comprehensively judge the generated images from different feature levels, which further enlarges the receptive field of the discriminator network and finally improves the clarity and naturalness of the outpainting results.

    The rest of the paper is organized as follows: Section 2 presents related image outpainting works. The detailed theory of our proposed method is described in Section 3. Section 4 introduces our experimental results, including qualitative and quantitative comparisons with existing methods. In the last section, we present conclusions and future work.

    2 Related Work

    In the early days, image inpainting filled the missing areas through non-learning methods, including patch-based [21–23] and diffusion-based methods [24–26]. Caspi et al. [27] use bidirectional spatial similarity to maintain the information of the input data, which can be applied to retargeting or image inpainting. Nonetheless, the spatial similarity estimation costs a large amount of computational resources. Barnes et al. [28] propose the PatchMatch method, which uses a fast nearest-neighbor estimation to match reasonable patches, thereby saving expensive computation. These methods all assume that the missing contents come from the known regions, so they search and copy patches from the known areas to fill the unknown areas. In this way, they can generally produce meaningful contents for the missing regions. However, they often perform poorly for complicated structures or larger holes, because they only capture low-level image information such as non-learning statistics and simple pixel information.

    Context Encoder (CE) [29] first applied a deep learning-based and GAN-based method to the image inpainting task. It presents a new unsupervised learning method based on contextual pixel prediction, and it can generate realistic contents according to the known pixel information. Its overall network is an encoder-decoder architecture: the encoder maps the missing image into a latent space, and the decoder then utilizes these latent features to generate the missing contents. A channel-wise fully-connected layer is introduced to connect the encoder and decoder. In addition, both a reconstruction loss and an adversarial loss are used to train the CE model to realize sharp inpainting results. In this way, CE can simultaneously obtain both the structural representation and the semantic information of images. However, owing to the limitation of the fully-connected layer in the network, it fails to produce clear inpainting results.

    Chao et al. [30] propose a multi-scale neural patch synthesis algorithm, which is composed of a content network and a texture network. It can generate fine content and texture by training the two networks jointly. The content network is used to fill in contents for the missing areas, while the texture network is used to further improve the texture of the outputs generated by the content network. Furthermore, in the texture network, a pre-trained VGG network is employed to force the patches in the inpainting regions to be perceptually similar to the patches in the known regions. Since the texture of the missing regions is fully taken into account, the network performs well at producing fine structures. However, the multi-scale learning costs a lot of computational resources, so this method has significant limitations.

    Then, Iizuka et al. [31] present a novel image inpainting method which guarantees both local and global consistency of the inpainted images. More specifically, it uses a local discriminator and a global discriminator to realize fine inpainting results. The local discriminator judges the inpainted areas to achieve local detail consistency, while the global discriminator judges the whole image to ensure a consistent overall structure. Thanks to ensuring the consistency of local and global details, the model can produce much finer inpainting results. Moreover, it also achieves more flexible inpainting without restrictions on image resolution or the shape of the missing region.

    To overcome the influence of invalid pixels in the missing regions, Liu et al. [32] create a partial convolution for irregular image inpainting. In this method, they use a masked and renormalized convolution to force the network to focus on the valid pixels of the input images. Moreover, they also present a method to automatically update the mask for the next convolutional layer. In this way, the influence of irrelevant information is reduced to some degree, which helps the network process the input image more effectively. Ultimately, they realize natural and clear inpainting results.

    Zheng et al. [33] propose a pluralistic image inpainting method (PICnet), which can produce multiple outputs for one input image. Most image inpainting methods output only one result, owing to the limitation of the single instance label provided by the ground truth. To let the model output diverse inpainting results, they introduce a novel probabilistic framework to settle the problem. In addition, their network architecture contains two parallel paths: a reconstructive path and a generative path. Concretely, the reconstructive path is used to obtain the distribution information of the missing regions and finally reconstructs a complete image. The generative path, on the other hand, utilizes the distribution information from the reconstructive path to guide the generation of the missing contents. By sampling from a variational auto-encoder (VAE) (another generative model), the network can produce pluralistic inpainting images. Owing to the consideration of the prior distribution of the missing regions, they not only generate high-quality results but also create diversity among the outputs.

    Mark et al. [34] recently apply GAN to image outpainting for painting outside the box (IOGnet). They employ a deep learning-based GAN approach to outpaint panoramic contents at the sides of missing images, and finally expand the parts beyond the border recursively. Furthermore, they adopt a three-stage strategy to stabilize the training process. In the first stage, the generator is trained by the L2 distance between the generated images and the ground truths. In the second stage, the discriminator is trained alone according to the adversarial loss. In the last stage, the generator and discriminator are trained jointly through the adversarial loss. Finally, the model can even generate a result five times larger than the original input. However, obscure contents appear in the outpainted parts, so this work still needs improvement in some aspects.

    3 Perceptual Image Outpainting Assisted by Low-Level Feature Fusion and Multi-Patch Discriminator(LM)

    To produce high-quality outpainting results, we present a simple perceptual image outpainting method assisted by low-level feature fusion and a multi-patch discriminator. Moreover, we simultaneously employ both perceptual loss and style loss to improve the texture and style of the outpainting images. The network architecture is introduced in Subsection 3.1, and the remaining subsections introduce the principles of our method.

    3.1 Network Architecture

    As shown in Fig. 1, a simple GAN-based network, mainly consisting of a generator and a discriminator, is used in our method. Firstly, the encoder in the generator maps the input images (both I_m and I_c) into a latent feature space. We fuse the texture information in the low-level feature maps of the encoder, and simultaneously incorporate these aggregated, reusable features with the semantic (structural) information of the deep feature maps by element-wise addition in the decoder. This allows more sophisticated texture information to be utilized to generate more authentic outpainting images. Furthermore, the inference module (yellow block) connects the encoder with the decoder to exploit the latent features more effectively. In fact, the inference module fulfils the same function as a VAE [35]: it computes the mean and variance of the latent features in order to sample useful features. Finally, to generate more realistic results, we feed the outpainting image into the pre-trained VGG [36] network to obtain feature information, which is used to compute the perceptual loss and style loss. In addition, we use the Least Squares Generative Adversarial Network (LSGAN) loss [37] to stabilize the training of our model. We also present a multi-patch discriminator to enhance the generated texture information; it effectively judges the generated image from different feature levels and impels our network to produce more natural and clearer outpainting images.
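    As a rough illustration of such a VAE-style inference module, the following PyTorch sketch predicts a mean and log-variance from encoder features and samples a latent feature map by reparameterization; the channel sizes and 1 × 1 convolutions are assumptions made for illustration, not the authors' exact configuration.

    import torch
    import torch.nn as nn

    class InferenceModule(nn.Module):
        """VAE-style inference block (Subsection 3.1): predicts a mean and
        log-variance from encoder features and samples latent features by
        reparameterization. Layer sizes are illustrative assumptions."""
        def __init__(self, channels=256, latent_channels=128):
            super().__init__()
            self.to_mu = nn.Conv2d(channels, latent_channels, kernel_size=1)
            self.to_logvar = nn.Conv2d(channels, latent_channels, kernel_size=1)

        def forward(self, feat):
            mu = self.to_mu(feat)
            logvar = self.to_logvar(feat)
            std = torch.exp(0.5 * logvar)
            z = mu + std * torch.randn_like(std)  # reparameterization trick
            return z, mu, logvar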

    Figure 1: Overview of our network architecture

    3.1.1 Generator

    Fig. 1 shows that our network structure consists of two paths: the yellow path at the top and the blue path at the bottom. Note that the former path aims to reconstruct inpainting images and the latter path aims to generate outpainting results. During training, the masked image I_m and its complement I_c are concatenated by a channel-wise operation so that both can be processed simultaneously. We then split the output features into two separate inference modules (yellow blocks) to compute the mean and variance of their latent features, which are used to sample latent features. To deal with both latent features simultaneously, we concatenate both sampled features and feed them into the decoder. To grasp more sophisticated texture information and generate more authentic outpainting images, when the decoder processes the latent features we fuse the texture information in the low-level feature maps of the encoder, and simultaneously incorporate these aggregated, reusable features with the semantic (structural) information of the deep feature maps by element-wise addition in the decoder. This is formally defined as:
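    Assuming the fusion operates exactly as described below (the (i-1)-th layer's features are down-sampled and concatenated channel-wise with the i-th layer's features), a plausible form of Eq. (1) is:

    F_i = E_i(I) \oplus C_D\big(E_{i-1}(I)\big)    (1)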

    where F_i is the i-th layer's aggregated features, I denotes the input image, E_i is the i-th layer of the encoder, ⊕ denotes channel-wise concatenation, and C_D is the down-sampling operation. Namely, we first down-sample the (i-1)-th layer's features and concatenate the down-sampled features with the i-th layer's features by channel-wise concatenation; therefore, F_i contains the aggregated feature information of the (i-1)-th and i-th layers (see Eq. (1)). We then pass the aggregated features F_i into the decoder via element-wise addition, so the network can generate more sophisticated texture for the generated images. Finally, we produce both the reconstructed image I_rec and the generated image I_gen.
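    A minimal PyTorch sketch of this fusion step is given below; the use of bilinear interpolation for C_D, the 1 × 1 projection convolution, and the channel counts are assumptions made only for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LowLevelFusion(nn.Module):
        """Sketch of the low-level feature fusion of Eq. (1): down-sample the
        (i-1)-th encoder feature map, concatenate it channel-wise with the i-th
        feature map, and project the result so it can be added element-wise to
        the matching decoder feature map (assumed to share its spatial size)."""
        def __init__(self, prev_ch, cur_ch, dec_ch):
            super().__init__()
            self.proj = nn.Conv2d(prev_ch + cur_ch, dec_ch, kernel_size=1)

        def forward(self, feat_prev, feat_cur, dec_feat):
            # C_D: down-sample the (i-1)-th layer's features to the i-th layer's size
            feat_prev_ds = F.interpolate(feat_prev, size=feat_cur.shape[-2:],
                                         mode='bilinear', align_corners=False)
            fused = torch.cat([feat_prev_ds, feat_cur], dim=1)  # channel-wise concatenation
            fused = self.proj(fused)
            return dec_feat + fused  # element-wise addition into the decoder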

    3.1.2 Discriminator

    We design multi-patch discriminators (both Discriminator 1 and Discriminator 2) to enhance the generated texture information; they effectively judge the generated images I_gen and I_rec from different feature levels and impel our network to produce more natural and clearer outpainting images. Formally, the generator's adversarial objective is defined as:
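    Assuming the LSGAN objective mentioned in Subsection 3.1, with the sum taken over the last three discriminator layers described below, a plausible form of Eq. (2) is:

    L_{ad}^{g} = \sum_{i} \mathbb{E}\big[ (D_i(I_{gen}) - 1)^2 \big]    (2)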

    where L_{ad}^{g} is the generator's adversarial loss, D_i is the i-th layer of the discriminator, and I_gen is the generated image. Specifically, we judge whether the output patches from the last three layers of the discriminator are real or fake. With this multi-patch information, the discriminator can effectively reinforce its ability to judge the output patches (see Eq. (2)), and can therefore comprehensively judge whether an input image is real or fake. Finally, the real data distribution is grasped by the generator, and the model can produce finer outpainting results.
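    The following PyTorch sketch illustrates one way such a multi-patch discriminator could be structured, with the last three stages each emitting an n × n patch map of real/fake scores; the depth, channel widths and kernel sizes are assumptions, not the authors' exact design.

    import torch.nn as nn

    class MultiPatchDiscriminator(nn.Module):
        """Sketch of a multi-patch discriminator: the last three convolutional
        stages each emit an n x n patch map, so the image is judged at several
        receptive-field sizes."""
        def __init__(self, in_ch=3, base=64):
            super().__init__()
            self.stem = nn.Sequential(
                nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            )
            self.stage3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 4, 2, 1),
                                        nn.LeakyReLU(0.2, inplace=True))
            self.stage4 = nn.Sequential(nn.Conv2d(base * 4, base * 8, 4, 2, 1),
                                        nn.LeakyReLU(0.2, inplace=True))
            self.stage5 = nn.Conv2d(base * 8, base * 8, 4, 2, 1)
            # one patch-level head per judged stage
            self.head3 = nn.Conv2d(base * 4, 1, 3, 1, 1)
            self.head4 = nn.Conv2d(base * 8, 1, 3, 1, 1)
            self.head5 = nn.Conv2d(base * 8, 1, 3, 1, 1)

        def forward(self, x):
            f = self.stem(x)
            f3 = self.stage3(f)
            f4 = self.stage4(f3)
            f5 = self.stage5(f4)
            # n x n patch score maps from the last three stages
            return [self.head3(f3), self.head4(f4), self.head5(f5)]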

    3.2 Perceptual Loss and Style Loss

    To further improve the texture and style of the outpainting images and generate more realistic results, we simultaneously introduce both perceptual loss and style loss. The perceptual loss extracts semantic (structural) feature information via the pre-trained VGG19 network. By constraining the L1 distance between these features, it forces the outpainting results to be perceptually close to the ground truths. Formally, the perceptual loss is defined as:
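    Based on the L1 constraint on VGG feature maps described here, a plausible form of the perceptual loss is:

    L_{perc} = \sum_{i} \big\| \Phi_i(I_{gen}) - \Phi_i(I_{gt}) \big\|_1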

    where I_gen is the generated image, I_gt is the ground truth, and Φ_i(·) denotes the i-th layer's feature map of the VGG network. The perceptual loss measures the difference between corresponding features extracted by VGG. The features in a convolutional neural network generally represent the semantic information of images, such as low-level textures or high-level attributes. By penalizing features that are dissimilar from the corresponding VGG features of the ground truth, the outpainting parts can be improved to some degree. Thanks to applying the perceptual loss in the training of the GAN, the generator is gradually tuned to produce finer outputs.

    The style loss aims to capture the general style of the generated images and ground truths. Concretely, to capture the overall style, we calculate the Gram matrices of their features extracted by the VGG network. As a result of the L1-norm constraint on the corresponding Gram matrices, the outpainting images gradually approach a realistic style. Analogously, the style loss is defined as follows:
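    Based on the L1 constraint on the Gram matrices described here, a plausible form of the style loss is:

    L_{style} = \sum_{i} \big\| G\Phi_i(I_{gen}) - G\Phi_i(I_{gt}) \big\|_1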

    where GΦ_i(·) denotes the Gram matrix of the i-th layer's features extracted by the VGG network. In fact, the Gram matrix is the covariance matrix of the feature vectors in Euclidean space, and it estimates the correlation between pairs of feature vectors. A convolutional neural network (CNN) extracts the low-level texture information of images in its shallow layers, while in the deeper layers it obtains high-level semantic information; the genuine appearance of an image depends on the combination of low-level and high-level information. Therefore, the Gram matrix can be used to measure the correlation of different features, capturing the important essence of images. Since we force the style of the outpainting images to be similar to the style of the ground truths, our model can produce outpainting results with a natural and authentic appearance.
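    The sketch below shows how both losses can be computed in PyTorch with a frozen torchvision VGG19; the selected layer indices are assumptions, and the usual ImageNet input normalization is omitted for brevity.

    import torch
    import torch.nn as nn
    from torchvision import models

    def gram_matrix(feat):
        """Gram matrix of a feature map: correlations between channel responses."""
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)
        return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

    class VGGPerceptualStyleLoss(nn.Module):
        """Perceptual and style losses as L1 distances between VGG19 feature maps
        and between their Gram matrices. Layer indices are illustrative."""
        def __init__(self, layer_ids=(3, 8, 17, 26)):
            super().__init__()
            vgg = models.vgg19(pretrained=True).features.eval()
            for p in vgg.parameters():
                p.requires_grad = False
            self.vgg = vgg
            self.layer_ids = set(layer_ids)

        def forward(self, gen, gt):
            perc, style = 0.0, 0.0
            x, y = gen, gt
            for i, layer in enumerate(self.vgg):
                x, y = layer(x), layer(y)
                if i in self.layer_ids:
                    perc = perc + nn.functional.l1_loss(x, y)
                    style = style + nn.functional.l1_loss(gram_matrix(x), gram_matrix(y))
            return perc, style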

    3.3 Other Loss

    Moreover, we also apply the losses from PICnet [33]. Formally,
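    The exact weighting used by the authors was not recoverable here; schematically, the PICnet objective combines the KL, reconstruction and adversarial terms of the two paths, for example as:

    L_{PIC} = \lambda_{KL}\,(L_{KL}^{r} + L_{KL}^{g}) + \lambda_{app}\,(L_{app}^{r} + L_{app}^{g}) + \lambda_{ad}\,(L_{ad}^{r} + L_{ad}^{g})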

    where r denotes the reconstructive path (see the yellow path in Fig. 1) and g denotes the generative path (see the blue path in Fig. 1). L_KL is the KL loss, which constrains the distributions of both the reconstructed and the generated images; L_app is the reconstruction loss; and L_ad is the adversarial loss of the GAN.

    In our model, the total loss is defined as follows:
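    Assuming λ1 weights the perceptual loss and λ2 weights the style loss (consistent with the values reported below), the total objective plausibly takes the form:

    L_{total} = L_{PIC} + \lambda_{1}\,L_{perc} + \lambda_{2}\,L_{style}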

    where λ1 = 0.1 and λ2 = 250.0 in our experiments.

    4 Experimental Results

    4.1 Dataset

    We evaluate our method on both the Places2 [38] and Paris StreetView [39] datasets. Places2 is a natural-scene dataset that is widely used in image outpainting; we divided it into a training set of 308,500 images and a test set of 20,000 images. Paris StreetView is a building-view dataset; we divided it into a training set of 14,900 images and a test set of 100 images. All images are resized to 128 × 128 and normalized to [0,1]. Normalizing the inputs to [0,1] accelerates the training of the model and unifies the statistical distribution of the samples.
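    A minimal torchvision preprocessing pipeline consistent with this setup might look as follows (ToTensor already scales pixel values to [0, 1]):

    from torchvision import transforms

    # Resize every image to 128 x 128 and scale pixel values to [0, 1].
    preprocess = transforms.Compose([
        transforms.Resize((128, 128)),
        transforms.ToTensor(),  # converts a PIL image to a float tensor in [0, 1]
    ])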

    4.2 Experimental Setup

    All experiments are implemented with the PyTorch framework on Ubuntu 16.04, Python 3.6.9, PyTorch 1.2.0, and an RTX 2080Ti GPU. We use a batch size of 64 and the Adam optimizer with an initial learning rate of 0.00001 to train our network, and the orthogonal method is used to initialize the model parameters. Although the network consists of two paths, it is trained in an end-to-end manner. We also employ the LSGAN loss to stabilize training. In each training iteration, we update the discriminator once and then update the generator once to complete the adversarial training. The test inputs are masked images with regular missing regions (centered square masks or long strips). Note that, during testing, we only use the bottom blue path to output the final results. Training our model took 6 days on Places2 and 5 days on Paris, while PICnet took 7 days and 6 days respectively, which indicates that our method is more efficient in training time.
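    A sketch of this optimization setup in PyTorch is shown below; the Adam betas are library defaults, since the paper does not state them, and the generator/discriminator arguments stand for the networks of Section 3.

    from torch import nn, optim

    def init_orthogonal(m):
        """Orthogonal initialization for convolutional and linear layers."""
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
            nn.init.orthogonal_(m.weight)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

    def build_optimizers(generator: nn.Module, discriminator: nn.Module):
        """Adam optimizers with the initial learning rate reported in Section 4.2."""
        generator.apply(init_orthogonal)
        discriminator.apply(init_orthogonal)
        opt_g = optim.Adam(generator.parameters(), lr=1e-5)
        opt_d = optim.Adam(discriminator.parameters(), lr=1e-5)
        return opt_g, opt_d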

    4.3 Evaluation Metrics

    We compare our method (PICnet-SP-LM) with PICnet and its variants (PICnet-S, PICnet with style loss, and PICnet-SP, PICnet with style loss and perceptual loss) in both qualitative and quantitative aspects. In the qualitative aspect, we visually judge whether the outpainting parts are good or bad. In the quantitative aspect, six metrics are used to measure the performance of the different methods:

    (1) The Inception Score (IS) [40] is a common quantitative metric used to judge the quality of generated images. GANs that can generate clear and diverse images are considered good generative models, and IS measures the clarity and diversity of images. Formally, IS is defined as follows:
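    Assuming the standard Inception Score definition, consistent with the symbols explained below, the metric can be written as:

    IS(g) = \exp\Big( \mathbb{E}_{y \sim g}\big[ D_{KL}\big( p(z \mid y) \,\|\, p(z) \big) \big] \Big)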

    where g is the generator, y denotes the generated image, and z is the label predicted by the pre-trained Inception V3 model. A higher IS score signifies that the generated images are clearer and more diverse.

    (2) Another metric usually used to measure the quality of GANs is the Fréchet Inception Distance (FID) [41]. FID estimates the distance between the feature vectors of the generated images and the ground truths in the same domain. Formally:
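    Assuming the standard Fréchet Inception Distance between Gaussian fits of the two feature distributions, the metric can be written as:

    FID(x, y) = \| \mu_x - \mu_y \|_2^2 + \mathrm{Tr}\big( \Sigma_x + \Sigma_y - 2(\Sigma_x \Sigma_y)^{1/2} \big)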

    where x denotes the ground truth and y denotes the generated image, μ is the mean of the feature vectors, and Σ is their covariance matrix. A lower FID score means that the generated images have higher quality in terms of clarity and diversity.

    (3) The structural similarity (SSIM) evaluates image quality based on the luminance, contrast and structure of two images. Formally,
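    Assuming the standard SSIM formulation (c_1 and c_2 are the usual small stabilizing constants, which are not defined in the text below), the metric can be written as:

    SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}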

    where x and y denote the ground truth and the generated image respectively, μ_x is the mean value of x, σ_x^2 denotes the variance of x, and σ_xy denotes the covariance of x and y. A higher SSIM means the generated images possess finer luminance, contrast and structure.

    (4) The peak signal-to-noise ratio (PSNR) is a full-reference metric used to measure the degree of image distortion. Formally,
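    Assuming the standard PSNR definition over the mean squared error, the metric can be written as:

    PSNR = 10 \cdot \log_{10}\!\left( \frac{MAX^2}{MSE} \right)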

    where MAX denotes the maximum pixel value of the image, and MSE is the mean squared error. A higher PSNR score signifies that the generated images are more natural.

    (5) The L1 loss measures the pixel-wise difference by computing the L1 distance. Formally,
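    Consistent with the symbols defined below, the pixel-wise L1 loss can plausibly be written as:

    L_1(x, y) = \frac{1}{m} \sum_{(i,j)} \big| x_{(i,j)} - y_{(i,j)} \big|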

    where x and y denote the ground truth and the generated image respectively, (i, j) denotes a position in the image, and m signifies the total number of elements. A lower L1 loss means the generated images are closer to the ground truths in terms of pixel-wise difference.

    (6) The RMSE is used to measure the deviation between the generated image and the ground truth. Formally,
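    Consistent with the symbols defined below, the RMSE can plausibly be written as:

    RMSE(x, y) = \sqrt{ \frac{1}{m} \sum_{(i,j)} \big( x_{(i,j)} - y_{(i,j)} \big)^2 }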

    where x and y denote the ground truth and the generated image respectively, (i, j) denotes a position in the image, and m signifies the total number of elements. Similarly, a lower RMSE means the generated images are closer to the ground truths.

    4.4 Qualitative Results

    Figs. 2 and 3 illustrate the qualitative results of the different methods with 64 × 64 valid-pixel inputs on the different datasets. It is easy to see that the original PICnet generates blurry textures and distorted structures in the outpainting areas (see Fig. 2c). To address these problems, we first introduce the perceptual loss and style loss. With the style loss, PICnet-S (PICnet with style loss) improves the distorted structures, and the coarse results become much smoother (see Fig. 2d). Furthermore, we use both style loss and perceptual loss in PICnet (denoted PICnet-SP) to improve the outpainting results; the details can be seen in Fig. 2e. Compared with the results of PICnet-S, PICnet-SP performs better on Places2: with both losses, the results are more realistic and more natural in general. To further improve the quality of the outpainting images, we fuse the texture information in the low-level feature maps of the encoder and simultaneously incorporate these aggregated, reusable features with the semantic (structural) information of the deep feature maps by element-wise addition in the decoder, and at the same time we incorporate the multi-patch discriminator into the network. This allows more sophisticated texture information to be utilized to generate more authentic outpainting images (see Fig. 2f), and our PICnet-SP-LM achieves a more authentic outpainting result. We also observe a similar effect on the Paris dataset. In Fig. 3c, the vanilla PICnet method produces poor results filled with fuzzy contents and shadows. However, the outpainting parts are improved considerably when we add the style loss alone or both the style loss and the perceptual loss (see Figs. 3d and 3e): the shadows largely disappear and the blurry textures become clearer. Fig. 3f, with the low-level feature fusion and the multi-patch discriminator, is better than the former methods. This shows that low-level feature fusion and the multi-patch discriminator help the network generate higher-quality outpainting images.

    Figure 2: Qualitative results of different methods with 64 × 64 valid pixels' input on the Places2 dataset

    To further evaluate the effectiveness of our method, we set 128 × 64 valid pixels as the input of the network (see Figs. 4b and 5b). Figs. 4 and 5 show the qualitative results of the different methods on Places2 and Paris, respectively. From Figs. 4c and 5c, the original PICnet produces poor outpainting results with apparent boundaries and warped structures. Nonetheless, these problems are greatly reduced when we use the perceptual loss and style loss: the structures become more natural and clearer (see Figs. 4d, 4e, 5d and 5e). Moreover, Figs. 4f and 5f, generated by our PICnet-SP-LM, achieve a better effect than the others. These results once again demonstrate that both the low-level feature fusion and the multi-patch discriminator help the network improve the quality of outpainting images.

    Figure 3: Qualitative results of different methods with 64 × 64 valid pixels’ input on the Paris StreetView dataset

    Figure 4: Qualitative results of different methods with 128 × 64 valid pixels’ input on the Places2 dataset

    Figure 5: Qualitative results of different methods with 128 × 64 valid pixels’ input on the Paris StreetView dataset

    4.5 Quantitative Results

    The quantitative results of the different methods on both the Paris and Places2 datasets with different inputs are shown in Tabs. 1–4. The quantitative results with 64 × 64 valid-pixel inputs on Paris and Places2 are shown in Tabs. 1 and 2. In Tab. 1, we report the quantitative metrics over the 20,000 test images of Places2. In these experiments, our method with low-level feature fusion and the multi-patch discriminator achieves better metrics. Specifically, our PICnet-SP-LM method achieves a lower FID of 30.81, signifying that our model can produce clearer and more diverse outpainting results. It also achieves a higher PSNR of 13.72 and SSIM of 0.4261, showing that our results have better image structure. Besides, we obtain a lower L1 loss of 34.47 and RMSE of 64.76, which indicates that our results are closer to the ground truths in terms of pixel difference. Tab. 2 shows the quantitative metrics on Paris. Because of the limited number (100) of Paris test images, we only measure SSIM and RMSE. From the quantitative results, the low-level feature fusion and multi-patch discriminator again improve the results generated by the vanilla PICnet.

    Furthermore, Tabs. 3 and 4 show the quantitative results of the different methods with 128 × 64 valid-pixel inputs on Places2 and Paris. The effect of the low-level feature fusion and the multi-patch discriminator is again evident in the tables. The vanilla PICnet method produces poor results with lower-quality quantitative metrics. In contrast, the quantitative metrics of the outpainting results produced by PICnet-SP-LM reach a better level. Specifically, with the low-level feature fusion and the multi-patch discriminator, PICnet-SP-LM achieves a higher PSNR of 16.78 and SSIM of 0.6452 on the Places2 dataset. Meanwhile, PICnet-SP-LM also realizes a lower FID of 9.99 and L1 loss of 19.25. In addition, on the Paris dataset, PICnet-SP-LM also performs better for SSIM and RMSE. All the experiments demonstrate that both the low-level feature fusion and the multi-patch discriminator help the outpainting network improve the quality of outpainting images.

    Table 1: Quantitative results of different methods with 64 × 64 valid pixels' input on the Places2 dataset

    Table 2: Quantitative results of different methods with 64 × 64 valid pixels' input on Paris StreetView. Because of the limited number (100) of Paris StreetView test images, we only evaluate SSIM and RMSE

    Table 3: Quantitative results of different methods with 128 × 64 valid pixels’ input on the Places2 dataset

    Table 4: Quantitative results of different methods with 128 × 64 valid pixels' input on Paris StreetView. Because of the limited number (100) of Paris StreetView test images, we only evaluate SSIM and RMSE

    4.6 Ablation Study

    In addition, we conduct further experiments to select a better configuration of the PICnet-SP-LM method. Tab. 5 reports the quantitative results of these ablation experiments on the Places2 dataset. Specifically, PICnet-SP-LM-1 and PICnet-SP-LM-2 use different hyperparameters for the reconstruction loss and the KL loss (PICnet-SP-LM-1 uses a weight of 20 for the reconstruction loss and 20 for the KL loss, and PICnet-SP-LM-2 uses 20 for the reconstruction loss and 40 for the KL loss). From the experimental results, PICnet-SP-LM-1 achieves better performance, so PICnet-SP-LM-3 and PICnet-SP-LM-4 adopt the hyperparameters of PICnet-SP-LM-1. PICnet-SP-LM-3 utilizes one layer's aggregated features, and PICnet-SP-LM-4 utilizes two layers' aggregated features. Apparently, PICnet-SP-LM-4, which utilizes more aggregated features, achieves a better effect. Therefore, PICnet-SP-LM-4 is the optimal experimental setup, which can generate more natural and more realistic outpainting results. Moreover, in the qualitative aspect, the results generated by PICnet-SP-LM-4 are also clearer and more authentic than those of the other variants. In Fig. 6, we also select some outpainting results that exhibit borders in the baseline model; these borders are alleviated or eliminated as we gradually add our core blocks, which clearly shows the effect of these blocks.

    Table 5: Quantitative results of the ablation study with 64 × 64 valid pixels' input on the Places2 dataset

    Figure 6: Qualitative results of the ablation study on the Places2 dataset. (a) Input, (b) PICnet-SP-LM-1, (c) PICnet-SP-LM-2, (d) PICnet-SP-LM-3, (e) PICnet-SP-LM-4

    5 Conclusion

    Image outpainting plays an important role in the image processing field, and it can also be used to promote image inpainting. In this paper, we present a perceptual image outpainting method assisted by low-level feature fusion and a multi-patch discriminator. In detail, we first fuse the low-level texture information in the encoder and simultaneously incorporate these fused features with the semantic (or structural) information of the deep feature maps, which promotes the network to generate finer outpainting results. At the same time, we present a multi-patch discriminator to enhance the generated image texture; it effectively judges the generated image from different feature levels and impels our network to produce more natural and clearer outpainting results. To fully evaluate our model, we conduct experiments on the Places2 and Paris datasets. The experimental results show that our method outperforms PICnet in both qualitative effects and quantitative metrics, which proves the effectiveness and efficiency of our method for the image outpainting task. In the future, we will further study more challenging image outpainting settings, such as input images with larger missing regions, and try to achieve higher-quality outpainting results.

    Acknowledgement: I would like to thank those who helped me generously in this research.

    Funding Statement: This work was supported by the Sichuan Science and Technology Program (2019JDJQ0002, 2019YFG0496, 2021016, 2020JDTD0020), and partially supported by the National Science Foundation of China 42075142.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
