
    A 360-Degree Panoramic Image Inpainting Network Using a Cube Map

Computers, Materials & Continua, 2021, Issue 1

    Seo Woo Han and Doug Young Suh

    Department of Electronic Engineering, Kyung Hee University, Yongin, 17104, South Korea

    Abstract: Inpainting has been continuously studied in the field of computer vision. As artificial intelligence technology developed, deep learning technology was introduced into inpainting research, helping to improve performance. Currently, the input target of deep learning-based inpainting algorithms has expanded from single images to video. However, deep learning-based inpainting technology for panoramic images has not been actively studied. We propose a 360-degree panoramic image inpainting method using generative adversarial networks (GANs). The proposed network takes a 360-degree equirectangular format panoramic image as input, converts it into a cube map format, which has relatively little distortion, and uses it to train the network. Since the cube map format is used, the correlation of the six faces of the cube map must be considered. Therefore, all faces of the cube map are used as input for the whole discriminative network, and each face of the cube map is used as input for the slice discriminative network to determine the authenticity of the generated image. The proposed network performed qualitatively better than existing single-image inpainting algorithms and baseline algorithms.

    Keywords: Panoramic image; image inpainting; cube map; generative adversarial networks

    1 Introduction

    The consumption of images and videos increases exponentially as technology advances. People and devices not only consume images but actively generate them. This trend has made image and video editing and modification essential. An inpainting algorithm is a technique for restoring an image and removing unwanted objects based on information such as the texture or edges of an object [1]. Inpainting is used in many fields, such as image restoration [2,3], video transmission error repair [4], and image editing [5]. It has been a long-standing challenge in the field of computer vision [6]. Inpainting methods can be divided into non-learning-based and learning-based methods. Non-learning-based methods are divided into patch-based and diffusion-based methods. Patch-based methods [7,8] fill a hole in an image by finding a similar pattern in an intact area within the image with which to fill the hole. Conversely, diffusion-based methods [9] fill a hole by successively filling in small portions from around the boundaries of the hole based on information gathered from the periphery of the hole (see the survey by Elharrouss et al. [10]). Non-learning-based methods do not require a dataset or network training, so inpainted results can be obtained with less computation. However, if the background does not feature a repeated pattern or the hole is exceptionally large, the inpainted results are not good [11]. To solve this problem, some researchers have studied learning-based methods. Methods using deep learning can be divided into those using convolutional neural networks (CNNs) and those using generative adversarial networks (GANs). Recently, research on inpainting using GANs has been actively conducted because GANs generate similar patterns based on the input image rather than simply reusing the information in the image, resulting in more plausible inpainted results. Generating datasets and training networks are time-consuming, but the inpainted results are more plausible compared to those of non-learning-based methods. However, the results of inpainting using deep learning are not good when filling exceptionally large holes or when the image features intricate patterns (Liu et al. [12]). Research on image inpainting is continuing, and research on video inpainting is also actively underway. Applying an algorithm dedicated to image inpainting to video is difficult because video inpainting requires an accurate contextual understanding of frames and motion as well as temporal smoothness of the output video [13]. Therefore, inpainting algorithms using the temporal and spatial information in video have been studied. Representative studies include a consistency-aware learning framework which simultaneously generates appearance and flow [14] and a method using high-quality optical flow [15].

    In this paper, we study a method of panoramic inpainting. A panoramic image, or panorama, is an image with a wide angle of view. Panoramas are used for a variety of purposes, including landscape photography, group photography, and street views (Zhu et al. [16]). Advances in camera technology have made it possible to shoot panoramic images, 360-degree panoramic images, and 360-degree videos without expensive professional equipment. 360-degree panoramas help create the immersive content used to represent virtual reality (VR) and describe real space in a three-dimensional sense with the head-mounted display (HMD) used when viewing VR content. Very little research has been done on inpainting panoramic images. Applying a single-image inpainting algorithm to an equirectangular format panoramic image gives poor results because the distortion of the equirectangular format is not trained. Also, a memory shortage typically occurs during network training due to the very high resolution of equirectangular format panoramic images. To solve these two problems, we use a cube map format for panoramic image inpainting instead of an equirectangular format.

    The main contributions of this paper are as follows. First, a novel 360-degree panoramic image inpainting algorithm using deep learning is proposed. Instead of an equirectangular format, we use a cube map format with less distortion and propose a network structure which learns the correlation of the six faces of the cube map. Second, to train the cube map format panoramic image inpainting network, we use whole and slice discriminative networks trained to distinguish real images from inpainted results. The whole discriminative network looks at the entire cube map to assess whether its faces are mutually consistent. The slice discriminative network looks at each face of the cube map to ensure local consistency. Finally, we validated the proposed network using 360-degree StreetView, the only publicly available 360-degree panoramic image dataset.

    The paper is organized as follows: Section 2 briefly introduces the theoretical background. Section 3 explains the proposed model, and Section 4 describes the dataset used to train the proposed network. Section 5 describes the experiments and analysis with the proposed network, and Section 6 summarizes the results.

    2 Related Works

    This section briefly introduces single image inpainting algorithms, panoramic image inpainting algorithms, a conceptual description of generative adversarial networks, and research trends.

    2.1 Single Image Inpainting

    An inpainting algorithm can erase unwanted objects in an image or plausibly restore damaged or missing parts of an image. Inpainting technology is gradually diversifying from single images to videos. As mentioned in Section 1, inpainting is divided into non-learning-based and learning-based methods. Non-learning-based methods are effective in restoring texture but have difficulty restoring overall shape (Yan et al. [17]). For better performance than non-learning-based methods, encoder-decoder network structures using CNNs have been proposed. Zhu et al. [18] proposed a patch-based inpainting method for forensics images. U-nets (Ronneberger et al. [19]) and dense blocks were used to alleviate the vanishing gradient effect [20]. The latest trend in inpainting research is the use of GANs. A conceptual explanation of a GAN is given in Section 2.3. Using only CNNs imposes many limitations because CNNs use only information from the input image. However, GANs can generate similar information based on the input image, so the inpainted result is more plausible than that of a method which only uses CNNs. Liu et al. [21] proposed an inpainting method for faces using GANs. When only GANs are used, the image resolution is low or training tends to be unstable. Therefore, many network structures using both CNNs and GANs have been proposed. In Nazeri et al. [22], a two-stage generator architecture was proposed to generate an image based on the edges around the hole in an image. After estimating the edges of the hole, the texture inside the edges is restored. GAN-based inpainting methods show good performance but take a long time to train and have the disadvantage of requiring a high-performance training machine to calculate many parameters and perform many convolution operations.

    2.2 Panoramic Image Inpainting

    There are many ways to use inpainting algorithms on panoramic images. Just as on a single image, an inpainting algorithm can be used to erase unwanted objects in a panoramic image and to reconstruct a damaged image. The study of panoramic inpainting has not progressed much compared to the study of single image inpainting. Zhu et al. [16] proposed a method for inpainting the lower part of a 360-degree panoramic image. This algorithm requires the projection map of the panoramic image. After the input image is projected onto a sphere, the lines and shapes are preserved and inpainted through matrix calculation. This algorithm inpaints only the lower part of the panoramic image, and the inpainted result is not good because it is not a learning-based method. Besides, it is limited in that it only works on images with simple patterns. Akimoto et al. [23] proposed an inpainting method using GANs that exploits the symmetry of a 360-degree panoramic image. In that paper, there is no function to remove a specific object in an image. Only half of the buildings in a 360-degree panoramic street view image are used as input to the proposed network. The network restores a missing building by mirroring the building in the input image. After that, empty space is filled with plausible content. Uittenbogaard et al. [24] pointed out the need for inpainting in a panorama to ensure privacy within a street view. That paper proposed a GAN-based inpainting algorithm using multiple views of a 360-degree video with depth information, which could detect and remove moving objects within the video. However, it has the limitation that it cannot be used on a single image. Also, to protect privacy, it provides results by blurring the detected object rather than erasing the object and filling in its contents. Panoramic inpainting is also used in image extension technology that converts a single image into wide field-of-view images like a panoramic image. Extending images using existing inpainting algorithms leads to blurry results. To solve this problem, Teterwak et al. [25] proposed a panorama generation and image expansion technique by applying semantic conditioning to a GAN discriminative network.

    2.3 Generative Adversarial Networks

    Generative adversarial networks (Goodfellow et al. [26]) have brought about tremendous advances in artificial intelligence. GANs are composed of two networks, as shown in Fig. 1: a generator, which creates new data instances, and a discriminator, which evaluates authenticity. The generator is called a generative network, and the discriminator is called a discriminative network. The generative network takes a random vector z as input to generate an image. The discriminative network receives the real image and the image created by the generative network as inputs and determines which image is real and which is fake. The goal of the adversarial setup is to make the data instances newly created by the generative network indistinguishable from real images to the discriminative network.

    Figure 1: Generative adversarial network architecture

    For generative adversarial networks, the objective function satisfies Eq. (1). As in game theory, the two networks find an equilibrium point with a single objective function.

    Let the real data be x. The real data distribution is Pdata(x) and the random vector distribution is Pz(z). The discriminative network D learns to maximize the value function V, while the generative network G learns to minimize log(1 - D(G(z))). The discriminative network is trained so that D(G(z)) is 0 and D(x) is 1; that is, it learns to distinguish whether the input image is a generated image or a real image. The generative network is trained so that the generated image is as similar as possible to the real image. This structure is called a generative adversarial network because the generative and discriminative networks are trained adversarially.
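Eq. (1) did not survive extraction; in the notation above, the standard GAN value function of Goodfellow et al. [26] that it refers to is:

```latex
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P_{data}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim P_z(z)}\left[\log\left(1 - D(G(z))\right)\right] \tag{1}
```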

    In the original GANs, the input of the generative network is only a random vector. Conditional generative adversarial networks (cGANs) (Isola et al. [27]) are a complementary, modified structure which extends existing GANs to images. cGANs train a mapping function from one image domain to another and distinguish whether the result is real or not through a discriminative network. The objective function of a cGAN satisfies Eq. (2). The first and second terms are the same as the existing GANs' objective function. x and y are paired: let x be the actual image and y the label image. z is a random vector as in the existing GANs.
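Eq. (2) is likewise missing from the extracted text; the standard cGAN objective from Isola et al. [27], with x, y, and z as defined above, is:

```latex
\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\left[\log D(x, y)\right]
  + \mathbb{E}_{x,z}\left[\log\left(1 - D(x, G(x, z))\right)\right] \tag{2}
```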

    Gulrajani et al. [28] confirmed that it is more effective to combine the objective function of cGANs with traditional loss functions rather than using the cGAN objective alone. Therefore, the reconstruction loss function used in CNN-based learning methods was adopted. Both the L1 distance and the L2 distance were tested as reconstruction loss functions over several tasks; the L1 distance showed less blurry results than the L2 distance and was used in the final objective function. The final objective function used in [28] satisfies Eq. (3).
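Eq. (3) also did not survive extraction; given the description above, the usual form combining the cGAN objective with an L1 reconstruction term is:

```latex
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\left[\lVert y - G(x, z) \rVert_1\right] \qquad
G^{*} = \arg\min_G \max_D\; \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G) \tag{3}
```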

    A feature of cGANs is that input images and label images are fed in pairs to the discriminative network. They also use the u-net structure as the generative network. Information loss occurs when using the encoder-decoder structure commonly used when dealing with images. The u-net is an encoder-decoder structure with added skip-connections which connect the corresponding encoder and decoder layers. Fig. 2 below shows the structure of cGANs and their difference from the original GANs. x is the real image, y is the label image paired with x, and G(x) is the fake image created by the generative network.

    Figure 2: The conditional generative adversarial network architecture

    3 Proposed Network

    In this section, we describe the novel network structure and objective functions for panoramic image inpainting. The input of the panoramic inpainting network proposed in this paper is an equirectangular format panoramic image and a mask. When an equirectangular format panoramic image and mask are input, they are converted into a cube map format and then used as input to the generative network, which restores the damaged image.
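The equirectangular-to-cube-map conversion step can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: the face orientation conventions and the nearest-neighbour sampling are our assumptions, and a real pipeline would interpolate bilinearly.

```python
import numpy as np

def equirect_to_cubemap_face(equi, face, face_size):
    """Sample one cube-map face from an equirectangular image.

    equi: H x W x 3 array; face: one of F, R, B, L, T, D.
    Each face pixel is turned into a 3D viewing direction, the
    direction into (lon, lat), and (lon, lat) into equirectangular
    pixel coordinates.
    """
    h, w = equi.shape[:2]
    # Pixel grid in [-1, 1] on the unit face plane.
    a = np.linspace(-1, 1, face_size)
    u, v = np.meshgrid(a, a)
    one = np.ones_like(u)
    # Direction vector per pixel; the axis convention is an assumption.
    dirs = {
        "F": (u, -v, one),  "B": (-u, -v, -one),
        "R": (one, -v, -u), "L": (-one, -v, u),
        "T": (u, one, v),   "D": (u, -one, -v),
    }[face]
    x, y, z = dirs
    lon = np.arctan2(x, z)                            # [-pi, pi]
    lat = np.arcsin(y / np.sqrt(x*x + y*y + z*z))     # [-pi/2, pi/2]
    px = ((lon / np.pi + 1) / 2 * (w - 1)).astype(int)
    py = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return equi[py.clip(0, h - 1), px.clip(0, w - 1)]
```

Running the function once per face name (F, R, B, L, T, D) yields the six faces of Fig. 5b; the inverse mapping converts the inpainted cube map back to the equirectangular format.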

    In order to delicately restore the damaged parts, each face of the cube map is input to the slice discriminative network. To train the correlation of the six faces, all six faces are input to the whole discriminative network at once. The output of the generative network is an inpainted image in the cube map format. While training on the panoramic image in the cube map format, we set an objective function suitable for this network using adversarial loss and reconstruction loss to obtain a plausibly inpainted result. The key parts of this paper are as follows. We used a cube map format with less distortion to inpaint the panoramic image. To train the texture of each image in the cube map, we designed a slice discriminative network which accepts as input one face of the cube map at a time. To train the correlation of the entire cube map, we designed a whole discriminative network which accepts as input all six faces of the cube map simultaneously. The proposed network structure is illustrated in Fig. 3.

    Figure 3: The proposed panoramic image inpainting network structure based on cGANs

    3.1 Generative Network

    The generative network is based on u-nets. A feature of a u-net is that it connects the encoder layers to the decoder layers, thus reducing the loss of image information. We modified the structure of the u-net to fit the cube map format image. We used LeakyReLU, ReLU, convolution (Conv.), transposed convolution (DeConv.), and batch normalization in the generative network. Tab. 1 shows the structure of the generative network proposed in this paper.

    Table 1: Generative network structure

    Table 1 (continued). Decoder:
    [Layer 8] DeConv. Input channel = 512, output channel = 512, kernel size = 4, stride = 2, padding = 1; Batch norm; ReLU; Dropout = 0.5
    Concatenated layer (Layer 8, Layer 6)
    [Layer 9] DeConv. Input channel = 1024, output channel = 512, kernel size = 4, stride = 2, padding = 1; Batch norm; ReLU; Dropout = 0.5
    Concatenated layer (Layer 9, Layer 5)
    [Layer 10] DeConv. Input channel = 1024, output channel = 512, kernel size = 4, stride = 2, padding = 1; Batch norm; ReLU
    Concatenated layer (Layer 10, Layer 4)
    [Layer 11] DeConv. Input channel = 1024, output channel = 256, kernel size = 4, stride = 2, padding = 1; Batch norm; ReLU
    Concatenated layer (Layer 11, Layer 3)
    [Layer 12] DeConv. Input channel = 512, output channel = 128, kernel size = 4, stride = 2, padding = 1; Batch norm; ReLU
    Concatenated layer (Layer 12, Layer 2)
    [Layer 13] DeConv. Input channel = 256, output channel = 64, kernel size = 4, stride = 2, padding = 1; Batch norm; ReLU
    Concatenated layer (Layer 13, Layer 1)
    [Layer 14] DeConv. Input channel = 128, output channel = 3, kernel size = 4, stride = 2, padding = 1
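The decoder layers listed above translate directly into PyTorch, which the paper uses for implementation. A sketch of the repeated DeConv block, assuming the layer parameters of Tab. 1:

```python
import torch
import torch.nn as nn

def up_block(in_ch, out_ch, dropout=0.0):
    """One decoder step from Tab. 1: DeConv(kernel 4, stride 2,
    padding 1) + batch norm + ReLU (+ optional dropout)."""
    layers = [nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
              nn.BatchNorm2d(out_ch),
              nn.ReLU(inplace=True)]
    if dropout:
        layers.append(nn.Dropout(dropout))
    return nn.Sequential(*layers)

# Skip connections double the input channels of every layer after
# Layer 8, matching the concatenations listed in Tab. 1.
dec8 = up_block(512, 512, dropout=0.5)    # Layer 8
dec9 = up_block(1024, 512, dropout=0.5)   # Layer 9 (concat with Layer 6)
# ... Layers 10-13 follow the same pattern ...
dec14 = nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1)  # Layer 14
```

With kernel 4, stride 2, and padding 1, each block exactly doubles the spatial resolution, which is why seven decoder steps recover a 256 × 256 face from the encoder bottleneck.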

    3.2 Discriminative Network

    The generative network is the same as that of [27], but the discriminative network is slightly different. The proposed network uses two discriminative networks. The whole discriminative network discriminates based on the correlation of the six faces of the cube map. The slice discriminative network determines whether inpainting was done well considering the texture of each face of the cube map. The channel size of the output layer is 1 for both the whole and the slice discriminative networks, because a discriminative network only needs to discriminate whether its input is real or fake.

    3.2.1 Whole Discriminator

    The whole discriminative network is used to train the correlation of the faces of the cube map format panoramic image, because when inpainting is performed without considering the correlation of the six faces, a discontinuous image results when transformed back into an equirectangular format. Tab. 2 shows the structure of the whole discriminative network used in this paper. Convolution, linear, LeakyReLU, and batch normalization layers are used. The final output shape of the whole discriminative network is (batch number, 1).

    3.2.2 Slice Discriminator

    The slice discriminative network determines whether the input image for each face of the cube map format panoramic image is real or fake. The configuration of the slice discriminative network is illustrated in Fig. 4. The six cube map faces are sequentially entered into one slice discriminative network, and the outputs are combined into one. Therefore, the final output shape of the slice discriminative network is (batch number, 1).
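The face-by-face sharing described above can be sketched as follows. The conv stack here is illustrative, not the exact Tab. 3 layers, and combining the six scores by averaging is our assumption; the paper only states that the outputs are combined into one (batch, 1) tensor.

```python
import torch
import torch.nn as nn

# One slice discriminator shared by all six faces of the cube map.
slice_d = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
)

def slice_discriminate(faces):
    """faces: (batch, 6, 3, H, W) cube map. Returns (batch, 1).

    Each face passes through the same network sequentially, and the
    six per-face scores are combined into a single score.
    """
    scores = [slice_d(faces[:, i]) for i in range(6)]  # six (batch, 1)
    return torch.stack(scores, dim=0).mean(dim=0)      # combined (batch, 1)
```

Because the weights are shared, the slice discriminator learns per-face texture statistics without growing with the number of faces.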

    Table 2: The whole discriminative network structure

    Figure 4: The slice discriminative network, which inputs the cube map one face at a time

    Tab. 3 shows the structure of the slice discriminative network used in this paper. The slice discriminative network uses convolution, linear, LeakyReLU, and batch normalization layers.

    Table 3: The slice discriminative network structure

    3.3 Objective Function

    The network proposed in this paper does not use the cGAN objective function described in Section 2. As mentioned in Gulrajani et al. [28], GANs are difficult to train, and ways to train them stably are still being studied. To address the training difficulties of GANs, Gulrajani et al. [28] proposed the Wasserstein GAN with gradient penalty (WGAN-GP). The earlier Wasserstein GAN (WGAN) used the earth mover's distance (EMD) between the distributions of generated data and real data. The objective function of a WGAN was derived by applying Kantorovich-Rubinstein duality. The objective function of the WGAN generative network is Eq. (4), and that of the discriminative network is Eq. (5).
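Eqs. (4) and (5) did not survive extraction; the standard WGAN objectives they refer to, with P_r the real data distribution and P_g the generator's distribution, are:

```latex
L_G = -\,\mathbb{E}_{\tilde{x} \sim P_g}\left[D(\tilde{x})\right] \tag{4}
```

```latex
L_D = \mathbb{E}_{\tilde{x} \sim P_g}\left[D(\tilde{x})\right]
    - \mathbb{E}_{x \sim P_r}\left[D(x)\right] \tag{5}
```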

    Gulrajani et al. [28] developed WGAN-GP by adding a gradient penalty, Eq. (6), to the WGAN. A point sampled on the straight line between a point sampled from the real data distribution and one from the generated data distribution is denoted x̂.
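The gradient penalty of Eq. (6), in its standard WGAN-GP form with the x̂ defined above, is:

```latex
\lambda\, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\right)^2\right] \tag{6}
```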

    In this paper, the objective function was defined by adopting ideas from Yu et al. [29], which slightly modified WGAN-GP. Since image inpainting amounts to predicting the hole area in the image, the gradient penalty is calculated using the product of the gradient and the input mask m. It was modified and defined as in Eq. (7). ⊙ denotes the pixel-wise product. If a mask value is 0, the pixel is damaged; otherwise, it is 1.
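Following that description, Eq. (7) can be reconstructed (our reading of the text, matching the mask-weighted penalty used by Yu et al. [29]) as:

```latex
\lambda\, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \odot m \rVert_2 - 1\right)^2\right] \tag{7}
```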

    We used the weighted sum of the l1 loss in the pixel direction and the adversarial loss of the WGAN. The l1 loss function is Eq. (8) and the final objective function is Eq. (9).
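Eqs. (8) and (9) are missing from the extracted text; given the description, they take the form below. Which λ multiplies which term is our assumption, as the text only lists the three values:

```latex
\mathcal{L}_{l1} = \lVert G(x) - y \rVert_1 \tag{8}
```

```latex
\mathcal{L} = \lambda_1 \mathcal{L}_{adv} + \lambda_2 \mathcal{L}_{gp} + \lambda_3 \mathcal{L}_{l1} \tag{9}
```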

    In all experiments, λ1 was set to 0.001, λ2 was set to 10, and λ3 was set to 1.2.

    4 Dataset

    This section shows an example of the dataset used in this paper. The proposed network is trained on images converted from the equirectangular format to the cube map format.

    4.1 Image

    In this paper, we used the street view equirectangular format panorama dataset provided in Chang et al. [30], which contains approximately 19,000 images. Because the images are street views, they can be divided into buildings and scenery. We call the building-rich images the building dataset and the tree-rich images the scenery dataset. There are 10,650 building images and 5,080 scenery images. In this paper, we confirmed the performance of the network with the building and scenery datasets.

    As shown in Fig. 5a, when training with the equirectangular format panoramic dataset itself, the high resolution of the panoramic images results in a memory shortage, and the distortion of the equirectangular format is challenging to train. To solve this, we lowered the resolution of the equirectangular format panoramic images and used images converted to a cube map format, which has relatively little distortion, as shown in Fig. 5b below.

    Figure 5: (a) Equirectangular format panoramic image and (b) cube map format panoramic image

    The panoramic image in the cube map format has six faces, as shown in Fig. 5b. Each face is referred to by a face name listed in Fig. 6. In this paper, the six faces are used in the following order: F, R, B, L, T, and D.

    4.2 Mask

    Most inpainting studies use two hole types: rectangular masks in the form of Fig. 7a, and free-form masks such as Fig. 7b, which are used to erase the shapes of objects.

    Figure 6: Cube map face names: L (Left), F (Front), R (Right), B (Back), T (Top), D (Down)

    Figure 7: (a) Rectangular mask and (b) free-form mask

    In this paper, our network uses a rectangular mask because it has many applications, such as erasing or modifying objects and buildings, rather than delicately modifying the image as single-image inpainting algorithms do. Before training, we made a rectangular hole of random size at a random position in each image. Let the width and height of the panoramic image in equirectangular format be w and h, respectively. Let the width and height of the rectangular hole be Rw and Rh, respectively. The width and height of the hole used for training are constrained by Eq. (10).
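The random rectangular mask generation described above can be sketched as follows. The exact bounds of Eq. (10) did not survive extraction, so the `max_frac` limit here is a stand-in assumption, not the paper's constraint.

```python
import random
import numpy as np

def random_rect_mask(w, h, max_frac=0.5):
    """Random rectangular mask: 1 = intact pixel, 0 = hole,
    following the mask convention stated in Section 3.3."""
    # Hole size Rw x Rh, capped at a fraction of the image size
    # (stand-in for the Eq. (10) constraint).
    rw = random.randint(1, int(w * max_frac))
    rh = random.randint(1, int(h * max_frac))
    # Random top-left corner so the hole stays inside the image.
    x0 = random.randint(0, w - rw)
    y0 = random.randint(0, h - rh)
    mask = np.ones((h, w), dtype=np.float32)
    mask[y0:y0 + rh, x0:x0 + rw] = 0.0
    return mask
```

The same mask is then converted to the cube map format alongside the image, which is why a straight mask edge appears curved on the cube faces (Fig. 8).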

    When the user edits the image, constraints are set on the width and height of the rectangular hole, considering the size of the mask used. Also, in the cube map format, constraints were set so that the mask covers multiple faces of the cube map, to train the correlation of the connected parts of each face. As shown in Fig. 8, the mask was also preprocessed to be converted from the equirectangular format to the cube map format. Since the mask has no distortion in the equirectangular format, unlike the panoramic image itself, a straight line becomes a curve when converted to the cube map format.

    5 Experiment and Analysis

    In this section, we evaluate our method on one dataset: 360-degree StreetView. Since this is the only publicly available 360-degree street view image dataset, it was not possible to evaluate against various panoramic image datasets.

    The system proposed in this paper uses a graphics processing unit (GPU) and is implemented in PyTorch. We measure the proposed panoramic inpainting system against a panoramic dataset [30]. Our model has 7.3M parameters in total and was trained on PyTorch v1.5 and CUDA v10.2. When training the proposed network, the learning rate was set to 0.0004 and the batch size to 8. When validating the proposed network, the hole size is also defined within the range given by Eq. (10). The equirectangular format panoramic image resolution is 512 × 256, and the cube map format panoramic image resolution is 256 × 256.

    Figure 8: (a) Equirectangular format mask and (b) cube map format mask

    5.1 Qualitative Results

    We compare our results with a state-of-the-art single image inpainting algorithm (GI) [29], a baseline using an equirectangular format panoramic image as input (OE), and a baseline using the cube map format panoramic image as input (OC). Our baseline models are built on cGANs; OE and OC use the cGAN network structure and objective function. OE is compared with our model to confirm that training is difficult due to distortion when using an equirectangular format panoramic image. OC is compared with our model to check that the inpainted result is discontinuous when the correlation of the faces is not trained when using a cube map format panoramic image as input. GI, OE, and OC are implemented in PyTorch, and the hole size limitation is the same as in Eq. (10). The GI network was trained with a learning rate of 0.0001 and a batch size of 6, and the OE and OC networks were trained with a learning rate of 0.0002 and a batch size of 32. Fig. 9 shows the results of inpainting 360-degree panoramic images using the proposed network. The first through fourth rows are results of inpainting using the scenery dataset, and the fifth through eighth rows are results using the building dataset. Comparing the masked originals with the inpainted images output by our proposed network confirms that the inpainted results are plausible.

    Fig. 10 summarizes the qualitative results on the scenery dataset. The scenery dataset is relatively easy to inpaint because trees, roads, and sky are the main components of the images. Therefore, the inside of the mask may be filled with very different objects than those in the original image. The palm tree trunks were not well erased by GI, OC, and OE, but they were in ours. Besides, in the case of OC, the correlation of the cube map faces was not trained, so the boundaries of the cube faces are visible. GI and OE do not show the cube-face boundaries that can appear in ours and OC because the equirectangular format panoramic image is used as input, but the distortion inherent in the equirectangular format makes their inpainted results look unnatural.

    Fig. 11 summarizes the qualitative results on the building dataset. The building dataset has more image components than the scenery dataset, and inpainting is difficult due to the various buildings and roads in the images and the shadows caused by sunlight. Because GI uses a contextual attention mechanism, it restores the hole using similar colors and textures from the image; it can be seen that similar patterns are used repeatedly inside the restored mask. Ours and OC use the cube map format and convert the result back to an equirectangular format, so the cube map boundaries can be visible and give implausible results. OC shows a blurry inpainted result, and OE restores the hole using plausible colors and textures, but when part of a building was rendered like a road, the result looked implausible in the context of the whole image.

    Figure 9: Qualitative results using the scenery and building datasets

    Figure 10: Qualitative results using the scenery dataset

    5.2 Quantitative Results

    As mentioned in Yu et al. [29], image inpainting lacks a good quantitative evaluation scale. Structural similarity (SSIM), peak signal-to-noise ratio (PSNR), L1 distance, and L2 distance values were compared between several algorithms and our proposed algorithm, following the evaluation metrics used in Yu et al. [29]. L1 distance and L2 distance measure the pixel-value difference from the original image. When GANs are used, the data distribution of the input image is learned to fill the empty hole in the image, and the purpose of inpainting is to restore missing parts of an image plausibly. Therefore, L1 and L2 distances are less suitable for confirming network performance than SSIM and PSNR.

    Figure 11: Qualitative results using the building dataset

    Tab. 4 shows that the OE model performs well overall. However, there is little difference between the metric values of our proposed method and those of the OE model. Compared to a single image, a 360-degree panoramic image typically depicts various objects (e.g., trees, mountains, buildings, cars) in one image. Compared to the results of a single image inpainting algorithm, the results of a panoramic image inpainting algorithm may feature a variety of objects newly created by the GANs and may differ significantly from the original image. For example, a road may be rendered in the space where a person or car was deleted from a panoramic image. Therefore, quantitative comparison of original and generated images is not a sufficient method of evaluating the models.

    Table 4: Quantitative results

    6 Conclusion

    We proposed a novel deep learning-based 360-degree panoramic image inpainting network. There is only one prior study of a deep learning-based 360-degree panoramic image inpainting method: Akimoto et al. [23], an inpainting method using the symmetric characteristics of an equirectangular format panoramic image, unlike single image inpainting methods. Therefore, only a limited number of 360-degree panoramic images can be inpainted with the network of Akimoto et al. [23], because the images must include a symmetrical building to be successfully inpainted. In contrast, the proposed network has the advantage of being able to inpaint like single-image inpainting methods by converting a panoramic image from the equirectangular format to the cube map format. However, since a plausible inpainted result is obtained only by training the correlation between the cube map faces, we proposed a panoramic image inpainting network comprising a whole discriminative network and a slice discriminative network.

    Training image inpainting networks with equirectangular format panoramic images is challenging because of distortion. When using a cube map format panoramic image as input instead, we confirmed that an additional algorithm or additional network layers were needed to train the correlation of the faces of the cube map. Therefore, we obtained plausible 360-degree panoramic inpainted results by adding the whole discriminative network and the slice discriminative network to the baseline model. The whole discriminative network receives the six faces of the cube map as input simultaneously, trains the correlation of the six faces, and determines their authenticity. The six faces of the cube map are input one by one into the slice discriminative network, which trains the detailed texture of each face and determines its authenticity.

    The proposed network showed better qualitative and quantitative results than the single-image inpainting algorithms. However, as noted in several image inpainting papers, there is no definitive evaluation metric for comparing image inpainting performance. The L1 and L2 distances traditionally used to evaluate image quality are unreliable for evaluating GANs because they directly compare the original and generated images. Although the proposed network did not achieve the best quantitative results, it did not differ significantly from the other networks. Moreover, the proposed network produced the most plausible inpainted results, as shown in the qualitative result images.
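The weakness of pixel-wise metrics can be seen in a toy experiment: a flat, blurry fill scores better on L1/L2 than a realistic but different texture, even though a viewer would prefer the textured result. The images below are synthetic stand-ins, used only to illustrate the metric behavior.

```python
import numpy as np

def l1_l2(a, b):
    """Mean absolute (L1) and mean squared (L2) pixel distance."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return np.abs(d).mean(), (d ** 2).mean()

rng = np.random.default_rng(0)
original = rng.random((32, 32))              # ground-truth region
plausible = rng.random((32, 32))             # realistic but different texture
blurry = np.full((32, 32), original.mean())  # flat average-color fill

l1_p, l2_p = l1_l2(original, plausible)
l1_b, l2_b = l1_l2(original, blurry)
# The flat fill is numerically "closer" to the original than the
# alternative texture, so L1/L2 reward blur over plausibility.
```

This is why the quantitative comparison in Tab. 4 is complemented by qualitative result images when judging the inpainted outputs.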

    Acknowledgement:I would like to thank San Kim for his comprehensive advice and assistance in building and training networks.I would also like to thank my colleague Eun Young Cha for proofreading this article.

    Funding Statement:This research was supported by Korea Electric Power Corporation(Grant No.R18XA02).

    Conflicts of Interest: We declare that we have no conflicts of interest to report regarding the present study.
