
    Efficient Facial Recognition Authentication Using Edge and Density Variant Sketch Generator

Computers, Materials & Continua, 2022, Issue 1

Summra Saleem, M. Usman Ghani Khan, Tanzila Saba, Ibrahim Abunadi, Amjad Rehman,* and Saeed Ali Bahaj

1 Department of Computer Science, UET, Lahore, Pakistan

2 Al-Khwarizmi Institute of Computer Science, UET, Lahore, Pakistan

3 Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia

4 MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia

Abstract: Image translation plays a significant role in realistic image synthesis, in entertainment tasks such as editing and colorization, and in security applications including personal identification. In Edge GAN, the major contribution is an attribute-guided vector that enables high-visual-quality content generation. This research study proposes automatic face image realism from freehand sketches based on Edge GAN. We propose a density variant image synthesis model that allows the input sketch to encompass face features with minute details. The density level is projected into non-latent space, having a linearly controlled function parameter. This assists the user to appropriately devise variant densities of facial sketches and image synthesis. A composite data set of Large-Scale CelebFaces Attributes (CelebA), Labeled Faces in the Wild (LFW), the Chinese University of Hong Kong (CUHK) sketch data set, and self-generated Asian images is used to evaluate the proposed approach. The solution is validated to be capable of generating realistic face images through quantitative and qualitative results and human evaluation.

Keywords: Edge generator; density variant sketch generator; face translation; recognition; residual block

    1 Introduction

Pakistan, as an underdeveloped country, is facing many challenges such as exponential population growth, an ever-decreasing rate of economic growth, and current waves of crime. Pakistan is facing 2.77% annual population growth, which is quite demanding for a developing country. Metropolitan cities, i.e., Lahore, Karachi, and Islamabad, are concentrated with a major share of the population. This further weakens the planning infrastructures of government agencies. These cities are concentrated with migrating people, who abruptly find a deficiency of social laws and feel inclined to violate established laws for social control. This situation leads to an exponential increase in poverty and unemployment, which are prime factors for the increased rate of crime.

In addition to theft, black markets run by criminals put a severe burden on the economy of the country by evading license fees, and strict security measures should be applied. In Pakistan, security-sensitive areas such as less populated areas are often crime scenes. There has been a rapid increase in serious security threats, i.e., motor-vehicle theft, robbery, theft, and burglary. In recent years, automatic face recognition systems [1] have been extensively used by security agencies. Most conventional face identification systems [2] target photo-to-photo matching. However, in law enforcement agencies manual sketching of the suspect is performed, as photos are generally unavailable. Artists usually sketch from narrative descriptions given by the victim, which are prone to human error. Sketch recognition systems attain low efficiency due to the considerable variation between drawn sketches and photos.

Some of the basic limitations of current face recognition from sketches are: 1) fewer details in sketches tend to decrease the accuracy of the face recognition system; 2) noise in sketches can also degrade the performance of face recognition systems; 3) skin color-based detection and recognition systems can completely fail on sketches; 4) sketch images have less detail of beard and mustache, which might lead to wrong predictions. Conversion of sketches to photos is a solution to the above-mentioned problems, as photos of suspects are usually available in police records. Transforming sketches to images for criminal cases is a way to increase security measures, especially in the metropolitan cities of Pakistan.

Our proposed system will be efficient enough to extract information from the synthesized photo. This information can be used for security purposes in sensitive areas such as less populated and less crowded areas. Moreover, the information can be utilized for securing authorized areas like hospitals, offices, and judicial courts through security alerts. This research is an effort to develop a low-power, low-cost, efficient sketch-to-image synthesis system. In the proposed system, manually drawn sketches are transformed into photo-realistic images. After extracting the visual depiction of criminals, the information is forwarded to all central security authorities for further inquiry. This will make the security system more efficient for quick disciplinary actions. Our major target is Pakistan, where the crime rate is increasing day by day, and we aim to reduce crime by achieving the following objective: a low-cost automatic system to facilitate law enforcement agencies in investigation.

There has been plenty of work on image-to-image translation [3,4]. Image translation has gained remarkable improvement since the birth of generative adversarial networks (GAN) [5]. Providing training data from multiple representations enables a GAN-based network to transform an image from one representation to another. For example, sketches can be one representation and realistic faces another. The efficiency of face recognition systems is dependent on the availability of facial features. Face identification from sketch images has to deal with a high rate of false predictions because of the lesser detail. This research is focused on a sketch-to-realistic-image synthesis system that can assist security personnel to increase safety in society.

Existing facial sketch-to-image translation data-sets are composed of faces from European and Chinese ethnicity. There is variation in facial features and skin tone among different ethnicities. For example, faces of European ethnicity might generally have light-colored eyes, but in other communities such as in Asia, people usually have dark-colored eyes. Therefore, there is a need for a benchmark data-set that is specific to the Asian community. There is no end-to-end system available that can recognize a person using a hand-made sketch. In traditional methods, human resources are employed to identify a person from a sketch. In the proposed research, we automate the process of identification by synthesizing photo-realistic facial images from a sketch and high-level feature information. The major contributions of this research work are mentioned below:

· A face translation system generates realistic human faces based on an attribute-guided approach using contrastive learning features, preserving low-level details such as color and ethnicity.

    · The accuracy of the sketch-based face identification system has improved remarkably to assist law enforcement and security agencies.

· An annotated Asian data-set is generated for training the sketch-to-image translation network.

A detailed survey of face recognition and face translation systems is given in Section 2. Section 3 discusses the proposed methodology. Sections 4 and 5 explain experiments and evaluations, respectively. Lastly, the conclusion and future work are presented in Section 6.

    2 Related Work

    In this section,research work related to face recognition and image translation techniques has been discussed.

    2.1 Face Recognition System

Face recognition techniques have become a demand of the current era due to increased challenges in the security of society. During past decades numerous algorithms have been devised for face recognition. This research work discusses methodologies based on computer vision and on neural networks. The most well-known techniques for face recognition are Eigenface [6] and Fisherface [7]. The Eigenface technique employs Principal Component Analysis (PCA), a feature reduction technique. PCA is used to reduce the feature vector set while retaining maximum variance. Eigenface is a low-level feature-based technique, preserving the texture features of the face. In contrast to the unsupervised Eigenface approach, Fisherface is a supervised approach. It finds unique face descriptors by using a linear discriminator. Both of the above-mentioned approaches rely on a Euclidean structure for the extracted face features. The local binary pattern has also been used by researchers for face recognition [8,9].
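To make the Eigenface idea above concrete, the following is a minimal sketch (not from the paper) assuming scikit-learn and NumPy: PCA projects flattened faces into a low-dimensional space of maximum variance, and a probe face is matched to the gallery by Euclidean distance in that space. The random gallery here merely stands in for real face crops.

```python
# Minimal Eigenface-style sketch (illustrative only): PCA reduces flattened face
# vectors to a few components capturing maximum variance, and a probe face is
# matched to the gallery by Euclidean distance in that reduced space.
import numpy as np
from sklearn.decomposition import PCA

def build_eigenface_index(gallery, n_components=50):
    """gallery: (n_faces, h*w) array of flattened grayscale face images."""
    pca = PCA(n_components=n_components, whiten=True)
    projected = pca.fit_transform(gallery)            # low-dimensional "eigenface" codes
    return pca, projected

def identify(pca, projected, probe):
    """Return the gallery index whose eigenface code is closest to the probe."""
    code = pca.transform(probe.reshape(1, -1))
    dists = np.linalg.norm(projected - code, axis=1)  # Euclidean structure, as in the text
    return int(np.argmin(dists))

# Example with random data standing in for real face crops:
gallery = np.random.rand(100, 64 * 64)
pca, codes = build_eigenface_index(gallery)
print(identify(pca, codes, gallery[3]))               # -> 3
```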

Neural networks have brought remarkable improvements in terms of performance, though they are highly dependent on big data-sets. Deep neural networks automatically extract feature vectors from data to learn face attributes. Lu et al. [10] proposed a methodology based on ResNet to identify faces. They employed a face recognition architecture using a residual block comprising two supporting networks. The FaceNet architecture proposed by Schroff et al. [11] directly learns a Euclidean feature embedding. The trained model tends to minimize the distance between similar faces and vice versa. The Inception model was employed to extract the feature set and produced remarkable results on the LFW [12] data-set.

    2.2 Image Transformation

Generating images by learning from an image collection is a fascinating task in the computer vision and graphics fields. Successful approaches in past years tended to use image fragments in nonparametric techniques [3,4,13,14]. In recent years, parametric deep learning networks have achieved promising results [1,15,16]. GANs [5] are the most promising techniques for image synthesis. A discriminator network is trained simultaneously for the classification of images as original or synthesized. The generator attempts to fool the discriminator network, while the discriminator pushes the generator away from synthesizing obviously fake images. The trained generator network is capable of producing distinct images by learning a low-dimensional latent space.

Optimization in latent space representation can lead to the manifold of natural images in network visualization [17] and image editing [18]. Furthermore, latent space is not well formulated semantically; a specific dimension does not correlate to semantics, and aligning dimensions to intermediate image structure can give more understanding. Rather than drawing hard constraints on the input sketch, the method proposed by [19] learns the joint distribution of the sketch and the corresponding real image, hence keeping the constraint on the input sketch weak. The output from their model depicts the freedom in the appearance of the generated image. For unsupervised image translation, Liu et al. [20] proposed an efficient probabilistic approach by learning the most likely output and rendering style. They change the image contents by updating [20] the statistics of the input image. By changing these, they changed image clarity, resolution, appearance, and realism. Wang et al. [21] proposed a novel technique for image translation using semi-coupled dictionary learning (SCDL). Yamauchi et al. worked on removing defects from input images [22]. Deep image synthesis learns a low-level latent representation to regenerate images with Generative Adversarial Networks (GANs) or Variational Auto-encoders (VAEs) [23]. Generally, deep image synthesizing networks can be conditioned on different input vectors [23] such as grayscale images [24], object characteristics and 3-dimensional (3D) view parameters, attributes [25], or images and a desired view [26].

Sangkloy et al. proposed a novel approach for image-to-image translation from a sketch of the image as input [26]. Conditional GANs synthesize images based on conditions generated from input that is more relevant than the rest of the data-set. Different techniques are formulated for relevant inputs such as low-resolution images [27], class labels [28], incomplete or partial images [29], or text [30], rather than generating images from latent vectors. Conditional GANs have also been implemented for specific applications such as diverse artistic styles [31], super-resolution [14], video prediction [32], texture synthesis [33], and product images. Image-to-image translation for general purposes requires a huge number of paired labeled images, as presented by Isola et al. [31].

The discriminator can be conditioned on specific inputs, such as conditioning both generator and discriminator on an input text embedding [34], which contributes to a powerful discriminator. For an unsupervised approach to image translation, Taigman et al. proposed a network that can learn image translation without labeled image pairs [35]. However, this mechanism needs a pre-trained function for mapping images to an intermediary cross-domain representation, which relies on labeled images in other formats. Kazemi et al. proposed a framework [24] based on facial attributes; a conditional version of CycleGAN is presented in that paper. Rather than relying on aligned face-sketch pairs, the proposed framework only requires facial attributes like skin and hair color for training purposes. The performance is evaluated on the FERET data-set and the WVU Multi-modal data-set.

To reduce the requirement of labeled data, a dual learning approach was introduced by Xia et al. [36]. The main idea of the dual learning mechanism is to involve two learning agents. CycleGAN [35] introduced the concept of unpaired image translation; for cyclic mapping, the dual relation of DualGAN is required. The predominant characteristic of CycleGAN is demonstrated on numerous problems where training data is hard to find, such as weather transfer and painting style transformation.

    3 Proposed System and Methodology

This section provides a detailed methodology for transforming sketches into realistic facial images for enhanced face recognition. We propose a novel system for facial image generation based on a generative adversarial network. The generated face from the proposed system is fed as input to the face identification system. We have employed contrastive learning using an edge and density variant generative adversarial network for transforming the input sketch image into a realistic face image.

    3.1 Edge Generator

Generally speaking, directly modeling the mapping between a single image and its corresponding sketches, as in SketchyGAN [9], is difficult because of the enormous size of the mapping space. We therefore address the challenge in a different, more feasible way: we learn a common representation for an object expressed by cross-domain data. To this end, we design an adversarial architecture, shown in Fig. 1, for EdgeGAN. Instead of directly inferring images from sketches, EdgeGAN moves the problem of sketch-to-image generation to the problem of generating an image from an attribute vector that encodes the expression intent of the freehand sketch. At the training stage, EdgeGAN learns a common attribute vector for an object image and its edge maps by feeding the adversarial networks with images and their edge maps in various drawing styles. At the inference stage (Fig. 1), EdgeGAN captures the user's expression intent with an attribute vector and then generates the desired image from it. Structure of EdgeGAN: as shown in Fig. 1, the proposed EdgeGAN has two channels: one containing generator GE and discriminator DE for edge map generation, the other containing generator GI and discriminator DI for image generation. Both GI and GE take the same noise vector together with a one-hot vector indicating a particular class as input. Discriminators DI and DE attempt to distinguish the generated images or edge maps from the real distribution. Another discriminator DJ is used to encourage the generated fake image and the edge map to depict the same object, by telling whether the generated fake image matches the fake edge map; it takes the outputs of both GI and GE as input (the image and edge map are concatenated along the width dimension). The Edge Encoder is used to encourage the encoded attribute information of the edge maps to be close to the noise vector fed to GI and GE, through an L1 loss. The classifier is used to infer the category label of the output of GI, which is used to encourage the generated fake image to be recognized as the desired class using a focal loss [20]. The detailed structures of every module of EdgeGAN are illustrated in Fig. 1.
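As a reading aid, the following is an illustrative PyTorch sketch of the data flow just described; it is not the authors' implementation, and all module bodies and layer sizes are placeholder assumptions. It shows GI and GE sharing one noise + one-hot class vector, DJ judging the image/edge-map pair jointly, an L1 loss pulling the Edge Encoder output toward the noise vector, and a classification loss on GI's output (plain cross-entropy here in place of the focal loss named above, and channel concatenation for brevity in place of width concatenation).

```python
# Illustrative EdgeGAN-style data flow (placeholder modules, not the authors' code).
import torch
import torch.nn as nn

Z_DIM, N_CLASSES, IMG = 64, 10, 32                 # toy sizes

def gen():   # stands in for GI / GE: noise + class -> 1 x IMG x IMG map
    return nn.Sequential(nn.Linear(Z_DIM + N_CLASSES, IMG * IMG), nn.Tanh(),
                         nn.Unflatten(1, (1, IMG, IMG)))

def disc(in_ch):  # stands in for DI / DE / DJ: map -> real/fake score
    return nn.Sequential(nn.Flatten(), nn.Linear(in_ch * IMG * IMG, 1))

GI, GE = gen(), gen()
DI, DE, DJ = disc(1), disc(1), disc(2)             # DJ sees image and edge map together
edge_encoder = nn.Sequential(nn.Flatten(), nn.Linear(IMG * IMG, Z_DIM))
classifier = nn.Sequential(nn.Flatten(), nn.Linear(IMG * IMG, N_CLASSES))

z = torch.randn(4, Z_DIM)                          # shared noise / attribute vector
c = nn.functional.one_hot(torch.randint(0, N_CLASSES, (4,)), N_CLASSES).float()
zc = torch.cat([z, c], dim=1)

fake_img, fake_edge = GI(zc), GE(zc)
joint = torch.cat([fake_img, fake_edge], dim=1)    # joint pair for DJ (channel concat here)

# Generator-side losses: fool DI/DE/DJ, pull the edge encoding back to z, hit class c.
adv = -(DI(fake_img).mean() + DE(fake_edge).mean() + DJ(joint).mean())
l1_attr = nn.functional.l1_loss(edge_encoder(fake_edge), z)
cls = nn.functional.cross_entropy(classifier(fake_img), c.argmax(dim=1))
total_g_loss = adv + l1_attr + cls
print(total_g_loss.item())
```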

We implement the Edge Encoder with the same encoder module as in BicycleGAN [37], since they play a functionally similar role, i.e., our encoder encodes the "content" (e.g., the pose and shape information), while the encoder in BicycleGAN encodes attributes into latent vectors. For the Classifier, we use a design similar to the discriminator of SketchyGAN while disregarding the adversarial loss and using only the focal loss [20] as the classification loss. The architecture of all generators and discriminators is based on WGAN-GP [16]. The objective function and further training details can be found in the supplementary materials.

    3.2 Density Variant Sketch Generator

    Figure 1:Architecture of proposed system for face transformation

Our objective for the DVSG is to create sketches with continuous density, which is difficult for the following reasons: clearly, it is difficult to acquire continuous ground-truth images for each scale factor, which means the DVSG is a semi-supervised model; moreover, since we intend to create density variant sketches and use a scale factor to control their visual complexity, it is non-trivial to develop an exact association between a high-dimensional distribution (the image) and a scalar. This helps in assessing visual complexity at the sketch generation stage; however, a non-linear mapping between the scalar and the sketch densities can also be adopted, and it is difficult to measure the real density of the output with a given number. Even though there is no continuous content image, we can still utilize a few sketch images that depict the basic semantic information of the input image as the key density images $K = \{K_1, K_2, \ldots, K_n\} \subset Y$, which correspond to a set of sampled densities $s = \{s_1, s_2, \ldots, s_n\}$. The task is then transformed into generating semantically continuous content images between two key density images. We first extend the density factor to a density mask $M_s$ by filling the mask with the density factor, concatenate it with the reference image, and send them into the content generator $G_c$, where $\hat{Y}(s) = G_c(M_s, X)$. Considering the situation where we have the ground-truth image, an image reconstruction loss can be used to reconstruct the key density images as shown in Eq. (1):
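The printed equation is missing from this copy; a plausible standard form of Eq. (1), assuming an L1 image reconstruction penalty over the key density images, is:

$$\mathcal{L}_{img} = \sum_{i=1}^{n} \big\| G_c(M_{s_i}, X) - K_i \big\|_1 \tag{1}$$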

Since there are no such ground-truth images between $K_i$ and $K_{i+1}$, it is a semi-supervised problem requiring indirect control. Inspired by InfoGAN [3], we attempt to build an association between the generated sketch images and their corresponding density factors by adopting a density encoder $E_s$, which encodes the generated sketches $\hat{Y}(s)$ back to a density scale so that $\hat{s} = E_s(\hat{Y}(s))$. Since we want explicit control over the generated images, this is enforced by a scale reconstruction loss as shown in Eq. (2):
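The printed equation is not in this copy; a plausible form of the scale reconstruction loss of Eq. (2), assuming an L1 penalty between the encoded density $\hat{s} = E_s(\hat{Y}(s))$ and the requested density $s$, is:

$$\mathcal{L}_{scale} = \mathbb{E}_{s}\Big[ \big\| E_s\big(\hat{Y}(s)\big) - s \big\|_1 \Big] \tag{2}$$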

where $K_i$ and $Y(s_i)$ denote the same sketch image and $\hat{Y}(s_i)$ is the reconstruction of $Y(s_i)$. In our initial investigations, we found that the scale reconstruction loss can only help to estimate the sketch images around key density sketches. If there is a large geometric or semantic variation between two key density images, the density encoder as well as the scale reconstruction loss is no longer able to guarantee the linearity of images between the two key density sketches. Therefore, we design an adaptive feature distance (AFD) loss to drive the network to learn the connection between two key density sketches.

For a scale $s_0$ corresponding to the non-key density sketch $Y(s_0)$, there are two neighboring key density sketches $K_i$ and $K_{i+1}$ corresponding to two density factors $s_i$ and $s_{i+1}$, where $i = h(s_0)$; we define the AFD loss as follows in Eq. (3):
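Since the printed equation is missing, the following is one plausible reading of the adaptive feature distance loss of Eq. (3), with weights inversely related to the distance between $s_0$ and its neighboring key densities (as described below); $\phi(\cdot)$ denotes features extracted by the content generator encoder and the scale encoder:

$$\mathcal{L}_{AFD} = \frac{s_{i+1}-s_0}{s_{i+1}-s_i}\,\big\| \phi(\hat{Y}(s_0)) - \phi(K_i) \big\|_1 \;+\; \frac{s_0-s_i}{s_{i+1}-s_i}\,\big\| \phi(\hat{Y}(s_0)) - \phi(K_{i+1}) \big\|_1 \tag{3}$$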

where the above parameters denote the features extracted from the encoder of the content generator $G_c$ and the scale encoder $E_s$. The AFD loss guarantees linearity in latent space, as the adaptive weights are inversely related to the distance between the current content image and its neighboring key density sketches. In other words, the generated sketch is constrained to have higher similarity to its nearest neighboring key density sketch. Our experiments also show that the AFD loss fundamentally improves the continuity of generated content images.

    3.3 Discriminator

The aim of the discriminator is to distinguish generated images from genuine images. The discriminator wants its output to be true for genuine images. For generated images $G_s$, the discriminator wants its output to be false. The loss of the discriminator is written as shown in Eq. (4):
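The printed equation is missing; a standard form of the discriminator loss described here (real images scored as true, generated images $G_s$ as false) would be:

$$\mathcal{L}_{D} = -\,\mathbb{E}_{y \sim p_{data}}\big[\log D(y)\big] \;-\; \mathbb{E}\big[\log\big(1 - D(G_s)\big)\big] \tag{4}$$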

Moreover, multi-scale discriminators $D_1$, $D_2$, and $D_3$ are used, which are common in image generation. The coarse-to-fine model can improve the quality of the generated image. At the coarse scale, it can capture global information and improve the consistency of the generated image through a large receptive field. At the fine scale, it captures information at the local view and preserves details such as edges and lines. Also, the multi-scale technique can reduce the burden on the network and make it easier to train.

    3.4 Losses Involved in GAN

    Multiple loss functions need to be optimized for generating a perfect GAN-based system.

    3.4.1 Adversarial Loss

    Adversarial loss is utilized for creating a realistic image that looks like the original image.The adversarial loss can be described by Eq.(5).
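The printed equation is absent from this copy; the standard conditional adversarial objective that matches this description is:

$$\mathcal{L}_{adv} = \mathbb{E}_{y}\big[\log D(y)\big] + \mathbb{E}_{x,c}\big[\log\big(1 - D(G(x,c))\big)\big] \tag{5}$$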

The above equation shows the generator G generating images G(x; c) under the condition c. In the equation, D represents a discriminator that differentiates between real and fake imagery. The generator tries to minimize the objective function of falseness and the discriminator tries otherwise. Domain Classification Loss: this part of the paper describes the conversion of a single input image x to a transformed image y based on some condition variable c. For converting the sketch to multiple colored faces, we utilize a domain classification loss with the generator and discriminator. The domain classification loss of real images is used for optimizing the discriminator and, similarly, the domain classification loss on fake images optimizes the generator. Eq. (6) shows the basic loss for real images.
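A plausible standard form of the real-image domain classification loss of Eq. (6) (StarGAN-style attribute translation, which this description follows), where $D_{cls}(c' \mid x)$ is the discriminator's class distribution and $c'$ is the true domain label of the real image $x$:

$$\mathcal{L}_{cls}^{r} = \mathbb{E}_{x,c'}\big[-\log D_{cls}(c' \mid x)\big] \tag{6}$$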

In the above equation, D depicts the probability assigned to the class of real images by the discriminator. The discriminator minimizes this function to find the exact class type of the real image. The classification loss for fake images can be described by the following Eq. (7).
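Correspondingly, a plausible form of Eq. (7), the classification loss on fake images, which the generator minimizes so that $G(x,c)$ is classified as the target domain $c$:

$$\mathcal{L}_{cls}^{f} = \mathbb{E}_{x,c}\big[-\log D_{cls}(c \mid G(x,c))\big] \tag{7}$$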

    3.4.2 Reconstruction Loss

Reflection of the original sketch is necessary to enable sketch-based face identification of any person. Optimizing the above two losses ensures that generated images are realistic and belong to the specific color category, but to make sure the transformed image reflects the original identity of the person, we utilize a reconstruction loss. The following Eq. (8) shows the reconstruction loss.
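Given the description in the next paragraph (the generator reconstructs the original image from the translated image), a plausible L1 form of Eq. (8) is:

$$\mathcal{L}_{rec} = \mathbb{E}_{x,c}\big[\, \| x - Gen(G(x,c)) \|_1 \,\big] \tag{8}$$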

where Gen represents the generator that takes the translated image G(x; c) as input and reconstructs the original image from the translated image. All of the above losses contribute towards the full objective function for the conditional GAN. The overall objective functions of the system for generator and discriminator are shown in Eqs. (9) and (10), respectively.
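Plausible overall objectives for Eqs. (9) and (10), combining the above terms with weighting factors $\lambda_{cls}$ and $\lambda_{rec}$ (the weights are an assumption, since the printed equations are missing):

$$\mathcal{L}_{D} = -\mathcal{L}_{adv} + \lambda_{cls}\,\mathcal{L}_{cls}^{r} \tag{9}$$

$$\mathcal{L}_{G} = \mathcal{L}_{adv} + \lambda_{cls}\,\mathcal{L}_{cls}^{f} + \lambda_{rec}\,\mathcal{L}_{rec} \tag{10}$$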

    3.5 Proposed Face Recognition System

The proposed face recognizer is residual, aiming to extract deep features. The extracted deep features are robust for faces generated from sketches. Features extracted from the fc7 layer of the discriminator are stored in the database for comparison with the input probe image features. The system compares the stored features from the database with the features of an input image to find the identity of the person. The comparison of probe image features with the database image features is made using the Euclidean distance.
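A minimal sketch of this probe-vs-gallery comparison, assuming NumPy; the stored vectors stand in for fc7 features, and the 0.5 threshold follows the verification setting used in Section 5.4 (this is illustrative, not the authors' code).

```python
# Probe-vs-gallery matching by Euclidean distance over stored feature vectors.
import numpy as np

def match_probe(probe_feat, gallery_feats, gallery_ids, threshold=0.5):
    """Return (identity, distance) of the closest gallery entry, or (None, d) if too far."""
    dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    best = int(np.argmin(dists))
    identity = gallery_ids[best] if dists[best] <= threshold else None
    return identity, float(dists[best])

# Toy example with random 4096-d "fc7"-style features:
gallery = np.random.rand(1000, 4096).astype(np.float32)
ids = [f"person_{i}" for i in range(1000)]
print(match_probe(gallery[42] + 0.001, gallery, ids))  # ~ ("person_42", small distance)
```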

    4 Dataset and Experiments

    In this section,we discuss the details of the evaluation for face recognition and sketch-face transformation system.We have provided an evaluation of both systems (combined and face generation systems).The major evaluation steps involved in our proposed system are elaborated in detail.

    4.1 Training and Validation Dataset

Details of the training and validation datasets used for experimentation are provided in the sections below.

    4.1.1 CelebA Dataset

CelebFaces Attributes Dataset (CelebA) is a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations. CelebA has large diversity, large quantity, and rich annotations. There are 10,177 identities and 202,599 face images, with 5 landmark locations and 40 binary attribute annotations per image. Images 1-162,770 form the training set, images 162,771-182,637 form the validation set, and images 182,638-202,599 are considered the testing set. Sample images from the CelebA dataset are shown in Fig. 2a.

Figure 2: Sample images from the datasets (a) CelebA dataset (b) CUHK dataset (c) Self-generated dataset (d) Labeled Faces in the Wild (LFW)

4.1.2 CUHK Dataset

The CUHK dataset comprises 188 sketches of students collected at the Chinese University of Hong Kong (CUHK). Out of 188 faces, 100 are selected for training while the remaining 88 faces are used for testing. Sketches and images of the dataset are of size 200 × 250. Some sample instances from the CUHK dataset are shown in Fig. 2b.

    4.1.3 Self-generated Dataset

For the Asian community, face images were collected from the local community at the University of Engineering and Technology, Lahore. A total of 10,764 sketches were collected, of which 4,567 are of females and 6,197 are of males. Sample images from the self-generated dataset are shown in Fig. 2c.

    4.1.4 Labeled Faces in the Wild(LFW)

The LFW dataset [12] was gathered to study unconstrained face recognition problems. This dataset comprises more than thirteen thousand facial images collected from a variety of sources. A total of 1,680 subjects have at least two distinct images in the dataset. The localization and size of faces in the original LFW were determined using an automated detector (Viola-Jones); the newly cropped face images in the LFW-crop dataset depict realistic situations, including misalignment, scale diversity, and rotation of faces in-plane and out-of-plane. A few sample images from the LFW face dataset are shown in Fig. 2d.

    4.2 Training and Implementation Details

In this section, we describe the training of our proposed system for identifying faces. We employed a Softmax classifier for learning the best possible features of the face. Optimization of the proposed system is achieved using stochastic gradient descent, the optimization function used for back-propagation; we employed a batch size of 64 with a momentum of 0.9. For reducing over-fitting of the proposed system, we introduced a dropout layer with a 0.35 removal rate. The second externally set parameter is the learning rate used during the training procedure. We employed a dynamic learning rate, changing its value to train the system optimally. Initially, the learning rate was set to 0.001, which is dropped by a factor of ten when the loss stops decreasing. The final learning rate for the proposed identification system was 0.00001. We utilized a Gaussian distribution with zero mean and 0.01 standard deviation for weight initialization. Initial biases are set to zero and are further updated to non-zero values through back-propagation.
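The optimizer settings above can be summarized in a short configuration sketch; PyTorch is assumed here only for illustration (the paper does not name a framework), and the model body is a placeholder for the residual recognition network.

```python
# Sketch of the stated training configuration: SGD with momentum 0.9, batch size 64,
# dropout 0.35, LR 0.001 dropped by 10x on plateau down to 1e-5, Softmax classifier.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, 512), nn.ReLU(),
                      nn.Dropout(p=0.35),               # dropout with 0.35 removal rate
                      nn.Linear(512, 1000))              # Softmax head via cross-entropy

optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.001,                    # initial learning rate
                            momentum=0.9)                # momentum as stated
# Drop LR by a factor of 10 when the loss stops decreasing, bottoming out near 1e-5.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1,
                                                       patience=3, min_lr=1e-5)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Batch size 64, per the text (random tensors stand in for face crops).
images, labels = torch.randn(64, 1, 128, 128), torch.randint(0, 1000, (64,))
scheduler.step(train_step(images, labels))
```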

    5 Evaluation

In the current section, details of the evaluation for the face recognition and sketch-face transformation systems are discussed. We have provided an evaluation of both systems (the combined system and the face generation system). Extensive images from diverse ethnic groups are employed for training the sketch-to-photo translation system. We used the standard Celebrity Attribute (CelebA) dataset along with some additional images from the Asian community. The CelebA dataset is diverse in terms of facial features, pose, and ethnicity. This dataset mostly contains European and English actors, as shown in Fig. 3. It contains quite few images from the South Asian community. Considering our local environment, we have enhanced this data of 0.2 million images with 12 thousand images from the local Pakistani community. The face transformation system is optimized based on the adversarial loss involved in the generator and discriminator competition. The loss is optimized with the Adam optimizer with two hyper-parameters of value β1 = 0.5 and β2 = 0.99. We have trained this system on an Nvidia 1080 Ti GPU with 11 GB of memory. The complete dataset is processed in batches, and we used a batch size of 16 for feeding the images. For optimizing the complete system we set an initial learning rate of 0.001 with dynamic decay after fifty epochs. The network was trained for about eighteen hundred epochs. Five experts having profound knowledge of the domain and research task were selected for evaluation of results. The major evaluation steps involved in our proposed system are elaborated in Tab. 1.

Figure 3: Sample sketches from the CelebA dataset with translated realistic images and ground truths

    Table 1:Task-based evaluation of face realism system by five judges

    5.1 Evaluation of Face Realism

For evaluating the quality of generated photos, three metrics are used, namely structural similarity (SSIM), the product-moment correlation coefficient R, and peak signal-to-noise ratio (PSNR) [37]. The proposed attribute-guided network generates images preserving details of the overall structure. In comparison, other methods generate images with blurriness and only low-frequency particulars. The comparison of results is shown in Tab. 2.
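The three metrics can be computed per generated/ground-truth pair as in the following sketch, which assumes scikit-image (version 0.19 or later for the channel_axis argument) and NumPy; this is an illustrative tooling choice, not the authors' evaluation code.

```python
# Compute SSIM, PSNR, and the product-moment correlation R for a generated image
# against its ground truth; both images are floats in [0, 1].
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def quality_metrics(generated, ground_truth):
    multichannel = generated.ndim == 3
    ssim = structural_similarity(ground_truth, generated, data_range=1.0,
                                 channel_axis=-1 if multichannel else None)
    psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=1.0)
    # Pearson product-moment correlation coefficient R over flattened pixels
    r = np.corrcoef(ground_truth.ravel(), generated.ravel())[0, 1]
    return ssim, psnr, r

gt = np.random.rand(250, 200, 3)
gen = np.clip(gt + np.random.normal(0, 0.05, gt.shape), 0, 1)
print(quality_metrics(gen, gt))
```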

    Table 2:Comparison with state of the art quantitative evaluation

    5.2 Qualitative Evaluation

· Image realism.

    · Quality of the generated image.

    · Identity preserve.

Kappa calculates the inter-annotator agreement for the classification of the data into y target classes; the standard can be depicted as in Eq. (11). The face transformation system is evaluated by qualitative analysis and subjective evaluation. Fig. 3 shows the resulting visualization of output from the proposed sketch-to-photo-realism system. From the resultant images, it can be seen that our system outperforms previous state-of-the-art systems. It can also be observed that the proposed system preserves the originality of the person by maintaining the identity of the individuals. The identity is preserved based on convolution features rather than low-level features. Results of the face recognition system are shown in Tab. 3.

Other prominent systems do preserve the identity to some extent, but they lose the detail of the original image. Comparable systems show less realistic results than our proposed system in terms of realism, originality, and identity preservation. The basic contributing factors of the proposed system are listed below:

    · Unique generator and discriminator for transforming the input sketch into different possible faces based on the color of the face.

    · Features are selected from convolution layers.

    · Data is augmented in the proposed system to enhance robustness.

    Table 3:Confusion matrix for the proposed face recognition system

    5.3 Inter-Annotator Agreement

For comprehensively checking the results of our proposed system, we have utilized inter-annotator agreement to check the performance of the proposed system. We took advantage of the well-known Cohen's Kappa standard for subjective analysis. Cohen's Kappa is a measure of inter-rater agreement covering the realistic views of the developed system, described in Eq. (12). It is considered more efficient because it takes the probability of chance agreement into account rather than percentage agreement only. Provided an input photo to the system, five different judges were asked to rate the system based on questions about the performance of the system.
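The standard form of Cohen's Kappa, which matches the description of Eq. (12) (and Eq. (11) above), with $p_o$ the observed agreement and $p_e$ the agreement expected by chance:

$$\kappa = \frac{p_o - p_e}{1 - p_e} \tag{12}$$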

where the probabilities denote the observed and chance agreement, respectively. The transformed photos from different baseline models and our system were shown to the judges. The transformed image can be generated based on two different conditions, as mentioned in Tab. 5. The complete results of the system are shown in Tab. 5. As depicted in Tab. 5, our system outperformed in terms of all conditional parameters. The results of other systems for white faces are not much worse, because those systems are primarily trained on datasets of white faces. Five different judges ranked 100 transformed images from the proposed system. Each image is transformed based on ethnicity and color conditions. Tab. 3 provides the detailed ranks of the images.

    5.4 Face Recognition System Evaluation

This section provides the detailed results of the combined sketch-based face recognition system. We have employed the Labeled Faces in the Wild dataset for training and evaluating the complete combined system. The LFW dataset consists of 5,749 different individuals with two photographs each on average. The dataset is split into two portions, with 80% for training and 20% for testing the system. For evaluation of the complete system, accuracy has been used as the evaluation parameter. The system is checked based on the face verification task. In the verification task, given a pair of photographs of the same individual, the Euclidean distance decides whether the images are similar or not. The set of image pairs belonging to the same person is represented as I_S, and for different persons the pair set is represented as I_d. Similarity or dissimilarity is decided based on whether the distance value is greater than a particular threshold or not; we set this threshold to 0.5. Correctly identified persons are determined based on the following Eq. (13).

Similarly, false acceptances are calculated as in the following Eq. (14).

    Finally,the accuracy can be computed by Eq.(15).
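The printed Eqs. (13)-(15) are missing from this copy; plausible standard forms consistent with the description (true accepts over same-person pairs $I_S$, false accepts over different-person pairs $I_d$, Euclidean distance $d(\cdot,\cdot)$ between feature embeddings, and threshold 0.5) are:

$$TA = \big|\{(i,j) \in I_S : d(f_i, f_j) \le 0.5\}\big| \tag{13}$$

$$FA = \big|\{(i,j) \in I_d : d(f_i, f_j) \le 0.5\}\big| \tag{14}$$

$$\text{Accuracy} = \frac{TA + \big(|I_d| - FA\big)}{|I_S| + |I_d|} \tag{15}$$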

For training and testing, the complete LFW is split into a training and a validation set. Tab. 5 provides the data-set division for LFW. For evaluating the complete system comprehensively, the validation set is further classified into 1,000 known and 150 unknown individuals, meaning a single image from each of the 1,000 known individuals is placed in the gallery for matching purposes. Similarly, 2,305 images are divided into 2,000 and 305 images. Out of the 2,000 known images, a single image per identity is placed in the gallery, which leaves 1,305 testing images overall. Tab. 5 shows the detailed results of the proposed combined face recognition system. We have also compared the accuracy of the proposed hybrid system with previous systems. Tab. 5 shows the comparison of face recognition based on sketches.

As depicted in Tab. 4, our system outperformed in terms of all conditional parameters. The results of other systems for white faces are not much worse, because those systems are primarily trained on datasets of white faces. Five different judges ranked 100 transformed images from the proposed system. Each image is transformed based on ethnicity and color conditions. Tab. 5 shows the comparative analysis of face recognition based on sketch translation.

    Table 4:Proposed edge and density variant GAN for varying attributes and noise

    Table 5:Comparison table of sketch-based face recognition system

    6 Applications

In addition to general sketch-to-photo translation, we illustrate a few applications that require multiple density sketches as input to our model.

    6.1 Multi-Scale Face Editing

The coarse level, corresponding to a small si, permits the user to alter the large shapes while ignoring the details, which are handled by the model. In coarse-level editing, the user can easily change the overall characteristics of a human face, including face shape, length of hair, facial expression, and so forth, as shown in Fig. 4. The fine level, corresponding to a large si, supports sophisticated control over details, for example the hair texture (wavy to straight) and skin textures (adding or eliminating wrinkles), as shown in Fig. 4. Compared to previous face editing work based on segmentation masks [24] or landmarks [10], our strategy has two advantages. First, drawing is a more intuitive and easy-to-use route for image editing. Second, our strategy can control the editing process at various scales, from major object boundaries to detailed micro-structures.

    Figure 4:Face translation based on different attributes

Figure 5: Cartoonification of sample face images

    6.2 Anime Colorization

Our strategy generalizes to various kinds of data, so it can be applied to anime colorization and editing as well. Unlike previous sketch colorization strategies, our model can colorize pictures at both coarse and fine levels. In addition, we also support post-editing after colorization. Such editing permits rough adjustment as well as detailed changes like adding shadows or highlights, adjusting minor textures, and so forth; results are shown in Fig. 5.

    7 Conclusion and Future Work

In this research study, we have developed a hybrid system for improving the robustness of face recognition technology. Face recognition tends to perform worst for sketch-based inputs. Usually, at a crime scene, photos of the culprit are not available, only eyewitness accounts. Recognizing a person through sketches is a challenging task that may lead to false results. To make face recognition possible through sketches, we proposed a unified system combining sketch-to-colored-photo transformation and face recognition. We have used contrastive learning with an edge and multi-variant generative adversarial network for the first, generative part of the heterogeneous system. The generated colored image follows the user-selected choice. Residual learning-based networks, enabling shortcut connections between lower layers and high-level layers, are employed for face recognition. Evaluation results show that the proposed system performed satisfactorily for the recognition of sketches. Results also show that the transformation system is capable of generating original faces from sketches with minimal loss of identity. In the future, the proposed system can be extended to work with noisier sketches. Furthermore, more high-level features for sketch transformation, such as pose, emotion, and hair color, can be employed.

Acknowledgement: This research work was supported by the Computer Science Department, UET Lahore, and KICS. This research is also supported by the Artificial Intelligence & Data Analytics Lab (AIDA), CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia. The authors also would like to acknowledge the support of Prince Sultan University for paying the Article Processing Charges (APC) of this publication.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
