
    Historical Arabic Images Classification and Retrieval Using Siamese Deep Learning Model

2022-08-24 12:57:10 · Manal Khayyat, Lamiaa Elrefaei and Mashael Khayyat
Computers, Materials & Continua, 2022, Issue 7

Manal M. Khayyat, Lamiaa A. Elrefaei and Mashael M. Khayyat

    1Computer Science Department, Umm Al-Qura University, Makkah, Saudi Arabia

    2Computer Science Department, King Abdulaziz University, Jeddah, Saudi Arabia

    3Electrical Engineering Department, Faculty of Engineering at Shoubra, Benha University, Cairo, Egypt

    4Department of Information Systems and Technology, College of Computer Science and Engineering,University of Jeddah, Jeddah, Saudi Arabia

Abstract: Classifying the visual features in images to retrieve a specific image is a significant problem within the computer vision field, especially when dealing with historical faded-color images. Consequently, many efforts have attempted to automate the classification operation and retrieve similar images accurately. To reach this goal, we developed a VGG19 deep convolutional neural network to extract the visual features from the images automatically. Then, the distances among the extracted feature vectors are measured and a similarity score is generated using a Siamese deep neural network. The Siamese model was first built and trained from scratch, but it did not generate high evaluation metrics. Thus, we re-built it from the VGG19 pre-trained deep learning model to generate higher evaluation metrics. Afterward, three different distance metrics combined with the Sigmoid activation function were tested in search of the most accurate method for measuring the similarities among the retrieved images; the highest evaluation parameters were generated using the Cosine distance metric. Moreover, the Graphics Processing Unit (GPU) was utilized to run the code instead of the Central Processing Unit (CPU). This step optimized execution further, since it expedited both the training and the retrieval time. After extensive experimentation, we reached a satisfactory solution, recording 0.98 and 0.99 F-scores for the classification and the retrieval, respectively.

Keywords: Visual feature vectors; deep learning models; distance methods; similar image retrieval

    1 Introduction

Accurate image classification and retrieval has concerned many researchers. However, most efforts handled clear modern images, which makes the search and retrieval process much easier. These authors found that deep neural networks work well in extracting features from images automatically and can therefore retrieve similar images accurately.

Deep neural networks usually involve mathematical algorithms that enable them to solve complex problems effectively. They contain a number of hidden layers that build knowledge on top of each other [1]. Hence, the deeper the network, the better its understanding of the problem to be solved. Besides the number of layers, the learning parameters also affect the learning process. Therefore, developers need to tune the parameters and balance their dataset to generate higher performance results.

One of the most effective approaches for training deep neural networks is transfer learning. This approach expedites the learning process because it reuses knowledge previously learned from a huge dataset on a new, unseen dataset.
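The transfer-learning idea can be sketched in a few lines. The toy model below is purely illustrative (random "pre-trained" weights, made-up data, not the paper's VGG19 pipeline): a frozen feature extractor stands in for the pre-trained network, and only a small new head is trained on the new task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pre-trained weights: frozen, never updated during training.
W_frozen = rng.normal(size=(8, 4))

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)      # frozen ReLU feature extractor

# Toy "new dataset": 64 samples with binary labels.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)

# Only the small sigmoid head is trained (this is the transfer-learning step).
w, b = np.zeros(4), 0.0
for _ in range(1000):
    F = extract_features(X)
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))    # sigmoid head
    w -= 0.1 * (F.T @ (p - y)) / len(y)       # gradient step on the head only
    b -= 0.1 * np.mean(p - y)

preds = (1.0 / (1.0 + np.exp(-(extract_features(X) @ w + b)))) > 0.5
accuracy = float(np.mean(preds == y))
```

Because the extractor is never updated, training touches only a handful of parameters, which is what makes the approach fast on a new dataset.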

In this study, we challenged ourselves by considering historical low-quality text-based images. We developed a deep neural network to measure the similarities among the extracted visual features and retrieve the images most similar to a historical query image. Our contributions are summarized as follows:

1. Utilize the "one-shot learning" strategy along with the "weighted cross-entropy" algorithm to help balance the dataset and, therefore, classify the images effectively.

2. Classify the images using a pre-trained deep learning model and feed the classified feature vectors into a Siamese model built according to the architecture of the VGG19 deep learning model to further minimize the distances among the classified images.

3. Compute the similarity scores using three different distance metrics in conjunction with the Siamese Sigmoid activation function, looking for the metric that produces the highest similarity results.

4. Optimize the retrieval of the top-k images most similar to a user query image, without the need for any image segmentation pre-processing, by utilizing the GPU.

The rest of the paper is organized as follows: Section 2 reviews related work in the field. Section 3 discusses the proposed method, including its mathematical representation and development, and explains the strategies employed for learning and image retrieval. Section 4 presents the experiments and their results, and Section 5 concludes the paper.

    2 Literature Review

Most recent studies utilized deep convolutional neural networks for image classification and retrieval and proved their success. Mehmood et al. [2] utilized the VGG-16 deep convolutional neural network to automatically extract the visual features from Alzheimer's disease images. Then, they used a Siamese deep neural network to accurately find similar images and classify them. To overcome the small number of training images, the authors performed an augmentation step. Finally, they reached 99.05% classification accuracy on Alzheimer's disease images.

Tian et al. [3] experimented with retrieving similar images using a Siamese model, but with an enhanced loss function different from the commonly used logistic and triplet loss functions. They started by training on samples from "ILSVRC2015", then evaluated their model using the images of the "OTB2015" dataset. The enhanced loss function linked all of the sample images using a dense layer. The authors experimented with different parameters, reaching 60% Area Under Curve (AUC).

An-Bing et al. [4] developed a deep Siamese near-infrared spectroscopy model to predict drugs. They began by collecting the dataset images and labeling the similar pairs with "0" and the dissimilar pairs with "1". Afterward, they trained the deep Siamese model on a CUDA GPU for 60 epochs. The authors recorded above 97% prediction accuracy.

Baraldi et al. [5] developed a Siamese deep learning model to discover similar videos. They began by splitting the videos into pictures, then developed a Convolutional Neural Network (CNN) similar in its architecture and layers to the Caffe neural network. The authors trained their network on the images of two popular datasets, ImageNet and Places. Afterward, they tuned the developed CNN and extracted the visual features from their videos' images. The ReLU activation function was used to measure the similarities among the extracted visual feature vectors. The textual contents within the videos were extracted using the k-means clustering method; then, the similarities among the extracted words were measured using the Cosine distance method. Finally, the authors merged both the visual and the textual similarities into one score using the Gaussian kernel. The authors evaluated their model and recorded around 64% successful detection of video scenes.

Li et al. [6] proposed a ResNet50-driven Siamese model that accepts a (127 x 127) colored image and then retrieves the images most similar to the query image. The authors started by modifying the original ResNet50 CNN, minimizing the strides at the last blocks to include only eight pixels. They also added an additional convolutional layer to reduce the number of accepted channels. Afterward, they trained their model on the TrackingNet dataset images, recording 73.3% Area Under Curve (AUC).

Chen et al. [7] developed two different deep Siamese networks to extract the features from the Very High Resolution (VHR) images within the GF dataset. The first Siamese model was not fully connected, while the second, enhanced model is fully connected. The authors found that the second model outperforms the first, recording 97.89% overall accuracy. Similarly, Chaudhuri et al. [8] designed a Siamese model for retrieving VHR remote sensing images. The authors segmented the dataset images into regions. Then, they employed the adjacency graph method to find the regions closest to each other and hence identify the similar images. The authors evaluated their model using two datasets, UC-Merced and PatternNet. They calculated the mean Average Precision (mAP) and recorded 69.89% and 81.79% on the UC-Merced and PatternNet datasets, respectively.

Kande et al. [9] designed a generative Siamese deep learning model to retrieve Spectral-Domain Optical Coherence Tomography (SDOCT) images. The researchers proposed joining the loss generated from the Siamese model with the restoration loss generated from the CNN, as this combination resulted in images more similar to the query image. To evaluate their model, the authors computed the Texture Preservation (TP) index and recorded 68%.

Radenovic et al. [10] recommended fine-tuning a deep VGG-CNN on annotated 3D images. They employed the Siamese learning strategy to extract visual features from the Oxford5k and Paris6k datasets' images. The authors computed the mean Average Precision to assess their model and reached 91.9%. Similarly, Zhou et al. [11] utilized the Sketch-Based Local Binary Pattern (SBLBP) method to extract visual features from sketched 3D images. Then, they developed a Siamese deep learning model to measure the similarities among the extracted features and retrieve similar images. The authors tested their model using two popular datasets: the National Taiwan University (NTU) dataset and a sketched dataset from Eitz et al. Finally, they recorded 60% and 39% Precision on the NTU and sketched datasets, respectively.

Koch et al. [12] used the one-shot learning strategy to train their Siamese neural network on the Omniglot dataset, which contains images of handwritten alphabets. The authors experimented with three different dataset sizes for training their Siamese network and concluded that as the number of training samples increases, the verification accuracy also increases, reaching 93.42% using 150,000 training images.

Ong et al. [13] developed a VGG16 CNN to extract visual features from images. Afterward, they classified the images using Fisher Vector (FV) encoders. The authors employed the Euclidean metric to measure the differences between images and eventually retrieve similar ones. They recorded 81.5% mAP on the Oxford dataset and 82.5% mAP on the Paris dataset. On the other hand, Qiu et al. [14] also developed a Siamese model combined with a Euclidean metric to retrieve similar images, but they utilized the ResNet CNN instead of VGG to discover loop closure. The authors evaluated their model using the TUM dataset and recorded 87.7% precision.

Wiggers et al. [15] preferred utilizing the AlexNet CNN in conjunction with the Siamese neural network. They tested their model using the public "Tobacco800" dataset images. The authors also tested the effects of different feature vector sizes. They concluded that as the feature size increases, the model records more accurate results, reaching 94.4% mAP.

From the studied literature, we find that a number of researchers employed the Siamese model and proved its success in measuring the similarities among images. However, none of the studies tested the efficiency of the model on Arabic handwritten characters, and none experimented with different distance metrics in conjunction with the Siamese model. Thus, in this study we aim to experiment with more than one distance metric combined with the Siamese model, and also to execute the model on both the CPU and the GPU to optimize the image retrieval task.

    3 Methodology

    This section begins by explaining the development of the proposed method, including its mathematical representation and learning strategy.Afterward, it explains the method of matching the images and retrieving similar images to a user query image.

    3.1 Proposed Method

The proposed method consists of two main steps: image classification and similarity measurement. The image classification step begins by entering all the dataset images into a pre-trained VGG19 deep convolutional neural network to automatically extract the global high-level features from the images and classify them, as explained in [16]. The feature vector of the user query image, denoted as FV(a), and the feature vectors of all other images stored in the dataset, denoted as FV(b), are saved in a database so that they can be accessed more easily and quickly during the second step.

The second step, the similarity measurement, takes the outputs of the image classification step, namely the feature vector of the user query image and the feature vectors of all classified images stored in the dataset, and enters them as inputs to the Siamese model, as illustrated in Fig. 1.

    Figure 1: Training and testing the Siamese model for retrieval

From Fig. 1, we notice that during the training phase, the Siamese model feeds each of the input images' feature vectors into one of the twin Siamese networks. The model uses a max-pooling layer to reduce the spatial dimensionality of the tensors, followed by a flatten layer to convert the two-dimensional matrix into one vector. Then, a dense layer with the "Sigmoid" activation function predicts the similarities among images. Hence, the last layer employed by the model during training is the shared "Cross-entropy loss" layer, which includes only one neuron to classify the images as either similar, denoted by (1), or not similar, denoted by (0).
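The head described above can be sketched numerically. The snippet below is a minimal stand-in, not the paper's trained model: the twin networks share one weight matrix, each twin's output is max-pooled and flattened, and a dense sigmoid layer maps the absolute difference of the two embeddings to a single similarity score in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared twin-network weights: both branches use the same matrix.
W_shared = rng.normal(size=(16, 8))

def twin_forward(fmap):                       # fmap: a (4, 16) "feature map"
    h = np.maximum(fmap @ W_shared, 0.0)      # shared layer with ReLU: (4, 8)
    pooled = h.max(axis=0)                    # max-pooling over rows: (8,)
    return pooled.ravel()                     # flatten to one vector

w_out = rng.normal(size=8)                    # dense layer feeding the sigmoid

def similarity(fmap_a, fmap_b):
    diff = np.abs(twin_forward(fmap_a) - twin_forward(fmap_b))
    return 1.0 / (1.0 + np.exp(-(w_out @ diff)))   # sigmoid similarity score

a = rng.normal(size=(4, 16))
b = rng.normal(size=(4, 16))
score_same = similarity(a, a)                 # identical inputs: diff is zero
score_diff = similarity(a, b)
```

Note that with an untrained head, identical inputs yield sigmoid(0) = 0.5; training shifts the dense weights so that matching pairs score near 1 and mismatched pairs near 0.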

In the testing phase, the Siamese model re-produces the feature vectors of the images after minimizing the distances among their classified classes, and computes the distances between the feature vector of the user query image and the feature vectors of all other images in the dataset; the ranked similarity scores are then generated by applying the sigmoid function to the computed distances. The final outputs from the model are the ranked top-k images most similar to a user query image.
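The ranking step at test time amounts to scoring the query against every stored feature vector and keeping the k best. The following sketch (illustrative names, cosine similarity as the scoring function) shows that step in isolation:

```python
import numpy as np

def top_k_similar(query_fv, dataset_fvs, k=10):
    """Return the indices and scores of the k images most similar to the query."""
    # Cosine similarity between the query and every stored feature vector.
    q = query_fv / np.linalg.norm(query_fv)
    d = dataset_fvs / np.linalg.norm(dataset_fvs, axis=1, keepdims=True)
    scores = d @ q
    ranked = np.argsort(-scores)              # highest similarity first
    return ranked[:k], scores[ranked[:k]]

rng = np.random.default_rng(2)
db = rng.normal(size=(100, 32))               # 100 stored feature vectors
query = db[7] + 0.01 * rng.normal(size=32)    # near-duplicate of image 7
idx, sc = top_k_similar(query, db, k=5)       # image 7 should rank first
```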

    3.1.1 Mathematical Representation of the Siamese Model

The Siamese model doesn't learn any classification; instead, it learns to compute the similarity between classified images [17]. It consists of twin convolutional networks that share the same weights and structures and are joined through their calculated weights and the loss function computed by the last fully connected dense layer. The main goal of the loss function is to reduce the difference between the probability distribution of the true labels and that of the predicted labels. The minimization function presented in Eq. (1) by Lamba [17] ensures that classified images from the same class preserve analogous feature vectors.

where Ls is the loss function of the i-th feature in a dataset containing a total of N samples, i = 1, 2, 3, ..., N;

k is the class and C is the number of classes; yi is the true label in the set of true labels Y, yi ∈ {1, 2, 3, ..., C}, and tk(yi) is the distribution of yi;

P(yi = k | bi; wk) is the probability distribution of the predicted label; bi is the binary representation of the i-th feature, bi ∈ {-1, 1}; and wk is the weight of the classifier.
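Eq. (1) itself did not survive the text extraction. Based on the symbol definitions above, a weighted cross-entropy objective of the following form is a plausible reconstruction (the exact expression in Lamba [17] may differ):

```latex
L_s = -\sum_{i=1}^{N}\sum_{k=1}^{C} t_k(y_i)\,\log P\!\left(y_i = k \mid b_i; w_k\right) \tag{1}
```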

After computing the loss function, the Siamese model usually contains a purely static distance metric to compute the exact similarity score of the images. Hence, it subtracts the feature vector of the user query image from the feature vectors of all other images stored in the dataset. After calculating the difference between the two extracted feature vectors, it converts the computed difference into one single number using the "Sigmoid" activation function. The output of the final "Sigmoid" function presented in [18] is called the similarity score (SC).

where σ is the "Sigmoid" activation function and j refers to the number of neurons in a layer;

αj is the final computed difference between the two feature vectors produced by the twin Siamese neural networks, and L is the number of layers in each Siamese neural network, so L-1 is the layer before the last fully connected layer.
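The SC formula from [18] was likewise lost in extraction. A standard Siamese scoring function consistent with the symbols defined above (in the style of Koch et al.'s one-shot Siamese networks) would be:

```latex
SC = \sigma\!\left(\sum_{j} \alpha_j \left| h_{1,j}^{(L-1)} - h_{2,j}^{(L-1)} \right|\right)
```

where h with subscripts 1,j and 2,j denotes the j-th activation of layer L-1 in each of the two twins. This is an assumed reconstruction, not the paper's verbatim equation.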

    3.1.2 Siamese Model Development

There are two different ways to develop the Siamese deep learning model. The first is to build the model and train it from scratch, while the second is to develop the Siamese model using the structure and weights of a pre-trained deep learning model. Tab. 1 illustrates the layers, weights, and overall architecture of the Siamese model built from scratch.

Inspired by Singh [19], the second Siamese model is developed according to the structure of the pre-trained VGG19 deep learning model, as illustrated in Tab. 2.

    Table 1: The architecture of Siamese deep neural network

From Tabs. 1 and 2, we notice that the Siamese model accepts two input images, whereas all other classical deep learning models take only one input image.

    Table 2: The architecture of VGG19-Siamese deep learning model

The total number of trainable parameters within the Siamese model built from scratch equals 857,649, while it equals 20,229,485 in the Siamese model built from the structure of the VGG19 pre-trained deep learning model.

    3.2 Learning Strategy

The one-shot learning strategy is employed, which allows the model to predict the true labels after seeing only one example from each class. It is accomplished by defining two customized lists for the training subset, named images and external. The images training list takes one random image from each class and shows it to the model as an example of a similar, matching image. In contrast, the external training list shows the model one random image from a manuscript different from the user query image, as an example of non-similar images. This learning strategy saves training time, and it is useful when some of the dataset classes include only a few examples.
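The construction of the two lists can be sketched as follows. The data layout here is hypothetical (a small dict of manuscript ids to image names), but the logic mirrors the description: each anchor gets one positive pair from its own class and one negative pair from a different manuscript.

```python
import random

random.seed(0)

# Hypothetical layout: manuscript id -> list of image names.
dataset = {
    "ms_01": ["ms01_p1", "ms01_p2", "ms01_p3"],
    "ms_02": ["ms02_p1", "ms02_p2"],
    "ms_03": ["ms03_p1", "ms03_p2", "ms03_p3", "ms03_p4"],
}

images, external = [], []
for cls, imgs in dataset.items():
    anchor = random.choice(imgs)
    # `images` list: one random image from the same class -> similar pair (1)
    images.append((anchor, random.choice(imgs), 1))
    # `external` list: one random image from a different manuscript -> non-similar pair (0)
    other_cls = random.choice([c for c in dataset if c != cls])
    external.append((anchor, random.choice(dataset[other_cls]), 0))
```

One pair per class in each list is what makes this "one-shot": the model sees a single matching and a single non-matching example per class rather than all possible pairs.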

Moreover, the "weighted cross-entropy" algorithm is used to weight the classes containing the minimum number of images more heavily than the classes containing a larger number of images. This algorithm compensates for the imbalance within the dataset, because an unbalanced ratio of images may result in an accuracy-paradox problem where the generated results are biased and overfitted toward the manuscript containing the largest number of images [20].
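A common way to produce such class weights, shown below with illustrative counts (not the paper's dataset), is inverse-frequency weighting: each class weight is the total count divided by the number of classes times that class's count, so rare classes contribute more to the loss.

```python
import numpy as np

# Illustrative per-class image counts; class 3 is the rarest.
counts = np.array([500, 50, 200, 25])

# Inverse-frequency weights: rare classes get large weights.
weights = counts.sum() / (len(counts) * counts)

# Weighted cross-entropy for one sample of class c with predicted
# probability p for the true class:
c, p = 3, 0.8
loss = -weights[c] * np.log(p)
```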

    4 Experiments and Tests Results

In this section, we first clarify the settings for our experiments. Then, we experiment with image classification using the VGG19 deep learning model. Afterward, we experiment with similarity measurement using the Siamese model developed from scratch and compare it with the Siamese model developed using the weights and structure of the VGG19 pre-trained deep learning model. Moreover, we test three different distance metrics combined with the Siamese model to find the most accurate metric for calculating the similarity scores.

    4.1 Settings of the Experiments

The testing machine is an "ABS Battelbox" PC running the Ubuntu 16.04 operating system with an Intel Core i7-9700K 3.60 GHz (8-core) processor and an Nvidia GeForce RTX 2080. Regarding the software used to run the experiments, we utilized the Python (3.7.4) programming language with the Jupyter notebook web application interface.

After setting up the hardware and software for the experiments, we started by dividing the dataset used in the study by Khayyat et al. [21], which consists of (8638) historical Arabic manuscript images, into three main subsets: training, validation, and testing. 70% of the entire dataset is allocated for training, while the remaining 30% is divided equally between the validation and testing subsets. The model is initially trained on the 70% of the data for ten learning cycles and then tested on the 15% of the data unseen by the model to evaluate its performance.
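The 70% / 15% / 15% split can be reproduced with a shuffle and two cut points. The image ids below are placeholders standing in for the 8638 manuscript images:

```python
import random

random.seed(42)
image_ids = [f"img_{i:04d}" for i in range(8638)]   # placeholder ids
random.shuffle(image_ids)

n_train = int(0.70 * len(image_ids))                # 70% for training
n_val = int(0.15 * len(image_ids))                  # 15% for validation

train = image_ids[:n_train]
validation = image_ids[n_train:n_train + n_val]
test = image_ids[n_train + n_val:]                  # remaining ~15% for testing
```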

    4.2 Image Classification

    The main goal of the VGG19 deep learning model is to classify the images according to their predicted manuscript id.The recorded evaluation parameters of the classification process are summarized in Tab.3.

    Table 3: Evaluation of the image classification model

From Tab. 3 we notice that the generated evaluation parameters are all above 97%, which confirms the model's success in classifying the images. Tab. 4 illustrates the calculated precision, recall, and F-score per classified manuscript.

    Table 4: Evaluation parameters per classified manuscript


From Tab. 4 we notice that (38) manuscripts had 1.0000 for all their evaluation parameters, which means they were 100% successfully classified. Moreover, we notice that the lowest recorded F-score, which accounts for both precision and recall, was 85.11%, for the second manuscript. Hence, we confirm the effectiveness of the VGG19 deep learning model in classifying the images.

    4.3 Similarity Measurement and Image Retrieval

To measure the similarity of the classified images, we experimented with two developments of the Siamese model: one built from scratch, and the other built from the VGG19 pre-trained deep learning model. Both models used the Euclidean (L2) distance metric to measure the similarity scores. The accuracy, precision, recall, F-score, and mAP evaluation parameters, summarized in Tab. 5, were computed to assess the performance of the two models.

    Table 5: Evaluating the developed Siamese deep learning models

From Tab. 5 we notice that the resulting evaluation parameters were higher when we used the pre-trained deep learning model to build the Siamese deep neural network. Hence, we use the structure and weights of the VGG19 pre-trained deep learning model, initially trained on the "ImageNet" dataset images, to develop our Siamese network.

Having found that the Siamese deep learning model performs better when developed from the architecture of a pre-trained deep learning model, we aim to improve its performance further. Therefore, we experimented with using the Siamese model in conjunction with distance metrics other than the Euclidean metric. We tried both the Manhattan (L1) and the Cosine distance metrics and summarized the results in Tab. 6.
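For reference, the three distance metrics compared in Tab. 6 are written out below for two feature vectors a and b (a minimal sketch of the metrics themselves; the paper applies them inside the Siamese model):

```python
import numpy as np

def euclidean(a, b):
    """L2 distance: square root of the sum of squared differences."""
    return np.sqrt(np.sum((a - b) ** 2))

def manhattan(a, b):
    """L1 distance: sum of absolute differences."""
    return np.sum(np.abs(a - b))

def cosine_distance(a, b):
    """1 minus cosine similarity: zero for parallel vectors, regardless of magnitude."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])     # parallel to a, but twice as long
```

Note the design difference: the Cosine distance ignores vector magnitude and compares only direction, which can make it more robust when feature magnitudes vary across images.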

    Table 6: Siamese deep learning model combined with three distance metrics

From Tab. 6 we notice that the Manhattan distance metric generated 95% accuracy and the Euclidean distance metric generated around 97% accuracy, while the highest evaluation parameters were generated using the Cosine distance metric. Thereby, we use the Cosine distance metric to measure the similarity scores of the retrieved images. Fig. 2 shows the confusion matrix generated by the Cosine distance metric.

    Figure 2: Confusion matrix of cosine distance metric

The Siamese deep learning model addresses a binary classification problem that classifies the dataset images as either similar or non-similar. The confusion matrix in Fig. 2 therefore contains the true positives and false positives in the first row, and the false negatives and true negatives in the second row. Our goal is to have high true positives as well as high true negatives in order to claim that the model performs well in classifying the images.
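The evaluation parameters follow directly from the four cells of such a 2x2 matrix. The counts below are illustrative (only the 410 true positives are taken from the text; the other cells are made up to show the arithmetic):

```python
# Illustrative 2x2 confusion-matrix cells for a binary similar/non-similar task.
tp, fp = 410, 4     # first row: true positives, false positives
fn, tn = 2, 404     # second row: false negatives, true negatives

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)                       # of pairs called similar, how many were
recall = tp / (tp + fn)                          # of truly similar pairs, how many found
f_score = 2 * precision * recall / (precision + recall)
```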

From the confusion matrix generated using the Cosine distance metric, we notice that (410) images were classified as true positives, which is an excellent performance. At the same time, the model performs well on the false negatives, and this performance is reflected in the 99% accuracy. A clear diagonal containing the large counts appears in the confusion matrix, which confirms the effectiveness of the developed Siamese deep learning model, combined with the Cosine distance metric, in classifying the images. Thus, we decided to use the Cosine distance metric in the proposed image retrieval system, and we generated its Precision-Recall curve, as illustrated in Fig. 3.

    Figure 3: Precision-recall curve of the cosine distance metric

The precision-recall curve helps in evaluating binary classification problems; thus, it is a good assessment tool for our problem. The curve summarizes the probabilities generated at various discrimination threshold values.

From Fig. 3 we notice a steady improvement in the precision values as the recall values increase, which reflects the good learning ability of the model using the Cosine distance metric in conjunction with the Siamese deep learning model, reaching precision and recall values above 99%.

Since the precision-recall curve does not reflect the true negatives in our confusion matrices, it does not show the non-similar images that were classified correctly. Thereby, we also generated the Receiver Operating Characteristic (ROC) curve of the Cosine distance metric, as illustrated in Fig. 4.
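Both curves come from the same sweep: a discrimination threshold is moved across the similarity scores and a point is recorded at each step. The sketch below computes ROC points this way; the scores and labels are made up for illustration.

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """Return (FPR, TPR) points for each discrimination threshold."""
    points = []
    for t in thresholds:
        pred = scores >= t                    # predict "similar" above threshold
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        tpr = tp / np.sum(labels == 1)        # true-positive rate
        fpr = fp / np.sum(labels == 0)        # false-positive rate
        points.append((fpr, tpr))
    return points

scores = np.array([0.95, 0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 1, 0, 1, 0, 0, 0])
pts = roc_points(scores, labels, thresholds=[0.0, 0.5, 1.0])
```

At threshold 0 everything is predicted similar (point (1, 1)); at a threshold above all scores nothing is (point (0, 0)); the useful operating points lie in between.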

    Figure 4: ROC curve of the cosine distance metric

The orange dots within the ROC curve represent the probability values generated by the Siamese deep learning model using the Cosine distance metric, while the blue dashed line through the middle of the plot marks where the true positive rate equals the false positive rate. From Fig. 4 we notice that the true positive rates increase, reaching a rate close to (1). This result reflects the 99.37% of predictions classified as true positives. In other words, most of the dataset images were correctly classified as either similar or not similar, whereas only a few images were classified as false positives, meaning the model generates few false alarms.

However, training the Siamese model took many hours, and even after training, it consumed a long time to generate the output results. That is due to the Siamese nature of taking two input images, which greatly increases the effective dataset size (the number of image pairs grows quadratically with the number of images).

Therefore, we installed and activated both the CUDA toolkit and the cuDNN deep neural network library on our machine to run the code on the GPU instead of the CPU. Since the installed NVIDIA GPU on our machine is a "GeForce RTX 2080", it is compatible with CUDA driver (10.0.130) and cuDNN version (7). After compiling and executing the same code using the GPU, we reached good enhancements in the results, as illustrated in Tab. 7.

    Table 7: Comparison between the Siamese execution on the CPU and GPU

We notice from Tab. 7 that the overall training time for the ten learning cycles was around 12 h using the CPU, while it took half that time using the GPU. Thus, we can save a lot of time by utilizing a capable GPU. Regarding the retrieval time of the top-10 similar images for one query input image, there is a dramatic decrease when computing it on the GPU.

Considering that there is a tremendous number of methods used for image retrieval, ranging from classical static formulas to machine learning and evolving to deep learning methods, we compare in Tab. 8 our proposed method with the approaches most similar to ours.

    Table 8: Relative comparison with the state-of-the-art methods


The comparison in Tab. 8 covers the studies that utilized Siamese deep learning models as an image retrieval approach. We list the distance metric and the dataset used as primary comparison criteria. The comparison is relative because almost all of the studies listed in Tab. 8 used existing clear images, whereas we collected our dataset manually from text-based, faded-ink historical images. The highest recorded results are highlighted in bold.

From Tab. 8 we notice that the proposed method generates high image retrieval results without the need for any image segmentation process. That is due to the employed learning strategy, as well as the use of the pre-trained deep learning model to perform the classification, followed by minimizing the distances among the classified images using the Siamese model. In contrast, we notice that some of the other state-of-the-art methods required segmenting the images, built their models from scratch, or used less suitable distance metrics, which lowered their image retrieval performance. The research by Mehmood et al. [2] generated results close to ours. However, their result is slightly lower, which might be due to utilizing the VGG16 version of the VGG CNN, which includes fewer layers; additionally, they did not combine the Siamese activation function with any other static distance metric. Therefore, we conclude that we reached a novel approach that proves its success among existing methods used for image retrieval.

    5 Conclusion

This study aims to optimize the image classification and retrieval task, especially for historical low-quality text-based images. Thus, we experimented with developing and training the Siamese deep learning model from scratch and compared it with developing the Siamese model by leveraging the weights and structure of a pre-trained deep learning model. Moreover, we experimented with three different distance metrics, looking for the most accurate method working in conjunction with the final "Sigmoid" activation function included within the Siamese model to measure the similarity scores. The three distance metrics are Manhattan, Euclidean, and Cosine. To evaluate the proposed method, we computed the accuracy, precision, recall, F-score, and mAP.

After extensive experimentation, we found that building the Siamese model according to the architecture of the VGG19 pre-trained deep learning model performs better than building it from scratch. In addition, we concluded that the Cosine distance metric, combined with the "Sigmoid" activation function, was the most accurate method for computing the similarity scores.

Even though training the Siamese deep learning model can be time-consuming due to its nature of comparing pairs of images from the same dataset, we observed that the model retrieves similar images quickly once trained. Hence, the proposed model presents a successful solution for classifying and retrieving the images most similar to a user query image. The solution worked well with historical ancient text-based images; thus, we expect that employing the proposed model on modern clear dataset images will also generate successful results.

Funding Statement: The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4400271DSR01).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

熟女电影av网| 免费女性裸体啪啪无遮挡网站| 午夜免费鲁丝| 美女午夜性视频免费| 午夜精品久久久久久毛片777| 女性被躁到高潮视频| 亚洲久久久国产精品| 国产单亲对白刺激| 国产99白浆流出| av在线天堂中文字幕| 国产99久久九九免费精品| 国产欧美日韩一区二区三| 99久久综合精品五月天人人| 波多野结衣高清无吗| 国产精华一区二区三区| 日韩国内少妇激情av| 成人免费观看视频高清| 最新美女视频免费是黄的| 国产不卡一卡二| 三级毛片av免费| 亚洲精品中文字幕在线视频| av超薄肉色丝袜交足视频| 日韩大尺度精品在线看网址| 欧美在线黄色| 人人妻人人澡人人看| 国产亚洲av高清不卡| 少妇裸体淫交视频免费看高清 | 亚洲欧美一区二区三区黑人| 真人做人爱边吃奶动态| 国产精品免费一区二区三区在线| 中出人妻视频一区二区| 国产精品综合久久久久久久免费| 成人午夜高清在线视频 | 国产精品二区激情视频| 国产亚洲欧美98| 日本黄色视频三级网站网址| 午夜亚洲福利在线播放| 人人妻人人看人人澡| 中文字幕人妻丝袜一区二区| 国产精品 欧美亚洲| www日本黄色视频网| 午夜激情福利司机影院| 不卡一级毛片| 亚洲七黄色美女视频| 久久国产亚洲av麻豆专区| 久久久久久久久久黄片| 久久久久精品国产欧美久久久| 国产精品久久久av美女十八| 亚洲欧美激情综合另类| 亚洲专区中文字幕在线| 国产亚洲av嫩草精品影院| 两性午夜刺激爽爽歪歪视频在线观看 | 男男h啪啪无遮挡| 免费高清视频大片| 成人18禁高潮啪啪吃奶动态图| 久久精品夜夜夜夜夜久久蜜豆 | 女性生殖器流出的白浆| 日韩大尺度精品在线看网址| 亚洲男人天堂网一区| 久久午夜综合久久蜜桃| 在线观看一区二区三区| 首页视频小说图片口味搜索| 国内少妇人妻偷人精品xxx网站 | 曰老女人黄片| 制服丝袜大香蕉在线| 看黄色毛片网站| 久久久水蜜桃国产精品网| 国产aⅴ精品一区二区三区波| 国产精品香港三级国产av潘金莲| 亚洲成a人片在线一区二区| 国内少妇人妻偷人精品xxx网站 | 欧美成人午夜精品| 免费人成视频x8x8入口观看| 又黄又粗又硬又大视频| 亚洲av电影在线进入| 高潮久久久久久久久久久不卡| 亚洲激情在线av| 亚洲国产精品久久男人天堂| 国产又爽黄色视频| 成人国语在线视频| 天天躁夜夜躁狠狠躁躁| 国产欧美日韩一区二区精品| 夜夜看夜夜爽夜夜摸| 18美女黄网站色大片免费观看| 亚洲av五月六月丁香网| 国产高清有码在线观看视频 | x7x7x7水蜜桃| 亚洲全国av大片| av福利片在线| ponron亚洲| 国产精品美女特级片免费视频播放器 | 天堂动漫精品| 黄片播放在线免费| 宅男免费午夜| 亚洲天堂国产精品一区在线| 久久婷婷人人爽人人干人人爱| 欧美中文综合在线视频| av欧美777| 国产成人啪精品午夜网站| 久热这里只有精品99| 丰满人妻熟妇乱又伦精品不卡| 免费av毛片视频| 久久午夜综合久久蜜桃| 91麻豆精品激情在线观看国产| 黄色丝袜av网址大全| 久久精品国产清高在天天线| 国产人伦9x9x在线观看| 精品乱码久久久久久99久播| 欧美一级a爱片免费观看看 | 日本精品一区二区三区蜜桃| 亚洲在线自拍视频| 久久精品影院6| 99精品在免费线老司机午夜| 99在线人妻在线中文字幕| 夜夜躁狠狠躁天天躁| 一二三四社区在线视频社区8| 我的亚洲天堂| 国产黄a三级三级三级人| 男男h啪啪无遮挡| 国产精品自产拍在线观看55亚洲| 香蕉久久夜色| 18禁美女被吸乳视频| 757午夜福利合集在线观看| 人成视频在线观看免费观看| 亚洲精品一卡2卡三卡4卡5卡| 欧美成人一区二区免费高清观看 | 啪啪无遮挡十八禁网站| 国产精品乱码一区二三区的特点| 男女之事视频高清在线观看| 手机成人av网站| 亚洲五月天丁香| 亚洲 欧美 日韩 在线 免费| 国产欧美日韩一区二区精品| 男人舔奶头视频| 久久久久国产一级毛片高清牌| 色综合亚洲欧美另类图片|