
A Deep-CNN Crowd Counting Model for Enforcing Social Distancing during COVID19 Pandemic: Application to Saudi Arabia’s Public Places

Computers, Materials & Continua, 2021, Issue 2

Salma Kammoun Jarraya, Maha Hamdan Alotibi and Manar Salamah Ali

    1Department of Computer Science, FCIT, King Abdulaziz University, Jeddah, Saudi Arabia

    2MIRACL-Laboratory, Sfax University, Sfax, Tunisia

    3Department of Computer Science, King Khalid University, Abha, Saudi Arabia

Abstract: With the emergence of the COVID19 virus in late 2019 and the declaration that the virus is a worldwide pandemic, health organizations and governments have begun to implement severe health precautions to reduce the spread of the virus and preserve human lives. The enforcement of social distancing at work environments and public areas is one of these obligatory precautions, and crowd management is one of its effective measures: by reducing the social contacts of individuals, the spread of the disease is immensely reduced. In this paper, a model for crowd counting in public places of high and low densities is proposed. The model works under various scene conditions and with no prior knowledge. A deep CNN model (DCNN) is built on a convolutional neural network (CNN) structure with small kernel sizes and two parts: a CNN as the front-end and a multi-column layer with dilated convolutions as the back-end, chosen to increase the efficiency of the model. The proposed method also accepts images of arbitrary sizes/scales as inputs from different cameras. To evaluate the proposed model, a dataset was created from images of Saudi people in traditional and non-traditional Saudi outfits. The model was also trained and tested on some existing datasets. Compared to current counting methods, the results show that the proposed model significantly improves efficiency and reduces the error rate. We achieve MAE lower by 67%, 32%, and 15.63% and MSE lower by around 47%, 15%, and 8.1% than M-CNN, Cascaded-MTL, and CSRNet, respectively.

Keywords: CNN; crowd counting; COVID19

    1 Introduction

During pandemics such as the coronavirus disease (COVID19), social distancing is applied as a preventive precaution to preserve human lives and reduce the severity of disease spread. Globally, 18.5 million confirmed cases and over 700 thousand deaths had been reported at the time of writing. Social distancing is an effective non-medical intervention for preventing the transmission of diseases. It can be implemented through various rigorous measures such as banning travel, lockdowns, and closing public places. These aggressive measures have made a significant impact on economies around the world. Other less severe measures include warning people to keep a safe distance from one another and controlling crowds in public places. Enabling technologies and artificial intelligence techniques can play a significant role in implementing and enforcing these measures [1].

Crowd counting is a practical approach for crowd control management. Public places like malls and supermarkets can be monitored seamlessly through surveillance cameras and crowd counting software, whereby alerts are sent to organizers or visitors once a particular area reaches the maximum capacity allowed under the social-distancing quota.
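The alerting logic described above can be sketched in a few lines. This is a hypothetical illustration, not part of the paper's system: the quota here is derived from floor area and a minimum inter-person distance, both of which are assumptions for the example.

```python
# Hypothetical capacity-alert sketch: one person per square of side
# `min_distance_m` is assumed as the social-distancing quota.

def max_allowed(area_m2: float, min_distance_m: float = 2.0) -> int:
    """Approximate zone capacity under a minimum-distance rule."""
    return max(1, int(area_m2 // (min_distance_m ** 2)))

def check_zone(estimated_count: int, area_m2: float) -> str:
    """Compare the crowd-counting estimate against the zone quota."""
    quota = max_allowed(area_m2)
    return "ALERT" if estimated_count > quota else "OK"
```

For a 100 m² zone with a 2 m distance rule, the quota is 25 people, and any estimate above that triggers an alert.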

While a plethora of research has been conducted on crowd counting technologies, limited research has addressed crowds with unique and uncommon characteristics, such as Saudi people in Saudi public places. In Saudi public places, crowd counting faces several issues, such as non-uniform illumination, extreme clutter, occlusions, non-uniform spreading of people, perspective, and intra-scene and inter-scene differences in appearance and scale. These characteristics make optimization and unification extremely challenging. The wide range of crowd analysis applications, together with the complexity of the problem, has been guiding researchers in recent years to improve the efficiency of these techniques and devise new and effective solutions [2].

In this paper, we propose a novel method for crowd counting in high and low crowded public places under various scene conditions and without prior knowledge. The proposed method is based on a CNN model that counts people/visitors appearing in video frames of public places. The model accepts arbitrary image sizes and scales as inputs from a diversity of surveillance camera types. The cameras are connected to an IoT architecture that provides pictures from different public places.

The rest of the paper is organized as follows: an overview of crowd counting and density map generation is given in Section 2. The proposed model is presented in Section 3. Section 4 discusses the experimental results on several datasets. Finally, conclusions are drawn in Section 5.

    2 Background

Crowd counting methods can be grouped into four main classes: detection-based approaches [3-5], regression-based approaches [6-8], density estimation-based approaches [6,9-11], and CNN-based approaches [12-14].

Studies have shown that, regardless of the pros and cons of other methods, CNN-based regressors considerably outperform traditional crowd counting approaches. The local-feature representations of non-CNN-based approaches are not sufficient to reach a high level of performance and accuracy.

The success of CNNs on most challenging computer vision problems has triggered researchers to deeply investigate the nonlinear mappings from crowd images to their corresponding counts or density maps. Most CNN crowd counting techniques still require a fixed-size input image. This requirement is “artificial” and will probably degrade prediction precision. When CNNs were initially proposed, most related studies depended on image patches and required fixed-size images [15,16]. Since the quality of the resulting density maps is poor, newer approaches need to enhance them. Recently, multi-column techniques have been published, which provide better-quality density maps and improved performance [17,18]. However, as CNN complexity increases, such models suffer from a non-effective branch structure and prolonged training time.

Crowd counting tasks fall into two main categories. The first is region-of-interest (ROI) counting, where the chosen region of study may affect the accuracy and performance of the calculation. The second is line-of-interest (LOI) counting, which calculates the number of people crossing a chosen line. Since the proposed model will be used in public places, region-of-interest counting is adopted, based on a single image, to count both high and low crowded places.

    3 Related Work

In recent years, crowd analysis techniques have gained significant interest due to the cutting-edge achievements of convolutional neural network (CNN) models. CNN capabilities are used for learning the non-linear mappings from crowd images to their corresponding density maps.

The majority of CNN-based crowd counting models use fixed-size input images [16,19-20], and they suffer from low-quality density maps due to the structure of their networks. To solve the low-quality density map challenge, multi-column architectures have been introduced [17,21]. However, when deeper networks are applied, new challenges arise: a non-effective branch structure and an extended training time.

CNN network-property approaches are classified into three categories [2]. Early deep learning methods for assessing crowd density and counts used basic CNN models with plain network layers [14,19]. Scale-aware models used advanced CNN models that are robust to varying scales, such as multi-column architectures [22]. Finally, context-aware models integrated regional and global contextual information from images into the CNN framework to reduce estimation errors [22].

A multi-column architecture (MCNN) for images with arbitrary crowd densities and various perspectives was proposed in Zhang et al. [18]. To be robust to variation in object scale, the architecture combines large-, medium-, and small-sized networks that capture object representations at varied scales.

Training regressors with a multi-column network on every input patch was proposed as a crowd counting method in Zhang et al. [18]. Babu Sam et al. [17] argued that training specific collections of patches with varied crowd densities would significantly improve the performance of the model. They proposed a switching CNN that emulates the multi-column network by using multiple independent regressors with different receptive fields and a switching classifier that chooses an optimal regressor for a particular input patch.

A Contextual Pyramid model (CP-CNN) was proposed in Sindagi et al. [23]. The model explicitly joins local and global contextual information of crowd images to improve the quality of crowd density estimation. It consists of a Local Context Estimator (LCE), a Global Context Estimator (GCE), a Fusion-CNN (F-CNN), and a Density Map Estimator (DME). However, the model is complex and requires a long training time.

The CSRNet model in Li et al. [24] improved the quality of density maps with a deeper single-column network that builds on VGG-16 [25]. The first ten layers of VGG-16 are used without the fully connected layers. Dilated convolutional layers are used as the back-end to extract deeper saliency information without sacrificing the resolution of the output.

    4 System Model

The production of the new real-time crowd counting model in this work has gone through several stages, with several steps in each. In this section, we discuss the offline work stage, which generates a counting model based on a deep convolutional neural network (CNN). The general structure of the offline work consists of the following three steps:

    (1) Data acquisition and collection.

(2) Generate a fully convolutional deep CNN that can deal with the processed data. This is done by following the structure of the first 11 layers of VGG-19 (small filter sizes) and replacing the last convolutional layers with dilated convolutions and a multi-column back-end.

(3) Evaluate the model under different conditions, including different dilation rates, different crowd levels, and various image environments. The model is evaluated using the most popular crowd counting dataset, ShanghaiTech parts A and B, as well as the Saudi dataset. The results are compared with existing models.

    4.1 Data Collection and Preparation

As the first step, we record a new large-scale crowd counting dataset to provide the large amount of training data required by a CNN and to capture the unique and uncommon characteristics of Saudi Arabian public places. This dataset was prepared explicitly for Saudi people in public places. It consists of 673 frames with different head counts and different crowd levels. The counts range from 1 to 450, with an average of 80 people in view. The pictures were taken from different camera angles, which allows our system to recognize persons wherever the camera is placed. The frames are taken from different videos recorded at different places, such as malls, restaurants, events, walkways, and airports. In addition, a set of images was gathered from openly available websites and social media, such as Google and Instagram. These images cover different events, including concerts and stadiums, as shown in Fig. 1. Tab. 1 gives more details about the collected dataset. After collecting the videos, we extracted frames containing different numbers of people with different distributions. We then selected frames, labeled each person in the image, and generated the ground truth file. We label the heads in highly crowded images and the whole body in less crowded images.

Figure 1: Sample images from the Saudi dataset

In this work, the method of generating density maps used in Zhang et al. [18] and Li et al. [24] is applied. Highly congested crowd scenes are handled using geometry-adaptive kernels. Each head annotation is blurred using a Gaussian kernel normalized to 1. The ground truth is generated using the spatial distribution of all the images in each dataset. The geometry-adaptive kernel is defined as follows:

F(x) = Σ_{i=1}^{N} δ(x − xi) * Gσi(x), with σi = β·di (1)

Table 1: Saudi dataset description

For each targeted object xi in the ground truth, the average distance to its k nearest neighbors is indicated by di. To generate the density map, δ(x − xi) is convolved with a Gaussian kernel of standard deviation σi, where x is the position of a pixel in the image. In our experiments, the configuration of Sindagi et al. [12], with β = 0.3 and k = 3, is followed. To blur the annotations in sparse crowd images, the Gaussian kernel is adapted to the average head size.
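The geometry-adaptive ground-truth generation above can be sketched directly in NumPy/SciPy. This is an illustrative re-implementation under the paper's stated settings (β = 0.3, k = 3), not the authors' code; the fallback σ for a single annotation is an assumption.

```python
# Sketch of geometry-adaptive density-map generation: each head is a
# delta blurred by a Gaussian whose sigma is beta times the mean
# distance to its k nearest neighbors.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(shape, heads, beta=0.3, k=3):
    """heads: list of (row, col) annotations; returns an H x W map
    whose integral equals the head count (away from image borders)."""
    dm = np.zeros(shape, dtype=np.float64)
    heads = np.asarray(heads, dtype=np.float64)
    n = len(heads)
    for i, (r, c) in enumerate(heads):
        delta = np.zeros(shape)
        delta[int(r), int(c)] = 1.0
        if n > 1:
            # distances to the k nearest neighbors (excluding self)
            d = np.sort(np.hypot(*(heads - heads[i]).T))[1:k + 1]
            sigma = beta * d.mean()
        else:
            sigma = 4.0  # fallback for a lone annotation (assumption)
        dm += gaussian_filter(delta, sigma, mode="constant")
    return dm
```

Because each normalized Gaussian integrates to 1, summing the map recovers the head count, which is what makes density maps usable for counting.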

    4.2 Crowd Counting Model Generation Based on CNN

Our main contribution in this work is to generate an accurate people counting model from an arbitrary single image, with any crowd density and any camera perspective. This is a rather challenging task, considering that the dataset exhibits a significant variation in the scale of bodies across images, which requires employing features at different scales to count people in various images.

    The density of the crowd, as well as its distribution, are very significant in the selected datasets.Typically, there is substantial occlusion for most bodies in images.Hence, traditional methods such as detection-based methods do not perform well in such settings.Since there might be a notable difference in the scale of the object such as people or heads in the images, we need to use features at various scales collectively to estimate the crowd counts correctly in different images.

To overcome the above-mentioned challenges, we propose a novel framework based on a deep convolutional neural network (CNN) and density maps for crowd counting in a single image of unfixed resolution. The basic idea of the proposed network design is to deploy a deeper CNN for catching high-level features with larger receptive fields and produce high-quality density maps without drastically increasing network complexity.

To count the number of people in an input image via CNNs, we generate density maps of the crowds (from the input images) to estimate how many people exist per square meter. The rationale behind using density maps is that the method preserves more information than the total head count of the crowd. The density map also provides the spatial distribution pattern of the crowd in the given image. In addition, when training the density map through a CNN, the learned filters become more adaptive to heads of various sizes, and therefore more suitable for arbitrary inputs whose viewpoint effect varies notably. Moreover, the filters are more semantically meaningful and, as a result, improve the accuracy of counting.

We choose the structure of the first 11 layers of VGG-19 [25] as the front-end of the deep CNN because of its powerful transfer learning capability and its adaptable design, which smoothly concatenates with the back-end made for high-quality density map production.

The original VGG-19 is built as a single column consisting of convolutional layers (3 × 3 kernels only), max-pooling layers (2 × 2 only), and fully connected layers, resulting in 19 layers.

However, the lack of modifications results in poor performance. In CrowdNet [26], the authors directly carve the first 13 layers from VGG-16 and add a (1 × 1) convolutional layer as an output layer instead of a fully connected layer. As observed in the literature, some architectures use VGG-16, such as MCNN [18], which uses VGG-16 as a classifier of the density level for labeling input pictures before forwarding them to the most suitable column of the multi-column network.

In CP-CNN [4], VGG-16 acts as an ancillary network without boosting the final accuracy, since it combines the classification output with the features from the density-map generator column. In the proposed model, we first remove the fully connected layers of VGG-19 and consider the rest as the classification part. The DCNN is a fully convolutional network, so different image sizes can be used both in prediction and in training. The proposed deep CNN is then built with the convolutional layers and three pooling layers of VGG-19.

The front-end network output is 1/8 the size of the original input. Continuing to stack the basic components of VGG-19 (more convolutional and pooling layers) would further downsize the output and limit the production of high-quality density maps.

Alternatively, we deploy dilated convolutional layers, inspired by Yu et al. [27], as the back-end of our deep CNN to maintain the output resolution and extract deeper saliency information.

One of the key aspects and significant components of our back-end deep CNN is the 2-D dilated convolutional layer, which can be described as follows:

f(n, m) = Σ_i Σ_j y(n + D·i, m + D·j) · w(i, j) (2)

where f(n, m) is the output of the dilated convolution of the input y(n, m) with a filter w(i, j), and D is the dilation rate; a normal convolution corresponds to D = 1. Dilated convolutional layers are an excellent alternative to pooling layers and have shown notable accuracy improvements in segmentation [28-31].
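The dilated convolution above can be rendered directly in NumPy (valid padding only, for illustration). With D = 1 it reduces to an ordinary convolution; with D = 2 a 3 × 3 kernel covers a 5 × 5 receptive field while keeping only 9 weights.

```python
# Direct NumPy rendering of 2-D dilated convolution:
# f(n, m) = sum_i sum_j y(n + D*i, m + D*j) * w(i, j)
import numpy as np

def dilated_conv2d(y, w, D=1):
    """Valid-mode dilated convolution of image y with filter w."""
    kh, kw = w.shape
    H = y.shape[0] - D * (kh - 1)
    W = y.shape[1] - D * (kw - 1)
    f = np.zeros((H, W))
    for i in range(kh):
        for j in range(kw):
            # each kernel tap samples the input at stride D
            f += w[i, j] * y[D * i:D * i + H, D * j:D * j + W]
    return f
```

This is why the back-end can enlarge receptive fields without pooling: the sampling grid spreads out with D while the parameter count stays fixed.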

    4.2.1 Network Configuration

Based on the improvements applied to the original VGG-19, we suggest three network configurations of the deep CNN. The configurations in Fig. 2 share the same front-end organization but use different dilation rates in the back-end.

Figure 2: The structure of the proposed counting network DCNN

There are two primary parts in the proposed CNN model. The first part, the front-end, comprises the adapted VGG-19 CNN (without fully connected layers) [32,33] with 3 × 3 kernels for 2-D feature extraction. The second part is the back-end, a dilated CNN (DCNN) with more layers, where dilated kernels deliver larger receptive fields and replace pooling operations. We use three network configurations with different dilation rates in the back-end but the same front-end arrangement. According to Zhang et al. [22], for receptive fields of the same size it is more efficient to use smaller kernels with more convolutional layers than bigger kernels with fewer layers. The primary consideration is to balance the need for accuracy against the resources involved, such as the number of parameters, the training time, and memory consumption.

Based on our intensive experiments, the optimal arrangement uses the first eleven layers of VGG-19 with three rather than five pooling layers, so that the adverse impact of pooling operations on output accuracy is reduced [22]. The same front-end structure is maintained while training proceeds from the eighth layer to the end of the network. Padding is employed to keep all convolutional layers at their prior size. The parameters of the convolutional layers are denoted by (conv kernel size), (dilation rate), and (number of filters), and the max-pooling layers are performed on a 2 × 2-pixel window with stride 2. Fig. 2 shows a visual overview of the proposed counting network DCNN.
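The described architecture can be sketched in PyTorch. This is an illustrative reconstruction, not the authors' code: the front-end channel widths follow VGG-19's early blocks, only three 2 × 2 max-pools are kept, the back-end uses 3 × 3 layers with dilation rate 2 (the back-end channel widths are assumptions), and a 1 × 1 layer emits the density map at 1/8 input resolution.

```python
# Sketch of the DCNN layout: VGG-19-style front-end (11 conv layers,
# 3 pools) + dilated back-end + 1x1 density-map output.
import torch
import torch.nn as nn

def conv_block(cfg, in_ch, dilation=1):
    """Build conv/pool stack; 'M' = 2x2 max-pool, int = out channels."""
    layers, ch = [], in_ch
    for v in cfg:
        if v == "M":
            layers.append(nn.MaxPool2d(2, 2))
        else:
            # padding == dilation keeps spatial size for 3x3 kernels
            layers += [nn.Conv2d(ch, v, 3, padding=dilation,
                                 dilation=dilation),
                       nn.ReLU(inplace=True)]
            ch = v
    return nn.Sequential(*layers), ch

class DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # First 11 conv layers of VGG-19, keeping only three pools.
        front = [64, 64, "M", 128, 128, "M",
                 256, 256, 256, 256, "M", 512, 512, 512]
        self.frontend, ch = conv_block(front, 3)
        # Dilated back-end (channel widths are assumptions).
        self.backend, ch = conv_block([512, 512, 256, 128, 64], ch,
                                      dilation=2)
        self.output = nn.Conv2d(ch, 1, 1)  # 1x1 -> density map

    def forward(self, x):
        return self.output(self.backend(self.frontend(x)))
```

With three pools the output is 1/8 of the input in each dimension, matching the front-end description above.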

    4.2.2 Training Details

A direct approach is used for training the DCNN as an end-to-end structure. The first 11 convolutional layers are initialized from a fine-tuned, well-trained VGG-19 [25]. A 5-fold cross-validation is performed following the standard setting in Li et al. [24]. The initial values of the remaining layers are drawn from a Gaussian initialization with a standard deviation of 0.01. During training, stochastic gradient descent (SGD) is used at a constant learning rate of 1e-6. Consistent with [3,19,22], the Euclidean distance is used for calculating the difference between the ground truth and the estimated density map. The loss function is given below:

L(Θ) = (1/2N) Σ_{i=1}^{N} ||Z(Xi; Θ) − Zi^GT||² (3)

where N is the training batch size and Z(Xi; Θ) is the output generated by the DCNN with parameters Θ. Xi is the input image and Zi^GT is its ground-truth density map.
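The training setup above can be sketched as a minimal PyTorch step: SGD at the stated fixed learning rate of 1e-6 and the pixel-wise Euclidean loss between predicted and ground-truth density maps. `model` stands for any network mapping images to density maps; this is an assumption for the example, not the authors' code.

```python
# Minimal training-step sketch for the described loss and optimizer.
import torch
import torch.nn as nn

def euclidean_loss(pred, gt):
    """L(Theta) = 1/(2N) * sum_i ||Z(X_i; Theta) - Z_i^GT||_2^2"""
    n = pred.shape[0]
    return ((pred - gt) ** 2).sum() / (2 * n)

def train_step(model, images, gt_maps, optimizer):
    optimizer.zero_grad()
    loss = euclidean_loss(model(images), gt_maps)
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
```

Dividing by 2N (rather than the pixel count) keeps the loss per image, which is the convention used by the cited density-map methods.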

Subsequently, in the data augmentation step (see Fig. 3), the images are cropped at various places, producing nine patches at a quarter of the initial image size. The first four patches are the four non-overlapping quarters of the input image, and the remaining five patches are cropped at random from the image. The patches are then mirrored, doubling the training set.
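The nine-patch scheme above is straightforward to sketch: four fixed quadrant crops, five random crops of the same size, and a horizontal mirror of each, for 18 patches per image. This is an illustrative sketch of the described procedure, not the authors' code.

```python
# Sketch of the augmentation: 4 quadrants + 5 random quarter-size
# crops, each also mirrored horizontally (18 patches total).
import numpy as np

def nine_patches(img, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    H, W = img.shape[:2]
    h, w = H // 2, W // 2
    # the four non-overlapping quadrants
    crops = [img[r:r + h, c:c + w]
             for r, c in [(0, 0), (0, w), (h, 0), (h, w)]]
    # five random quarter-size crops
    for _ in range(5):
        r = rng.integers(0, H - h + 1)
        c = rng.integers(0, W - w + 1)
        crops.append(img[r:r + h, c:c + w])
    # mirror every patch to double the set
    return crops + [p[:, ::-1] for p in crops]
```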

Figure 3: Sample from the data augmentation step

    4.3 Model Evaluation

The evaluation of the proposed model is based on the Mean Absolute Error (MAE) and the Mean Squared Error (MSE), which are defined by Eqs. (4) and (5).

MAE = (1/N) Σ_{i=1}^{N} |Ci − Ci^GT| (4)

MSE = sqrt((1/N) Σ_{i=1}^{N} (Ci − Ci^GT)²) (5)

where N is the number of images in one test sequence and Ci^GT is the ground-truth count. Ci represents the estimated count, which is defined as follows:

Ci = Σ_{l=1}^{L} Σ_{w=1}^{W} z_{l,w} (6)

Here L and W are the length and width of the density map, respectively, and z_{l,w} is the pixel at (l, w) of the generated density map. Ci is the estimated count for image Xi. Roughly, MAE indicates the accuracy of the estimates, and MSE indicates the robustness of the estimates.
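The metrics above are one-liners in NumPy; writing them out makes the count-from-density-map convention explicit.

```python
# Evaluation metrics: MAE, (root) MSE over counts, and the estimated
# count as the integral of the density map.
import numpy as np

def mae(est, gt):
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    return np.abs(est - gt).mean()

def mse(est, gt):
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    return np.sqrt(((est - gt) ** 2).mean())

def estimated_count(density_map):
    """C_i: sum of all pixels of the generated density map."""
    return float(np.asarray(density_map).sum())
```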

In the next subsections, we present the results of the proposed model on the Saudi dataset and on the most popular and challenging crowd dataset, ShanghaiTech A & B.

    4.3.1 Results on Saudi Dataset

The dataset consists of 673 images collected specifically for Saudi people in public places, with different head counts and different crowding levels. We use 450 images for training and 217 for testing. The evaluation on the dataset is as follows:

Before choosing the final structure of the DCNN, different arrangements for the front-end and the back-end of the improved VGG have been tested. Tab. 2 shows the evaluation of the different structures. The most critical part is the tradeoff between accuracy and resource overhead (including training time, memory consumption, and the number of parameters). The experiments show that the best tradeoff is achieved when keeping the first eleven layers of VGG-19 with only three pooling layers instead of five, to suppress the detrimental effects of the pooling operation on output accuracy. We also tested the effect of dilation on our model by using dilation rates from 1 to 4 and found that the best dilation rate is 2, on VGG-19 case F.

Table 2: Evaluation of different DCNN structures

Discussion: We evaluate our dataset per category on the DCNN network; the results are shown in Tab. 2. Five random frames are selected from each category, and we found that outdoor night images have the highest error rate, at 1.5 MSE, while indoor low-crowd images achieve the lowest error rate of 0.3. This is due to several reasons. First, we had more frames from the indoor category than from the outdoor category during network training. Also, the outdoor frames are more challenging than the indoor frames. In Tab. 3, we present the experimental results and a comparison of three state-of-the-art open-source models against our DCNN on the Saudi dataset. Our proposed model achieves MAE lower by 67%, 32%, and 15.63% and MSE lower by around 47%, 15%, and 8.1% than M-CNN, Cascaded-MTL, and CSRNet, respectively. Due to the uncommon and special nature of Saudi society, the state-of-the-art models may require special features to be added to the global features. We trained our data for crowd counting while accepting color images of unfixed resolution. The M-CNN training code was used as a base, and the findings further support that M-CNN has a non-effective branch structure and that Cascaded-MTL has a complicated structure. Fig. 4 presents the density maps produced by our method: the 1st column displays testing samples from the Saudi dataset, the 2nd column displays the ground truth for the samples, and the 3rd column shows the produced density maps.

Table 3: Estimation errors on the Saudi dataset

Figure 4: Density maps

4.3.2 Results on the ShanghaiTech Dataset

The ShanghaiTech dataset consists of 1198 annotated images containing a total of 330,165 persons [23]. This public dataset has two parts, A and B, whose results are shown in Tabs. 4 and 5, respectively.

Table 4: Approximation errors on the ShanghaiTech dataset, part A

Table 5: Approximation errors on the ShanghaiTech dataset, part B

Discussion: Our DCNN model has been evaluated and compared to seven other existing related works. The results indicate that our model has achieved the following:

• 63% lower MAE in Part A compared to Cross-scene.

• 2.5% lower MAE than the recent CSRNet.

• The lowest MAE (highest accuracy) in Part A compared to the other models.

• 54% lower MAE in Part B.

The results indicate that our method performs well not only on counting tasks for extremely dense crowds but also on relatively sparse scenes.

    5 Conclusion

Social distancing has proved to be a significant measure in reducing infection rates during the COVID19 pandemic. Artificial intelligence technology plays a vital role in encouraging, or even enforcing, social distancing practices. One such practice is controlling crowded gatherings in public places. For this reason, there is a need for systems with high accuracy and remarkable performance to detect dense crowds.

In this paper, we provide an accurate people counting model from an arbitrary single image, with any crowd density and any camera perspective. A novel deep CNN, called DCNN, has been proposed. The proposed model follows the structure of VGG-19 with small convolution kernel sizes and takes any image resolution as input. Also, a new large-scale crowd counting dataset for Saudi public areas has been created and used to train the model. The results indicate that our proposed model achieves a lower MAE compared to state-of-the-art methods. More specifically, we achieve MAE lower by 67%, 32%, and 15.63% and MSE lower by around 47%, 15%, and 8.1% than M-CNN, Cascaded-MTL, and CSRNet, respectively.


Funding Statement: This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, Saudi Arabia, under grant No. (DF-352-165-1441). The authors, therefore, gratefully acknowledge DSR's technical and financial support.

Conflict of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
