
    Automatic Unusual Activities Recognition Using Deep Learning in Academia

    Computers, Materials & Continua, 2022, Issue 1

    Muhammad Ramzan, Adnan Abid and Shahid Mahmood Awan

    1 School of Systems and Technology, University of Management and Technology, Lahore, 54782, Pakistan

    2 Department of Computer Science and Information Technology, University of Sargodha, Sargodha, 40100, Pakistan

    3 School of Electronics, Computing and Mathematics, University of Derby, Derby, United Kingdom

    Abstract: In the current era, automatic surveillance has become an active research problem due to its vast real-world applications, particularly for maintaining law and order. Continuous manual monitoring of human activities is a tedious task. The use of cameras and the automatic detection of unusual surveillance activity have been growing exponentially over the last few years. Various computer vision techniques have been applied for the observation and surveillance of real-world activities. This research study focuses on detecting and recognizing unusual activities in academic situations such as examination halls, which may help invigilators observe and restrict students from cheating or using unfair means. To the best of our knowledge, this is the first research work in this area that develops a dataset for unusual activities in examinations and proposes a deep learning model to detect those unusual activities. The proposed model, named Automatic Unusual Activity Recognition (AUAR), employs a motion-based frame extraction approach to extract key-frames and then applies an advanced deep learning Convolutional Neural Network algorithm with diverse configurations. The evaluation using standard performance measures confirms that the AUAR model outperforms previously proposed approaches for unusual activity recognition. Apart from evaluating the proposed model on the examination dataset, we also apply AUAR to the Violent and Movies datasets, widely used in the relevant literature to detect suspicious activities. The results reveal that AUAR performs well on these datasets compared to existing state-of-the-art models.

    Keywords: Deep learning; unusual activities; examination; CNN; surveillance; human activity recognition

    1 Introduction

    Traditional surveillance requires manual observation to identify unusual activities, which is a tedious and error-prone task. The use of cameras for surveillance is growing exponentially, and surveillance cameras capture a huge volume of video data. Observing human behavior and categorizing actions is highly subjective and depends on the situation. On this basis, an observed activity can be classified as normal or abnormal/unusual. Normal activities are the usual or daily activities performed by humans, such as hand waving, eating, sitting, standing, and walking. Unusual activities differ from normal routine activities and vary with the specific situation; they are also known as suspicious activities. A lot of work has been done on the detection of suspicious activities in different situations, such as observing an abandoned object, theft, health monitoring of patients at the hospital and at home (e.g., falls) [1], road accidents [2], traffic rule violations [3], and driver drowsiness [4,5]. Nowadays, terrorist activities [6] occur in crowded and sensitive places such as religious sites (mosques, churches), educational institutes, airports, bus stations, government buildings, and shopping malls. Terrorists target such places, so detecting suspicious activities or an orphan suitcase around such places has gained utmost importance in the current era; such events can be automatically classified as suspicious activity.

    Human activity detection is an important research area in image processing and video analysis [7]. Human activity recognition from still images or video sequences is a challenging task due to various reasons such as deformation, viewpoint variation, illumination changes, background clutter, partial occlusion, and scale variation. Vision-based activity detection systems generally consist of stages such as video/image preprocessing, key-frame extraction, feature extraction, classification, and activity detection.

    In particular, the traditional method of invigilation during an examination requires manual observation of students to detect unfair means. An invigilator cannot monitor all the students and may lose attention over time, allowing pupils to engage in cheating activities [8]. Thus, there is a need for automated and intelligent video-based suspicious activity detection systems that may help analyze, detect, and minimize unwanted acts resulting in unfair means. However, little work has been done on the automatic detection of suspicious/unusual activities for invigilation during an academic examination, and the existing work is limited to a few activities and uses handcrafted features and hardcoded algorithms for detection [9].

    Deep learning-based human activity recognition (HAR) is an active research area that plays a vital role in monitoring people's daily behavior and recognizing activities in crowded scenes and critical regions through video surveillance. The significant benefit of deep learning is its ability to perform automatic feature extraction and learning compared to conventional vision-based methods. The strength of deep learning models makes it possible to perform automatic high-level feature extraction and representation learning, thus achieving high performance in many areas. Deep learning based on Convolutional Neural Networks (CNNs) has been widely adopted for video-based human physical activity recognition. This research automatically analyzes and detects cheating activities during examinations from videos using deep learning techniques.

    This research presents a deep learning-based model named Automated Unusual Activity Recognition (AUAR) to detect unusual activities, including cheating and malpractice, during examinations. The proposed system extracts key-frames based on human motion from a video sequence/stream; 2D and 3D CNN deep learning models are then used for the classification task to detect suspicious activities. Furthermore, we have also created a dataset for unusual activity recognition during examinations. Thus, the main research contributions of this paper are as follows:

    · A dataset has been created for unusual activity detection during examinations. For dataset processing, data labelling has been carried out by expert annotators. The dataset is freely available for research purposes.

    · We propose to utilize a motion-based Key-frame extraction method to extract only salient frames from a video sequence.

    · We propose to utilize 2D and 3D CNN architectures to detect suspicious activities.

    · The evaluation using standard measures proves that the proposed AUAR model outperforms existing approaches.

    The rest of the paper is organized as follows: Section 2 reviews the related work. Section 3 explains the proposed suspicious activity detection system. Section 4 discusses the empirical results, and Section 5 provides the conclusion and future work.

    2 Related Work

    Unusual human activity detection is an important research area in the field of image processing and video analysis. Tracking and understanding objects' behavior through videos has been a research focus for an extended period due to its essential role in human-computer interaction and surveillance. Various algorithms and approaches have been used to detect suspicious objects in public and crowded places over the last decade. Many researchers have explored the activity recognition problem in different domains. Two primary activity recognition approaches have been discussed so far: vision-based [10] and sensor-based activity recognition.

    The advancement of image representation approaches and classification methods in the vision-based activity recognition literature follows the research trajectory of local, global, and depth-based activity representation methods. Other approaches discussed in the literature for human activity detection can be categorized as video-based [11], fuzzy-based [12], trajectory-based [13], hierarchy-based [14], data mining-based, and color histogram-based suspicious movement detection and tracking [15]. The unusual activity detection process is typically composed of four steps: scene segmentation, feature extraction, monitoring, and human behavior detection from the video streams.

    The vision-based activity recognition literature follows the research trajectory of local, global, and depth-based activity representation approaches. Wang et al. worked on patient condition recognition, elderly care, and human fall detection (in hospitals and at homes) using surveillance video based on PCANet. Babiker et al. [15] present a human activity recognition system based on feature extraction analysis methods. The authors use two types of feature extraction approaches, the Harris corner detector and blob analysis features. A multi-layer perceptron feed-forward neural network is used as a classifier for human activity recognition on the KTH and Weizmann datasets. In [16], the frame deviation method is used to extract key-frames, and a Random Forest algorithm is used for classification. Wiliam et al. [17] proposed a contextual-information-based automatic suspicious behavior detection system. An inference algorithm is used for decision making by combining information about the context with learned system knowledge to decide whether a behavior is suspicious or not. The proposed approach is tested on the CAVIAR and Z-Block datasets. Roy and Om [18] work on suspicious and violent activity detection using the HOG feature extractor and an SVM classifier. The trained SVM classifier classifies activities such as kicking, punching, and fighting as violent or non-violent. Other main approaches discussed recently for human activity detection are fuzzy-based [19], trajectory-based [20], and hierarchy-based [21].

    There are very few articles in the literature that address detecting suspicious activity during examinations through video datasets to facilitate invigilators in the efficient conduct of exams. The authors in [22] provide a framework to monitor student activities during examinations by detecting the face region using Haar features, detecting hand contact and hand signalling as cheating activities, and raising an alert. Another work addresses the detection of suspicious activity during offline academic examinations. That work is divided into three modules: impersonation checks using a PCA-based face recognition method, detecting facial malpractices in which students get involved in a conversation with one another, and identifying illegal materials or gadgets.

    Recent years have shown significant development in the field of deep learning. Deep learning achieves excellent performance and recognition accuracy in various areas such as pattern recognition, image/object recognition, natural language processing, and speech recognition. A potential advantage of deep learning models over vision-based methods is their ability to perform automatic feature extraction and to learn from examples. Computer vision-based methods involve handcrafted low-level features (e.g., colour, edges, corners, contrast) for classification. In contrast, deep learning abstracts high-level features (e.g., shapes, contours, depth information) from low-level ones, thus achieving high accuracy for classification tasks. The recent advancement in deep learning makes it possible to recognize an activity through video surveillance. Hassan et al. [23] proposed a smartphone-based HAR approach with inertial sensors. The authors use triaxial accelerometer and gyroscope sensors for efficient feature extraction, and a Deep Belief Network (DBN) is used for the classification task to recognize human physical activity. The experiments were performed with ANN, SVM, and DBN classifiers and showed 89.06%, 94.12%, and 95.85% accuracy, respectively. Sabokrou et al. [24] propose the detection and localization of anomalies in crowded scenes in video datasets. The authors use cubic patch-based methods and a cascade of classifiers. These classifiers work in two steps: a deep 3D stacked auto-encoder for identifying normal cubic patches, followed by a deeper 3D CNN. The authors compare the proposed method's performance with other researchers' work on the UCSD and UMN benchmark datasets. Wang et al. [25] use the Temporal Segment Network for action recognition from videos with limited training data. This approach was tested on the HMDB51 and UCF101 datasets and obtained 69.4% and 94.2% accuracy, respectively. Ramachandran et al. present a framework for unusual human activity detection, tracking, and feature extraction using CNN. The extracted features are then fed into a Multiclass Support Vector Machine (MSVM) for classification and detection of suspicious activities. The experiments were performed on a standard dataset and achieved 95% accuracy. Jalal et al. proposed a method to recognize human interaction in an outdoor environment using a multi-feature algorithm with CNN. The proposed method is evaluated on the BIT-Interaction dataset and recognizes eight complex activities. The experimental results show 84.63% recognition accuracy.

    Computer vision and deep learning-based HAR is an active research area that plays an essential role in monitoring people's daily behavior and recognizing activities in crowded scenes and critical regions through video surveillance [26]. However, little work has been done to detect suspicious activities during an examination; existing work is limited to a few activities and uses hardcoded algorithms for detection. Previous work in this domain only involves computer vision-based, handcrafted features and hard-coded algorithms for detecting each category of unusual activity, with no machine learning involved in classification and detection [27]. Senthilkumar et al. [28] aimed to establish a system for evaluating and identifying suspicious behavior in a classroom setting. The system consists of three parts to monitor students' actions during an exam: first, the student's face region is identified; second, the student's hand contact is detected; and third, the student's hand signals are detected. Tab. 1 shows the existing techniques related to unusual activity detection.

    Table 1:Unusual activity detection techniques

    3 Proposed Research Methodology

    The proposed method uses deep learning to classify key-frames of a video sequence into normal and unusual activities. Fig. 1 shows the comprehensive framework of the proposed system and the steps of the proposed research method.

    3.1 Datasets

    In this research, three datasets have been used for empirical analysis. The first is our own dataset, Examination Unusual Activity (EUA), and the other two are standard published datasets, Violent Flows ("Crowd Violence/Non-violence Database," https://www.openu.ac.il/home/hassner/data/violentflows/) and Movies (Movie and Hockey datasets, https://figshare.com/articles/figure/_Movie_and_Hockey_datasets_/1375015).

    Figure 1:The framework diagram of the proposed research approach

    3.1.1 EUA Dataset

    There is no standard dataset available in the domain of academic examination invigilation for suspicious activity detection. To address this issue, we have developed our own dataset to evaluate the proposed AUAR system, which is proposed for unusual activity recognition in academic settings. The EUA dataset has been created to classify activities into normal and abnormal. The suspicious activity of cheating has been detected using three activities: head movement, object passing, and signalling.

    For dataset preparation, the videos were captured with the help of university students studying in the Computer Science & Information Technology Department, University of Sargodha, Pakistan. The videos were acquired with a DSLR camera at a resolution of 20.1 megapixels, 29 frames per second, and a frame size of 1440 × 1080. All video clips are preprocessed and saved in mp4 format. Each category has 100+ video clips, and the dataset contains a total of 510 videos.

    There is vast variability in the dataset, as multiple students conduct the activities and multiple camera perspectives document them. Events are captured from a fixed point with a static background. The next step in preparing the dataset is its validation. Quality metrics such as variance, lightness, hue, saturation, and quantity were defined for this dataset. Fig. 2 shows a few sample frames/images for the three activities of the EUA dataset. The frequency of the different categories of activities in the prepared dataset is presented in Tab. 2. The table shows that the dataset comprises 550 videos, with three categories of unusual activities and one category of usual activity videos. Tab. 3 presents the characteristics of the DSLR camera used for recording these videos.

    Figure 2:Sample frames from EUA dataset

    Table 2:Statistics of the EUA dataset

    Table 3:Characteristics of DSLR camera

    3.1.2 Violent and Movies Dataset

    Two other benchmark datasets, the Movies dataset and the Violent Flows dataset, have also been used in this research. These datasets have been included because they have been widely used in similar articles addressing the unusual activity recognition problem. The Hockey dataset was used to evaluate methods for classifying videos of fights between two (or a few) people as violent or non-violent. The collection includes 1,000 clips divided into five parts, each with 100 violent and non-violent scenes. The Violent Flows dataset consists of 246 real-life videos in which both violent and non-violent scenes are included. The purpose of including these datasets in this research is to evaluate the proposed model's effectiveness on a variety of datasets and to gauge its general applicability.

    3.2 Video Preprocessing

    After the video capturing process, long-duration videos are converted into short clips of three-second duration each, according to the classes of unusual activities, and every video is converted into .mp4 format. For video preprocessing, a Gaussian filter is used for noise removal, and histogram equalization is performed on the frames of each video. After preprocessing, the extracted frames are resized to 128 × 128.
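    The following is a minimal sketch of this preprocessing step using OpenCV; the Gaussian kernel size and the choice of equalizing only the luminance channel are illustrative assumptions, as the paper does not specify them.

```python
import cv2

def preprocess_frame(frame, size=(128, 128)):
    """Denoise, equalize, and resize one video frame (illustrative sketch)."""
    # Gaussian filter for noise removal (5x5 kernel is an assumption)
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    # Histogram equalization applied to the luminance channel (one common choice)
    ycrcb = cv2.cvtColor(blurred, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    # Resize to the 128 x 128 input size used by the CNN models
    return cv2.resize(equalized, size)
```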

    3.3 Key Frame Extraction

    To detect unusual activity from a video sequence, key-frames that contain the unusual series of actions need to be extracted. In this dataset, each video consists of a sequence of frames at a rate of 30 fps (http://www.imctv.com/pdf/ipcamera/IP_Surveillance_Design_Guide.pdf). Consecutive frames are very similar to each other, so the information in a video sequence is highly redundant, and only a few frames that contain meaningful information are required. These frames are usually called key-frames. Several techniques exist for key-frame extraction, such as colour histogram, histogram difference, frame difference, correlation, and entropy difference. In this research work, we apply a motion-based key-frame extraction method.

    For key-frame extraction, all the videos are first down-sampled by selecting a skipping factor of three for consecutive frames. The skipping factor helps to eliminate redundant frames. We then apply a motion-based key-frame extraction method [38] in which we take the pixel-wise absolute difference between two consecutive frames:

    Diff_i = |Cf_{i+1} − Pf_i|    (1)

    where Pf_i represents the previous frame and Cf_{i+1} the current frame. We then compute the average difference, AvgDiff, of the matrix obtained in Eq. (1). The threshold value T is calculated as

    T = mean of absolute difference + standard deviation of absolute difference    (2)

    If AvgDiff exceeds the pre-defined threshold T, the current frame is selected as a key-frame; otherwise it is skipped.

    We update the frames as prev_frame = curr_frame and repeat the whole process. Our key-frame extraction algorithm extracts 11,500 frames for frame-level classification, and sequences of 20 frames from the 550 video clips are obtained for video-level classification. Some sample key-frames extracted for the four classes of the EUA dataset are shown in Fig. 3.

    Figure 3:Key-frames (a) head_move (b) signal (c) object_pass (d) normal
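    Below is a minimal sketch of this motion-based key-frame selection, assuming OpenCV and NumPy. Computing T from the per-video mean and standard deviation of the frame differences is our interpretation of Eqs. (1) and (2), and the function and variable names are illustrative.

```python
import cv2
import numpy as np

def extract_key_frames(video_path, skip=3, size=(128, 128)):
    """Two-pass motion-based key-frame selection (illustrative sketch)."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % skip == 0:  # down-sample with a skipping factor of three
            gray = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
            frames.append(gray)
        idx += 1
    cap.release()

    # Mean absolute difference between each pair of consecutive frames, Eq. (1)
    avg_diffs = [cv2.absdiff(curr, prev).mean()
                 for prev, curr in zip(frames[:-1], frames[1:])]
    # Threshold T = mean + standard deviation of the differences, Eq. (2)
    T = float(np.mean(avg_diffs) + np.std(avg_diffs))
    # Keep only the frames whose motion exceeds T
    return [frames[i + 1] for i, d in enumerate(avg_diffs) if d > T]
```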

    3.4 CNN Architecture

    The Convolutional Neural Network (CNN) is one of the most widely used deep learning architectures for event or activity recognition and automatic feature extraction. This research examines the CNN model with two different architectures: a 2D-CNN for frame-level and a 3D-CNN for video-level detection. The main difference between the two architectures is that the 2D model learns only spatial information using a single frame as input, whereas the 3D model learns both spatial and temporal information from a video sequence by using a sequence/stack of frames as input.

    3.4.1 Features Extraction Using CNN

    CNN uses 3 × 3 filters for automatic feature extraction, and the CNN model trains based on these self-learned features. The output after applying the filters is known as a feature map. The first few layers of the network may detect simple features like lines, circles, and edges. The network combines these findings in each layer and continually learns more complex concepts as we go deeper into the network's layers.

    3.4.2 2D-CNN Architecture

    The proposed 2D model learns only spatial information by using a single frame as input. In the CNN model, the convolutional layers perform feature extraction. The number of convolutional layers depends on the complexity of the problem: as the number of training samples increases, more convolutional layers are needed to capture the feature maps by applying kernels of varying sizes. The pooling layer aims to reduce the feature map's size and the number of parameters extracted through the convolutional layers; it also ignores minor details (providing translational and rotational invariance) and focuses on the bigger picture (the maximum activation). Researchers have previously used several pooling techniques, such as max pooling, global average pooling, and stochastic pooling [39]. Performance analysis shows that max pooling performs better and is used more extensively in research than the other techniques. A fully connected layer connects every neuron of the max-pooled layer to each of the four output neurons. In this research study, the number of convolutional and pooling layers is selected based on experiments on the training data. We performed different experiments to compare several configurations of convolutional and pooling layers and chose the best one. The proposed 2D-CNN model configuration is described in Tab. 4.

    The 2D-CNN architecture consists of five convolution layers; the input layer has the shape (128, 128, 3) and the kernel size is 3 × 3, with pooling layers of kernel 2 × 2. A batch normalization layer is also added after each convolution layer to normalize the input data, and the ReLU activation function is used; not all neurons activate at the same time with ReLU. After the convolutional layers, a flatten layer is added, followed by two fully connected dense layers, a dropout of 0.5, and a softmax output layer consisting of four neurons (equal to the number of classes). After model construction, the data is split into three parts: 70% training, 15% validation, and 15% test set. The training dataset is the set of samples used during the learning process, whereas the validation dataset is a set of examples used for parameter selection and is independent of training. The test set is used for performance evaluation.

    Table 4:Convolutional model configuration
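    A minimal Keras sketch of the 2D-CNN described above is given below; the filter counts, dense-layer widths, and optimizer are illustrative assumptions, since the exact values are given in Tab. 4 and are not reproduced here.

```python
from tensorflow.keras import layers, models

def build_2d_cnn(num_classes=4):
    """Frame-level 2D-CNN sketch: five Conv-BatchNorm-MaxPool blocks,
    flatten, two dense layers, dropout 0.5, and a 4-way softmax output."""
    model = models.Sequential([layers.Input(shape=(128, 128, 3))])
    for filters in (32, 64, 64, 128, 128):          # filter counts are assumptions
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",                  # optimizer is an assumption
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```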

    Deep learning models require a large amount of data for training. If not enough data is available for training the CNN model, the model's performance and accuracy are affected. Hence, the solution to this problem is data augmentation. Data augmentation increases the dataset by using different methods, helping deep learning models learn the diversity in the dataset, preventing overfitting, and producing better results. The data augmentation process includes parameters such as horizontal or vertical flip, width shift, rescale, and rotation range. We then generate batches of data for training up to 50 epochs with a batch size of 20, so that the data fits in RAM and is processed easily.
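    The sketch below shows this augmentation with Keras' ImageDataGenerator; the specific parameter values and the directory name are illustrative, since only the parameter types, the batch size of 20, and the 50 epochs are stated above.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation parameters of the types listed above (values are assumptions)
datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # rescale pixel intensities to [0, 1]
    rotation_range=15,       # rotation range in degrees
    width_shift_range=0.1,   # width shift
    horizontal_flip=True,    # horizontal flip
    vertical_flip=True,      # vertical flip
)

# Batches of 20 augmented key-frames drawn from a hypothetical directory layout
train_generator = datagen.flow_from_directory(
    "EUA_keyframes/train",
    target_size=(128, 128),
    batch_size=20,
    class_mode="categorical",
)
# model.fit(train_generator, epochs=50, validation_data=val_generator)
```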

    3.4.3 3D-CNN Architecture

    We moved from the 2D-CNN frame-level classification model to a 3D-CNN classification model for video-level action recognition. 2D-CNNs have achieved tremendous success in the image recognition domain, whereas the increased complexity and dimensionality of 3D-CNNs have limited work on video analysis and recognition [40]. The flow diagram of the proposed 3D-CNN architecture is presented in Fig. 4.

    The proposed 3D-CNN model takes an input sequence of 20 frames with dimensions (20, 128, 128, 3); this sequence of 20 frames is the input to the 3D-CNN architecture for training. The network consists of four Conv3D and MaxPooling3D layers followed by one fully connected layer with a dropout of 0.5 and a dense softmax output layer. The configuration details of the 3D-CNN model are described in Tab. 4.

    In the proposed model, 3D convolution and pooling layers are used to preserve the space and time information and to learn spatial and temporal features from a video sequence for better representations. The 3D-CNN model layers were selected after extensive experiments by increasing/decreasing layers in the model and fine-tuning its hyperparameters; thus, the configuration is optimized to give the best results in terms of accuracy and loss.

    After configuring the model layers, the model is compiled with the cost/loss function "categorical_crossentropy" and the optimization function "RMSprop" with a learning rate of 0.001. The input videos are split into 80% training and 20% test sets. The model is fit on the training and validation dataset splits for 30 epochs.

    Figure 4:Flow diagram of 3D CNN
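    A minimal Keras sketch of this 3D-CNN is given below; the filter counts, pooling sizes, and dense-layer width are illustrative assumptions, while the input shape, dropout, loss, optimizer, and learning rate follow the description above.

```python
from tensorflow.keras import layers, models, optimizers

def build_3d_cnn(num_classes=4, seq_len=20):
    """Video-level 3D-CNN sketch: four Conv3D/MaxPooling3D blocks, one fully
    connected layer with dropout 0.5, and a softmax output layer."""
    model = models.Sequential([layers.Input(shape=(seq_len, 128, 128, 3))])
    for filters in (16, 32, 64, 64):                 # filter counts are assumptions
        model.add(layers.Conv3D(filters, (3, 3, 3), padding="same", activation="relu"))
        model.add(layers.MaxPooling3D((2, 2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))  # fully connected layer
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(loss="categorical_crossentropy",
                  optimizer=optimizers.RMSprop(learning_rate=0.001),
                  metrics=["accuracy"])
    return model

# model.fit(train_clips, train_labels, validation_split=0.2, epochs=30)
```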

    4 Experimental Results and Discussion

    In this section, we evaluate the performance of the proposed system on the EUA dataset. A standard dataset for unusual activity recognition during examinations does not exist; in earlier studies, the authors created their own video datasets (containing only a few videos) to evaluate their methods. The experiments are performed on Google Colab, a free web-based cloud service that provides a Tesla K80 GPU with 12 GB RAM and a TPU to train and run deep learning models.

    The proposed research work is divided into two implementation domains: the first covers the 2D-CNN, while the second implements the 3D-CNN. We performed the experiments with two different dataset settings: spatial-domain (frame/image-level) and space-time (sequence of frames/video-level) activity detection. In this research, we evaluated the performance of the deep learning approaches based on AUROC (Area Under the Receiver Operating Characteristic curve).

    4.1 Evaluation of CNN Architectures

    The 2D-CNN model uses the AUAR test set consisting of 1725 frames from 4 classes, while the 3D-CNN model takes 110 test videos, each with a sequence of 20 frames. We load the trained models, evaluate the performance of the CNN models on the test split, and see how well the models generalize to unseen actions. Each model takes the test split as input and predicts the class label for each frame or clip (conventional vs. non-conventional activities), which is compared against the ground-truth labels. The experimental results on the test dataset show 77% accuracy for the 2D-CNN model and 73% for the 3D-CNN model. Fig. 5 shows that the 2D-CNN achieves 0.94, and Fig. 6 shows that the 3D-CNN achieves 0.91, micro- and macro-average ROC AUC. The ROC shows the probability curve for each action class according to the probability scores calculated by the model. The AUC for each class, as shown in the figures, represents how well the model learns to distinguish between the categories of unusual activities.

    Figure 5:ROC for 2D-CNN

    Figure 6:ROC for 3D-CNN
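    The per-class and averaged AUROC values reported here can be computed from the softmax outputs as in the sketch below, assuming scikit-learn; the function and variable names are illustrative.

```python
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

def auroc_report(y_true, y_score, num_classes=4):
    """Per-class, micro- and macro-average AUROC from integer labels and
    softmax scores (illustrative sketch)."""
    y_bin = label_binarize(y_true, classes=list(range(num_classes)))
    per_class = {c: roc_auc_score(y_bin[:, c], y_score[:, c])
                 for c in range(num_classes)}
    micro = roc_auc_score(y_bin, y_score, average="micro")
    macro = roc_auc_score(y_bin, y_score, average="macro")
    return per_class, micro, macro

# Example usage with a trained model:
# y_score = model.predict(x_test)          # shape (n_samples, 4)
# per_class, micro, macro = auroc_report(y_test_labels, y_score)
```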

    4.2 Comparison of Deep Learning Models

    In Tabs. 5 and 6, we summarise the results of the deep learning CNN models based on the AUROC performance metric and the classification report for the four EUA dataset classes. Tab. 6 presents a comparative analysis based on the classification report for the three unusual activities and one normal class, along with the overall accuracy of the proposed system on the test datasets.

    Table 5:Comparison of CNN models based on AUROC

    Table 6:Comparison of CNN models based on classification report

    4.3 Comparative Analysis of the Proposed Model with Standard Dataset

    The proposed method is also evaluated on the Movies and Violent Flows datasets to analyze the CNN model's performance on these two standard datasets, which are considered benchmarks in the relevant studies. The videos are preprocessed, key-frames are extracted for normal and unusual behavior, and these are input to the CNN architecture. The proposed model's performance on the standard datasets was compared with other state-of-the-art techniques based on classification accuracy. Tab. 7 shows that the proposed AUAR method classifies unusual behaviours better than the existing techniques.

    Table 7:Comparison of classification results on standard datasets

    5 Conclusion

    This article presents a novel deep learning-based model for unusual activity detection in examination halls. The proposed deep learning model is based on CNN and outperforms existing models for unusual activity recognition that use computer vision and hardcoded algorithms to detect unusual activities during examinations. Apart from proposing the model, we have also developed a video dataset of unusual examination hall activities. The performance of the proposed research work is evaluated at the frame level, consisting of 11,500 key-frames, and at the video level, consisting of 550 video clips of 4 different classes. We have used AUROC as the evaluation metric. The detection results of the deep learning models show excellent performance on our developed dataset. The accuracy of the deep learning models at the frame level is higher than at the video level due to the limited video dataset and GPU resources. The proposed CNN models show optimized accuracy on our unusual activity dataset despite the dataset complexity and resource limitations. Apart from this examination dataset, we evaluated the proposed model on two other widely used datasets for unusual activity recognition, the Violent Flows dataset and the Movies dataset. The proposed model outperformed existing models on all three datasets.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
