
Deep Trajectory Classification Model for Congestion Detection in Human Crowds

Computers, Materials & Continua, 2021, Issue 7

Emad Felemban, Sultan Daud Khan, Atif Naseer, Faizan Ur Rehman and Saleh Basalamah

1Department of Computer Engineering, College of Computing and Information Systems, Umm Al-Qura University, Makkah, Saudi Arabia

2Department of Computer Science, National University of Technology, Islamabad, Pakistan

3Science and Technology Unit, Umm Al-Qura University, Makkah, Saudi Arabia

4Institute of Consulting Research and Studies, Umm Al-Qura University, Makkah, Saudi Arabia

Abstract: In high-density gatherings, crowd disasters frequently occur despite all the safety measures. Timely detection of congestion in human crowds through automated analysis of video footage can prevent such disasters. Recent work on the prevention of crowd disasters has been based on manual analysis of video footage. Some methods also measure crowd congestion by estimating crowd density. However, crowd density alone cannot provide reliable information about congestion. This paper proposes a deep learning framework for automated crowd congestion detection that leverages pedestrian trajectories. The proposed framework divides the input video into several temporal segments. We then extract dense trajectories from each temporal segment and convert them into spatio-temporal images without losing information. A classification model based on convolutional neural networks is then trained on these spatio-temporal images. Next, we generate a score map by encoding each point of a trajectory with its respective class score. We then obtain the congested regions by applying the non-maximum suppression method to the score map. Finally, we demonstrate the proposed framework's effectiveness through a series of experiments on challenging video sequences.

    Keywords: Crowd congestion; trajectory; classification; crowd analysis

    1 Introduction

Public gatherings, such as concerts, political and religious processions, festivals, and sports events, are commonly observed in human societies. Often, thousands of participants end up gathering in a limited space. Although these public gatherings are organized for peaceful purposes, crowd disasters occur frequently. For example, during the Hajj in 2015, a stampede killed more than 700 people. Similar events happened during the Love Parade in 2010 and at a religious procession in Baghdad in 2005 [1].

It is imperative to monitor crowded scenes to prevent disasters. For this, hundreds of surveillance cameras are installed in different places to cover the area efficiently. Current crowd-management practice is based on manual analysis, which requires tremendous human effort to review incoming video streams and identify potentially abnormal situations [2]. Such manual analysis of the crowd is tedious and leads to errors due to limited human capabilities. A complementary approach is based on experiments: researchers manually analyze recorded videos and employ different models to simulate pedestrians' behaviour [3,4], particularly to identify and predict choke points. Such empirical studies are always useful and result in infrastructure improvements [5]. However, these models suffer from the following limitations: (1) simulations cannot simultaneously cover different real-time crowd situations; (2) these models cannot provide precise results, as they respond only to a limited set of input parameters.

An alternative approach is to automatically analyze crowded scenes in real time by employing computer vision and machine-learning techniques. Several methods and techniques have been proposed for automated analysis [6]. This has numerous applications, such as anomaly detection [7,8], panic detection [9,10], counting [11,12], density estimation [13,14], and tracking [15,16]. During the past few years, each of these applications has attracted considerable attention from the research community, and various algorithms and methods have been proposed. However, congestion detection in high-density crowd videos has not received adequate attention in the existing literature.

Congestion is a prolonged temporal situation wherein many people cannot move at their desired speeds [17]. Timely detection of congested regions is essential for efficient crowd management systems. If congestion is not addressed in time, it may cause a cascading effect and lead to disasters. Despite its importance, only a limited amount of work [1,18,19] has been reported in the literature that automatically detects congested regions in high-density crowded scenes. Significantly, none of these studies has explored crowd congestion detection using pedestrian trajectories with deep learning.

In this paper, we propose a deep learning framework for detecting congestion in crowds. The pipeline of the proposed framework is shown in Fig. 1. First, we divide the input video stream into multiple overlapping video segments. We extract dense trajectories from each segment and generate a spatio-temporal image that effectively characterizes the relative motion in the respective video segment. Then, we employ a Convolutional Neural Network (CNN) to extract hierarchical features from the fully connected layer and learn a discriminative representation of the trajectories. The CNN classifies each spatio-temporal image into two classes, i.e., 'congested' and 'normal'. After classification, we generate a score map by encoding each trajectory point with its respective class score. We follow this by applying the non-maximum suppression method to the score map to obtain the congested region(s) in the given video segment.

Figure 1: Pipeline of the proposed framework

    Comparison and Differences with other methods

    We summarize the differences between the proposed framework and other existing methods in the following manner:

a. State-of-the-art methods primarily adopted unsupervised machine-learning approaches for classifying the scene into two categories, i.e., 'congested' and 'normal' [1,18]. We discard these traditional machine-learning techniques and propose a convolutional neural network, which our empirical studies have shown to be a better alternative.

b. Previous methods use holistic approaches to extract hand-crafted features [19] from the whole image. In contrast, our network learns a robust and compact representation of trajectories by extracting hierarchical features from spatio-temporal images.

c. Previous methods cannot deal with complex scenes since they were trained for limited scenarios due to the unavailability of datasets. We trained our model on a large dataset, which enhances its discriminative power for detecting congestion in complex scenes.

    Contribution

a. This paper proposes a novel approach to detect and localize congested regions in videos using spatio-temporal images.

b. Instead of extracting hand-crafted features [19] from trajectory data, we achieve an effective representation of each trajectory using 2-D spatio-temporal images, shifting the problem to image classification.

c. We are among the first to build a dataset of spatio-temporal images generated from trajectory data.

d. We performed an extensive evaluation of the proposed framework in different scenarios, thereby confirming its effectiveness.

    2 Literature Review

Crowd management is a significant area of research wherein scholars have been working for several years. Numerous techniques have been used to analyze crowds, such as simulation and modelling or computer vision techniques. Computer vision is a relatively new technique for analyzing gatherings and has produced remarkable results in the last decade. The essential analysis activities are counting individuals, tracking, and exploring the crowd's behaviour. Several methods from different fields have been applied to the problem of crowd management; examples include disaster risk reduction and crowd simulation [6,20]. Simulation is one of the most used methods to analyze crowds. Zhan et al. [21] and Jacques et al. [22] describe crowd analyses using simulation.

Computer Vision (CV) has played an essential role in crowd management for many years. The evaluation of these techniques is highly dependent on the dataset used during experimentation. Vision-based crowd analysis focuses on four significant aspects, as shown in Fig. 2: (i) counting, (ii) tracking, (iii) behaviour, and (iv) congestion detection.

Figure 2: Vision-based crowd analysis

Computer vision-based crowd counting uses multiple approaches to count people in a scene. Crowd counting can be achieved via direct methods, such as detecting faces or body parts. Indirect counting can be done by estimating crowd density ranges. Along with traditional approaches [23], several advanced techniques are also used for crowd counting: object density maps, regression techniques, and monolithic, part-based, pixel-based, texture-based, and deep learning-based methods.

Crowd counting using density maps bypasses the difficulties arising from image occlusion. Lempitsky et al. [24] proposed a crowd counting method based on density maps. Similarly, other researchers [25,26] found object density maps effective for crowd counting. Kang et al. [27] provided a comprehensive comparison of crowd counting using density maps, wherein density values were calculated from low-level features while location information was maintained. Lempitsky et al. [24] introduced a linear model that predicts a pixel's density value from the extracted part. Kang et al. [28] proposed a computer vision-based framework to count the people performing Tawaf.

Regression-based methods work more efficiently than tracking-based and detection-based methods. Regression-based counting uses image features for counting. Idress et al. [29] used multiple sources, such as SIFT features, head-detector confidence, and frequency-domain analysis. They found that a single source is not enough for counting in distorted and severely occluded datasets. One of the drawbacks of this technique is the lack of location information; hence, it cannot be used for object localization [30]. Arteta et al. [25] used ridge regression (RR) for the interactive counting of people.

Regression-based methods can be improved by applying a Convolutional Neural Network (CNN). These methods use density maps generated from images. Zhang et al. [31] proposed a CNN model that uses switchable alternative learning for crowd counting and density maps. They presented a Multi-column Convolutional Neural Network (MCNN) trained to estimate crowd density in still images. Sindagi et al. [32] proposed the Contextual Pyramid CNN (CP-CNN), which creates good-quality density maps by incorporating contextual information. Xiong et al. [33] proposed a Convolutional LSTM (ConvLSTM) to improve counting accuracy using spatial and temporal information.

Advancements in deep learning have improved results significantly in people detection, counting, and tracking. A deep CNN is a model in which features are extracted from the lower-level layers up to the final layer. Nowadays, Faster R-CNN [34] algorithms are widely used to count crowds.

People tracking in videos is crucial for computer vision. Tracking algorithms are useful in video surveillance. Tracking can be done using multiple cameras or a single camera. State-of-the-art multi-view approaches [35–37] use multiple cameras to monitor an object, while single-view methods [38–40] use a single camera but provide less information. Khanloo et al. [41] combined motion and colour features to track people.

The behaviour of the crowd is an essential feature in video analytics. Studies discuss many approaches to analyze crowd behaviour in real-time and offline videos. Some of the most common approaches are optical flow [42,43], particle flow [44], streakline flow [45,46], spatio-temporal features [47,48], and tracklets [49,50]. These approaches determine the collective behaviour of the crowd. Saqib et al. [51] developed a framework that took multiple snapshots from videos of the moving crowd in the Haram and extracted crowd density and directional flow information by applying an unsupervised hierarchical clustering algorithm.

Crowd congestion detection is another area where computer vision can play an influential role. Unfortunately, very little work has been done on congestion detection and prediction using computer vision. Khan et al. [52] proposed a computer vision-based framework that estimates crowd density, detects congestion levels, and identifies dominant patterns using videos from IP cameras installed in Masjid al-Haram. They also proposed a congestion-detection mechanism [19] using multiple overlapping temporal segments of equal duration. The trajectories extracted from these segments are used to compute an oscillation map that identifies critical congestion areas in videos.

Another challenging task is anomaly detection, which senses casualties in crowded scenes. Initially, anomalies were detected using hand-crafted features and object trajectories [53,54]. Abnormal trajectories are obtained from the motion of humans and serve as features. Due to the advancement of deep neural networks, learned features have become more representative than hand-crafted ones.

Similarly, Yang et al. [55] proposed an anomaly detection approach based on a deep generative model. Hasan et al. [56] used convolutional autoencoders (AEs) on spatio-temporal frames to detect anomalies. LSTM-based convolutional AEs have also been used to detect anomalies [57].

Our proposed framework discards traditional machine-learning approaches in favour of a Convolutional Neural Network (CNN). In contrast to holistic approaches that extract hand-crafted features from the whole image, our network learns a robust and compact representation of trajectories by extracting hierarchical features from spatio-temporal images. Moreover, we have trained the model on a large dataset, which enhances its discriminative power for detecting congestion in complex scenarios.

    3 Proposed Methodology

    3.1 Motion Extraction

The input to the proposed framework is a video sequence. The framework divides this sequence into several overlapping temporal segments, where the length of each segment is L. Let V represent an input video sequence. We divide V into n segments S1, S2, ..., Sn. The length L of each temporal segment is the number of frames per segment. We then extract spatio-temporal information in the form of trajectories from each temporal segment Si.
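As a minimal sketch of this segmentation step in Python: the segment length L and the overlap below are illustrative placeholders, since the paper does not report their numeric values.

def split_into_segments(frames, L=120, overlap=60):
    """Split a video (a list of frames) into overlapping temporal segments S1, ..., Sn."""
    step = L - overlap
    segments = []
    for start in range(0, len(frames) - L + 1, step):
        segments.append(frames[start:start + L])  # segment Si contains L frames
    return segments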

To better understand pedestrians' behaviour, it is vital to obtain accurate, long, and dense trajectories. Such trajectories help capture the pedestrians' local motion and provide full coverage of the global context. Conventionally, trajectories are extracted by detecting and tracking each individual in the scene. We observed that the performance of this tracking method depends upon the accuracy of the detector. While this works fine in low-density crowds, where pedestrians' half or full bodies are visible, it is not satisfactory in high-density crowds. This phenomenon explains why researchers tend to avoid this method for understanding the dynamics of high-density crowds and, instead, adopt holistic approaches that gather global information [19].

We group the holistic approaches into two categories: (1) interest point tracking and (2) dense optical flow tracking. In the first category, interest points (for example, corner points, edges, or SIFT features) are extracted from the initial frame of a video sequence and tracked through subsequent frames. The trajectories obtained by this method are sparse and cannot provide full coverage of motion in the scene, since a limited number of features are generated for tracking. In the second category, the optical flow field is computed between every pair of consecutive frames of the video sequence. Since the flow vector is calculated for every pixel, it provides better coverage of moving crowds in the scene. However, a small change in illumination causes a significant difference in the flow vector. Thus, the trajectories obtained by this method can be unreliable.

Accordingly, we considered both of the techniques mentioned above and employed the KLT tracker [58], Particle Video (PV) [59], the large displacement optical flow method [60], and particle advection [61] to extract motion information from the video. Among these methods, the particle advection approach produces denser trajectories than KLT or Particle Video. It has been widely used in many applications for extracting dense and reliable global motion information. From empirical evidence, we observed that particle advection produces the most plausible trajectories, and we adopted this approach in the proposed framework.

The first step in obtaining trajectories with the particle advection approach is to compute dense optical flow between each pair of consecutive frames of the video sequence. We employed a popular optical flow technique [62], which calculates the optical flow vector for every pixel using gradient and brightness-consistency constraints. Since the flow vector is computed for every pixel of an image, the trajectories obtained by this approach are dense. Generally, dense flow tracking incurs a substantial computational cost. To reduce this cost, we sampled anchor points from a uniform grid G overlaying the initial frame of the video sequence. Let anchor point i ∈ G be uniquely represented by fi = (x, y, Δx, Δy), where (x, y) are the spatial coordinates and (Δx, Δy) is the flow vector. Then F = {f1, f2, ..., fn} represents the optical flow field that contains n anchor points. Each anchor point i in G initiates a trajectory in the current frame and forms a long trajectory by concatenating matched points in the subsequent frames. The resulting set of trajectories Ω = {t1, t2, ..., tn} describes the motion in the given video sequence. Generally, in structured crowds (where the pedestrian flow is uniform), we obtained reliable trajectories using the particle advection approach. However, in unstructured crowds (where pedestrians move in arbitrary directions), this approach produced unreliable trajectories for the following reasons: (1) frequent occlusions and (2) ambiguous optical flow at the boundaries of two opposite flows [63]. As a result, an anchor point might drift from its original path and become part of some other motion. We avoided this problem by terminating the tracking process when an anchor point deviated from its authentic path. To achieve this, we computed the circular distance d [27] between the circular angles of an anchor point at frames t and t+1. We defined a threshold λ and terminated the tracking process for anchor point i whenever d ≥ λ. Furthermore, we removed occluded trajectories and those that could not find a match in the subsequent frames. If the displacement vector in the warped vector field was too small, the trajectory was regarded as noise and removed. After pruning the noisy trajectories, the final set Ω was considered a compressed representation of the video segment Si over the temporal domain.
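The following Python sketch illustrates the particle advection step under stated assumptions: grayscale frames of equal size, OpenCV's Farneback optical flow standing in for the dense flow method of [62], and illustrative values for the grid spacing GRID_STEP, the angle threshold LAMBDA, and the minimum displacement MIN_DISP.

import cv2
import numpy as np

GRID_STEP, LAMBDA, MIN_DISP = 10, np.pi / 2, 0.5

def circular_distance(a, b):
    # Smallest absolute difference between two angles (radians).
    d = np.abs(a - b) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

def advect_trajectories(frames):
    h, w = frames[0].shape
    # Anchor points sampled from a uniform grid G over the initial frame.
    points = [np.array([x, y], dtype=float)
              for y in range(0, h, GRID_STEP) for x in range(0, w, GRID_STEP)]
    trajs = [[p.copy()] for p in points]
    alive = [True] * len(points)
    prev_angle = [None] * len(points)
    for t in range(len(frames) - 1):
        flow = cv2.calcOpticalFlowFarneback(frames[t], frames[t + 1], None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        for i, p in enumerate(points):
            if not alive[i]:
                continue
            x, y = int(round(p[0])), int(round(p[1]))
            if not (0 <= x < w and 0 <= y < h):
                alive[i] = False          # anchor left the frame
                continue
            dx, dy = flow[y, x]
            angle, mag = np.arctan2(dy, dx), np.hypot(dx, dy)
            # Terminate drifting anchors: large change in flow direction (d >= lambda).
            if prev_angle[i] is not None and circular_distance(angle, prev_angle[i]) >= LAMBDA:
                alive[i] = False
                continue
            prev_angle[i] = angle
            if mag < MIN_DISP:            # too-small displacement: treat as noise
                continue
            p += (dx, dy)                 # advect the particle along the flow
            trajs[i].append(p.copy())
    # Keep only trajectories long enough to be informative (illustrative cutoff).
    return [np.array(tr) for tr in trajs if len(tr) > 5]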

    3.2 Spatio-Temporal Image Generation and Classification

Trajectory analysis and classification have played a vital role in object classification and recognition tasks [64–66] and have therefore received considerable attention from the research community. A trajectory can generally be classified in two different ways: (1) unsupervised clustering and (2) supervised machine learning. Unsupervised clustering is a data-driven approach where trajectories are clustered into different groups based on a similarity measure. This approach finds similar patterns in the input data without using labels, which makes it impossible to assign a trajectory to a pre-defined class. Conversely, supervised machine learning requires labelled samples for training an algorithm that assigns a given trajectory to a pre-defined class. In this paper, we employ supervised machine learning for classifying trajectories.

We present Algorithm 1, which converts the input set of trajectories into corresponding spatio-temporal images. Our method consists of two steps. In the first step, we rescale the range of trajectories to fixed-size images. This is vital because trajectory data extracted from different scenes have differing spatial extents depending on the video frame's resolution. Generally, a CNN requires images of a fixed size. Therefore, we first convert all trajectories to spatio-temporal images of a size that fits the CNN's requirement. We then pre-process each image by subtracting the mean image. The input to Algorithm 1 is a set of trajectories represented by Ω = {t1, t2, ..., tn}, and the output is Im, a set of normalized spatio-temporal images. Fig. 3 shows the pipeline of the proposed method.

Algorithm 1: Generating spatio-temporal images from trajectories
Input: Trajectories (Ω, wi, hi, wo, ho)
Output: List of normalized spatio-temporal images Im
1: Begin
2: For each trajectory tk in Ω do
3:   Initialize I ∈ R^(wo×ho) ← 1
4:   For each point p in tk do
5:     x̂ ← p.x · wo / wi
6:     ŷ ← p.y · ho / hi
7:     If 0 ≤ x̂ ≤ wo and 0 ≤ ŷ ≤ ho then
8:       I(x̂, ŷ) ← 0
9:     EndIf
10:   EndFor
11:   insert I in Is
12: EndFor
13: Initialize Imean ∈ R^(wo×ho) ← 0
14: Imean ← (∑ Is) / N  // compute mean of image set Is
15: For each image M in Is do
16:   d ← Imean − M
17:   insert d in Im
18: EndFor
19: return Im
20: End
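A direct NumPy rendering of Algorithm 1 might look as follows; the line comments refer to the algorithm's numbered steps, and strict inequalities are used so that the rescaled points index the array validly.

import numpy as np

def trajectories_to_images(trajectories, wi, hi, wo, ho):
    # Each trajectory is an array of (x, y) points in a wi x hi frame.
    Is = []
    for traj in trajectories:
        I = np.ones((ho, wo))                      # line 3: white background
        for (x, y) in traj:
            xs = int(x * wo / wi)                  # line 5: rescale x
            ys = int(y * ho / hi)                  # line 6: rescale y
            if 0 <= xs < wo and 0 <= ys < ho:      # line 7: bounds check
                I[ys, xs] = 0                      # line 8: mark trajectory point
        Is.append(I)
    I_mean = np.mean(Is, axis=0)                   # line 14: mean image
    Im = [I_mean - M for M in Is]                  # line 16: d = Imean - M
    return Im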

The proposed spatio-temporal images are binary images that contain the connected coordinates of the corresponding trajectory data, as shown in Fig. 3: they have a white background with black pixels corresponding to the trajectory points. From Fig. 3, it is also apparent that the resulting spatio-temporal images differ significantly from natural RGB images in the following ways: (1) natural RGB images are more complex than spatio-temporal images since they have three colour channels; (2) RGB images contain rich texture and high-frequency components, while spatio-temporal images lack texture information and most of the image is blank; (3) due to the lack of texture and colour information, spatio-temporal images belonging to different classes appear similar. Therefore, spatio-temporal images have extensive inter-class similarities compared to natural RGB images.

Figure 3: Pipeline of the spatio-temporal image generation method

We introduce a simple yet effective CNN architecture to classify spatio-temporal images. Generally, a CNN architecture is composed of convolutional, pooling, and fully connected layers. Even though different CNN architectures have been proposed in studies addressing distinct recognition and classification tasks, it is still uncertain how a CNN should be designed to classify spatio-temporal images. Since spatio-temporal images lack texture and appearance information, we kept the architecture of the CNN shallow. Our proposed CNN architecture is similar to VGG-M, consisting of six convolutional layers and two fully connected layers. However, due to the uniqueness of spatio-temporal images, we modified and tuned the architecture in the following ways:

1. We enhanced the receptive field of VGG-M by increasing the filter size of the first convolutional layer. This modification was made to incorporate more context from spatio-temporal images: a small patch of a spatio-temporal image may contain only a little information about the trajectory, as most of the image is blank.

2. The original VGG-M accepts 3-channel RGB images, while spatio-temporal images are binary and have only one channel. To fit the input size of the CNN, we converted each single-channel image to three channels by replicating it three times.

Due to the large number of network parameters and the limited training data, overfitting can be a common problem. We adopted the dropout technique [67] to avoid this problem. Generally, a dropout value of 0.5 is considered optimal for most classification problems. However, in our experiments, we set the dropout value to 0.6 in layers 6 and 7 (the fully connected layers). Since we have two classes, the last fully connected layer has two outputs. The overall architecture of the proposed CNN is shown in Tab. 1. The filter weights were learned by stochastic gradient descent with a momentum of 0.6. We limited the batch size of training images to 64. Let Im = {i1, i2, ..., in} be the set of spatio-temporal images for trajectories Ω = {t1, t2, ..., tn}, and let S = {S1, S2, ..., Sn} be the list of scores assigned to the spatio-temporal images after classification. We then utilize this information to detect congested regions in the scene.


Table 1: Architecture of the proposed convolutional neural network
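The body of Tab. 1 did not survive the conversion of this article, so the following PyTorch sketch reflects only the textual description above (six convolutional layers, two fully connected layers with dropout 0.6, two outputs, an enlarged first-layer filter, and SGD with momentum 0.6); the channel counts, kernel sizes, strides, and learning rate are assumptions.

import torch
import torch.nn as nn

class TrajectoryCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),     # conv1: enlarged receptive field
            nn.MaxPool2d(2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),   # conv2
            nn.MaxPool2d(2),
            nn.Conv2d(256, 512, kernel_size=3, padding=1), nn.ReLU(),  # conv3
            nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.ReLU(),  # conv4
            nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.ReLU(),  # conv5
            nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.ReLU(),  # conv6
            nn.AdaptiveAvgPool2d((6, 6)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.6), nn.Linear(512 * 6 * 6, 4096), nn.ReLU(),  # fc6 with dropout 0.6
            nn.Dropout(0.6), nn.Linear(4096, 2),                       # fc7: 'normal' vs. 'congested'
        )

    def forward(self, x):
        # x: binary spatio-temporal images replicated to three channels.
        return self.classifier(self.features(x))

model = TrajectoryCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.6)  # lr is illustrative
# Training iterates over mini-batches of 64 images with a cross-entropy loss.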

    3.3 Congestion Region Detection

To detect congested regions in the scene, we utilize the trajectory information Ω = {t1, t2, ..., tn} and the corresponding scores S = {S1, S2, ..., Sn} to generate a score map, represented by Ψ. We build the score map Ψ using Algorithm 2.

The input to the algorithm is the set of trajectories Ω and their corresponding scores S. The resolution of Ψ is equal to the resolution of the original video frame, i.e., wi × hi. The score map is generated by assigning a score value to each point of a trajectory (line 5 of Algorithm 2). The score map values vary from 0 to 1, where '0' represents a normal trajectory and '1' a congested one.
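The listing of Algorithm 2 did not survive the conversion of this article, so the following Python sketch is reconstructed from the description: each point of trajectory tk is stamped with that trajectory's class score Sk on a map of resolution wi × hi.

import numpy as np

def build_score_map(trajectories, scores, wi, hi):
    psi = np.zeros((hi, wi))                 # score map, values in [0, 1]
    for traj, s in zip(trajectories, scores):
        for (x, y) in traj:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < wi and 0 <= yi < hi:
                psi[yi, xi] = s              # 0 = normal, 1 = congested
    return psi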

Our score map Ψ is similar to the oscillation map in [19]. However, there is one fundamental difference: the oscillation map is generated by statistically computing the oscillation value of each trajectory, whereas, in our case, each value of Ψ represents the confidence score obtained through the classification of spatio-temporal images by the CNN.

Fig. 4a shows the score map with its colour bar. We encode high scores in red to represent congested trajectories and low scores in blue to represent non-congested trajectories. After obtaining the score map Ψ, we apply the non-maximum suppression (NMS) method to suppress low score values. In our experiments, we fixed the threshold value at 0.6. Since we want to identify congested locations, we keep the points with a score value ≥ 0.6 and suppress all points with lower scores. Fig. 4b shows the score map obtained after applying NMS. After NMS, we apply a 2-D Gaussian filter with σ = 1 and a size of 15×15 pixels. Small blobs appear after applying the Gaussian filter, as shown in Fig. 4c. Since these blobs belong to congested regions, they are clustered together using the mean-shift method [36]. The resulting areas are the congested regions in the scene. Fig. 4d illustrates the congested regions overlaid on the image.
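A minimal sketch of this post-processing chain, assuming SciPy and scikit-learn are available; the mean-shift bandwidth is an illustrative value.

import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import MeanShift

def localize_congestion(score_map, thresh=0.6, bandwidth=30):
    kept = np.where(score_map >= thresh, score_map, 0.0)   # NMS step: suppress scores < 0.6
    smooth = gaussian_filter(kept, sigma=1, truncate=7.0)  # sigma = 1 with ~15x15 support
    ys, xs = np.nonzero(smooth > 0)
    if len(xs) == 0:
        return []                                          # no congested region detected
    points = np.stack([xs, ys], axis=1)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(points)
    # Each cluster of surviving points is one congested region (returned as a point set).
    return [points[labels == k] for k in np.unique(labels)]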

Figure 4: (a) Score map obtained by Algorithm 2. (b) Score map obtained after applying non-maximum suppression. (c) Blobs that appear after applying the Gaussian filter, which are then clustered to obtain the final congested regions in (d). (Best viewed zoomed in)

    4 Experiment Results

This section discusses the details of the dataset and the different schemes adopted to evaluate and compare the proposed approach with other state-of-the-art methods.

To evaluate the proposed approach's effectiveness, we performed experiments on the dataset presented in [19]. The dataset contains fifteen video sequences covering different scenarios. It covers different crowd behaviours that lead to congestion, including evacuation, jostling, conflict, and blockage.

During an evacuation, people try to leave through a single, narrow exit, which causes congestion; this phenomenon is commonly observed at train stations. While jostling, people try to push each other to make their way out. Conflict arises when two or more large groups of people come face to face in a narrow passage; in such a situation, people confront each other to make their way out. Blockage happens when the movement of one large group of people is obstructed.

In the experimental setup, we followed the convention adopted by [19] and divided the videos into two sets, i.e., train and test. The train set contains nine video sequences, while the remaining six video sequences were used for testing, as shown in Tab. 2.

We now elaborate on the trajectory annotation method we used. A typical annotation process involves a coder watching the video for several hours, manually tracking each individual, and generating a trajectory. Since we are dealing with high-density crowds, with over 2000 people in a scene, this process consumes a lot of human effort and is usually prone to errors. Another way to annotate trajectories is to employ an unsupervised clustering algorithm that clusters them into groups. The prominent (large) clusters are regarded as 'normal', while trajectories belonging to clusters with a smaller number of members are considered congested. Although this process necessitates additional annotation, it reduces the cost of manual labelling significantly. Therefore, we adopted this strategy to cluster trajectories into groups, and then, through visual observation of the clusters, we assigned labels to them. We observed that the annotation process still required further refinement, as similar trajectories may be assigned to two different classes; this problem cannot be detected through visual observation. Therefore, we employed t-SNE [68] to further refine the training data by visualizing the distribution of trajectories in a low-dimensional space. In the visualization plane, similar trajectories lie close to each other, and we manually inspected those that lay farther away from their class. After preparing the data, we trained the model and reported the classification and localization accuracy on the test set.
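As a sketch of this t-SNE refinement step, assuming each spatio-temporal image is flattened into a feature vector and cluster labels are given as an integer array; the outlier cutoff is illustrative.

import numpy as np
from sklearn.manifold import TSNE

def flag_for_inspection(images, labels, cutoff=2.0):
    X = np.stack([im.ravel() for im in images])
    emb = TSNE(n_components=2, init="random", perplexity=30).fit_transform(X)
    flagged = []
    for c in np.unique(labels):
        pts = emb[labels == c]
        dist = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
        # Flag samples lying far from their class centroid in the embedding.
        far = dist > dist.mean() + cutoff * dist.std()
        flagged.append(np.where(labels == c)[0][far])
    return np.concatenate(flagged)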

Table 2: Splitting of the dataset into training and testing sets. Each video is represented by its name, behaviour, and number of frames

    We also evaluated and compared the performance of other existing methods:

a. Krausz et al. [1] proposed, to the best of our knowledge, the first method that automatically detects congestion in crowded scenes. The authors evaluated their proposed framework on a single video. Their framework computes the optical flow field from the input video sequence and generates a 2-D histogram of magnitude and orientation. The framework identifies congestion by detecting small magnitudes along the central axis of the histogram.

b. Huang et al. [18] proposed a vision-based approach that detects congestion in videos using entropy, which is commonly used to measure a closed system's stability. The authors used velocity entropy to measure the dispersion of velocity vectors, which serves as an indicator of congestion.

c. Bek et al. [69] proposed a framework that extracts trajectories from the video. Using these, they computed track density and local inertia, which are then combined to estimate the scene's congestion level.

d. Most recently, Khan et al. [19] proposed a framework that detects and localizes congestion in the scene. They extracted point trajectories from the video and used them to compute an oscillation feature, leading to the generation of an oscillation map, which is then quantized and used to identify the scene's congested regions. Moreover, the authors proposed a novel dataset that contains 15 video sequences for evaluating congestion detection models.

e. We also trained a binary SVM classifier as a baseline. For this, we extracted spatio-temporal features from each video using a 3-D convolutional neural network [70], and a linear classifier was trained on the extracted features. We created small clips by trimming congested segments from each video and treated 'congested' and 'normal' as separate classes. During testing, the binary classifier provided a class score for each video segment.

To evaluate the proposed method's performance, we used conventional evaluation metrics, i.e., the Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC). These are standard metrics for evaluating binary classifiers. For evaluating classification performance, we used frame-level classification: a frame is considered 'congested' if the intersection-over-union (IoU) between the predicted region and the ground truth is ≥ 0.4. Quantitative results in terms of ROC and AUC are reported in Fig. 5 and Tab. 3. From the quantitative results, it is evident that the proposed approach outperforms other state-of-the-art methods by a significant margin.
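Given per-frame congestion scores and the binary ground-truth labels implied by the IoU ≥ 0.4 rule, the ROC curve and AUC follow directly; a sketch using scikit-learn:

from sklearn.metrics import roc_curve, auc

def evaluate_frames(frame_scores, frame_labels):
    # frame_labels: 1 if the frame is 'congested' (IoU >= 0.4), else 0.
    fpr, tpr, _ = roc_curve(frame_labels, frame_scores)
    return fpr, tpr, auc(fpr, tpr)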

Figure 5: Performance comparison of different methods in terms of ROC. Krausz et al. [1] in green, Huang et al. [18] in blue, Bek et al. [69] in red, the baseline in magenta, and the proposed method in black

Table 3: Performance comparison of different methods in terms of Area Under the Curve (AUC). Larger values represent superior performance

From these results, it can be safely concluded that the binary SVM classifier achieves lower performance than the other methods and cannot be used for congestion detection in videos. This is primarily because the dataset contains long, untrimmed video sequences where most video frames are normal and congestion occurs only for a short duration. Therefore, the binary SVM classifier could not learn enough discriminative features to distinguish congestion. The two-dimensional histogram (of magnitudes and directions) obtained by K-means clustering in [1] was not robust enough to classify congestion accurately. Furthermore, we observed that this method produced low classification scores for both 'congested' and 'normal' temporal segments, since it relies on a weak feature, i.e., the optical flow, which is easily affected by simple changes in illumination. Bek et al. [69] achieved comparable performance, but their method misclassified normal patterns as congested patterns and produced a high congestion score.

We also analyzed the proposed framework's false alarm rate, because in the real world a large portion of any video sequence is non-congested. Ideally, the goal of a robust congestion detector is to produce a zero false alarm rate on normal video sequences. Therefore, we evaluated the performance of the proposed method and the other reference methods on normal video sequences. We set the threshold value to 0.5 and report the false alarm rates in Fig. 6. From these results, we can conclude that the proposed method produces relatively fewer false alarms than the other reference methods.

Figure 6: Performance comparison of different methods in terms of false alarm rate

To demonstrate the proposed framework's robustness, we used an additional video sequence covering Shibuya Crossing in Japan, downloaded from YouTube. Shibuya Crossing is one of the world's busiest crossings and is especially famous for its scramble crossing. Even though the density at Shibuya Crossing is usually high during peak times, no congestion situation has been reported there. We manually analyzed the video sequence and confirmed that there was, in fact, no congestion in the entire sequence. We then tested the proposed framework by providing this sequence as input. The framework's output is illustrated in Fig. 7. The algorithm detects no congestion in any temporal segment of the video, indicating that the proposed system is robust in distinguishing congested from normal video segments.

Figure 7: Temporal segments T1, T2, T3 and T4 of the Shibuya Crossing video, in which no congestion is detected

From empirical studies, we observed that the model learned more discriminative features when trained with both 'normal' and 'congested' video segments. Further, it was observed that the network could accurately learn to localize the congested region when trained with many positive and negative samples. We also observed that the network learns to discriminate between congested and normal patterns only after a large number of training iterations. For example, at 500 iterations, the validation error of the network increased. After 1,000 iterations, the network started to learn discriminative features and produced high scores for congested segments and low scores for normal segments. As the network analyzed more video samples and the number of iterations increased, its precision also increased.

To quantify the performance of the proposed framework, we used three evaluation metrics, i.e., Detection Accuracy (DA), Localization Accuracy (LA), and Miss Rate (MR). DA is calculated as the number of correct predictions divided by the total number of predictions. We regard a frame as congested if the area of the congested region is larger than a threshold λ, which in all our experiments we fixed as a proportion of the foreground pixels. LA is the intersection-over-union (IoU) between the detected region and the ground truth. IoU is calculated as IoU = C / (N + M − C), where C is the number of points common to the detected and ground-truth regions, N is the number of points in the detected region, and M is the number of points in the ground-truth region. The miss rate (MR) is computed as the number of missed detections (in frames) divided by the total number of frames in the given temporal segment.
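A minimal sketch of the three metrics, with the detected and ground-truth regions given as lists of (x, y) points:

def detection_accuracy(n_correct, n_total):
    return n_correct / n_total                     # DA

def localization_iou(detected, ground_truth):
    det, gt = set(map(tuple, detected)), set(map(tuple, ground_truth))
    C = len(det & gt)                              # points common to both regions
    return C / (len(det) + len(gt) - C)            # IoU = C / (N + M - C)

def miss_rate(n_missed_frames, n_total_frames):
    return n_missed_frames / n_total_frames        # MR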

The performance of the proposed framework in terms of the three evaluation metrics can be seen in Tab. 4, where DA, MR, and LA are computed for all video sequences. Our proposed method detected congestion with an average detection accuracy of 0.90 in almost all the video sequences except Hseq01 and Hseq02, where the framework achieved comparatively lower values because it missed some frames. In these frames, relatively small blobs (congested areas) were produced, which were filtered out by the threshold λ.

The proposed framework's effectiveness is demonstrated qualitatively in Figs. 8 and 9. Fig. 8 shows the output of the framework at independent temporal segments of the Station1 video sequence. This sequence demonstrates growing congestion, where the area of the congested region increases with time. The video exhibits evacuation behaviour, where people from different entrances try to leave through a single narrow exit. Fig. 8 illustrates different temporal segments of the video: the crowd density in the first temporal segment T1 is lower than in T4, and therefore the area of the congested region in T1 is smaller. With time, more people join, and congested regions start forming in the later temporal segments. The output of our framework was then compared with the ground truth.

Table 4: Performance of the proposed framework in DA, MR, and IoU for all video sequences

Fig. 9 shows congestion in the Hseq01 video sequence. We observe dynamic congestion in this sequence, where congestion appears in different regions of the scene at multiple temporal segments. In this video sequence, a large group of people circumambulates the Kaaba to perform obligatory rituals. Another group of people blocks their way for cleaning purposes. This small group of pedestrians moves orthogonally to the main flow and obstructs the other group's path. As the small group penetrates forward, it chokes people's movement at different spatio-temporal segments, as shown in the video. Fig. 9 compares the performance of the proposed framework with the ground truth. The proposed framework accurately localized the congested region in all temporal segments of the video, as shown in Figs. 8 and 9.

Figure 8: Comparison of the proposed framework with the ground truth on the Station1 video sequence. The first row shows the ground truth, where the red blobs mark the congested regions in different temporal segments. The second row shows the congested regions detected by the proposed framework

Figure 9: Comparison of the proposed framework with the ground truth on the Hseq01 video sequence. The first row shows the ground truth, and the second row shows the congested regions detected by the proposed framework in different temporal segments of the video

    5 Conclusion and Future Work

In this work, we have proposed a practical framework for detecting congestion in crowds. The framework generates spatio-temporal images from pedestrian trajectories and introduces a CNN to learn their representation. The CNN classifies each spatio-temporal image into two classes, i.e., 'normal' and 'congested'. The framework then generates a score map by encoding each point of a trajectory with its respective class score. Congested regions are obtained after employing the non-maximum suppression algorithm. The proposed framework achieves state-of-the-art performance on challenging video sequences and outperforms other reference methods by a significant margin. This superior performance arises because the CNN efficiently learns hierarchical features from spatio-temporal images. Empirical studies suggest that the proposed framework is adaptable and that the proposed CNN classifier can easily be replaced with other classifiers.

In the future, we plan to validate our framework on more video sequences, including those recorded from various cameras, scenarios from live camera feeds, and simulation videos taken from different angles. We also plan to optimize our framework to run on low-end hardware and provide analytics in real time.

Acknowledgement: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through Project Number 0909.

Funding Statement: This research work is supported by the Ministry of Education in Saudi Arabia (Grant Number 0909).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
