
    Intelligent Traffic Surveillance through Multi-Label Semantic Segmentation and Filter-Based Tracking

Computers, Materials & Continua, 2023, Issue 9

Asifa Mehmood Qureshi, Nouf Abdullah Almujally, Saud S. Alotaibi, Mohammed Hamad Alatiyyah and Jeongmin Park

1 Department of Creative Technologies, Air University, Islamabad, 44000, Pakistan

2 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia

3 Information Systems Department, Umm Al-Qura University, Makkah, Saudi Arabia

4 Department of Computer Science, College of Sciences and Humanities in Aflaj, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia

5 Department of Computer Engineering, Tech University of Korea, Siheung-si, Gyeonggi-do, 15073, Korea

ABSTRACT Road congestion, air pollution, and accident rates have all increased as a result of rising traffic density and worldwide population growth. Over the past ten years, the total number of automobiles has increased significantly around the world. In this paper, a novel method for intelligent traffic surveillance is presented. The proposed model is based on multi-label semantic segmentation using a random forest classifier that classifies the images into five classes. To improve the results, mean-shift clustering was applied to the segmented images. Afterward, the pixels labeled as vehicles were extracted and blob detection was applied to mark each vehicle. For the validation of each detection, a vehicle verification method based on the structural similarity index is proposed. The tracking of vehicles across image frames is done using an Identifier (ID) assignment technique and a particle filter. Vehicle counting in each frame, along with trajectory estimation, was also performed for each object. Our proposed system demonstrated a remarkable vehicle detection rate of 0.83 over Vehicle Aerial Imaging from Drone (VAID), 0.86 over AU-AIR, and 0.75 over the Unmanned Aerial Vehicle Benchmark Object Detection and Tracking (UAVDT) dataset during the experimental evaluation. The proposed system can be used for several purposes, such as vehicle identification in traffic, traffic density estimation at intersections, and traffic congestion sensing on a road.

KEYWORDS Traffic surveillance; multi-label segmentation; random forest; particle filter; computer vision

    1 Introduction

For numerous real-time computer vision applications, fast processing of image frame sequences is crucial. One of the most significant areas is the tracking of moving objects in video image sequences, such as traffic control and surveillance, sports reporting, and video annotation [1]. The number of vehicles has increased drastically over the past few years; therefore, there is a need to automate traffic surveillance systems. A large number of image-based systems have been proposed by the research community, but some challenges still need to be addressed to enhance the capabilities of traffic monitoring systems. Many effective image processing techniques have been proposed which perform well on static image data. However, these scenarios become more challenging when the background and moving objects change dynamically [2]. Techniques such as background subtraction and consecutive frame differencing are not suitable when the images are captured from a mobile platform, because background pixels also exhibit motion, which causes them to be classified as foreground objects. Semantic segmentation, by contrast, has proven quite effective in several areas of computer vision and image processing, including intelligent transportation, medical imaging, object recognition, and human-computer interaction [3]. Semantic segmentation is the grouping and individual labeling of pixels that belong to the same class [4]. Traditional traffic monitoring systems only perform binary segmentation, e.g., vehicle and background labeling. However, our proposed system performs multi-class segmentation for a better understanding of the scene and its different objects. Furthermore, aerial data has the potential to greatly improve traffic management, control, efficiency, and effectiveness. But it also poses challenges, including varying object size, large covered areas other than roads, and different road designs, which need to be addressed to develop systems based on data retrieved from mobile platforms.

This paper proposes a reliable system for traffic monitoring in aerial images, specifically designed with the above-mentioned limitations in mind. The approach first segments all Red, Green, and Blue (RGB) images into various classes, which include vehicles, roads, buildings, sky, and greenery. Then, to further improve the result, the segmented images are subjected to mean-shift clustering to group the pixels having the same class labels. The vehicle detection phase comes next, which consists of two steps: i) extracting only those pixels that belong to the vehicle class and ii) finding contours by detecting the borders of each object. To verify each detected vehicle, the Structural Similarity Index Measure (SSIM) score was calculated using each image's corresponding mask. Afterward, the traffic density on the road was estimated by counting each verified vehicle. To track multiple vehicles within a single frame, a unique ID was allocated based on a distinctive feature descriptor named Oriented FAST and Rotated BRIEF (ORB). Finally, the location of each vehicle was estimated using a particle filter, and the allocated IDs were retrieved in each succeeding frame by matching the ORB keypoint descriptors. Vehicle trajectories were estimated for each tracked vehicle. Three large publicly available datasets, the UAVDT, AU-AIR, and VAID datasets, were used for experimentation.

    This paper’s primary contributions are as follows:

• Multi-label pixel segmentation technique for accurate vehicle extraction from Red, Green, and Blue (RGB) images.

• Proposing an easy and efficient way for detection verification based on the SSIM score using ground truth.

• Designing a powerful vehicle recognition system grounded on ORB features for ID retrieval and a particle filter for tracking.

The rest of the paper is structured as follows: Section 2 reviews the research work pertinent to the proposed system. Section 3 describes the overall system methodology. Section 4 describes the datasets used in the proposed work and presents several experiments that demonstrate the system's robustness. Section 5 concludes the research and lists some future directions.

    2 Related Work

Researchers have been actively working on traffic monitoring algorithms for the past few years. They have investigated their systems' behavior using images taken from static cameras, satellites, and aerial platforms. In most cases, the whole image is first preprocessed to remove irrelevant areas other than vehicles, and features are then extracted. Different approaches are based on image differencing, foreground extraction, or background subtraction techniques. These approaches are simple and especially useful when the Region of Interest (ROI) is visible and of reasonable size in the images [5]. However, in aerial images the vehicle size varies depending on the height at which the images are taken. Therefore, semantic segmentation approaches are being used for detection and tracking purposes [6]. Moreover, additional steps of clustering and identifier assignment for improved results are also common. Thus, the related work is categorized into semantic segmentation-based and deep learning-based traffic monitoring systems to present an overview of existing models and techniques.

    2.1 Semantic Segmentation-Based Traffic Monitoring Systems

Zhang et al. [7] performed aerial vehicle recognition by deploying a multi-label semantic segmentation mechanism for better scene understanding. They used a Mask Region-based Convolutional Neural Network (R-CNN) to segment different regions and then eliminated background objects to reduce the computational area. To detect aerial vehicles, a visual attention mechanism was used for feature extraction, and the features were passed to an Adaboost classifier to obtain the exact location. Makrigiorgis et al. [8] incorporated segmentation for road extraction using EfficientNet, which combines MobileNetV2 and ResNet18. Then, the You Only Look Once version 3 (YOLOv3) algorithm detects the vehicles on the extracted ROI. In complex cases, background elimination in real-time scenarios is more challenging. Also, deploying a pre-trained deep learning algorithm after the removal of invalid data only increases the computational complexity of the model, as these models can perform well when applied directly to raw images. Their road extraction mechanism could be replaced by multi-label scene segmentation to better analyze the images and directly obtain vehicles for detection.

Gomaa et al. [9] argued that in aerial images both the background and foreground are moving; therefore, approaches based on detecting motion are not feasible. Thus, a method based on top-hat and bottom-hat transformations, along with the Otsu partitioning method and morphological operations, was deployed for detection. Since vehicle motion is important, Shi-Tomasi features were extracted and clusters were formed based on displacement and angle trajectories. The background clusters were removed, leaving behind the vehicles. Robust features of each vehicle were used for tracking across images, and high accuracy was achieved by using multiple feature maps. In another study [10], an object detection method for images taken under low-illumination conditions has been proposed. The methodology presented a two-stage approach, i.e., cloud-based image enhancement and edge-based detection, which is an efficient and dynamic way to address each image's contrast enhancement requirement separately. The authors of [11] employed an innovative method for image stacking. Only small cars were included in the image registration procedure, and all of the stationary background near the moving vehicles was blurred using a warping technique. This algorithm's primary objective is to eliminate distracting background elements so that only the vehicle is extracted from the surrounding area. These methods, however, rely on complex features and have high time complexity.

    2.2 Deep Learning-Based Traffic Monitoring Systems

Numerous researchers have implemented feature detection approaches for directly recognizing vehicles in images. Kong et al. [12] used a vehicle detection technique based on salient point feature extraction for image stabilization. A particle filter using Histogram of Oriented Gradients (HOG) features was used for tracking across frames. Gupta et al. [13] applied different deep-learning models directly to images to detect vehicles. The models include a two-stage detector, the Faster Region-based Convolutional Neural Network (Faster R-CNN), compared with one-stage detectors, i.e., the Single Shot Detector (SSD), YOLOv3, and YOLOv4. The YOLOv4 algorithm outperformed all other models with an 88% mean average precision (mAP) score. These models are highly sensitive to class imbalance and therefore require data-augmentation methodologies. Ozturk et al. [14] proposed a vehicle detection model based primarily on a Convolutional Neural Network (CNN) with the support of morphological corrections, named the miniature CNN architecture. This post-processing is computationally expensive, and the accuracy does not carry over to alternative aerial image datasets. The combination of deep learning for feature extraction and Support Vector Machines (SVMs) for classification is described in [15]. This method's use of a brute-force search results in higher computing intensity.

Baykara et al. [16] used the YOLO method to find the vehicles. Lane polygon and lane ID detection were used for vehicle tracking: a vehicle is passed to the tracking module, which gives each newly discovered vehicle an ID when its centroid falls within the lane polygon. In [17], the detection of moving vehicles was done using a frame differencing method; a CNN was utilized for classification, and a Kalman filter was used to further track the vehicles.

    3 Proposed Method

This section elaborates on the proposed traffic monitoring system. An overview of the system architecture is shown in Fig. 1. All RGB images were segmented into five classes using a Random Forest classifier. To smooth the segmented images and reduce noise, mean-shift clustering was applied to form clusters of pixels having identical class labels. For vehicle detection, the pixels belonging to the vehicle class were first extracted, and contours were then found by detecting each object's edges. To verify each segmented vehicle, the SSIM score was calculated using the image masks (ground truth). The density of traffic on the road was estimated by counting each verified vehicle. To track multiple vehicles, a unique ID was allocated based on ORB features. Vehicles were tracked across multiple image frames using a particle filter. To locate each tracked vehicle, IDs were restored based on the ORB keypoint descriptors, and trajectories were approximated. The different stages of the proposed framework are thoroughly explained in the following subsections.

    3.1 Image Pre-Processing

Firstly, the RGB images from all three datasets were cropped to a constant dimension of 300 × 300 pixels to maintain consistency in size. Then, these images were converted to grayscale to reduce the number of channels.
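A minimal preprocessing sketch is shown below, assuming an OpenCV-based pipeline; the paper does not state which library was used or whether the 300 × 300 crop is corner- or center-aligned, so both choices here are illustrative.

import cv2

def preprocess(image_path):
    """Crop an RGB frame to 300x300 pixels and convert it to grayscale."""
    img = cv2.imread(image_path)                  # OpenCV loads images as BGR
    img = img[:300, :300]                         # illustrative top-left 300x300 crop
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # single-channel grayscale output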

    3.2 Image Segmentation

Image segmentation is a fundamental stage in many visual technologies that aim to assess situations [18]. As a result, segmentation plays a crucial part in numerous applications, such as surveillance systems, driverless vehicles, virtual reality, and medical imaging. Researchers have developed a variety of object segmentation algorithms [19], including the watershed transform [20], region growing, graph cuts, k-means clustering [21], conditional random fields, and more sophisticated deep learning (DL) techniques [22]. To find the best solution for the segmentation of traffic scenes, we used a random forest classifier, which outperformed the other classifiers we evaluated. To train the model, different features were extracted from the images. The feature set includes the original pixel values and the responses of the Gabor filter, Scharr filter, Prewitt filter, Gaussian filter, and median filter, as well as the local variance [23]. These features capture edges and color-space changes, which helped detect different regions in the images.

    Figure 1:An overview of the proposed intelligent traffic surveillance system

First of all, the pixel value of the original image was taken as feature 1. Then, the Gabor filter was applied, which is a linear filter used for disparity estimation, feature extraction, texture categorization, and edge detection. The Gabor kernel can be expressed as Eq. (1).

where x′ = −x sin θ + y cos θ and y′ = x cos θ + y sin θ, and x and y are the image coordinates. θ represents the direction of the filter's parallel stripes, σ represents the standard deviation of the Gaussian component, γ is the aspect ratio determining the ellipticity of the function's support, and ψ denotes the phase of the plane wave.

The resultant matrix R(x, y) is obtained by convolving the original image I(x, y) with the Gabor filter, using Eq. (2).

Scharr and Prewitt filters were also applied to detect edges in both the horizontal and vertical directions and to highlight gradient edges using the first derivative. The magnitude and orientation of the gradient are computed using Eqs. (3) and (4).

To extract features after reducing noise in the image, a Gaussian low-pass filter was applied, whose kernel is computed using Eq. (5).

where G is the Gaussian kernel. To obtain an extensive and meaningful feature vector, a median filter was also applied to remove salt-and-pepper noise, as it replaces every pixel value with the median of its neighborhood. Since the task was to multi-label the image, the variance was also computed using Eq. (6) to measure the deviation of each pixel value from its mean.
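A sketch of this per-pixel feature stack is given below; the specific library implementations (scikit-image and SciPy filters), the Gabor frequency, and the window sizes are assumptions for illustration, not the paper's stated settings.

import numpy as np
from scipy import ndimage
from skimage.filters import gabor, scharr, prewitt

def pixel_features(gray):
    """Build an (H*W, 7) matrix of per-pixel responses: raw value, Gabor, Scharr,
    Prewitt, Gaussian, median, and local variance."""
    gray = gray.astype(np.float32)
    gabor_real, _ = gabor(gray, frequency=0.6)                 # Gabor response (illustrative frequency)
    feats = [
        gray.ravel(),                                          # feature 1: original pixel values
        gabor_real.ravel(),
        scharr(gray).ravel(),                                  # Scharr gradient magnitude
        prewitt(gray).ravel(),                                 # Prewitt gradient magnitude
        ndimage.gaussian_filter(gray, sigma=2).ravel(),        # Gaussian low-pass filter
        ndimage.median_filter(gray, size=3).ravel(),           # median filter (salt-and-pepper noise)
        ndimage.generic_filter(gray, np.var, size=3).ravel(),  # local variance around each pixel
    ]
    return np.stack(feats, axis=1)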

Figure 2: Output of image segmentation. (a) Original image, (b) after semantic segmentation

3.3 Mean-Shift Clustering

To further improve the segmentation accuracy of each class and to remove noise, we applied mean-shift clustering. It is a gradient ascent approach that locates the local maximum of the density of a data collection by applying iterative mean shifts. It is a non-parametric method that works well for finding clusters of arbitrary shape in the data [24]. Assuming that n sample points x_i, i = 1, 2, ..., n, are given in the d-dimensional space R^d, the fundamental form of the mean shift vector at x can be calculated using Eq. (7).

where h denotes the radius and S_h represents the high-dimensional spherical region whose points y satisfy the relationship in Eq. (8).

where k denotes the number of points x_i that fall within the boundary of S_h. Two factors, namely the neighborhood (distance) bandwidth and the color (pixel) bandwidth, influence the mean shift method's final clustering. For the points x_i that fall within the bounds of S_h, the following rules are defined.

When the color bandwidth between pixels x and x_i is small, the probability density is high. Likewise, a small distance bandwidth between x and x_i indicates a high probability density when comparing their distances. As a result, these two rules can be combined to form the probability density function, and the kernel function can be defined using Eq. (9).
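This step could be realized, for example, with scikit-learn's MeanShift. The feature design (row, column, class label), the bandwidth estimate, and the majority-label smoothing below are assumptions chosen for clarity rather than speed, not the paper's exact procedure.

import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def smooth_labels(label_map):
    """Cluster (row, col, label) samples so spatially coherent regions share one label."""
    h, w = label_map.shape
    rows, cols = np.mgrid[0:h, 0:w]
    X = np.stack([rows.ravel(), cols.ravel(), label_map.ravel()], axis=1).astype(np.float32)
    bandwidth = estimate_bandwidth(X, quantile=0.1, n_samples=500)
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(X)
    smoothed = np.empty(h * w, dtype=label_map.dtype)
    for c in np.unique(ms.labels_):
        members = ms.labels_ == c
        values, counts = np.unique(label_map.ravel()[members], return_counts=True)
        smoothed[members] = values[np.argmax(counts)]  # majority label removes isolated noisy pixels
    return smoothed.reshape(h, w)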

Figure 3: Result of mean-shift clustering. (a) Segmentation with noise, (b) mean-shift clustering

    3.4 Vehicle Detection

Following multi-class semantic segmentation, we extract only those pixels that belong to the vehicle class, as each pixel is tagged and allocated to a certain class during segmentation. Since we only want to detect vehicles, we set all pixel values to zero except for the vehicle class. After extracting the pixels of the vehicle class, the resultant image was converted into a binary image using Eq. (10).

where L stands for the image that only contains pixels of the vehicle class, and bw stands for the resulting binary image, as seen in Fig. 4.

Figure 4: Vehicle pixel extraction. (a) Binary image with only vehicle masks, (b) the resulting image

As the extracted vehicles differ in color or brightness from their surroundings, a blob detection technique [25] was applied to identify each vehicle separately, as represented in Fig. 5.
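A minimal sketch of this step follows, using OpenCV contour extraction as the blob detector; the vehicle class index, the area threshold, and the use of cv2.findContours are illustrative assumptions.

import cv2
import numpy as np

VEHICLE_LABEL = 1  # hypothetical index of the vehicle class in the label map

def detect_vehicles(label_map):
    """Keep only vehicle-class pixels, binarize them (Eq. (10)), and return one box per blob."""
    bw = np.where(label_map == VEHICLE_LABEL, 255, 0).astype(np.uint8)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]      # drop tiny blobs
    return bw, boxes  # boxes are (x, y, w, h) tuples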

Figure 5: Vehicle detection using the blob detection algorithm over the AU-AIR, VAID, and UAVDT datasets

    3.5 Vehicle Verification

To verify each segmented and detected vehicle, we refer back to the ground truth of each particular image to confirm the presence of vehicles at certain locations. For verification, the Region of Interest (ROI) of each detection was extracted from both the segmented image and the ground truth. Afterward, the Structural Similarity Index Measure (SSIM) was calculated to measure the similarity score between the two ROIs [26]. SSIM combines three key features of the image, i.e., luminance, contrast, and structure, as calculated using Eqs. (11)–(13).

where L denotes luminance, C denotes contrast, and S denotes structure. μ_x and μ_y represent the sample means of x and y, σ_x and σ_y are the standard deviations, and σ_xy denotes the sample correlation coefficient between x and y. C1 and C2 are constants needed to stabilize the computation when the denominator approaches zero. Thus, the general formula of SSIM can be represented using Eq. (14).

where α, β, and γ describe the relative importance of each feature. If the SSIM score is greater than 0.2, the detection is counted as a true positive. The proposed algorithm for vehicle verification is given in Algorithm 1.
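The verification decision can be sketched with scikit-image's SSIM implementation, as below; the function choice and the assumption that both ROIs are equally sized grayscale patches are illustrative, while the 0.2 threshold comes from the description above.

from skimage.metrics import structural_similarity as ssim

def verify_detection(segmented_roi, ground_truth_roi, threshold=0.2):
    """Return True if the detected ROI is structurally similar to the ground-truth ROI."""
    score = ssim(segmented_roi, ground_truth_roi)  # combines luminance, contrast, and structure
    return score > threshold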

    3.6 ID Assignment Based on ORB Features

Each detected vehicle was given an ID based on ORB features before tracking, so that it could be re-identified in the following image frames.

ORB is a fast and efficient feature detector [27]. For keypoint detection, it makes use of the FAST (Features from Accelerated Segment Test) keypoint detector, and its descriptor is an advanced form of BRIEF (Binary Robust Independent Elementary Features). It is also invariant to scale and rotation. A patch moment is obtained using Eq. (15).

where the summation runs over the pixel intensities at image locations x and y, and p and q denote the order of the moment. The center of mass of the patch can be determined from these moments using Eq. (16).

The patch orientation can then be defined by Eq. (17).

The extracted ORB features were used to match tracked vehicles in succeeding frames: if a match was found, the ID was restored; otherwise, the vehicle was registered in the system with a new ID. Fig. 6 shows the outcome of applying the ORB feature descriptor to the extracted vehicles and of ID restoration across frames.
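A minimal sketch of the descriptor step with OpenCV's ORB implementation follows; the feature budget per vehicle patch is an illustrative assumption.

import cv2

orb = cv2.ORB_create(nfeatures=100)  # ORB: FAST keypoints + rotated BRIEF descriptors

def describe_vehicle(gray_patch):
    """Compute ORB keypoints and binary descriptors used as a vehicle's identity signature."""
    keypoints, descriptors = orb.detectAndCompute(gray_patch, None)
    return keypoints, descriptors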

    3.7 Vehicle Tracking

To track multiple vehicles across different image frames, a particle filter was applied. Particle filters belong to the broad class of Sequential Monte Carlo (SMC) techniques that are frequently utilized for tracking objects. To determine the minimum cost function, particle filters frequently start with the premise that the data distribution is unknown and that distribution "particles", or samples, are assessed, examined, and aggregated into more meaningful conclusions [28]. For tracking, the posterior probability density at instant t is estimated, which is acquired in the following two steps.

Step 1 Prediction: Assume that the posterior probability density function p(g_{t-1} | o_{1:t-1}) and the starting probability density p(g_0) are both known at time t-1. g_t is a three-dimensional state vector, g_t = [g_tx, g_ty, g_ts], where the position of the object is expressed by g_tx and g_ty and the change in size is represented by g_ts. Thus, the prior probability can be defined using Eq. (18).

where p(g_t | g_{t-1}) represents the state equation of the target.

Figure 6: ID assignment and restoration. (a) ID assigned to each vehicle based on ORB features, (b) feature matching across frames, (c) ID restored for the same vehicle in the succeeding frame

Step 2 Updating: the observation model of the system yields p(g_t | o_{1:t}) as given in Eq. (19).

p(o_t | g_t) represents the observation likelihood function, which is obtained from the observation of the tracked object, whereas p(o_t | o_{1:t-1}) is a normalizing constant. The recursive Bayesian filter, also known as the particle filter, is approximated by the non-parametric Monte Carlo method as given in Eq. (20).
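A compact, self-contained particle filter in the spirit of Eqs. (18)–(20) is sketched below; the random-walk motion model, the Gaussian likelihood around an observed centroid, and all numeric settings are assumptions for illustration rather than the paper's exact choices.

import numpy as np

class ParticleFilter:
    def __init__(self, init_state, n_particles=200, motion_std=(5.0, 5.0, 0.5)):
        # State vector per particle: [x, y, s] as in g_t = [g_tx, g_ty, g_ts].
        self.particles = np.tile(np.asarray(init_state, float), (n_particles, 1))
        self.weights = np.full(n_particles, 1.0 / n_particles)
        self.motion_std = np.asarray(motion_std)

    def predict(self):
        """Prediction step: propagate particles with an assumed random-walk state model (Eq. (18))."""
        self.particles += np.random.normal(0.0, self.motion_std, self.particles.shape)

    def update(self, observation, obs_std=10.0):
        """Update step: reweight by an assumed Gaussian observation likelihood (Eq. (19)), then resample."""
        d = np.linalg.norm(self.particles[:, :2] - np.asarray(observation, float), axis=1)
        self.weights = np.exp(-0.5 * (d / obs_std) ** 2) + 1e-12
        self.weights /= self.weights.sum()
        idx = np.random.choice(len(self.particles), len(self.particles), p=self.weights)
        self.particles = self.particles[idx]            # multinomial resampling
        self.weights.fill(1.0 / len(self.particles))

    def estimate(self):
        """Posterior mean over particles approximates the Monte Carlo estimate of Eq. (20)."""
        return self.particles.mean(axis=0)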

    The result of vehicle tracking can be seen in Fig.7.

    Figure 7:Vehicle tracking using particle filter across image frames

    3.8 Trajectories and Density Estimation

Finally, to analyze the vehicles' tracks and paths, a trajectory for each vehicle was drawn by recording the location of that vehicle, as obtained by the particle filter, across the image frames using Eq. (23). The resulting trajectories can be visualized in Fig. 8.

where T_i represents the estimated trajectory of the i-th vehicle, formed from the coordinates of that vehicle's location as given by its rectangular bounding box in each frame.
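A simple bookkeeping sketch for the trajectory and density estimates follows; the dictionary-based storage and the bounding-box centroid definition are assumptions for illustration.

from collections import defaultdict

trajectories = defaultdict(list)  # vehicle ID -> list of (cx, cy) centroids over time

def record_frame(tracked_boxes):
    """Append each tracked vehicle's bounding-box centroid and return the per-frame count."""
    for vid, (x, y, w, h) in tracked_boxes.items():
        trajectories[vid].append((x + w / 2.0, y + h / 2.0))
    return len(tracked_boxes)  # density estimate: number of verified vehicles in the frame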

Figure 8: Estimated trajectories of tracked vehicles. (a) Trajectories of each vehicle plotted using the centroid points of the bounding boxes, (b) final output

Also, a record of the detected vehicle count was maintained in each frame to estimate the density of traffic on the road, as seen in Fig. 9.

Figure 9: Density estimation using the vehicle count, displayed at the left corner of each image

    4 Performance Evaluation

This section briefly describes the datasets utilized for the vehicle detection and tracking system, along with the findings of several experiments that were used to assess the proposed system and its evaluation against current state-of-the-art traffic monitoring models [29].

    4.1 Dataset Description

    We used the following publicly available datasets to develop and test our proposed model.

    4.1.1 VAID Dataset

Lin et al. presented the Vehicle Aerial Imaging from Drone (VAID) dataset in 2020 for smart traffic monitoring using vehicle detection and classification. It comprises 6000 images taken from a drone, with a final image resolution of 1137 × 640 after downsizing. All the images are in .jpg format and were captured at 23.9 frames per second. For reliable imagery acquisition, the drone was positioned between 90 and 95 m above the ground.

    4.1.2 AU-AIR Dataset

The AU-AIR dataset [30] consists of 32,823 frames extracted from 8 video segments totaling more than 2 h, labeled with 132,034 object instances in total. The images were acquired at a rate of 30 frames per second with a resolution of 1920 × 1080. The multi-modal sensor data in AU-AIR include altitude, position, time, and velocity. The traffic videos were captured at P.O. Pedersensvej and Skejby Nordlandsvej (Denmark).

    4.1.3 UAVDT Dataset

The Unmanned Aerial Vehicle Benchmark Object Detection and Tracking (UAVDT) dataset [31] consists of 100 video sequences containing 80,000 image frames, selected from more than 10 h of footage taken with an Unmanned Aerial Vehicle (UAV) platform in various urban environments. All the images are in .jpg format with a resolution of 1080 × 540 pixels, acquired at a rate of 30 frames per second. The scenarios include arterial streets, squares, toll booths, motorways, T-junctions, and crossings.

    4.2 Experimental Settings and Results

Python (3.7) was used to design and test the system on a computer with an Intel Core i5 processor running the 64-bit version of Windows 10. The machine has 8 gigabytes (GB) of Random Access Memory (RAM) and a 2.8 gigahertz (GHz) Central Processing Unit (CPU). The performance of the proposed detection and tracking algorithms was evaluated using the precision, recall, and F1-score metrics.

    4.2.1 Experiment I:Semantic Segmentation Accuracy

The images from each dataset were divided into training and testing samples: 80% of the samples were used for training and 20% for testing. A Random Forest classifier was trained, as it improves accuracy by fitting a variety of decision tree classifiers on different subsamples of the dataset and aggregating their predictions. The overall accuracy over the training and testing samples was 92% and 77%, respectively.
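A training sketch matching this 80/20 protocol is shown below, assuming scikit-learn and per-pixel feature/label matrices such as those produced by the feature stack in Section 3.2; the hyperparameters are illustrative.

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_segmenter(X, y):
    """X: (n_pixels, n_features) feature matrix; y: per-pixel labels for the five classes."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X_tr, y_tr)
    print("train accuracy:", accuracy_score(y_tr, clf.predict(X_tr)))
    print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf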

    4.2.2 Experiment II:Precision,Recall,and F1 scores

Table 1 shows the precision, recall, and F1 scores for vehicle detection. True Positive represents the number of vehicles detected successfully, False Positive denotes detections other than vehicles, and False Negative represents the number of missed vehicles. The results show that the proposed algorithm can recognize vehicles of variable size with high precision.

Table 1: Precision, recall, and F1 scores for vehicle detection

For tracking, True Positive represents the number of vehicles tracked successfully, False Positive represents the number of vehicles falsely tracked for more than three frames, and False Negative denotes the number of vehicles not tracked. Table 2 presents the precision, recall, and F1 scores for the proposed tracking algorithm.

Table 2: Precision, recall, and F1 scores for vehicle tracking
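The values in Tables 1 and 2 follow directly from these TP/FP/FN counts; a minimal helper illustrating the computation is given below.

def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1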

    4.2.3 Experiment III:ID Assignment and ID Recovery

This section discusses the results of ID assignment and ID recovery for tracking multiple objects (vehicles) across different image frames. For this, we used the True ID Rate (TIDR) to assess the vehicle ID assignment module and the True Recovery Rate (TRR), which represents the number of IDs recovered successfully, to assess the recovery module. For feature matching, if the number of feature matches exceeds 5, a match is declared and the corresponding ID is recovered; otherwise, a new ID is assigned. Table 3 shows the results for these performance evaluation metrics.
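The recovery decision can be sketched with OpenCV's brute-force Hamming matcher over the stored ORB descriptors; the distance cut-off below is an illustrative assumption, while the match-count threshold of 5 is taken from the description above.

import cv2

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def recover_id(new_desc, known_vehicles, min_matches=5):
    """Return the stored ID whose ORB descriptors best match new_desc, or None for a new ID."""
    best_id, best_count = None, 0
    for vid, stored_desc in known_vehicles.items():
        matches = bf.match(stored_desc, new_desc)
        good = [m for m in matches if m.distance < 50]  # illustrative Hamming-distance cut-off
        if len(good) > best_count:
            best_id, best_count = vid, len(good)
    return best_id if best_count > min_matches else None  # "exceeds 5" matches -> recover the ID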

    4.2.4 Experiment IV:Comparison with Other Systems

We evaluated our proposed system against other state-of-the-art methods, including deep learning techniques, available in the literature. Table 4 compares different detection models over the AU-AIR, VAID, and UAVDT datasets. It can be seen that our model outperformed all other techniques in terms of precision. Table 5 presents the advantages and disadvantages of the proposed and existing models.

Table 6 presents a comparison of the tracking algorithm with other tracking techniques, whereas Table 7 compares the advantages and disadvantages of the existing and proposed models.

Table 6: Comparison of the proposed tracking model with state-of-the-art techniques

    5 Discussion

The proposed system is an effective solution for intelligent traffic monitoring using aerial images. Object recognition in high-resolution aerial images is a very challenging task; therefore, we proposed a mechanism based on multi-label semantic segmentation and particle filter-based tracking to achieve efficient results. However, the proposed method has some drawbacks. First of all, the system has only been tested on RGB images captured during the daytime. The approach could be further validated on image and video datasets captured at night or in low-light situations, since many researchers have had success with such datasets. Moreover, our segmentation and detection algorithm faces difficulty under partial or full occlusion, on roads covered by trees, or with similar-looking objects. Fig. 10 illustrates these failure cases, including a nighttime image from the UAVDT dataset.

Figure 10: Drawbacks of the vehicle detection algorithm. (a) Different illumination conditions at night time, (b) vehicle not detected due to background clutter, (c) vehicle not detected due to occlusion

    6 Conclusion and Future Works

In this paper, an effective system for vehicle detection and tracking under various road conditions is proposed. The RGB images are first segmented into five classes, and the segmented images are then subjected to mean-shift clustering to remove noise and smooth the output. After that, vehicle pixels are extracted and a blob detection technique is applied to detect each vehicle. Each detection is verified using the ground truth labeling. To track multiple vehicles, each of them is assigned an ID based on ORB features. The proposed model produces significant results on all three datasets, which demonstrates the effectiveness of our methodology.

To increase performance in the future, we plan to test new and improved classifiers on more complicated and varied datasets. Moreover, to improve the performance of the traffic monitoring system, we aim to use deep learning methods.

Acknowledgement: The authors are thankful to Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Funding Statement: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2023-2018-0-01426) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation). The funding of this work was provided by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Author Contributions: Study conception and design: Asifa Mehmood Qureshi, Jeongmin Park; data collection: Nouf Abdullah Almujally; analysis and interpretation of results: Asifa Mehmood Qureshi, Saud S. Alotaibi and Mohammed Hamad Alatiyyah; draft manuscript preparation: Asifa Mehmood Qureshi. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:All publicly available datasets are used in the study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
