      Intelligent Traffic Surveillance through Multi-Label Semantic Segmentation and Filter-Based Tracking

Computers, Materials & Continua, 2023, Issue 9

Asifa Mehmood Qureshi, Nouf Abdullah Almujally, Saud S. Alotaibi, Mohammed Hamad Alatiyyah and Jeongmin Park

1 Department of Creative Technologies, Air University, Islamabad, 44000, Pakistan

2 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia

3 Information Systems Department, Umm Al-Qura University, Makkah, Saudi Arabia

4 Department of Computer Science, College of Sciences and Humanities in Aflaj, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia

5 Department of Computer Engineering, Tech University of Korea, Siheung-si, Gyeonggi-do, 15073, Korea

ABSTRACT Road congestion, air pollution, and accident rates have all increased as a result of rising traffic density and worldwide population growth. Over the past ten years, the total number of automobiles has increased significantly around the world. In this paper, a novel method for intelligent traffic surveillance is presented. The proposed model is based on multi-label semantic segmentation using a random forest classifier that classifies the images into five classes. To improve the results, mean-shift clustering was applied to the segmented images. Afterward, the pixels labeled as vehicles were extracted and blob detection was applied to mark each vehicle. For the validation of each detection, a vehicle verification method based on the structural similarity index is proposed. The tracking of vehicles across image frames is done using an identifier (ID) assignment technique and a particle filter. Vehicle counting in each frame, along with trajectory estimation, was also performed for each object. The proposed system demonstrated a vehicle detection rate of 0.83 over the Vehicle Aerial Imaging from Drone (VAID) dataset, 0.86 over AU-AIR, and 0.75 over the Unmanned Aerial Vehicle Benchmark Object Detection and Tracking (UAVDT) dataset during the experimental evaluation. The proposed system can be used for several purposes, such as vehicle identification in traffic, traffic density estimation at intersections, and traffic congestion sensing on a road.

KEYWORDS Traffic surveillance; multi-label segmentation; random forest; particle filter; computer vision

      1 Introduction

For numerous real-time computer vision techniques, fast processing of image frame sequences is crucial. One of the most significant areas is the tracking of moving objects in video image sequences, with applications such as traffic control and surveillance, sports reporting, and video annotation [1]. The number of vehicles has increased drastically over the past few years; therefore, there is a need to automate traffic surveillance systems. A large number of image-based systems have been proposed by the research community, but some challenges still need to be addressed to enhance the capabilities of traffic monitoring systems. Many effective image processing techniques have been proposed that perform well on static image data. However, these scenarios become more challenging when the background and moving objects change dynamically [2]. Techniques such as background subtraction and consecutive frame differencing are not suitable when the images are captured from a mobile platform, because background pixels also exhibit motion, which classifies them as foreground objects. Meanwhile, semantic segmentation has proven quite effective in several areas of computer vision and image processing, including intelligent transportation, medical imaging, object recognition, and human-computer interaction [3]. Semantic segmentation is the grouping and individual labeling of pixels that belong to the same class [4]. Traditional traffic monitoring systems only perform binary segmentation, e.g., vehicle and background labeling. However, our proposed system performs multi-class segmentation for a better understanding of the scene and its different objects. Furthermore, aerial data has the potential to greatly improve traffic management, control, efficiency, and effectiveness. But it also brings challenges, including varying object sizes, large covered areas other than roads, and different road designs, which need to be addressed effectively to develop systems based on data retrieved from mobile platforms.

This paper proposes a reliable system for traffic monitoring in aerial images, specifically designed with the above-mentioned limitations in mind. The approach first segments all Red, Green, and Blue (RGB) images into various classes, which include vehicles, roads, buildings, sky, and greenery. Then, to further improve the result, the segmented images are subjected to mean-shift clustering to group the pixels having the same class labels. The vehicle detection phase comes next, which consists of two steps: i) extracting only those pixels that belong to the vehicle class and ii) finding contours by detecting the borders of each object. To verify each detected vehicle, the Structural Similarity Index Measure (SSIM) score was calculated using each image's corresponding mask. Afterward, the traffic density on the road was estimated by counting each verified vehicle. To track multiple vehicles within a single frame, a unique ID was allocated based on a distinctive feature descriptor named ORB (Oriented FAST and Rotated BRIEF). Finally, the location of each vehicle was estimated using the particle filter, and the allocated IDs were retrieved in each succeeding frame by matching the ORB keypoint descriptors. Vehicle trajectories were estimated for each tracked vehicle. Three large publicly available datasets, the UAVDT, AU-AIR, and VAID datasets, were used for experimentation purposes.

      This paper’s primary contributions are as follows:

• Multi-label pixel segmentation technique for accurate vehicle extraction from Red, Green, and Blue (RGB) images.

• An easy and efficient detection verification method based on the SSIM score using ground truth.

• A robust vehicle recognition system based on ORB features for ID retrieval and a particle filter for tracking.

The rest of the paper is structured as follows: Section 2 explains and evaluates the research work that is pertinent to the proposed system. Section 3 defines the overall system methodology. Section 4 describes the datasets used in the proposed work and presents several experiments that demonstrate the system's robustness. Section 5 discusses the limitations of the approach, and Section 6 concludes the paper and lists some future directions.

      2 Related Work

Researchers have been actively working on traffic monitoring algorithms for the past few years. They have investigated their systems' behaviors using images taken from static cameras, satellite images, and aerial images. In most cases, the whole images are first preprocessed to remove irrelevant areas other than vehicles, and then features are extracted from them. Different approaches are based on image differencing, foreground extraction, or background subtraction techniques. These approaches are simple and especially useful when the Region of Interest (ROI) is visible and of reasonable size in the images [5]. However, in aerial images the vehicle size varies depending on the height at which the images are taken. Therefore, semantic segmentation approaches are being used for detection and tracking purposes [6]. Moreover, additional clustering and identifier-assignment steps for improved results are also common. Thus, the related work is categorized into semantic segmentation-based and deep learning-based traffic monitoring systems to present an overview of existing models and techniques.

      2.1 Semantic Segmentation-Based Traffic Monitoring Systems

Zhang et al. [7] performed aerial vehicle recognition by deploying a multi-label semantic segmentation mechanism for better scene understanding. They used Mask Region-based Convolutional Neural Network (Mask R-CNN) to segment different regions and then eliminated background objects to reduce the computational area. To detect aerial vehicles, a visual attention mechanism was used for feature extraction, the output of which was passed to an AdaBoost classifier to get the exact location. Makrigiorgis et al. [8] incorporated segmentation for road extraction using EfficientNet, which combines MobileNetV2 and ResNet18. Further, the You Only Look Once version 3 (YOLOv3) algorithm detects the vehicles on the extracted ROI. In complex cases, background elimination in real-time scenarios is more challenging. Also, deploying a pre-trained deep learning algorithm after the removal of invalid data only increases the computational complexity of the model, as these models can perform well when applied directly to raw images. Their road extraction mechanism could be replaced by multi-label scene segmentation to better analyze the images and directly obtain vehicles for detection.

Gomaa et al. [9] argued that in aerial images both the background and the foreground are moving; therefore, approaches based on detecting motion are not feasible. Thus, a method based on top-hat and bottom-hat transformations, along with the Otsu partitioning method and morphological operations, was deployed for detection. Since vehicle motion is an important cue, Shi-Tomasi features were extracted and clusters based on displacement and angle trajectories were formed. The background clusters were removed, leaving behind the vehicles. Robust features of each vehicle were used for tracking across images. They achieved high accuracy by using multiple feature maps. In another study [10], an object detection method for images taken under low-illumination conditions has been proposed. The methodology presented a two-stage approach, i.e., cloud-based image enhancement and edge-based detection, which is an efficient and dynamic way to address each image's contrast enhancement requirement separately. The authors of [11] employed an innovative method for image stacking. Only small cars were included in the image registration procedure, and all of the stationary background near the moving vehicles was blurred using a warping technique. This algorithm's primary objective is to eliminate distracting background elements that can be smoothed to extract only the vehicle from the surrounding area. These methods, however, rely on complex features and have high time complexity.

      2.2 Deep Learning-Based Traffic Monitoring Systems

Numerous researchers have implemented feature detection approaches for directly recognizing vehicles in images. Kong et al. [12] used a vehicle detection technique based on salient point feature extraction for image stabilization. A particle filter using Histogram of Oriented Gradients (HOG) features was used for tracking across frames. Gupta et al. [13] applied different deep learning models directly to images to detect vehicles. The models include a two-stage detector, the Faster Region-based Convolutional Neural Network (Faster R-CNN), in comparison with one-stage detectors, i.e., the Single Shot Detector (SSD), YOLOv3, and YOLOv4. The YOLOv4 algorithm outperformed all other models with an 88% mean average precision (mAP) score. These models are highly sensitive to class imbalance and therefore require data augmentation methodologies. Ozturk et al. [14] proposed a vehicle detection model primarily using a Convolutional Neural Network (CNN) with the support of morphological corrections, named the miniature CNN architecture. This post-processing is computationally expensive. Additionally, it does not exhibit the same accuracy on alternative aerial image datasets. The combination of deep learning for feature extraction and Support Vector Machines (SVMs) for classification is described in [15]. This method's use of a brute-force search methodology results in higher computing intensity.

Baykara et al. [16] used the YOLO method to find the vehicles. Lane polygon and lane ID detection were used for vehicle tracking. A vehicle is passed to the tracking module, which gives each newly discovered vehicle an ID when its centroid falls within the lane polygon. In [17], the detection of moving vehicles was done using a frame differencing method; a CNN was utilized for classification, while a Kalman filter was used to further track the vehicles.

      3 Proposed Method

This section elaborates on the proposed traffic monitoring system. An overview of the system architecture is shown in Fig. 1. All RGB images were segmented into 5 classes using a random forest classifier. To smooth the segmented images and reduce noise, mean-shift clustering was applied to form clusters of pixels with identical class labels. For vehicle detection, the pixels belonging to the vehicle class were first extracted and then contours were found by detecting each object's edges. To verify each segmented vehicle, the SSIM score was calculated using the image masks (ground truth). The density of traffic on the road was estimated by counting each verified vehicle. To track multiple vehicles, a unique ID was allocated based on ORB features. Vehicles were tracked across multiple image frames using a particle filter. To locate each tracked vehicle, IDs were restored based on the ORB keypoint descriptors, along with trajectory approximation. The different stages of the proposed framework are thoroughly explained in the following subsections.

      3.1 Image Pre-Processing

Firstly, the RGB images from all three datasets were cropped to a constant dimension of 300×300 to maintain consistency in size. Then, these images were converted to grayscale to reduce the number of channels.
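A minimal sketch of this preprocessing step using OpenCV; the center-crop choice is an assumption, since the paper does not specify the cropping strategy, and frames are assumed to be at least 300 pixels on each side:

```python
import cv2

def preprocess(image_path, size=300):
    """Crop an RGB image to size x size and convert it to grayscale."""
    img = cv2.imread(image_path)                      # BGR image as loaded by OpenCV
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2      # center crop (assumed strategy)
    cropped = img[top:top + size, left:left + size]
    gray = cv2.cvtColor(cropped, cv2.COLOR_BGR2GRAY)  # single channel
    return gray
```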

      3.2 Image Segmentation

Image segmentation is used as a fundamental stage in many visual technologies that aim to assess situations [18]. As a result, segmentation plays a crucial part in numerous applications, such as surveillance systems, driverless vehicles, virtual reality, and medical imaging. Researchers have developed a variety of object segmentation algorithms [19], including the watershed transform [20], region growing, graph cuts, k-means clustering [21], conditional random fields, and more sophisticated deep learning (DL) techniques [22]. To find the best solution for the segmentation of traffic scenes, we used a random forest classifier, which outperformed the other classifiers. To train the model, different features were extracted from the images. The feature set included the original pixel values along with the responses of the Gabor, Scharr, Prewitt, Gaussian, and median filters, and the local variance [23]. These features capture edges and color-space changes, which helped detect different regions in the images.
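A minimal sketch of this per-pixel feature stack and classifier, assuming scikit-image filters and scikit-learn; the Gabor frequency, window sizes, and training setup are assumptions not given in the paper:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gabor, scharr, prewitt, gaussian, median
from sklearn.ensemble import RandomForestClassifier

def pixel_features(gray):
    """Stack per-pixel features: raw intensity plus filter responses."""
    feats = [gray.astype(float).ravel()]            # feature 1: original pixel value
    real, _ = gabor(gray, frequency=0.6)            # Gabor response (assumed frequency)
    feats.append(real.ravel())
    feats.append(scharr(gray).ravel())              # Scharr edges
    feats.append(prewitt(gray).ravel())             # Prewitt edges
    feats.append(gaussian(gray, sigma=3).ravel())   # Gaussian low-pass response
    feats.append(median(gray).ravel())              # median filter response
    feats.append(ndi.generic_filter(gray.astype(float), np.var, size=3).ravel())  # local variance
    return np.stack(feats, axis=1)                  # shape: (n_pixels, n_features)

# X: features of all training pixels, y: per-pixel class labels (0..4)
# clf = RandomForestClassifier(n_estimators=50).fit(X, y)
# labels = clf.predict(pixel_features(test_gray)).reshape(test_gray.shape)
```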

Figure 1: An overview of the proposed intelligent traffic surveillance system

First of all, the pixel value of the original image was taken as feature 1. Then, the Gabor filter was applied, which is a linear filter used for disparity estimation, feature extraction, texture categorization, and edge detection. The Gabor kernel can be expressed as Eq. (1).

where x′ = −x sin θ + y cos θ and y′ = x cos θ + y sin θ, and x and y are the image coordinates. θ represents the orientation of the filter's parallel stripes, σ represents the standard deviation of the Gaussian component, γ identifies the aspect ratio determining the ellipticity of the function's support, and ψ denotes the phase of the plane wave.

The resultant matrix R(x, y) is obtained by convolving the original image l(x, y) with the Gabor filter, using Eq. (2).
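The equation images did not survive extraction. A reconstruction of the standard real Gabor kernel and convolution consistent with the definitions above is given below; λ, the wavelength of the sinusoidal factor, is an assumed parameter not named in the surviving text:

```latex
g(x, y; \lambda, \theta, \psi, \sigma, \gamma) =
  \exp\!\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right)
  \cos\!\left(2\pi \frac{x'}{\lambda} + \psi\right)   \tag{1}

R(x, y) = l(x, y) * g(x, y)                            \tag{2}
```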

Also, Scharr and Prewitt filters were applied to detect edges in both the horizontal and vertical directions and to highlight gradient edges using the first derivative. The gradient magnitude and orientation are computed using Eqs. (3) and (4).
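A reconstruction of the standard gradient magnitude and orientation formulas these equations refer to, where G_x and G_y denote the horizontal and vertical filter responses:

```latex
|G| = \sqrt{G_x^2 + G_y^2}                          \tag{3}

\theta = \tan^{-1}\!\left(\frac{G_y}{G_x}\right)    \tag{4}
```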

To extract features after reducing noise in the image, a Gaussian low-pass filter was applied, whose kernel is computed using Eq. (5).

where G is the Gaussian kernel. To obtain an extensive and meaningful feature vector, a median filter was also applied to remove salt-and-pepper noise, as it replaces every pixel value with the median of its neighborhood. Since the task was multi-label segmentation, the variance was also computed using Eq. (6) to measure the deviation of each pixel value from its mean.
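Reconstructions of the standard 2-D Gaussian kernel and local variance formulas the text describes; the neighborhood of n pixels with mean μ in Eq. (6) is an assumption, since the original equation image is missing:

```latex
G(x, y) = \frac{1}{2\pi\sigma^2}
          \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)   \tag{5}

\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(p_i - \mu\right)^2  \tag{6}
```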

Figure 2: Output of image segmentation. (a) Original image, (b) after semantic segmentation

3.3 Mean-Shift Clustering

To further improve the segmentation accuracy of each class and to remove noise, we applied mean shift clustering. It is a gradient ascent approach that locates the local maxima of the density of a data collection by applying iterative mean shifts. It is a non-parametric method that works well for finding clusters of arbitrary shape in the data [24]. Assuming that n sample points x_i, i = 1, 2, ..., n, are given in the d-dimensional space R^d, the basic form of the mean shift vector at x can be calculated using Eq. (7).

where h denotes the radius and S_h represents the high-dimensional spherical region, satisfying the point-set relationship given in Eq. (8).

where k denotes the number of points x_i that fall within the boundary D_h. Two factors, namely the neighborhood (spatial) bandwidth and the color (pixel) bandwidth, affect the final clustering of the mean shift method. For the points x_i that fall within D_h, the following rules are defined.

When the pixel bandwidth is small, the probability density is high when the colors of pixels x and x_i are similar. Likewise, a small distance bandwidth between x and x_i implies a high probability density when comparing their distances. These two rules can be combined to form the probability density function; thus, the kernel function can be defined using Eq. (9).
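Reconstructions of the standard mean shift formulas matching these definitions. Eq. (9) is written as the usual joint spatial-range product kernel with distance bandwidth h_s and color bandwidth h_r; this form is an assumption, since the original equation image is missing:

```latex
M_h(x) = \frac{1}{k}\sum_{x_i \in S_h(x)}\left(x_i - x\right)        \tag{7}

S_h(x) = \left\{\, y : (y - x)^{\top}(y - x) \le h^2 \,\right\}      \tag{8}

K_{h_s, h_r}(x) = \frac{C}{h_s^2\, h_r^2}\,
    k\!\left(\left\lVert \frac{x^{s}}{h_s} \right\rVert^2\right)
    k\!\left(\left\lVert \frac{x^{r}}{h_r} \right\rVert^2\right)      \tag{9}
```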

Figure 3: Result of mean-shift clustering. (a) Segmentation with noise, (b) mean-shift clustering

      3.4 Vehicle Detection

Following multi-class semantic segmentation, we extract only those pixels that belong to the vehicle class, as each pixel is tagged and allocated to a certain class during segmentation. Since we only wanted to detect vehicles, we set all pixel values to zero except for the vehicle class. After extracting the pixels of the vehicle class, the resultant image was converted into a binary image using Eq. (10).

where L stands for the image that only contains pixels of the vehicle class, and bw stands for the resulting binary image, as seen in Fig. 4.

Figure 4: Vehicle pixel extraction. (a) Binary image with only vehicle masks, (b) the resulting image

As each extracted vehicle differs in color or brightness from its surroundings, a blob detection technique [25] was applied to identify each vehicle separately, as represented in Fig. 5.
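A minimal sketch of the vehicle-pixel binarization (Eq. (10)) and blob detection steps using OpenCV contours; the vehicle class index and the contour-area threshold are hypothetical choices, not values from the paper:

```python
import cv2
import numpy as np

VEHICLE_CLASS = 4  # hypothetical label index for the vehicle class

def detect_vehicles(label_map):
    """Binarize the vehicle class and return one bounding box per blob."""
    bw = np.where(label_map == VEHICLE_CLASS, 255, 0).astype(np.uint8)  # Eq. (10)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]
    return boxes  # list of (x, y, w, h)
```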

Figure 5: Vehicle detection using the blob detection algorithm over the AU-AIR, VAID, and UAVDT datasets

      3.5 Vehicle Verification

To verify each segmented and detected vehicle, we refer back to the ground truth of each particular image to confirm the presence of vehicles at certain locations. For verification, the Region of Interest (ROI) of each detection was extracted from both the segmented image and the ground truth. Afterward, the Structural Similarity Index Measure (SSIM) was calculated to measure the similarity score between the vehicles [26]. SSIM combines three key features of the image, i.e., contrast, luminance, and structure, as calculated using Eqs. (11)–(13).

where L denotes luminance, C denotes contrast, and S denotes structure. μ_x and μ_y represent the sample means of x and y, σ_x and σ_y are the standard deviations, and σ_xy denotes the sample correlation coefficient between x and y. C1 and C2 are constants needed to stabilize the algorithm when the denominator approaches zero. Thus, the general formula of SSIM can be represented using Eq. (14).
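Reconstructions of the standard SSIM component formulas matching these definitions; C_3 = C_2/2 is the usual choice for the structure term and is an assumption here:

```latex
L(x, y) = \frac{2\mu_x\mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}              \tag{11}

C(x, y) = \frac{2\sigma_x\sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}  \tag{12}

S(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x\sigma_y + C_3}               \tag{13}

\mathrm{SSIM}(x, y) = \left[L(x, y)\right]^{\alpha}
                      \left[C(x, y)\right]^{\beta}
                      \left[S(x, y)\right]^{\gamma}                       \tag{14}
```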

where α, β, and γ describe the relative importance of each feature. If the SSIM score is greater than 0.2, the detection is counted as a true positive. The proposed algorithm for vehicle verification is given in Algorithm 1.
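A minimal sketch of this verification check using scikit-image's SSIM implementation; the two ROI patches are assumed to be grayscale, of equal size, and at least 7×7 pixels (the function's default window size):

```python
from skimage.metrics import structural_similarity as ssim

def verify_vehicle(seg_roi, gt_roi, threshold=0.2):
    """Accept a detection if the SSIM between the segmented ROI and
    the ground-truth ROI exceeds the threshold (0.2 in the paper)."""
    score = ssim(seg_roi, gt_roi)
    return score > threshold
```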

      3.6 ID Assignment Based on ORB Features

Before tracking, each detected vehicle was given an ID based on ORB features so that it could be re-identified in the following image frames.

ORB (Oriented FAST and Rotated BRIEF) is a fast and efficient feature detector [27]. For keypoint detection, it makes use of the FAST (Features from Accelerated Segment Test) detector, combined with an advanced form of the BRIEF (Binary Robust Independent Elementary Features) descriptor. It is also invariant to scale and rotation. A patch moment is obtained using Eq. (15).

where p and q are the moment orders and the summation runs over the intensity values of the image pixels at locations x and y. Eq. (16) can be used to determine the center of mass from these moments.

The patch orientation can then be defined by Eq. (17).
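Reconstructions of the standard ORB orientation-by-intensity-centroid formulas these equations refer to, with I(x, y) the pixel intensity:

```latex
m_{pq} = \sum_{x, y} x^{p}\, y^{q}\, I(x, y)                        \tag{15}

C = \left(\frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}}\right)      \tag{16}

\theta = \operatorname{atan2}\!\left(m_{01},\ m_{10}\right)         \tag{17}
```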

The extracted ORB features were used to match tracked vehicles in the succeeding frames; if a match was found, the ID was restored, otherwise the vehicle was registered in the system with a new ID. Fig. 6 shows the outcome of applying the ORB feature descriptor to the extracted vehicles and of ID restoration across frames.
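A minimal sketch of this ID assignment and recovery step with OpenCV's ORB and a brute-force Hamming matcher. The registry structure is a hypothetical design; the match-count threshold of 5 is taken from Section 4.2.3:

```python
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
registry = {}   # vehicle ID -> ORB descriptors from the last sighting
next_id = 0

def assign_id(vehicle_patch):
    """Recover an existing ID if enough ORB matches are found, else register a new one."""
    global next_id
    _, desc = orb.detectAndCompute(vehicle_patch, None)
    if desc is not None:
        for vid, stored in registry.items():
            if stored is None:
                continue
            matches = bf.match(stored, desc)
            if len(matches) > 5:          # threshold from Section 4.2.3
                registry[vid] = desc      # refresh stored descriptors
                return vid
    vid, next_id = next_id, next_id + 1   # no match: register new vehicle
    registry[vid] = desc
    return vid
```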

      3.7 Vehicle Tracking

To track multiple vehicles across different image frames, a particle filter was applied. Particle filters are part of a broad class of Sequential Monte Carlo (SMC) techniques that are frequently utilized in object tracking. Particle filters typically start from the premise that the data distribution is unknown; distribution "particles", or samples, are assessed, examined, and aggregated into more meaningful conclusions [28]. For tracking, the posterior probability density at time t is estimated, which is obtained in the following two steps.

Step 1 Prediction: Assume that the posterior probability density function p(g_{t−1}|o_{1:t−1}) at time t−1 and the initial probability density p(g_0) are both known. g_t is a three-dimensional state vector, g_t = [g_t^x, g_t^y, g_t^s], where g_t^x and g_t^y express the position of the object and g_t^s represents the change in size. Thus, the prior probability can be defined using Eq. (18).

where p(g_t|g_{t−1}) represents the state-transition model of the target.

Figure 6: ID assignment and restoration. (a) ID assigned to each vehicle based on ORB features, (b) feature matching across frames, (c) ID restored for the same vehicle in the succeeding frame

Step 2 Updating: the observation model of the system yields p(g_t|o_{1:t}), as given in Eq. (19).

p(o_t|g_t) represents the observation likelihood function, which is obtained from the observation of the tracked object, whereas p(o_t|o_{1:t−1}) is a normalizing constant. The recursive Bayesian filter, also known as the particle filter, is simulated by the non-parametric Monte Carlo method, as given in Eq. (20).
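Reconstructions of the standard Bayesian filtering recursion these equations correspond to. Eq. (20) is written as the usual weighted-particle approximation with N particles g_t^{(i)}, weights w_t^{(i)}, and the Dirac delta δ; the particle notation is an assumption, since the original equation image is missing:

```latex
p(g_t \mid o_{1:t-1}) = \int p(g_t \mid g_{t-1})\,
                         p(g_{t-1} \mid o_{1:t-1})\, dg_{t-1}       \tag{18}

p(g_t \mid o_{1:t}) = \frac{p(o_t \mid g_t)\, p(g_t \mid o_{1:t-1})}
                           {p(o_t \mid o_{1:t-1})}                   \tag{19}

p(g_t \mid o_{1:t}) \approx \sum_{i=1}^{N} w_t^{(i)}\,
                            \delta\!\left(g_t - g_t^{(i)}\right)     \tag{20}
```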

The result of vehicle tracking can be seen in Fig. 7.

Figure 7: Vehicle tracking using a particle filter across image frames

      3.8 Trajectories and Density Estimation

Finally, to analyze each vehicle's tracking and path, the trajectory of every vehicle was drawn by recording its possible locations, as obtained by the particle filter across the image frames, using Eq. (23). The resulting trajectories can be visualized in Fig. 8.

where T_i represents the estimated trajectory of the i-th vehicle, recorded as the coordinates of the vehicle's location in each frame, as given by its rectangular bounding box.

Figure 8: Estimated trajectories of vehicles being tracked. (a) Trajectories of each vehicle plotted using the centroid points of bounding boxes, (b) final output

Also, a record of the detected vehicle count was maintained in each frame to estimate the density of traffic on the road, as seen in Fig. 9.

Figure 9: Density estimation using the vehicle count displayed at the left corner of each image

      4 Performance Evaluation

This section briefly discusses the datasets utilized for the vehicle detection and tracking system, along with the findings of the experiments used to assess the proposed system and its evaluation against several current state-of-the-art traffic monitoring models [29].

      4.1 Dataset Description

      We used the following publicly available datasets to develop and test our proposed model.

      4.1.1 VAID Dataset

Lin et al. presented the Vehicle Aerial Imaging from Drone (VAID) dataset in 2020 for smart traffic monitoring using vehicle detection and classification. It comprises 6000 images taken from a drone, with a final image resolution of 1137×640 after downsizing. All images are in .jpg format, captured at 23.9 frames per second. For reliable imagery acquisition, the drone was positioned between 90 and 95 m above the ground.

      4.1.2 AU-AIR Dataset

The AU-AIR dataset [30] is made up of 32,823 frames extracted from 8 video segments totaling more than 2 h. It contains 32,823 labeled frames with 132,034 object instances in total. The images were acquired at 30 frames per second with a resolution of 1920×1080. The multi-modal sensor data in AU-AIR include altitude, position, time, and velocity. The traffic videos were captured at P.O. Pedersens Vej and Skejby Nordlandsvej (Denmark).

      4.1.3 UAVDT Dataset

The Unmanned Aerial Vehicle Benchmark Object Detection and Tracking (UAVDT) dataset [31] is made up of 100 video sequences containing 80,000 image frames, selected from more than 10 h of footage taken with an Unmanned Aerial Vehicle (UAV) platform in various urban environments. All images are in .jpg format with a resolution of 1080×540 pixels, acquired at 30 frames per second. The scenarios include arterial streets, squares, toll booths, motorways, T-junctions, and crossings.

      4.2 Experimental Settings and Results

Python (3.7) was used to design and test the system on a computer with an Intel Core i5 processor running the 64-bit version of Windows 10, with 8 GB of Random Access Memory (RAM) and a 2.8 GHz Central Processing Unit (CPU). The performance of the proposed detection and tracking algorithms was evaluated using precision, recall, and F1-score metrics.

4.2.1 Experiment I: Semantic Segmentation Accuracy

The images from each dataset were divided into training and testing samples: 80% were used for training and 20% for testing. A random forest classifier was trained, as it can increase the accuracy score by fitting a variety of decision tree classifiers on different subsamples of the dataset. The overall accuracy over the training and testing samples was 92% and 77%, respectively.

4.2.2 Experiment II: Precision, Recall, and F1 Scores

Table 1 presents the precision, recall, and F1 scores for vehicle detection. True Positives represent the number of vehicles detected successfully, False Positives denote detections other than vehicles, and False Negatives represent the number of missed vehicles. The results show that the proposed algorithm can recognize vehicles of variable size with high precision.
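The metrics are computed in the standard way from these counts:

```latex
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}
               {\text{Precision} + \text{Recall}}
```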

Table 1: Precision, recall, and F1 scores for vehicle detection

For tracking, True Positives represent the number of vehicles successfully tracked, False Positives represent the number of vehicles falsely tracked over more than three frames, and False Negatives denote the number of vehicles not tracked. Table 2 presents the precision, recall, and F1 scores for the proposed tracking algorithm.

Table 2: Precision, recall, and F1 scores for vehicle tracking

4.2.3 Experiment III: ID Assignment and ID Recovery

This section discusses the results of ID assignment and ID recovery for tracking multiple objects (vehicles) across different image frames. We used the True ID Rate (TIDR) to assess the vehicle ID assignment module and the True Recovery Rate (TRR), which represents the number of IDs recovered successfully, to assess the recovery module. For feature matching, if the number of feature matches exceeded 5, a match was declared and the corresponding ID was recovered; otherwise, a new ID was assigned. Table 3 shows the results for these performance evaluation metrics.
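Plausible formulations of these rates consistent with their descriptions above; the exact definitions are assumptions, since the paper does not give the formulas:

```latex
\text{TIDR} = \frac{\#\,\text{correctly assigned IDs}}{\#\,\text{assigned IDs}}, \qquad
\text{TRR} = \frac{\#\,\text{successfully recovered IDs}}{\#\,\text{IDs to be recovered}}
```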

4.2.4 Experiment IV: Comparison with Other Systems

We evaluated our proposed system against other state-of-the-art methods available in the literature, including deep learning techniques. Table 4 presents the comparison of different detection models over the AU-AIR, VAID, and UAVDT datasets. It can be seen that our model outperformed all other techniques in terms of precision. Table 5 presents the advantages and disadvantages of the proposed and existing models.

Table 6 presents a comparison of the tracking algorithm with other tracking techniques, whereas Table 7 compares the advantages and disadvantages of the existing and proposed models.

Table 6: Comparison of the proposed tracking model with state-of-the-art techniques

      5 Discussion

The proposed system is an effective solution for intelligent traffic monitoring using aerial images. Object recognition in high-resolution aerial images is a very challenging task; therefore, we proposed a mechanism based on multi-label semantic segmentation and particle-filter-based tracking to achieve efficient results. However, the proposed method has some drawbacks. First, the system has only been tested on RGB images captured during the daytime. The approach could be further validated by evaluating image and video datasets captured at night or in low-light situations, since many researchers have had success with such datasets. Moreover, our segmentation and detection algorithm faces difficulty under partial or full occlusion, roads covered by trees, or similar-looking objects. Fig. 10 shows a nighttime image from the UAVDT dataset.

Figure 10: Drawbacks of the vehicle detection algorithm. (a) Different illumination conditions at nighttime, (b) vehicle not detected due to background clutter, (c) vehicle not detected due to occlusion

      6 Conclusion and Future Works

In this paper, an effective system for vehicle detection and tracking under various road circumstances is proposed. The RGB images are first segmented into five classes, and then the images are subjected to mean-shift clustering for noise removal and to smooth the output. After that, vehicle pixels are extracted and a blob detection technique is applied to detect each vehicle. Each vehicle is verified using ground-truth labeling. To track multiple vehicles, each of them is assigned an ID based on ORB features. The proposed model produces significant results on all three datasets, which proves the effectiveness of our methodology.

To increase performance in the future, the authors intend to test new and improved classifiers on more complicated and varied datasets. Moreover, we aim to use deep learning methods to improve the performance of the traffic monitoring system.

Acknowledgement: The authors are thankful to Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Funding Statement: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2023-2018-0-01426) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation). Funding was also provided by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Author Contributions: Study conception and design: Asifa Mehmood Qureshi, Jeongmin Park; data collection: Nouf Abdullah Almujally; analysis and interpretation of results: Asifa Mehmood Qureshi, Saud S. Alotaibi and Mohammed Hamad Alatiyyah; draft manuscript preparation: Asifa Mehmood Qureshi. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: All datasets used in the study are publicly available.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
