
    Recognition and Tracking of Objects in a Clustered Remote Scene Environment

    2022-11-09 08:16:48
    Computers, Materials & Continua, 2022, Issue 1

    Haris Masood, Amad Zafar, Muhammad Umair Ali, Muhammad Attique Khan, Salman Ahmed, Usman Tariq, Byeong-Gwon Kang and Yunyoung Nam*

    1 Wah Engineering College, University of Wah, Wah Cantt, Pakistan

    2 Department of Electrical Engineering, University of Lahore, Islamabad Campus, Pakistan

    3 Department of Unmanned Vehicle Engineering, Sejong University, Seoul, 05006, Korea

    4 Department of Computer Science, HITEC University Taxila, Taxila, 47040, Pakistan

    5 College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Khraj, Saudi Arabia

    6 Department of ICT Convergence, Soonchunhyang University, Asan, 31538, Korea

    Abstract: Object recognition and tracking are two of the most dynamic research sub-areas of computer vision, a field that lies at the intersection of deep learning and machine vision. This paper presents an efficient ensemble algorithm for the recognition and tracking of fixed-shape moving objects that accommodates the shift and scale variations the object may encounter. The first part uses the Maximum Average Correlation Height (MACH) filter for object recognition and determines the bounding-box coordinates. If the correlation-based MACH filter fails, the algorithm switches to a more reliable but computationally complex feature-based object recognition technique, the affine scale invariant feature transform (ASIFT). ASIFT extracts feature points from the object of interest and provides invariance in up to six affine parameters, namely translation (two parameters), zoom, rotation and two camera-axis orientations; in this paper, only the shift and scale invariances are used. The second part of the algorithm uses a particle-filter-based Approximate Proximal Gradient (APG) technique to periodically update the coordinates of the object encapsulated in the bounding box. Finally, the proposed algorithm is compared with other state-of-the-art tracking algorithms, which demonstrates its effectiveness in minimizing tracking errors.

    Keywords: Object tracking; MACH filter; ASIFT; particle filter; recognition

    1 Introduction

    The problem of estimating the position of fixed-shape moving objects persists in remote scene environments because of ever-changing environmental conditions and changes in the dimensions and physical attributes of the object [1,2]. To address the problem of estimating the correct position of the object (recognition) and keeping track of the recognized object (tracking), this paper proposes an efficient algorithm. Recent advancements in pattern recognition and neural networks have greatly influenced the performance of modern object recognition and tracking systems. Various correlation-based, feature-based and convolutional neural network (CNN) based object recognition techniques have been proposed [3,4]. A breakthrough technique for object recognition and localization was proposed using the MACH filter [5]. In a recent algorithm [6], MACH was used for the recognition of objects based on log-mapping techniques. Besides MACH, several image recognition and localization methods have been proposed. The most widely used is the temporal template matching algorithm. Polana et al. [7] developed an algorithm for recognizing human motions by obtaining spatio-temporal templates of motion; the obtained templates are then used to match test samples against the reference images. Essa et al. [8] proposed an algorithm that uses optical-flow energy to generate spatio-temporal templates, which are used for the recognition of facial action units. However, these techniques fail to generalize a single template from a set of examples that can be applied to a global set of images. The proposed algorithm uses a MACH filter, a generic template-based method for image recognition that can easily be adapted. The MACH filter gives maximum relative peak height with respect to the expected distortions by generating broader peaks, and it is considered a computationally feasible correlation filter to implement.

    In recent years, image feature detectors such as ASIFT have been introduced that can improve the recognition process by providing shift and scale invariance. These detectors are normally classified by the invariances they provide. The Harris point detector was one of the earliest and was rotation invariant [9]. Later, the Harris-Laplace method was developed, followed by the Hessian-Laplace and Difference of Gaussian (DOG) detectors [10], all of which are scale and rotation invariant. Region-moment-based detectors were also developed, such as Harris-Affine and Hessian-Affine [11]. Similarly, work on edge-based region detectors, entropy-based region detectors and level-line detectors (LLD) has been carried out. All of these techniques provide invariance in only one or two parameters and are computationally taxing. A breakthrough technique was introduced by Lowe [12]: the scale invariant feature transform (SIFT), which provides invariance to image rotation and scaling as well as partial invariance to changes in viewpoint and illumination. Several amendments and improvements to SIFT have been made, including PCA-SIFT, speeded-up robust features (SURF) and the gradient location orientation histogram (GLOH) [13,14]. This paper employs an affine-invariant extension of SIFT, namely ASIFT. A short comparison between SIFT and ASIFT is later implemented in MATLAB for more precise illustration.

    Similar to recognition, multiple tracking algorithms have been proposed over the years. Tracking is the complex problem of estimating the approximate path of an object of interest in the image plane as it moves; it can also be referred to as dynamic object identification. The primary motive behind tracking is to find a route for the object across all the frames of a video [15]. To track fixed-shape moving objects efficiently, several important preliminary functions are needed, including motion detection [16,17], classification [18,19], behavioral analysis and object identification [20,21]. Motion detection and estimation are used not only for extracting a moving object of interest but also for related applications such as video encoding, human motion analysis, and human-machine learning and interaction [22,23]. The three fundamental types of algorithms used for motion estimation via object detection are background subtraction, temporal differencing and optical flow. The most popular of the three, and the one that forms the basis of our proposed technique, is background subtraction, which differentiates the object of interest from a maintained background model.
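
    As a brief illustration of the background-subtraction idea mentioned above, the following sketch maintains a running-average background model and flags pixels that deviate from it. The function names, learning rate alpha and threshold tau are illustrative choices, not parameters taken from this paper.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: slowly blend the new frame in."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, tau=25.0):
    """Mark pixels whose absolute deviation from the background exceeds tau."""
    return np.abs(frame - background) > tau

# Example on a stream of synthetic grayscale frames.
frames = [np.random.randint(0, 256, (120, 160)).astype(np.float64) for _ in range(10)]
background = frames[0].copy()
for frame in frames[1:]:
    mask = foreground_mask(background, frame)      # candidate moving-object pixels
    background = update_background(background, frame)
```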

    The novel contribution of this paper is that MACH and ASIFT are used in combination with particle filters for image tracking for the first time. The proposed algorithm involves two major steps: object recognition and object tracking. Object recognition involves either the MACH or the ASIFT filter. MACH generates a correlation peak that is maximal with respect to the produced noise and minimizes a metric commonly known as the Average Similarity Measure (ASM). MACH is employed first, for recognition of the object in the first frame; the object's coordinates are identified and a bounding box is constructed. If the object changes its position drastically, it becomes important to recognize the feature points of the object, i.e., the points that best describe its structure. ASIFT is an upgraded version of the SIFT algorithm that provides invariance in up to six parameters (compared with SIFT, which provides invariance in four): translation (two parameters), zoom, rotation and two camera-axis orientations.

    Once the recognition part is completed, the bounding-box coordinates are forwarded to the second part of the algorithm, which performs object tracking. The tracking portion employs particle filters to periodically update the coordinates of the bounding box that encloses the object of interest. The particle filters use a probability density function to estimate the position of the object; this probabilistic estimation allows the tracker to follow an object under complex conditions, such as when the object is occluded by another object. The particle filters are then improved using the proximal gradient technique for the best precision. Finally, performance comparisons are made with recently proposed algorithms to demonstrate the effectiveness and speed of the proposed algorithm.

    In summary, this paper proposes a tracker that first uses an ensemble of correlation-based and feature-based filters for recognition of the object and then tracks the object of interest in an efficient manner.

    2 Proposed Methodology

    Fig. 1 shows a complete block diagram of the proposed methodology. In the first step of the algorithm, preprocessing is performed so that all images have similar properties. If an image lacks clarity, sharpening filters are employed. If the object is difficult to recognize, Sobel or Canny edge detectors may be used; if preprocessing alone yields results of sufficient quality, edge detection is unnecessary. A simple preprocessing step is used in this paper: the mean of the image intensities is subtracted and the result is divided by the standard deviation. Gamma correction could further improve the results, but it is not performed in order to avoid additional delay and complexity. After improving the image quality, the DOG filtering operation is performed.
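
    A minimal sketch of the zero-mean, unit-variance preprocessing described above (the optional sharpening and edge-detection steps are omitted); the function name is illustrative.

```python
import numpy as np

def normalize_intensities(image):
    """Subtract the mean intensity and divide by the standard deviation."""
    image = image.astype(np.float64)
    std = image.std()
    if std == 0.0:                      # guard against a constant image
        return image - image.mean()
    return (image - image.mean()) / std
```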

    Figure 1: Proposed system model

    2.1 Object Recognition

    2.1.1 MACH

    The DOG operation smooths the input image by convolving it with a Gaussian kernel; the filter itself is obtained as the difference of two Gaussian functions g(x,y) with widths σ1 and σ2. The Gaussian is expressed in Eq. (1):

    g(x,y; σ) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²))   (1)

    Smoothing the input image f(x,y) with the Gaussian of Eq. (1) gives the output image g1(x,y), as shown in Eq. (2):

    g1(x,y) = g(x,y; σ1) * f(x,y)   (2)

    Here * denotes convolution. Employing a different width σ2, a second smoothed image is obtained using Eq. (3):

    g2(x,y) = g(x,y; σ2) * f(x,y)   (3)

    The DOG filtering operation is then performed using Eq. (4):

    g(x,y) = g1(x,y) − g2(x,y)   (4)

    Since the DOG is the difference of two low-pass filters, it is effectively a bandpass filter: it removes most of the high-frequency components along with some low-frequency components. After preprocessing, the next step is recognition, which employs a MACH filter. The first step in using the MACH filter is to train it.
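
    The DOG step of Eqs. (1)-(4) can be sketched as below, smoothing at two widths and taking the difference; the particular σ values are illustrative, not the ones used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma1=1.0, sigma2=2.0):
    """Band-pass the image by subtracting two Gaussian-smoothed copies (Eq. (4))."""
    g1 = gaussian_filter(image.astype(np.float64), sigma1)   # Eq. (2)
    g2 = gaussian_filter(image.astype(np.float64), sigma2)   # Eq. (3)
    return g1 - g2
```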

    To perform this step, a temporal derivative is computed for each pixel of the input image, which results in a volume for every sequence involved in the training process. Afterwards, each volume is represented in the frequency domain by applying a 2-D Discrete Fourier Transform (DFT), as in Eq. (5). The output of Eq. (4), g(x,y), is again treated as the input f(x,y) at this stage.

    In Eq. (5), f(x,y) denotes the temporal derivative of each pixel of the input sequence and F(u,v) is its frequency-domain representation after the 2-D DFT; "L" specifies the number of columns, "N" the number of frames and "M" the number of rows. The resulting columns, obtained separately from the Fourier transform, are then concatenated [24,25]. Let x_i denote the resulting column vector of dimension d obtained after concatenation, where d = L × M. After obtaining the column vectors, the MACH filter (which minimizes the ASM and the average correlation energy while maximizing the average correlation height) can be synthesized [26] using Eq. (6):

    h = (αC + βD_x + γS_x)⁻¹ m_x   (6)

    Here, h is the frequency response of the filter, m_x is the arithmetic mean of all the input vectors x_i, C is a d × d diagonal covariance matrix and d is the total number of elements. D_x denotes the average spectral density of the training videos and is also a d × d diagonal matrix, calculated using Eq. (7):

    D_x = (1/N) Σ_{i=1}^{N} X_i* X_i   (7)

    where X_i is the d × d diagonal matrix whose diagonal holds the elements of x_i and * denotes complex conjugation. S_x is the average similarity matrix, calculated using Eq. (8):

    S_x = (1/N) Σ_{i=1}^{N} (X_i − M_x)* (X_i − M_x)   (8)

    where M_x is analogous to m_x, with its values arranged along a diagonal. The trade-off parameters α, β and γ can be set appropriately [27]. Thus the MACH filter is implemented using Eq. (9).
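
    A minimal frequency-domain sketch of the synthesis in Eqs. (6)-(8), treating the diagonal matrices element-wise and assuming a white-noise covariance (C taken as the identity); this is a simplified illustration, not the authors' MATLAB implementation.

```python
import numpy as np

def synthesize_mach(training_images, alpha=0.01, beta=0.01, gamma=0.1):
    """Build a MACH filter H(u, v) from equally sized training frames."""
    X = np.stack([np.fft.fft2(img.astype(np.float64)) for img in training_images])
    m_x = X.mean(axis=0)                                  # average training spectrum
    D_x = (np.abs(X) ** 2).mean(axis=0)                   # average spectral density, Eq. (7)
    S_x = (np.abs(X - m_x) ** 2).mean(axis=0)             # average similarity term, Eq. (8)
    C = np.ones_like(D_x)                                 # white-noise covariance assumption
    return m_x / (alpha * C + beta * D_x + gamma * S_x)   # Eq. (6), element-wise

def correlate(test_image, H):
    """Apply the filter in the frequency domain and return the correlation plane."""
    F = np.fft.fft2(test_image.astype(np.float64))
    return np.real(np.fft.ifft2(F * np.conj(H)))
```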

    In this paper, the MACH parameters are estimated using Particle Swarm Optimization (PSO) [26]. Tab. 1 shows the parameters estimated for the data sets used for testing. PSO is employed to optimize the trade-off parameters α, β and γ. The chosen PSO settings are: experiments (120), iterations (320), particles (10), dimensions (3), [Xmin, Xmax] = [-1, 1], [Vmin, Vmax] = [-0.1, 0.1], W = 0.9, C1 = C2 = 2. The optimized values of α, β and γ yield much sharper correlation peaks, which ensures better object recognition.
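
    A compact PSO sketch using the hyper-parameters listed above (10 particles, 3 dimensions, position bounds [-1, 1], velocity bounds [-0.1, 0.1], W = 0.9, C1 = C2 = 2, 320 iterations); the objective passed in as fitness is a stand-in for the correlation-peak sharpness criterion and is purely illustrative.

```python
import numpy as np

def pso(fitness, n_particles=10, n_dims=3, iters=320,
        x_bounds=(-1.0, 1.0), v_bounds=(-0.1, 0.1), w=0.9, c1=2.0, c2=2.0):
    """Minimize `fitness` over the box [x_min, x_max]^n_dims with global-best PSO."""
    rng = np.random.default_rng(0)
    x = rng.uniform(*x_bounds, size=(n_particles, n_dims))    # candidate (alpha, beta, gamma)
    v = rng.uniform(*v_bounds, size=(n_particles, n_dims))
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x), *v_bounds)
        x = np.clip(x + v, *x_bounds)
        val = np.array([fitness(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Stand-in objective; in practice this would score the sharpness of the MACH correlation peak.
best_alpha_beta_gamma = pso(lambda p: float(np.sum((p - 0.5) ** 2)))
```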

    Table 1: MACH parameter estimation for the data sets used [26]

    This correlation filter is employed in the proposed algorithm for recognition of the object in an image. Training the filter requires a few preliminary frames (eight) containing the object. For detection and recognition during testing, the test image is cross-correlated with the template obtained from the training images, and the result is evaluated using two parameters: the peak correlation energy (PCE) and the correlation output peak intensity (COPI). Fig. 2 shows a vehicle moving along a circular road curve and its correlation peak; the peak shows that the object has been detected effectively.
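
    The PCE and COPI scores mentioned above can be read off the correlation plane as sketched here; the paper does not reproduce the exact formulas, so the common peak-to-correlation-energy and peak-intensity definitions are assumed.

```python
import numpy as np

def copi(corr_plane):
    """Correlation output peak intensity: the maximum of the correlation plane."""
    return float(np.max(corr_plane))

def pce(corr_plane):
    """Peak-to-correlation-energy: squared peak divided by the total plane energy."""
    peak = np.max(corr_plane)
    return float(peak ** 2 / np.sum(corr_plane ** 2))
```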

    Fig. 3 shows the correlation peak indicating the presence of the object of interest in the image; in this instance, the object has undergone a phase shift of 90 degrees, and MACH has again successfully detected it through the correlation peak.

    Fig. 4 shows the correlation peak for a case in which the object has undergone a scaling factor of 0.84; MACH again detects the object successfully through the correlation peak.

    Fig. 5 shows the correlation peak for a case in which the object has undergone 30% occlusion; MACH once again detects the object successfully.

    The peak indicates the identified object coordinates, which are used to construct the bounding box for tracking the object. The bounding box can be seen in Fig. 6.

    Figure 2: Object detection using MACH; the PCE and COPI values are 2.4448e-005 and 34.2351 (Data set: CAR-1)

    Once the bounding box is established, the next task is to track the recognized object as it moves; for this, the particle-filter-based algorithm is used. When the object changes its dimensions or path drastically, the MACH-based recognition filter sometimes fails to give accurate results. To improve the efficiency of the tracker, an ensemble of ASIFT and MACH is therefore proposed, for the first time, for efficient detection of the object of interest.

    2.1.2 ASIFT

    If the object undergoes drastic changes in shift or scale, MACH tends to give inaccurate results; in addition, when neighboring objects lie too close to the object of interest, multiple correlation peaks tend to appear. Fig. 7 shows a failure case in which motion blur causes MACH to fail. To address these concerns, a feature-based technique is implemented that covers the limitations of MACH. ASIFT is used in this work primarily because of its ability to detect the presence of the object based on feature points: whenever the object encounters a shift or scale variation, ASIFT is called for better recognition results. An improvement to SIFT was presented in [27], called ASIFT because of the affine extension of the SIFT algorithm. In addition to all the invariances provided by SIFT, ASIFT improves accuracy by removing the distortions caused by deviations of the camera-axis angle before applying the SIFT method to the image. Some variations, such as tilt, are irreversible, i.e., once an object has been tilted, a 100% reversal is almost impossible; however, ASIFT applies a tilt in the orthogonal direction to perform an anti-tilt operation, and this orthogonal tilt has the maximum chance of recovering the original image. ASIFT uses an affine camera model for better estimation of the viewpoint changes encountered by the object.

    Figure 3: Object detection using MACH; the PCE and COPI values are 39.7931 and 0.0014 (Data set: CAR-1)

    ASIFT main algorithm: The main algorithm consists of the following steps:

    · Each image is transformed by simulating the affine distortions caused by changing the camera orientation from a frontal position. These distortions depend mainly on two parameters: the latitude θ and the longitude Φ. The image first undergoes rotations by Φ, and a tilt with parameter t = 1/cos θ is then applied. For digital images, the tilt is performed by directional sub-sampling.

    · The tilts and rotations are applied for a finite set of longitude and latitude angle changes. The sampling steps are chosen so that the simulated images remain close to any other image generated by the same latitude and longitude angles.

    · All the simulated images produced by steps 1 and 2 are then compared using SIFT (a minimal sketch of this view simulation is given below).
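
    The view simulation in the first two steps can be sketched as follows, rotating by the longitude and compressing rows by the tilt factor t = 1/cos θ after a one-dimensional anti-aliasing blur; the sampled angles and the blur constant are illustrative, not the exact ASIFT sampling schedule.

```python
import numpy as np
from scipy.ndimage import rotate, gaussian_filter1d, zoom

def simulate_affine_view(image, phi_deg, theta_deg):
    """Simulate one camera view: rotate by phi, then tilt with t = 1/cos(theta)."""
    t = 1.0 / np.cos(np.radians(theta_deg))
    rotated = rotate(image.astype(np.float64), phi_deg, reshape=True)
    # Anti-alias along the tilt direction before the directional sub-sampling.
    sigma = 0.8 * np.sqrt(max(t * t - 1.0, 0.0))
    smoothed = gaussian_filter1d(rotated, sigma, axis=0) if sigma > 0.0 else rotated
    return zoom(smoothed, (1.0 / t, 1.0))          # compress rows by the tilt factor t

# Illustrative latitude/longitude samples applied to a synthetic image.
views = [simulate_affine_view(np.random.rand(100, 100), phi, theta)
         for theta in (0.0, 45.0, 60.0) for phi in (0.0, 36.0, 72.0)]
```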

    Figure 4: Object detection using MACH; the PCE and COPI values are 40.70 and 0.0014 (Data set: CAR-1)

    The sampling rate of the tilt parameters is critical. Object recognition is possible for any slanted view, irrespective of its source, only if the object is nearly planar. In this paper a practical physical upper bound is enforced, i.e., t_max is obtained from image pairs of the object in both its original and slanted positions. Several examples are presented here. Figs. 8-10 use the data set CAR to demonstrate ASIFT results. After the simulations, feature points are calculated for recognition of the object of interest. Fig. 8a shows the car moving on the road and Fig. 8b shows the same car as it changes lanes.

    Figure 5: Object detection using MACH; the PCE and COPI values are 70.6701 and 0.0025 (Data set: CAR-1)

    Figure 6: Bounding box around the object of interest using the MACH filter

    Fig. 9 depicts the calculation of the feature points that contribute to matching via ASIFT. These feature points help in recognizing the object regardless of changes in zoom, rotation or scale. Fig. 10 shows the ASIFT matching results based on the feature points calculated in both images. The data set CAR offers limited scope for testing the ASIFT algorithm, as the vehicle shown in the data set only shifts lanes and provides no variation in tilt.

    Figure 7: MACH failure case

    Figure 8: (a) Moving CAR (Frame-1), (b) Moving CAR (Frame-144)

    Figure 9: Feature point calculation

    Figure 10: ASIFT matching results (Data set: CAR-2)

    Fig. 11 shows the ASIFT results on one more data set, demonstrating that ASIFT uses feature-point matches for efficient object recognition. The result verifies that ASIFT is able to provide better recognition than MACH in the presence of zoom and tilt.

    Figure 11: (a) Zoomed image of a motorbike, (b) original scene, (c) ASIFT matching results

    Fig. 12 shows that the MACH failure case of Fig. 7 is resolved with the help of the extracted and matched feature points.

    2.2 Object Tracking

    After the object has been recognized using MACH or ASIFT, the next step is efficient tracking of the object in successive frames. The tracking algorithm applies the recognition step in the first frame, and a modified particle filter is then employed to update the position of the object in successive frames. The MACH filter constructs the bounding box by performing the object recognition, while the APG-based technique updates the coordinates of the bounding box once the object is in motion. ASIFT is called if the longitudinal or latitudinal coordinates of the object change drastically or if the object starts to tilt, as described in the previous section. For object tracking, particle filters are used. The tracking routine used in this paper is the APG approach, which combines the particle filter with the gradient descent technique for object tracking. Algorithm 1 defines the main working principle of the APG approach, summarized below:

    Figure 12: MACH failure case handled through ASIFT

    Algorithm 1: Generic APG approach [28]
    i. Set α_0 = α_{−1} = 0 ∈ R^N and t_0 = t_{−1} = 1
    ii. For k = 0, 1, ... until convergence:
        β_{k+1} := α_k + ((t_{k−1} − 1)/t_k)(α_k − α_{k−1})
        α_{k+1} := argmin_a (L/2) ‖a − (β_{k+1} − ∇F(β_{k+1})/L)‖₂² + G(a)
        t_{k+1} := (1 + √(1 + 4t_k²))/2
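
    A minimal numeric sketch of the generic APG iteration in Algorithm 1, assuming G(a) is an L1 penalty so that the inner arg-min reduces to soft-thresholding; the objective, Lipschitz constant and regularization weight below are illustrative choices rather than the paper's tracking model.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||a||_1 (the arg-min step when G is an L1 penalty)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def apg(grad_F, prox_G, L, n, iters=100):
    """Generic accelerated proximal gradient loop following Algorithm 1."""
    alpha_prev = np.zeros(n)
    alpha = np.zeros(n)
    t_prev, t = 1.0, 1.0
    for _ in range(iters):
        beta = alpha + ((t_prev - 1.0) / t) * (alpha - alpha_prev)
        alpha_prev, alpha = alpha, prox_G(beta - grad_F(beta) / L, 1.0 / L)
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return alpha

# Example: minimize 0.5 * ||A a - y||^2 + lam * ||a||_1 on a small synthetic problem.
rng = np.random.default_rng(0)
A, y, lam = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.1
L_const = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the smooth part
a_hat = apg(lambda a: A.T @ (A @ a - y),
            lambda v, step: soft_threshold(v, lam * step),
            L_const, n=10)
```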

    APG-based tracking: Conventionally, APG solvers address unconstrained minimization problems, so a typical prerequisite for using the APG approach is to translate a constrained model into an unconstrained one. First, an indicator function over the coefficient vector a is defined in Eq. (10).

    The indicator function is then applied to the minimization model of Eq. (11) to obtain Eq. (12); these equations convert the constrained model into an unconstrained one,

    where A′ = [T_t, I] collects the target and trivial templates and a = [a_T, a_I] holds the corresponding target and non-target (trivial) template coefficients. A parameter μ_t is defined to control the energy assigned to the non-target templates. The APG method is then applied with F(a) and G(a) defined in Eqs. (13) and (14):

    The method uses Eq. (15) for the eventual minimization:

    Figure 13: Results of Data Set-1 (CAR-1)

    The APG-based tracking method is given in Algorithm 3, which combines the particle filter with the generic APG approach of Algorithm 1. The APG-based tracker is implemented in MATLAB.
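
    The outline below shows how a particle filter of this kind propagates and re-weights bounding-box hypotheses from frame to frame; the Gaussian motion model, the exponential likelihood and the score_candidate callable (standing in for the APG-based scoring of each candidate patch) are illustrative assumptions, not the authors' MATLAB code.

```python
import numpy as np

def track_frame(particles, weights, frame, score_candidate, motion_std=(4.0, 4.0, 0.02)):
    """One particle-filter step: resample, diffuse, re-weight, and report the estimate.

    particles: (N, 3) array of [x, y, scale] bounding-box states.
    score_candidate(frame, state): returns a residual for the candidate (lower is better).
    """
    n = len(particles)
    idx = np.random.choice(n, size=n, p=weights)                     # resample by weight
    particles = particles[idx] + np.random.normal(0.0, motion_std, size=(n, 3))
    scores = np.array([score_candidate(frame, p) for p in particles])
    weights = np.exp(-(scores - scores.min()))                       # lower residual -> higher weight
    weights /= weights.sum()
    estimate = weights @ particles                                   # weighted mean state
    return particles, weights, estimate
```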

    3 Results and Analysis

    Several data sets are used to test the proposed method. The first, Car-1, consists of color images of a vehicle moving along a curve-like path on a road; it is obtained from a video sequence with a cluttered scene environment. The algorithm is tested on different frames of the data set, and the coordinates of the red bounding box can be seen to update periodically as the object moves along the curve. Fig. 13 displays the Car-1 data set and the corresponding results. The second data set, Car-2, shows a car changing lanes on a road. Its images are grayscale and, in some frames, the translation and zoom level of the car change. Fig. 14 shows the algorithm applied to different frames of Car-2. The bounding box is placed after object detection is performed using the MACH filter, and the tracking routine is then carried out by the APG-based tracker.

    Figure 14: Results of Data Set-2 (CAR-2)

    The third data set, named "Blur Body" [29], shows the movement of a person in an office. It is interesting because multiple images are blurred by the person's movement; such blurred images test the tracking algorithm, since feature points and correlation peaks are difficult to obtain under blur. Fig. 15 shows the results on the Blur Body data set using the APG approach.

    Figure 15: Results of Data Set-3 (Blur Body)

    The fourth data set used for testing is "Singer" [29], which shows a person singing on stage. It poses challenges because multiple images are zoomed in and out as the frames increment, so the size of the bounding box must be updated constantly for precise results. Fig. 16 shows the results on the Singer data set using the APG approach. The fifth data set is "Skating" [30]; it poses challenges because the skater is occluded by multiple other skaters, and the images of the skater are also zoomed in and out throughout the data set. Fig. 17 shows the results on the Skating data set using the APG approach.

    The results on these data sets show that the bounding box consistently changes its position in line with the movement of the object, demonstrating the correctness of the tracker and its efficiency in diverse conditions. Data set CAR-1 shows a simple colored object, a vehicle moving in a curve-like pattern. Data set CAR-2 shows partial occlusion of a vehicle. The Blur Body data set shows occasional blurring of the object, while the Skating and Singer data sets each show occasional changes in projection and lighting conditions.

    Figure 16: Results of Data Set-4 (Singer)

    The proposed algorithm is compared with similar state-of-the-art algorithms in terms of execution time and average tracking error. The average tracking error is measured as the Euclidean distance between the two center points, normalized by the size of the target from the ground truth. The execution time reflects how quickly the algorithm can detect the object of interest precisely; all execution times are measured with the same language and machine, namely MATLAB 2019 on a Core i5 machine. The algorithm is compared with state-of-the-art methods such as the novel hybrid Local Multiple system based on Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs) (LM-CNN-SVM) [30], Object Detection with Deep Learning (ODDL) [31], Multi-scaled and Deformable Convolutional Neural Networks (MDCNN) [32] and Incremental Covariance Tensor Learning (ICTL) [33]. Detailed results are shown for both the non-APG and the APG approach, and they show that the APG-based implementation clearly outperforms the existing approaches. Tabs. 2 and 3 present the comparisons based on average tracking error and execution time, respectively; they clearly show that the combination of the MACH filter with the APG-based tracker gives more accurate results than the existing algorithms. Tab. 2 lists the average tracking errors of the proposed algorithm against similar state-of-the-art algorithms, where the proposed APG-based algorithm shows better precision, and Tab. 3 lists execution times, showing a better run-time speed than the other algorithms. For further improvement of the proposed algorithm, PSO-based deep learning methods could also be adopted [34,35].
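
    A small sketch of the normalized center-location error behind Tab. 2, assuming the normalization is by the diagonal of the ground-truth box (the text states normalization "by the size of the target" without giving the exact formula).

```python
import numpy as np

def center_error(pred_box, gt_box):
    """Euclidean distance between box centers, normalized by the ground-truth diagonal.

    Boxes are (x, y, w, h) with (x, y) the top-left corner.
    """
    pred_c = np.array([pred_box[0] + pred_box[2] / 2.0, pred_box[1] + pred_box[3] / 2.0])
    gt_c = np.array([gt_box[0] + gt_box[2] / 2.0, gt_box[1] + gt_box[3] / 2.0])
    return float(np.linalg.norm(pred_c - gt_c) / np.hypot(gt_box[2], gt_box[3]))
```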

    Table 2: Comparison based on the average tracking error metric

    Table 3: Comparison of different techniques based on execution time

    4 Conclusion

    This paper proposes an efficient technique that combines an ensemble of two recognition techniques with a novel tracking routine for tracking fixed-shape moving objects. First, MACH is used to detect the object of interest by maximizing the average correlation height while minimizing the average similarity measure. The detected coordinates are used to construct a bounding box indicating the presence of the object. In each subsequent frame, the coordinates of the bounding box are updated using the APG approach. The analysis shows that the proposed algorithm is not only less error-prone than previous methods but also computationally lighter, owing to the APG approach, which removes the reliance on trivial templates and thus yields a faster tracking procedure. The proposed algorithm can be improved in the future by training the tracker to handle objects that become occluded in a remote scene environment; once the object is occluded, the MACH filter would be used to predict the coordinates instead of the particle filter, which should provide more accurate results.

    Funding Statement: This research was supported by the X-mind Corps program of the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (No. 2019H1D8A1105622) and by the Soonchunhyang University Research Fund.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
