
    Deep Neural Network Driven Automated Underwater Object Detection

    2022-03-14 09:24:46
    Computers, Materials & Continua, 2022, Issue 3

    Ajisha Mathias, Samiappan Dhanalakshmi*, R. Kumar and R. Narayanamoorthi

    1 Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203, India

    2 Department of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203, India

    Abstract: Object recognition and computer vision techniques for automated object identification are attracting marine biologists' interest as a quicker and easier tool for estimating fish abundance in marine environments. However, unrestricted aquatic imaging poses severe problems of low luminance, turbidity, background ambiguity, and camouflage, which limit the efficiency of traditional approaches through inaccurate detection or elevated false-positive rates. To address these challenges, we suggest a systematic approach that merges visual features and Gaussian mixture models with the You Only Look Once (YOLOv3) deep network, a coherent strategy for recognizing fish in challenging underwater images. As an image restoration phase, pre-processing based on diffraction correction is first applied to the frames. A YOLOv3 based object recognition system is then used to identify fish occurrences. Camouflaged objects in the background are often overlooked by the YOLOv3 model. A proposed Bi-dimensional Empirical Mode Decomposition (BEMD) algorithm, adapted to Gaussian mixture models and integrated with the YOLOv3 results, improves the detection efficiency of the proposed automated underwater object detection method. The proposed approach was tested on four challenging video datasets: the LifeCLEF (Cross Language Evaluation Forum) benchmark from the F4K data repository, the University of Western Australia (UWA) dataset, the Bubble Vision dataset and the DeepFish dataset. The accuracy of fish identification is 98.5 percent, 96.77 percent, 97.99 percent and 95.3 percent respectively for these datasets, which demonstrates the feasibility of our proposed automated underwater object detection method.

    Keywords: Underwater images; diffraction correction; marine object recognition; Gaussian mixture model; image restoration; YOLO

    1 Introduction

    Visual surveillance in underwater environments is attracting attention due to the immense resources beneath the water. Automated vehicles such as Autonomous Underwater Vehicles (AUVs) and other sensor-based vehicles are deployed underwater to gain knowledge about the marine ecosystem. With profound advancements in automation, ocean habitats are watched by such automated remotely operated underwater vehicles. Fish abundance, endangered species, and their compositions are of great interest to ecological aspirants. Efficient object detection methods therefore aid the study of the marine ecosystem. The underwater videos captured through Remotely Operated Vehicles (ROVs) and submarines need to be interpreted to gain meaningful information. Manual interpretation is tedious with huge data loads, so the automated interpretation of such data gains interest among computer vision researchers. The major goal in underwater object detection is to discriminate fish or other ecological species from their backgrounds. The properties of water lead to many geometric distortions and color deteriorations which further challenge the detection schemes [1-5].

    Various studies developed for underwater object detection help many ecological applications to a great extent. The generic methods developed are useful for detecting objects in challenging scenes. Yan et al. [6] introduced the concept of underwater object detection from image sequences extracted from underwater videos, based on a statistical gradient coordinate model and the Newton-Raphson method to estimate the object position in the input underwater scenes. Vasamsetti et al. [7] developed an AdaBoost based optimization approach to detect underwater objects; the AdaBoost method is tested with grayscale images and detection is achieved based on edge information. Rout et al. [8] developed a Gaussian mixture model for underwater object detection which differentiates the background from the object of interest. Marini et al. [9] developed a real time fish tracking scheme for the OBSEA-EMSO testing site; the tracking is based on a K-fold validation strategy for better detection accuracy.

    Automated systems prefer a faster convergence rate with large dataset processing. Advancements in machine learning help automated detection schemes reach real-time deployment. Li et al. [10] developed a template based machine learning scheme to identify fish and classify them; the template method uses Support Vector Machines (SVM) for detection. The deep learning based Faster Convolutional Neural Network (CNN) developed by Spampinato et al. [11] is efficient in object detection with a fast detection rate, yet the model is computationally complex. Lee et al. [12] developed a Spatial Pyramid Pooling model, exploiting its flexible windowing option to build object detection with improved accuracy. Yang et al. [13] implemented underwater object detection using YOLOv3, the faster convergence model. Jalal et al. [14] developed a classification scheme with hybrid YOLO structures to form an automated detection scheme. The accuracy of YOLOv3 in underwater frames is not as satisfactory as in natural images.

    From the literature, it is inferred that deep learning algorithms such as CNN, Regions with CNN (RCNN) and Spatial Pyramid Pooling (SPP) show limited detection accuracy in challenging underwater environments. Among these methods, YOLOv3 is one of the fastest. However, it cannot handle dynamic backgrounds well. Hence arises the need for efficient underwater detection schemes suitable for challenging settings. The proposed automated underwater object detection framework includes:

    • A data preprocessing phase: an efficient diffraction correction scheme named diffraction limited image restoration (DLIR) is proposed to correct the geometric deteriorations of the input image frames.

    • In the second phase, the restored images are passed to the YOLOv3 model for fish detection in the challenging underwater frames.

    • In the third phase, a Bi-Dimensional Empirical Mode Decomposition (BEMD) based feature parameter estimation adapted to a Gaussian Mixture Model (GMM) is proposed for foreground object detection. With the help of transfer learning through VGGNet-16, the GMM output is adapted as a neural network path, and the output is combined with the YOLOv3 output for every frame to generate the output of the proposed automated object detection framework.

    The article is organized as follows. Section 2 discusses the proposed automated underwater object detection framework, which includes the proposed Diffraction Limited Image Restoration, the proposed Bi-dimensional Empirical Mode Decomposition adapted Gaussian Mixture Model and the YOLOv3 based detection scheme. The experiments, dataset descriptions, results and comparative analysis are presented in Section 3. Lastly, the article is concluded in Section 4.

    2 Proposed Automated Underwater Object Detection Scheme

    The proposed automated underwater object detection approach is intended to detect multiple fish occurrences in underwater images. The frames retrieved from underwater videos constantly encounter blurring, diffraction of illumination, occlusions and other deteriorations that pose difficulties in object recognition. Thus, for efficient detection of underwater objects, the proposed detection scheme comprises three modules. Fig. 1 represents the overall schematic of the proposed approach. The first, data preprocessing, module corrects the color deteriorations and geometric distortions in the input frames. The second module comprises the BEMD based feature extraction for estimating the weight factor, texture strength and Hurst exponent from the frames. The features are adapted with the generic GMM scheme for foreground object detection. The outcomes of the GMM are provided to the transfer learning VGGNet-16 for generation of bounding boxes over the object of interest. In the third module, the pre-processed frame is fed to a YOLOv3 framework for object detection in the input underwater frames. By combining the outcomes of the second and third modules using an OR based combinational logic block, effective object detection is performed on the underwater datasets.

    Figure 1: Block schematic of proposed automated underwater object detection approach
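The OR-based combination of the two branch outcomes can be sketched as a logical OR over binary detection masks. This is a minimal illustration only; the actual framework fuses the bounding-box outputs of the BEMD-GMM and YOLOv3 modules rather than raw pixel masks.

```python
import numpy as np

def fuse_detections(mask_gmm, mask_yolo):
    """Fuse two binary foreground masks with a logical OR: an object
    flagged by either the BEMD-GMM branch or the YOLOv3 branch is
    kept in the combined output."""
    return np.logical_or(mask_gmm.astype(bool), mask_yolo.astype(bool))

# Toy 4x4 masks: each branch finds a different object region.
m1 = np.zeros((4, 4), dtype=bool); m1[0:2, 0:2] = True   # found by GMM only
m2 = np.zeros((4, 4), dtype=bool); m2[2:4, 2:4] = True   # found by YOLOv3 only
fused = fuse_detections(m1, m2)
print(fused.sum())  # 8 pixels flagged in total
```

The OR rule means a miss by one branch (e.g. a camouflaged fish overlooked by YOLOv3) is still recovered if the other branch detects it.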

    2.1 Data Pre-Processing Using Proposed Diffraction Limited Image Restoration Approach

    Underwater images need improvement for a variety of applications such as object detection, tracking, and other surveillance tasks due to visibility degradations and geometric distortions. The Dark Channel Prior (DCP) approach [4,15] is the most commonly used method for restoring hazy or blurred images. The DCP method estimates the Background Light (BL) and Transmission Map (TM) for image restoration by calculating the depth map values of the red channel in the image. The DCP approach thus improves image clarity and colour adjustment while being limited in its ability to restore geometric deteriorations. For effective underwater image restoration, the proposed diffraction limited image restoration scheme incorporates diffraction mapping along with DCP. The underwater image is primarily represented as

    UI(x) = J(x) t(x) + BL (1 − t(x))    (1)

    where UI(x) is the intensity of the input image at pixel x, J(x) is the original radiance of the object at pixel x, t(x) is the transmission map that varies mostly with the color distribution across the three channels, and BL is the Background Light in the frame. The preservation of the scene radiance J requires the analysis of the TM and BL.

    The TM strength is illustrated by the Beer-Lambert law of atmospheric absorption as

    t(x) = e^(−βd)    (2)

    where β is the illumination attenuation variable, an exponentially decaying factor, and d denotes the range between the camera and the point of interest. The DCP method determines the least possible intensity value of an image patch Ω(x). The color image's dark channel is represented as

    The BL value is estimated from the brightest pixels of the dark channel as

    For clear scene outcomes, the TM will be near unity and hence UI is approximated to be close to J (UI ≈ J). The TM according to DCP is thus estimated as

    t(x) = 1 − min over c∈{r,g,b} of min over y∈Ω(x) of ( UI^c(y) / BL^c )

    The proposed diffraction limited restoration is shown in Fig. 2. The selected underwater frame undergoes a basic quad-tree division, which simply divides the image into four equal segments. For every segment, the intensity of every pixel is calculated. The segment that holds the maximum intensity is chosen as the latent patch U with size h × h. Let R be the entire region of the input frame and x be the pixel in any ith instance. Let h be the limiting or degrading factor that can be considered as the point spread function (PSF). Let J be the scene clarity desired to be restored as the actual image. As per diffraction theory, the image model can be expressed as

    Figure 2: Block schematic of proposed diffraction limited image restoration approach
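The quad-tree step that selects the latent patch U can be sketched as follows, assuming a single-level split and the total pixel intensity as the segment score:

```python
import numpy as np

def quadtree_latent_patch(frame):
    """Split the frame into four equal quadrants and return the one
    with the highest total intensity as the latent patch U (sketch of
    the single-level quad-tree division described above)."""
    h, w = frame.shape[0] // 2, frame.shape[1] // 2
    quads = [frame[:h, :w], frame[:h, w:], frame[h:, :w], frame[h:, w:]]
    sums = [q.sum() for q in quads]          # intensity of each segment
    return quads[int(np.argmax(sums))]

# Toy frame: the bottom-right quadrant is brightest.
frame = np.zeros((8, 8))
frame[4:, 4:] = 1.0
patch = quadtree_latent_patch(frame)
print(patch.shape, patch.sum())  # (4, 4) 16.0
```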

    Considering the shifted PSF variable and the position-changing noise function,

    where Qi = x − j is the limiting factor in terms of diffraction in the ith frame. Regression functions are known to limit the degradation by using a kernel function. The cost function of the kernel regression is W(q; i, μk), where μk is the kernel regression function and always remains constant. The kernel weight variable applied to the entire patch is

    where σ² corresponds to the variance of the noise factor. After decreasing and conflating the weights, the regularization function generates a linear model as

    The restored image U is created by fusing all of the patches over the whole region R. Underwater image reconstruction is done by approximating the average propagation chart and the distribution of background light. The DCP approach attempts to approximate the TM and BL. The intensity value in the red channel is calculated as

    By means of Eqs. (3)-(4), the TM and BL are evaluated, and the restoration is accomplished by rewriting Eq. (1) as

    J(x) = (UI(x) − BL) / max(t(x), t0) + BL

    where t0 is a small lower bound on the transmission to avoid division by zero.

    The obtained output J, the restored image of the proposed diffraction limited restoration method, serves as the data preprocessing result and becomes the input for the subsequent detection frameworks.
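The TM/BL estimation and the Eq. (1)-style recovery can be sketched with the standard dark-channel recipe. This is a simplified illustration only: the patch Ω(x) is reduced to a single pixel for brevity, and the full DLIR scheme additionally applies the diffraction correction described above.

```python
import numpy as np

def dcp_restore(img, t0=0.1, omega=0.95):
    """Sketch of DCP-style restoration: estimate the dark channel,
    background light BL and transmission map t(x), then invert the
    image formation model to recover the scene radiance J.

    img: HxWx3 float array in [0, 1].
    """
    dark = img.min(axis=2)                       # per-pixel dark channel
    n = max(1, int(dark.size * 0.001))           # brightest 0.1% of pixels
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    BL = img[idx].mean(axis=0)                   # background light estimate
    tm = 1.0 - omega * (img / BL).min(axis=2)    # transmission map estimate
    J = (img - BL) / np.maximum(tm, t0)[..., None] + BL
    return np.clip(J, 0.0, 1.0)

hazy = np.random.default_rng(0).random((64, 64, 3)) * 0.5 + 0.4
clear = dcp_restore(hazy)
print(clear.shape)  # (64, 64, 3)
```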

    2.2 Proposed BEMD-GMM Transfer Learning Module

    The object detection technique is primarily used to recognize objects in an image and identify their position. If one or more objects exist, the detection scheme marks the occurrences in the frame with bounding boxes. Challenging underwater scenes need an efficient detection scheme to detect blurred and camouflaged objects in the image.

    To perform effective underwater object detection, a Bi-dimensional Empirical Mode Decomposition based adaptive GMM scheme (BEMD-GMM) is proposed. Object detection, also called background subtraction, depends profoundly on image intensity variance, which can be viewed more easily in the frequency domain. BEMD is a non-linear and non-stationary method for 2D image decomposition proposed by Nunes et al. [16]; it is a variant of the widely used Hilbert-Huang Transform (HHT), which decomposes 1D signals. The preprocessed image frames are subjected to the BEMD algorithm for intrinsic mode decomposition. The various modes are iteratively generated until a threshold is reached. The weight factor, texture strength, and Hurst exponent are retrieved from the residual Intrinsic Mode Function (IMF) as features for blob synthesis. These features act as the reference for the GMM model for object detection. The sifting procedure of the BEMD algorithm is represented in Fig. 3, where the input frame is decomposed into the possible IMFs and the features are extracted. Any 2D signal can be decomposed into multiple IMFs. The input image is decomposed into the biaxial IMF during the sifting process. The following are the phases in the sifting of 2D signals. The procedure begins by setting the residual function to the same value as the input.

    where Y(k, l) is the input image with k and l as the co-ordinates. For the measurement of maxima and minima pixels, the minimum intensity pixel and maximum intensity pixel are defined. Interpolating the minima and maxima points yields the lower bound of the envelope, denoted as El(k, l), and the upper bound of the envelope, denoted as Eu(k, l). The envelope mean value is computed as

    Em(k, l) = (El(k, l) + Eu(k, l)) / 2

    Figure 3: Proposed Bi-dimensional empirical mode decomposition on underwater images with residual outcomes and their corresponding surf plot and hurst calculation

    The IMF number is determined by the modulus of the above mean value.

    The procedure is iterated until the stopping criterion is satisfied. The stopping criterion is

    The precision value derived from the BEMD morphological reconstruction is the weighted cost function. The three extrema precision values corresponding to the IMFs are 0.00000003, 0.00003, and 0.03. It is necessary to perform fractal analysis on the BEMD results, which requires the calculation of the Hurst exponent and texture strength. The Hurst exponent is the relative index of the dependence of the self-IMF. It measures the regression of time series data as they converge to their corresponding mean values as

    where a, b and c are attributes that maintain the K-blob variable as a positive function. The target location is thus calculated as

    where St is the new object position and st is the featured particles. Before the residual value reaches its limit, the image is decomposed into a set of Intrinsic Mode Functions (IMFs). Each IMF is plotted in 3D to see whether the higher frequency parts decompose into the resulting IMFs. The Hurst plot maps log(mean) against log(variance), and the slope score is taken as the Hurst exponent. GMM based detection is one of the shape, texture, and contour feature-based object detection schemes. Here, the entire distribution of data is considered as a Gaussian function. The bell-shaped Gaussian profile is close to a normal distribution function. The clustering of Gaussian distribution profiles is collectively termed a Gaussian Mixture Model. The mean and variance of a Gaussian distribution function are usually calculated using maximum likelihood approximation. The GMM for a multivariate system is expressed as

    where μ is the mean and ε is the co-variance. The GMM method models the image based on the calculated weight factor, texture strength and Hurst exponent. The blobs are generated from the BEMD parameters and the detected objects are exposed as bounding boxes. The estimated foreground information is fed as input to the VGGNet-16 (Visual Geometry Group Net) transfer learning model. VGGNet is a traditional neural network scheme created at Oxford University for large-scale visual recognition [17]. In the proposed framework, VGGNet is preferred over complex architectures because the feature blobs generated by the GMM model must be transferred to the network; advanced architectures expect the network itself to extract features from the input image, which is not relevant in the proposed approach. The VGGNet used here has 16 layers, including convolutional layers, pooling layers, and fully connected layers. During training, the input to VGGNet is an RGB image with a fixed size of 224 × 224. The image goes through a stack of convolutional layers using modified filters. The small detection area chosen is 3 × 3. Linear transformation of the input channel spatial padding is fixed with the resolution of each pixel. This architecture adapts the features of the foreground object estimated by the generic GMM detection scheme. Let the features used to perform foreground detection be considered as x. The new domain F of the transfer learning model thus includes the feature vector x along with its marginal probability, say P(x).

    where X = {x1, ..., xn}, xi ∈ X. To perform any operation using the gained feature knowledge x, the detection is performed as

    As the name indicates, VGGNet-16 transfers the feature ideology of the GMM and generates output that adapts the deep learning domain for further stages. The proposed BEMD-GMM method exhibits clearer detection of camouflaged objects in dynamic environments. Its convergence is moderate, and the detection of blurred and occluded objects beyond standard objects remains limited in challenging underwater conditions.
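The Hurst-exponent feature described above (the slope of log(variance) against log(mean)) can be sketched as follows. The block-aggregation scheme used to build the multi-scale series is an assumption for illustration, not the paper's exact procedure:

```python
import numpy as np

def hurst_exponent(imf, scales=(2, 4, 8, 16)):
    """Estimate a Hurst-style exponent of a 2-D IMF as the slope of
    log(variance) against log(mean) over block-aggregated copies of
    the image (sketch of the log-log slope method described above)."""
    log_means, log_vars = [], []
    for s in scales:
        h, w = imf.shape[0] // s * s, imf.shape[1] // s * s
        # Aggregate the IMF into s x s blocks and take each block's mean.
        blocks = imf[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        log_means.append(np.log(np.abs(blocks).mean() + 1e-12))
        log_vars.append(np.log(blocks.var() + 1e-12))
    # Slope of the log-log regression line is taken as the exponent.
    return np.polyfit(log_means, log_vars, 1)[0]

rng = np.random.default_rng(1)
imf = rng.standard_normal((64, 64))   # stand-in for a residual IMF
print(hurst_exponent(imf))
```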

    2.3 YOLOv3 Object Detection for Challenging Underwater Scenes

    The significance of the YOLO model is its high detection speed. The features extracted and trained from the training dataset form the input data of the YOLOv3 model. YOLOv3 incorporates a Darknet based feature extraction scheme comprising 53 convolutional layers, each with its own batch normalization. The architecture of YOLOv3 is shown in Fig. 4. The network provides candidate detection boxes at three different scales; the bounding box offsets are predicted on feature maps of 52×52, 26×26, and 13×13. The higher order feature maps are used in multiclass detection applications. To resist the vanishing gradient problem, the activation function is the leaky ReLU (Rectified Linear Unit).
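The leaky ReLU activation and the three detection scales can be illustrated as follows. The 416×416 input and 3-boxes-per-cell layout follow the standard YOLOv3 configuration; the single fish class is an assumption for this sketch:

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    """Leaky ReLU: a small negative slope keeps gradients alive for
    negative inputs, helping resist the vanishing-gradient problem."""
    return np.where(x > 0, x, alpha * x)

# Each grid cell predicts 3 boxes x (4 offsets + 1 objectness + class scores).
num_classes = 1                       # hypothetical single "fish" class
for grid in (13, 26, 52):             # the three YOLOv3 detection scales
    print(grid, (grid, grid, 3 * (5 + num_classes)))

print(leaky_relu(np.array([-2.0, 3.0])))
```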

    Figure 4: Schematic of YOLO v3 network

    3 Experimental Results and Discussion

    The proposed automated underwater object detection scheme is tested with various challenging scenes categorized as normal scenes, occluded scenes, blurred scenes, and dynamic scenes. The experiment is carried out with an Intel Core i7 CPU, 16 GB RAM, and an NVIDIA GeForce GTX 1080 Ti GPU. The TensorFlow deep learning libraries are used for YOLO, while GMM and BEMD are performed in MATLAB 2020b. The YOLO hyperparameters are initialized with a primary learning rate of 0.00001, and as the number of epochs increases the learning rate is adjusted to 0.01. Once the image frame is read by YOLOv3, it is processed by the blobFromImage function to construct an input blob to feed to the hidden layers of the network. The pixels in the frames are scaled to fit the model, ranging from 0 to 1. The generated blob is then transferred to the forward layers for prediction of the bounding boxes as the output. The layers concatenate the values and filter out entities with low confidence scores. The generated bounding boxes are processed with a non-maximum suppression (NMS) approach, which reduces redundant boxes and checks the confidence score threshold. The threshold needs an appropriate range for proper detection outputs. The NMS filters are set to a minimum threshold of 0.1 in typical YOLOv3 applications. In underwater applications, due to the challenges of the water medium, a high confidence score is preferred for even moderate detection accuracy. If the threshold is set too close to 1, multiple bounding boxes are generated for a single object. The threshold is set to 0.4 in our experiments for appropriate box generation. The runtime parameters are shown in Tab. 1.
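The confidence filtering and non-maximum suppression steps can be sketched in pure NumPy (cv2.dnn.NMSBoxes would normally perform this; the 0.4 thresholds mirror the values used above):

```python
import numpy as np

def nms(boxes, scores, conf_thresh=0.4, iou_thresh=0.4):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes:
    drop low-confidence boxes, then repeatedly keep the top-scoring
    box and suppress boxes that overlap it heavily."""
    keep_mask = scores >= conf_thresh          # confidence filtering
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order, kept = np.argsort(scores)[::-1], []
    while order.size:
        i = order[0]
        kept.append(i)
        # IoU of the top box against the remaining candidates.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou < iou_thresh]    # suppress redundant boxes
    return boxes[kept], scores[kept]

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept_boxes, kept_scores = nms(boxes, scores)
print(len(kept_boxes))  # 2: the two overlapping boxes collapse to one
```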

    Table 1: Runtime parameters for training the DLIR, BEMD-GMM and YOLOv3 models

    3.1 Dataset Details

    The proposed method is tested with four challenging datasets to illustrate its feasibility. The first dataset is from LifeCLEF 2015 and comprises 93 annotated videos representing occurrences of 15 different fish breeds at a frame resolution of 640×480; it was obtained from Fish4Knowledge, a broader archive of underwater images [18]. The second dataset is gathered and provided by the University of Western Australia (UWA) and comprises 4418 video sequences at a frame resolution of 1920×1080 [19]. Among these, around 2180 frames are used for training and 1020 frames for testing. The third dataset is the Bali diving dataset with a resolution of 1280×720, used for output comparison [20]. The challenging DeepFish dataset [21], developed by Bradley and his teammates from the coastal marine beds of tropical Australia, is also tested. It comprises 38,000 diverse underwater scenes, including coral reefs and other marine organisms, at a resolution of 1920×1080, of which 30% (approximately 10,889 scenes) is validated and tested in the proposed approach.

    3.2 Diffraction Correction Results

    The analysis of underwater images was subjected to numerous tests to determine the feasibility of the proposed approach. The proposed technique is compared to previous approaches such as DCP [22], MIL [23], and Blurriness Correction (BC) [24]. The simulation experiment measures the algorithm's efficiency. Several difficult illustrations of underwater scenes are chosen for the simulation. The test was performed with BL values of (0.44, 0.68, 0.87) for visually blue looking images, (0.03, 0.02, 0.2) for red and dark looking images, and (0.14, 0.85, 0.26) for greenish images. The majority of the red-spread frames are dark. The transmission map values for red, blue, and green are 0.2, 0.4, and 0.8, respectively. The performance of the DLIR method is validated with full reference metrics including Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), Structural Similarity Index Metric (SSIM), and Edge Preservation Index (EPI).

    DLIR outputs of underwater images of different luminous scenes are shown in Fig. 5. An increased PSNR value indicates improved quality of the restored image. The MSE, the error factor, should be as low as possible to achieve better reconstruction. The SSIM value should be close to unity for better restoration, exhibiting less deviation from the original. The EPI (Edge Preservation Index) also needs to be close to unity for better conservation in the restored output. Tab. 2 compares different algorithms to the proposed approach quantitatively. The simulation is run with a frame size of 720 × 1280. The time taken for pre-processing using the DLIR method is 0.6592 s, indicating that the algorithm has lower computational complexity than many current algorithms.
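The MSE and PSNR metrics behind Tab. 2 can be computed as in this minimal sketch (SSIM and EPI need windowed statistics and are omitted here):

```python
import numpy as np

def mse(ref, test):
    """Mean square error between the reference and restored frames."""
    return np.mean((ref.astype(float) - test.astype(float)) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means better restoration."""
    e = mse(ref, test)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)

ref = np.full((32, 32), 128.0)   # toy 8-bit reference frame
noisy = ref + 4.0                # uniform error of 4 grey levels
print(mse(ref, noisy))                 # 16.0
print(round(psnr(ref, noisy), 2))      # 36.09
```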

    3.3 Proposed Automated Object Detection Analysis

    The object detection efficiency of the proposed method is tested and the results for varying scenes are analyzed qualitatively. Fig. 6 represents the detection outcomes of the proposed method on frames from the LifeCLEF-15, UWA, and Bubble Vision datasets. The shape and size of the bounding box vary following the shape and size of the object of interest. From the detection outcomes, it is observed that the GMM output detects the camouflaged object in clip 132 and the blurred objects in clip 122, but misses the object in clip 48. It is also observed that the YOLOv3 output can detect the blurred object in clip 48. Thus, in the combined output of the proposed scheme, the objects are detected as the joint contribution of the GMM method and the YOLOv3 method.

    Fig. 6 also demonstrates object detection in the complex underwater scenes of DeepFish. The results distinguish between object identification before and after underwater image restoration. The output clearly shows that the DLIR restored frames yield better detection than the actual input images. Furthermore, the BEMD-GMM model outperforms the YOLOv3 approach because it is more sensitive to occluded and dynamic scenes. The proposed automated detection scheme misses a few instances that are even more difficult to determine; as shown in image 4, 1763 images out of 38,000 in the DeepFish dataset missed detection. The proposed approach is validated in terms of Average Tracking Error (ATE) and IoU, and is compared with the existing GMM, BEMD-GMM and YOLOv3 algorithms. Tab. 3 shows the average tracking error of various methods. The ground truth values are calculated manually by considering the width and height of the object of interest and its centroid position.

    Figure 5: Diffraction correction of underwater images based on the proposed pre-processing scheme taken from bubble vision video [20].(a) Input frame, (b) Corresponds to the h × h latent patch, (c) Diffraction correction in X and Y direction, (d) Diffraction corrected image, (e) Depth map estimation, (f) Transmission map estimation and (g) Proposed diffraction corrected output

    Table 2: Quantitative comparison of restoration outcomes

    Figure 6: Object detection outcomes of original underwater images and the restored images of the DeepFish dataset [21]. (a) The input challenging scenes and their restored images, (b) ground truth, (c) detection based on the BEMD-GMM approach, (d) detection based on the YOLOv3 approach and (e) the proposed automated object detection approach

    Extensive evaluation of the proposed scheme is performed, and metrics including detection accuracy, recall, tracking precision, and detection speed (fps) are calculated to gauge the proposed method. The metrics are estimated by calculating the True Positive (TP), False Positive (FP), and False Negative (FN) detection counts. The speed of detection is measured as 18 fps (frames per second), whereas the conventional YOLOv3 model detects at 20 fps since its architecture is simpler than the proposed scheme. The results are compared with the state-of-the-art deep learning schemes for underwater object recognition including SVM [25], KNN [26], CNN-SVM [11], CNN-KNN [12], and YOLOv3 [13]. The performance analysis is shown for the LCF-15 dataset, UWA dataset, Bubble Vision dataset and the DeepFish dataset in Fig. 7. The accuracy of fish identification is 98.5 percent, 96.77 percent, 97.99 percent and 95.3 percent respectively for the different datasets, which validates the efficacy of the proposed method.
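The TP/FP/FN-based metrics can be computed as in this sketch. The counts used are hypothetical, not taken from the experiments; accuracy additionally needs true negatives, which frame-level fish detection rarely defines, so it is omitted here:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from detection counts."""
    precision = tp / (tp + fp)       # fraction of detections that are correct
    recall = tp / (tp + fn)          # fraction of real objects found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one video sequence.
p, r, f1 = detection_metrics(tp=90, fp=5, fn=10)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.947 0.9 0.923
```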

    Table 3: Average tracking error of video sequences of challenging underwater scenes

    Figure 7: Performance evaluation of the proposed detection scheme in comparison to the SVM, KNN, CNN-SVM, CNN-KNN, and YOLOv3 models for various datasets

    The IoU metric determines the correctness of bounding box positioning in object detection approaches. The value of IoU ranges from 0 to 1; if the IoU metric reads above 0.5, the prediction is considered valid. As the name indicates, the IoU is the ratio of the area of intersection over the area of union, estimated for the input sequences. From the IoU outcomes in Fig. 8, it is evident that the output of the proposed scheme converges around 0.8, close to unity, which shows the correctness of object detection.
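The intersection-over-union ratio described above can be sketched as:

```python
def iou(box_a, box_b):
    """Intersection over Union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)       # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)        # intersection / union

pred = [2, 2, 10, 10]      # predicted box (red, in Fig. 8's convention)
gt   = [0, 0, 8, 8]        # ground-truth box (green)
print(round(iou(pred, gt), 3))  # 0.391, below the usual 0.5 validity cut
```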

    Figure 8: IoU measure of the proposed method concerning the ground-truth value.The red box represents the proposed algorithm and green defines the ground truth

    4 Conclusion

    Efficient object recognition has been the key goal in underwater object detection schemes. In this article, we have developed and demonstrated an automated underwater object detection framework that performs object detection on challenging underwater scenes. The output of the proposed automated detection scheme is gauged for its precision and shows reduced tracking error compared with earlier detection schemes. The proposed detection scheme can be used in underwater vehicles equipped with high-end processors as an automated module for detecting objects of interest for marine scientists. As the proposed method is particularly developed for challenging underwater scenes, it is efficient in the detection of occluded and camouflaged scenes. Although the approach shows improved detection accuracy over existing schemes, the work is still limited in detecting objects in highly deteriorated scenes. Future work includes developing efficient tracking algorithms for ecological classification applications and deriving more tracking trajectories from features extracted from the objects.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

亚洲一区二区三区欧美精品| 在线免费观看不下载黄p国产| 日产精品乱码卡一卡2卡三| 女人被躁到高潮嗷嗷叫费观| av播播在线观看一区| 亚洲精品一区蜜桃| 一级毛片电影观看| 国产xxxxx性猛交| 一本—道久久a久久精品蜜桃钙片| 自线自在国产av| 九色成人免费人妻av| 中国美白少妇内射xxxbb| 狂野欧美激情性xxxx在线观看| 欧美激情极品国产一区二区三区 | 久久久久久久亚洲中文字幕| 国产日韩一区二区三区精品不卡| 午夜视频国产福利| 看免费av毛片| 制服丝袜香蕉在线| 好男人视频免费观看在线| 一级毛片黄色毛片免费观看视频| 国产亚洲精品久久久com| 日本-黄色视频高清免费观看| 久久精品国产综合久久久 | 久久这里只有精品19| 男人添女人高潮全过程视频| 日本与韩国留学比较| 97人妻天天添夜夜摸| 看非洲黑人一级黄片| 国产片特级美女逼逼视频| 午夜av观看不卡| 久久久久久久亚洲中文字幕| 久久久久久伊人网av| 午夜免费男女啪啪视频观看| 美女内射精品一级片tv| 亚洲国产日韩一区二区| 老司机影院成人| av在线观看视频网站免费| 在线免费观看不下载黄p国产| av免费观看日本| videos熟女内射| 成人国语在线视频| 亚洲,欧美,日韩| 精品久久久精品久久久| 欧美日韩一区二区视频在线观看视频在线| 久久99热6这里只有精品| 美女国产视频在线观看| av在线老鸭窝| 久久综合国产亚洲精品| 美女脱内裤让男人舔精品视频| 在现免费观看毛片| 亚洲熟女精品中文字幕| 亚洲国产看品久久| 欧美国产精品一级二级三级| 欧美最新免费一区二区三区| 一二三四在线观看免费中文在 | 黄色毛片三级朝国网站| 在线观看人妻少妇| 成人18禁高潮啪啪吃奶动态图| 插逼视频在线观看| 99久国产av精品国产电影| 一级,二级,三级黄色视频| 男人舔女人的私密视频| 高清av免费在线| 免费看不卡的av| 美国免费a级毛片| 大香蕉久久成人网| 蜜桃国产av成人99| 日韩av免费高清视频| 精品少妇黑人巨大在线播放| 亚洲欧美日韩卡通动漫| 母亲3免费完整高清在线观看 | 久久国产亚洲av麻豆专区| 日本91视频免费播放| 观看美女的网站| 大话2 男鬼变身卡| 水蜜桃什么品种好| 欧美精品一区二区免费开放| 精品一区二区三卡| 日日爽夜夜爽网站| 成人亚洲欧美一区二区av| 国产亚洲一区二区精品| 在线观看三级黄色| 久热这里只有精品99| 久久精品国产综合久久久 | 国产成人av激情在线播放| 欧美日韩一区二区视频在线观看视频在线| 亚洲精品中文字幕在线视频| 制服诱惑二区| 久久婷婷青草| 精品国产国语对白av| 亚洲欧美色中文字幕在线| 校园人妻丝袜中文字幕| 伊人亚洲综合成人网| 婷婷色综合www| 美女主播在线视频| 日本欧美视频一区| 99国产综合亚洲精品| 999精品在线视频| 777米奇影视久久| 丁香六月天网| 日本爱情动作片www.在线观看| 伊人亚洲综合成人网| 亚洲高清免费不卡视频| av播播在线观看一区| 亚洲国产看品久久| 国产视频首页在线观看| 国产探花极品一区二区| 九色亚洲精品在线播放| 最近最新中文字幕免费大全7| 精品亚洲成国产av| 亚洲色图综合在线观看| 男女边摸边吃奶| 亚洲综合精品二区| 最黄视频免费看| 蜜桃在线观看..| 国产视频首页在线观看| 制服丝袜香蕉在线| 久久久久久人妻| 制服人妻中文乱码| 性高湖久久久久久久久免费观看| 三级国产精品片| 国产国拍精品亚洲av在线观看| 少妇的逼水好多| 日本欧美国产在线视频| 久久这里只有精品19| 久久精品国产自在天天线|