
    A Railway Fastener Inspection Method Based on Abnormal Sample Generation


Shubin Zheng, Yue Wang, Liming Li3,*, Xieqi Chen3, Lele Peng3 and Zhanhao Shang

1Higher Vocational and Technical College, Shanghai University of Engineering Science, Shanghai 200437, China

2School of Urban Railway Transportation, Shanghai University of Engineering Science, Shanghai 201620, China

3Shanghai Engineering Research Centre of Vibration and Noise Control Technologies for Rail Transit, Shanghai University of Engineering Science, Shanghai 201620, China

ABSTRACT Regular fastener detection is necessary to ensure the safety of railways. However, the number of abnormal fasteners is significantly lower than the number of normal fasteners on real railways. Existing supervised inspection methods have insufficient detection ability in cases of imbalanced samples. To solve this problem, we propose an approach based on deep convolutional neural networks (DCNNs), which consists of three stages: fastener localization, abnormal fastener sample generation based on saliency detection, and fastener state inspection. First, a lightweight YOLOv5s is designed to achieve fast and precise localization of fastener regions. Then, the foreground clip region of a fastener image is extracted by the designed fastener saliency detection network (F-SDNet), combined with data augmentation to generate a large number of abnormal fastener samples and balance the numbers of abnormal and normal samples. Finally, a fastener inspection model called Fastener ResNet-8 is trained on the augmented fastener dataset. Results show the effectiveness of our proposed method in solving the problem of sample imbalance in fastener detection. Qualitative and quantitative comparisons show that the proposed F-SDNet outperforms other state-of-the-art methods in clip region extraction, reaching an MAE of 0.0215 and a max F-measure of 0.9635. In addition, the fastener state inspection model reached 86.2 FPS, with an average accuracy of 98.7% on a test set of 614 augmented fastener images and 99.9% on a dataset of 7505 real fastener images.

KEYWORDS Railway fastener; sample generation; inspection model; deep learning

    1 Introduction

As an essential transportation mode, urban rail transit has been widely adopted owing to its advantages, including high transportation efficiency, large transportation volume, and low energy consumption. The safety and reliability of urban railway transportation have therefore become a significant concern for researchers. In urban rail transportation, the track bears the weight of the train. As shown in Fig. 1, the fastener, a key component of the track structure, stably fixes the steel rail to the sleeper and provides cushioning and shock absorption. Fasteners are thus vital to the safety and reliability of railway transportation.

Figure 1: Railway images. (a) Includes WJ7 fasteners; (b) includes WJ8 fasteners

In general, prolonged high-speed, high-load train operation and the influence of the railway environment cause fasteners to suffer a certain degree of wear and tear, such as fracture, offset loosening, and deformation; fasteners may even be lost or develop other complex defects [1]. These abnormal fasteners can seriously threaten the safety of railway transportation if they are not detected and repaired in time. Therefore, this paper focuses on an automatic positioning and detection method for railway fasteners on the basis of sample generation to enhance the safety of rail transit.

In recent years, the emerging and popular technology of computer vision has made non-destructive testing in railway transportation feasible and has improved the efficiency and accuracy of inspection. Common methods in the field of computer vision include traditional image processing, machine learning, and deep learning. However, when dealing with complex images, traditional image processing and machine learning methods [2,3] may not extract object features accurately, and they require manual design of features and classifiers, which usually increases time cost and reduces detection efficiency. With the development of deep learning, defect detection methods based on deep convolutional neural networks (DCNNs) are capable of automatically learning features within images. These methods exhibit a high degree of adaptability, enabling them to perform localization, classification, and prediction on large-scale datasets, and they have been extensively studied and applied to defect detection in the railway domain. References [4,5] used DCNNs for fast and automated detection of train wheel defects, coupled with data augmentation techniques, to achieve condition monitoring and fault diagnosis. Wang et al. [6] proposed a cascade DCNN to solve the loosening detection problem of bolts. In [7], Wei et al. proposed a deep learning-based condition monitoring method for the pantograph slide plate. References [8,9] developed various DCNN-based models for rail surface and fastener defect detection. In this paper, we mainly focus on railway fasteners. Existing automated detection methods for fasteners still present numerous challenges, such as the following:

1) Sample imbalance in railway fastener datasets is not considered. Gibert et al. [10] proposed a multitask learning framework to detect defects on fasteners. Wei et al. [11] studied the localization of fastener regions by using vertical and horizontal projections from traditional image processing, followed by fastener classification using a support vector machine (SVM). Bai et al. [12] applied the support vector data description (SVDD) algorithm to classify defective fasteners on the basis of the detection results of an improved Faster R-CNN. However, the failure rate of fasteners is typically low on real railway lines, resulting in far fewer abnormal fastener samples than normal ones. DCNNs require balanced training samples, and training on an imbalanced fastener dataset cannot extract sufficient features of abnormal fasteners. This condition leaves the detection model unable to accurately identify abnormal fasteners, reducing the stability and accuracy of the model and significantly affecting the fastener inspection task and line maintenance scheduling. Therefore, solving the problem of imbalanced fastener samples is imperative.

2) Data augmentation methods for fastener images can only increase the number of abnormal fastener images; in essence, they cannot increase the diversity of defective states, i.e., they cannot account for the variety of broken positions and offset orientations of fasteners. Chandran et al. [13] expanded a dataset by applying rotation, flip, and scale transformations to fastener images. Xiao et al. [14] used a copy-and-paste method to enhance the defects in the training images and then used a ResNet-101 backbone to extract fastener features. Liu et al. [15] proposed template matching with prior knowledge of fasteners, which reduced sample imbalance through random sorting. Liu et al. [16] improved inspection performance by constructing fastener sample pairs. Wang et al. [17] used a generative adversarial network (GAN) to generate fastener images and improve inspection performance. Yao et al. [18] utilized a GAN to track the distribution of faulty data and established a mapping relationship between image data to generate negative samples. However, the abnormal states of railway fasteners exhibit various patterns. Simply increasing the quantity of abnormal fastener images prevents the inspection model from fully learning the features of abnormal fasteners, limiting its effectiveness in practical inspection tasks. Therefore, we simulate field-based defect scenarios by adding fastener abnormal states to effectively improve the accuracy and robustness of the actual fastener inspection task.

In addition, few-shot learning methods can partially address the problem of imbalanced samples, such as the Siamese network [19], matching network [20], learning to learn [21], and prototypical network [22]. However, these methods cannot fundamentally solve the problem because abnormal fasteners remain limited, resulting in insufficient detection performance. Moreover, these few-shot learning models need to be retrained for each actual inspection and cannot be universally applied to the fastener inspection task.

A review of the above literature shows that defect detection on imbalanced fastener datasets still faces challenges. Therefore, this paper addresses the problem by generating a large number of abnormal fastener samples.

    The specific contributions are as follows:

1. In this paper, we propose a hierarchical learning method to solve the sample imbalance problem in supervised fastener detection, which has three stages: fastener localization based on lightweight YOLOv5s, abnormal fastener sample generation based on saliency detection, and fastener state detection.

2. A novel fastener saliency detection network called F-SDNet, which extracts the foreground clip region of fastener images, is proposed. On the basis of the extracted clip region, data augmentation is used to generate abnormal fastener samples (e.g., broken, loose, and missing fasteners). Our method can balance normal and abnormal fastener samples, which is beneficial for training a robust inspection model.

3. A ResNet-based [23] model called Fastener ResNet-8 is proposed for fastener state inspection. We evaluate our method on 7505 real fastener images, achieving a precision of 99.9%. Our method demonstrates outstanding accuracy and speed. It effectively solves the problem of imbalanced fastener samples, proving its effectiveness in real railway inspection scenarios.

In the remainder of this paper, Section 2 provides an overview of the proposed method and introduces the detailed model architecture, including the fastener localization model, abnormal fastener sample generation based on saliency detection, and the fastener inspection model. Section 3 presents experimental results and comparisons with other methods. Section 4 concludes with an outlook.

    2 Method Overview

This paper proposes a hierarchical learning method to solve the problem of imbalanced datasets. The method consists of three stages. First, a lightweight object detection network is used for rapid fastener localization. Second, the clip region is extracted with a saliency detection model, followed by random cropping, rotation, and background fusion to generate abnormal fastener samples. Finally, a CNN-based fastener state inspection model is trained on the balanced dataset of normal and abnormal fasteners. The framework of the proposed method is illustrated in Fig. 2.

Figure 2: Framework of the proposed method

    2.1 Fastener Localization Based on Lightweight YOLOv5s

2.1.1 Related Works

Traditional fastener localization methods are mainly based on prior information such as track structure and the geometric features of fasteners [24-26], and they cannot adapt to new track and fastener types. In recent years, deep learning-based object detection methods have been widely applied to fastener localization. In [11], the Faster R-CNN-based fastener localization model has a high parameter count and slow detection speed. Wei et al. [27] proposed an improved YOLOv3 model to locate fasteners. Chen et al. [28] designed a lightweight YOLO architecture to locate fasteners, reducing runtime memory and increasing detection speed. Qi et al. [29] introduced MYOLOv3-Tiny and reduced the model parameters, but the accuracy of fastener detection was not satisfactory.

The actual railway environment is complex, and railway inspection images present challenges such as small objects, low contrast between fasteners and background, and grayscale rather than color imagery, thus requiring a model that can balance detection accuracy and inference speed in practical fastener inspection tasks. Therefore, You Only Look Once (YOLO) [30], a single-stage algorithm, is well suited to the high-accuracy, high-efficiency, real-time detection scenario of this paper.

2.1.2 Lightweight YOLOv5s

We considered the YOLOv5 series, which offers a balance between detection speed and performance, because of the specific nature of railway images and the demand for efficiency in daily fastener detection. Compared with YOLOv3 and YOLOv4, YOLOv5 has undergone improvements in algorithmic structure and network architecture, exhibiting superior performance and scalability. Among the YOLOv5 models, we selected YOLOv5s, the smallest network model with the lowest GFLOPs, as the base network for fastener localization. Exploiting the redundancy of information in the network, we utilize sparse training and channel pruning to design a lightweight YOLOv5s (Fig. 3). The aim is to improve localization speed while maintaining accuracy, making the model more suitable for the localization and segmentation of fastener regions. The final results consist of fastener images and missing-fastener images, which form the original fastener dataset.

Figure 3: Lightweight flowchart of the fastener localization model
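During sparse training, an L1 penalty on the BN scaling factors γ drives unimportant channels toward zero. Below is a minimal PyTorch sketch of this step in the style of network slimming [31]; the penalty coefficient `s` and the training-loop context are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

def add_bn_sparsity_grad(model: nn.Module, s: float = 1e-4) -> None:
    """Add the L1 subgradient s * sign(gamma) to every BN scaling factor,
    pushing the gamma of unimportant channels toward zero during training."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.add_(s * torch.sign(m.weight.detach()))

# Typical use inside a training loop (model, loss, optimizer assumed defined):
#   loss.backward()
#   add_bn_sparsity_grad(model, s=1e-4)   # sparsity step before the update
#   optimizer.step()
```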

A pruning factor γi is introduced for each input channel of the batch normalization (BN) layer in YOLOv5s. Unimportant channels are removed based on the absolute value of this factor, thereby reducing the complexity of the network while preserving overall accuracy. The BN transformation in which γi appears is defined as follows:

$$y_i = \gamma_i \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta_i$$

where xi and yi denote the input and output data of the BN layer, respectively; μ and σ² denote the mean and variance of the batch, respectively; ε is a small constant for numerical stability; and γi and βi are the scaling and shifting transformation factors, respectively.

Inspired by the pruning strategy in [31], a global threshold-based approach is adopted to prune the model, as illustrated in Fig. 4. First, the |γ| values in each BN layer of the model after sparse training are sorted in ascending order, and a global threshold η is determined. Then, a mask matrix is generated by comparing η with each convolution layer in the model, and channels whose factors are smaller than η are pruned directly. However, a minimum channel retention ratio φ is also set during channel pruning to ensure the integrity of the network structure: if all γ in a certain convolution layer are smaller than η, a portion of the channels with the largest γ values is still retained.

Figure 4: Channel pruning process
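The following is a minimal PyTorch sketch of the global-threshold mask construction described above. The quantile-based choice of η and the default value of the retention ratio φ are assumptions for illustration.

```python
import torch
import torch.nn as nn

def select_pruned_channels(model: nn.Module, prune_ratio: float = 0.6,
                           keep_ratio: float = 0.1) -> dict:
    """Return a boolean keep-mask per BN layer: |gamma| values are pooled
    globally, the prune_ratio quantile gives the global threshold eta, and
    each layer always keeps at least keep_ratio of its channels (phi)."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    eta = torch.quantile(gammas, prune_ratio)       # global threshold

    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            g = m.weight.detach().abs()
            mask = g > eta
            min_keep = max(1, int(keep_ratio * g.numel()))
            if int(mask.sum()) < min_keep:          # enforce retention ratio phi
                top = torch.topk(g, min_keep).indices
                mask = torch.zeros_like(mask)
                mask[top] = True
            masks[name] = mask
    return masks
```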

    2.2 Abnormal Fastener Sample Generation Based on Saliency Detection

2.2.1 Related Works

Saliency detection is extensively utilized in image processing and computer vision; it plays an important role in segmentation tasks by quickly and accurately locating salient regions in images, and here it highlights the foreground clip region of fasteners. Earlier saliency detection algorithms [32,33] were based on manually designed features, which were time-consuming and performed inadequately. Saliency detection has since advanced rapidly with the development of deep learning. References [34,35] applied multiscale feature fusion strategies that combine feature information from different scales, which can extract the information of salient objects more effectively. Wei et al. [36] designed F3-Net, which contains a cross-feature fusion mechanism. The feature maps of most saliency detection models can generally reflect the approximate location of objects but are not effective in recovering object details for complex structures. Zhao et al. [37] proposed an edge-guided strategy to generate high-quality edge information by using local edge information and global location information. Liu et al. [38] used side-output supervision to obtain clear boundaries of salient objects. Qin et al. [39] proposed BASNet, which accurately segments salient objects while maintaining high-quality boundaries.

Accurately segmenting the foreground clip region of fastener images is necessary to improve the authenticity of the generated abnormal fastener samples. However, although the existing saliency detection models [32-39] can extract most fastener features, they still cannot accurately segment the boundaries because of the low contrast and similar grayscale characteristics of foreground and background in real fastener images. Therefore, we designed a fastener saliency detection network (F-SDNet) to segment the foreground clip region of fastener images. As shown in Fig. 5, F-SDNet generates a coarse clip saliency map through a feature extraction module and a saliency prediction module and then refines the clip edges through a boundary-aware module. F-SDNet facilitates multiscale learning, enhances boundary features, and generates more accurate saliency maps, providing a basis for generating abnormal fastener samples. Detailed descriptions of the network modules and loss function can be found in Section 2.2.2.

Figure 5: Architecture of our proposed clip saliency detection network F-SDNet

2.2.2 F-SDNet

F-SDNet consists of three main modules. The first is the clip feature extraction module based on an improved ResNet-50. The second is the clip saliency prediction module, which generates coarse clip saliency maps; referring to the deep supervision mechanism in [37], the proposed joint loss function supervises each layer of the output feature maps. The third is the encoder-decoder-based clip boundary-aware module, which mainly serves to refine the coarse saliency map.

(1) Feature extraction module

ResNet is one of the most commonly used feature extraction networks in deep learning. Its deep network structure, relatively few parameters, and good generalization ability make it well suited to image feature extraction. To better extract fastener features, we chose the improved ResNet-50 as the backbone and designed a clip feature extraction module. This module addresses the low contrast and the similarity between foreground and background features in fastener images. It enhances the attention to, and discriminability of, the clip regions in fastener images, enabling more accurate feature extraction of clip regions.

The feature extraction module consists of five stages. Stage 1 is the SE-Resblock shown in Fig. 6. Unlike the input convolution layer of the original ResNet-50, SE-Resblock has 64 convolution filters with a size of 3×3 and a stride of 1, and it incorporates the SE [40] attention mechanism. A basic residual block consists of three convolution layers and a residual connection, where the first and third convolution layers use a 1×1 kernel and the second uses a 3×3 kernel. However, this structure fails to accurately capture the edge feature information of clips. We therefore introduce the SE attention mechanism into the last residual block of stages 2-5, naming it SE-Basic resblock (Fig. 7). By incorporating SE attention at each stage, the feature extraction module adaptively emphasizes significant feature channels, contributing to better extraction of important information such as the shape and location of the clip.

Figure 6: SE-Resblock

Figure 7: SE-Basic resblock
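For reference, below is a minimal PyTorch sketch of the SE recalibration step [40] that SE-Resblock and SE-Basic resblock incorporate; the reduction ratio of 16 is the common default from the SE paper, not a value stated here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation [40]: global average pooling followed by a
    two-layer bottleneck that rescales each channel of the input."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: (B, C) channel descriptor
        w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel weights
        return x * w                     # recalibrate feature channels
```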

(2) Saliency prediction module and joint loss

Inspired by U-Net [41] and U2-Net [42], this section designs a clip saliency prediction module based on a decoder structure. Five stages are set up in the saliency prediction module to match the input and output feature maps at the corresponding scales. Each stage consists of three convolution layers, and each convolution is followed by a BN and a ReLU activation function. Bilinear interpolation is used for upsampling.
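A minimal sketch of one such stage is shown below, assuming 3×3 kernels and a factor-2 bilinear upsample between scales (both assumptions; the text specifies only three conv+BN+ReLU layers per stage and bilinear upsampling).

```python
import torch.nn as nn

class DecoderStage(nn.Module):
    """One saliency-prediction stage: three conv + BN + ReLU blocks,
    then bilinear upsampling to the next (larger) feature scale."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        layers = []
        for i in range(3):
            layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                       nn.BatchNorm2d(out_ch),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, x):
        return self.up(self.body(x))
```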

Furthermore, we propose a joint loss to supervise the output feature maps of the last layer in each stage, aiming to improve the structural accuracy of the coarse saliency map.

BCE loss, which is widely used for binary classification problems, aims to minimize the difference between the true labels and the predicted labels. Dice loss is a commonly used loss function for image segmentation problems. The formulas are as follows:

$$L_{BCE} = -\sum_{(x,y)} \left[ G(x,y)\log P(x,y) + \left(1 - G(x,y)\right)\log\left(1 - P(x,y)\right) \right]$$

$$L_{Dice} = 1 - \frac{2\,\|G \cdot P\|}{\|G\| + \|P\|}$$

where G and P denote the ground truth and the predicted map, respectively; · is the dot product; and ‖·‖ is the l1 norm. BCE loss evaluates the model's performance by comparing the pixel-wise differences between the ground truth and the predicted map, while Dice loss focuses more on exploring the foreground region.

We propose a joint loss by integrating BCE loss and Dice loss, which is defined as follows:

$$L_{joint} = \alpha L_{BCE} + (1 - \alpha) L_{Dice}$$

where α is empirically set to 0.5.
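A minimal PyTorch sketch of this joint supervision is given below, assuming `pred` already holds probabilities in [0, 1]; the smoothing constant `eps` is an implementation assumption, not part of the paper's formula.

```python
import torch
import torch.nn.functional as F

def joint_loss(pred: torch.Tensor, gt: torch.Tensor, alpha: float = 0.5,
               eps: float = 1e-6) -> torch.Tensor:
    """Joint supervision: alpha * BCE + (1 - alpha) * Dice.
    `pred` holds probabilities in [0, 1]; `gt` is the binary ground truth."""
    bce = F.binary_cross_entropy(pred, gt)
    inter = (pred * gt).sum()                              # dot product G . P
    dice = 1 - (2 * inter + eps) / (pred.sum() + gt.sum() + eps)
    return alpha * bce + (1 - alpha) * dice
```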

(3) Boundary-aware module

The unclear edges of the coarse saliency map generated by the saliency prediction module can create differences between the subsequently generated abnormal fasteners and real abnormal fasteners, which affects the performance of the inspection model. Therefore, we design a boundary-aware module to optimize the clip regions and boundaries (Fig. 8).

Figure 8: Detailed structure of the clip boundary-aware module

The boundary-aware module references the U-shaped encoder-decoder and residual architecture. This module performs boundary refinement on the coarse saliency maps generated by the saliency prediction module and adds the coarse saliency maps S_coarse to the processed saliency maps S_ba to obtain the final refined maps S_refined, as shown in Eq. (7):

$$S_{refined} = S_{coarse} + S_{ba} \tag{7}$$

The boundary-aware module consists of an input layer, an encoder, a bridge stage, a decoder, and an output layer. Unlike the feature extraction module and the saliency prediction module, the encoder and decoder of the boundary-aware module include only four stages, and each stage consists of a cascade of 3×1 and 1×3 convolution layers followed by max pooling. We adopt a convolution with 64 kernels of size 3×3 in the bridge stage at the bottom layer, followed by a BN and a ReLU. Boundary details are further optimized by using CBAM [43] to connect the feature maps before max pooling in the encoder with the corresponding feature maps after upsampling in the decoder, which helps capture boundary details and important contextual information within the clip region, thereby improving the recognition accuracy of clip boundary details.
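As an illustration of the cascaded 3×1/1×3 design, here is a minimal PyTorch sketch of one encoder stage; the placement of BN and ReLU and the returned pre-pooling skip tensor (for the CBAM-gated connection) are assumptions not specified in the text.

```python
import torch.nn as nn

class BAEncoderStage(nn.Module):
    """One encoder stage of the boundary-aware module: a cascade of 3x1 and
    1x3 convolutions, then max pooling; the pre-pooling features are kept
    as the skip tensor for the CBAM-gated connection to the decoder."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=(3, 1), padding=(1, 0)),
            nn.Conv2d(out_ch, out_ch, kernel_size=(1, 3), padding=(0, 1)),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skip = self.body(x)
        return self.pool(skip), skip
```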

2.2.3 Abnormal Fastener Sample Generation

The high-speed railway WJ7 fastener is composed of a screw spike, clip, flat washer, insulated gauge block, gauge baffle, under-rail pad, iron pad, insulation plate under the iron pad, and pre-embedded sleeves (Fig. 9).

Figure 9: Assembly forms of the fastener system. (a) Fastener assembly image; (b) real fastener image

As shown in Fig. 10a, the stress is highest in the curved section where the rear end of the clip contacts the gauge baffle. The maximum stress reaches 1312 MPa, which is nearly equivalent to the strength of the clip material, 60Si2Mn. When a train passes, the movement of the clip generally takes on a butterfly shape, resulting in fatigue fracture of the clip. The real broken positions of clips are consistent with the finite element stress analysis: they are primarily concentrated in the curved section where the rear end of the clip contacts the gauge baffle.

The detailed process of generating abnormal fastener samples based on the foreground clip region images obtained in Section 2.2 is as follows (Fig. 11; a minimal code sketch is given after the figure):

1) Broken positions of the clip are chosen manually on the basis of the fastener's force analysis result, and additional broken positions are chosen randomly at the same time.

2) The clip region is cropped according to the broken position, or a rotation factor is used to rotate the clip, to obtain an abnormal clip.

3) The abnormal clip is combined with a background image to generate an abnormal fastener. In this study, images of completely missing fasteners are used as background images.

Figure 11: Process of abnormal fastener sample generation
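The sketch below illustrates steps 1)-3) with NumPy and Pillow under simplifying assumptions: the break is modeled as a straight cut at a random column and the ±15° rotation range is illustrative; neither is taken from the paper.

```python
import random
import numpy as np
from PIL import Image

def generate_abnormal_fastener(fastener: np.ndarray, clip_mask: np.ndarray,
                               background: np.ndarray) -> np.ndarray:
    """Crop or rotate the segmented clip, then composite it onto a
    background image of a completely missing fastener (all uint8 arrays)."""
    mask = clip_mask > 0
    if random.random() < 0.5:                          # simulate a broken clip
        cut = random.randint(0, fastener.shape[1] - 1)  # hypothetical break column
        mask[:, cut:] = False                           # discard one side of the clip
        clip = fastener
    else:                                               # simulate a loosened clip
        angle = random.uniform(-15, 15)                 # hypothetical rotation range
        clip = np.array(Image.fromarray(fastener).rotate(angle))
        mask = np.array(Image.fromarray(mask.astype(np.uint8) * 255)
                        .rotate(angle)) > 127

    out = background.copy()
    out[mask] = clip[mask]                              # background fusion
    return out
```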

    2.3 Fastener State Inspection Model

We propose a fastener state inspection model called Fastener ResNet-8, which is based on a lightweight version of ResNet (Fig. 12). The model can detect four fastener states: normal WJ7, normal WJ8, abnormal WJ7, and abnormal WJ8, where abnormal fasteners include defects such as breakage, loosening, and loss. To improve classification speed, we design a lightweight classification network, removing redundant classification capacity. In addition, skip connections propagate contextual features and prevent degradation, facilitating better training and optimization of the network and ensuring detection accuracy.

The model mainly consists of an input convolution layer, three residual blocks, and a fully connected layer. The input convolution layer has 64 convolution filters with a size of 3×3, a stride of 2, and padding of 1; it replaces the initial 7×7 convolution and max pooling of ResNet-18. The residual blocks have 64 convolution filters with a size of 3×3, a stride of 1, and padding of 1, except that the first convolution layer in the second and third residual blocks has a stride of 2. The rest of the design follows ResNet-18, including the final global average pooling layer and fully connected layer. Compared with ResNet-18, the smallest original ResNet, Fastener ResNet-8 reduces the number of stacked residual blocks, significantly reducing parameter computation and improving classification speed. The model is trained with a cross-entropy loss function, defined as follows:

$$Loss(W) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{C} \mathbb{1}\{Y^{(i)} = j\}\log p_j^{(i)}$$

where N is the number of training samples, C is the number of classes, Y(i) denotes the class label of the i-th sample, p_j^{(i)} denotes the predicted probability that the i-th sample belongs to class j, and W denotes the weight parameter matrix. Moreover, 1{·} is the truth expression, which takes the value of 1 when the predicted label matches the true label and 0 otherwise. In addition, we use the Adam method to optimize the Loss and update W.
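Below is a minimal PyTorch sketch consistent with the dimensions above: one 3×3 stem convolution, three 64-channel residual blocks, global average pooling, and a 4-way fully connected layer, i.e., eight weight layers. The 1×1 shortcut convolution for the stride-2 blocks is an assumption; the paper does not specify how the shortcut is downsampled.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection; when stride=2 the
    shortcut is downsampled with a 1x1 convolution (an assumption here)."""
    def __init__(self, channels: int = 64, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.shortcut = (nn.Sequential() if stride == 1 else
                         nn.Sequential(nn.Conv2d(channels, channels, 1, stride=stride, bias=False),
                                       nn.BatchNorm2d(channels)))

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

class FastenerResNet8(nn.Module):
    """1 stem conv + 3 residual blocks (2 convs each) + 1 fc = 8 weight layers."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1, bias=False),
                                  nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(BasicBlock(64, 1), BasicBlock(64, 2), BasicBlock(64, 2))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, x):                 # x: (B, 3, 224, 224)
        return self.head(self.blocks(self.stem(x)))
```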

    3 Experiments and Analysis

    3.1 Image Acquisition

The fastener image acquisition system in this study primarily consists of a linear image acquisition unit and a track inspection beam (Fig. 13). The image acquisition system includes an industrial high-speed linear CCD camera and non-visible light sources. A high-precision speed sensor is installed on the wheelset to achieve synchronized image capture and spatially equidistant sampling with two CCD linear cameras. The main parameters of the image acquisition unit are shown in Table 1.

Figure 13: Image acquisition system

    Table 1: Parameters of the image acquisition unit

The maximum inspection speed of the track inspection vehicle is about 180 km/h; each image has a field of view of 1500 mm × 1500 mm, and a single image acquisition unit can capture about 33 images, containing about 200 fasteners, per second.
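These figures are mutually consistent: at 33 images per second with a 1.5 m field of view along the track, the unit covers 33 × 1.5 m ≈ 49.5 m/s ≈ 178 km/h, matching the stated maximum inspection speed of about 180 km/h.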

    3.2 Experimental Setup

3.2.1 Experimental Environment

To verify the performance of the proposed fastener defect detection method based on data augmentation, we conducted experiments on railway fastener region localization, abnormal fastener sample generation based on saliency detection, and fastener state inspection under the same computer hardware configuration (Table 2).

    Table 2: Configuration of the experimental environment

3.2.2 Overall Training Process

The overall training process of this paper is shown in Fig. 14 and can be described as follows:

Figure 14: Overall training process

Step 1: LabelImg software is used to annotate real railway images collected from railway lines, and lightweight YOLOv5s is trained to obtain the fastener localization model.

Step 2: Labelme software is used to annotate the original fastener dataset, and F-SDNet is trained to generate abnormal fasteners.

Step 3: Fastener ResNet-8 is trained with the generated abnormal fastener samples and the original fastener images to obtain the fastener state inspection model.

    3.3 Fastener Localization

3.3.1 Fastener Localization Dataset

Currently, there is no publicly available dataset of railway fasteners. In this paper, a total of 225 railway images containing WJ7 fasteners and 80 images containing WJ8 fasteners were collected. Table 3 provides detailed information.

    Table 3: Detailed information of fastener localization dataset

3.3.2 Lightweight Results

This experiment uses the railway images that we collected for training, and the parameters of sparse training for YOLOv5s are shown in Table 4. Fig. 15 shows the number of channels in each layer of the model before and after pruning; a total of 6294 channels were pruned. Table 5 compares the model metrics before and after pruning. After fine-tuning of the pruned model, the parameter count decreased by 76.6%, while the mean average precision (mAP) of the model decreased by only 3.2% compared with the value before pruning. The experimental results show that the sparse training and channel pruning strategy used in this section can significantly reduce the model's parameters and size while improving detection speed, with only a minimal loss in model performance.

    Table 4: Parameters of sparse training

Figure 15: Number of channels in each layer of the model before and after pruning

    Table 5: Comparison of model metrics before and after pruning

3.3.3 Inspection Results

To further validate the performance of our proposed lightweight YOLOv5s, this study conducts comparative experiments with five other object detection methods: Faster R-CNN [44], YOLOv3 [45], Tiny-YOLOv3, YOLOv4, and the original YOLOv5s, where the backbone of Faster R-CNN is ResNet-50 and the backbone of YOLOv3 is Darknet53. The comparative results of the six methods are shown in Table 6.

    Table 6: Performance comparison of different object detection methods

In the experiments for fastener localization, we evaluate the effectiveness of the proposed method by using the following evaluation metrics: precision, recall, mAP, model size, FPS, and the comprehensive evaluation indicator λ:

$$Precision = \frac{TP}{TP + FP}, \qquad Recall = \frac{TP}{TP + FN}$$

$$FPS = \frac{N}{t_e - t_s}, \qquad \lambda = \sum_{j} w_j \hat{x}_{ij}$$

where TP, TN, FP, and FN represent true positives, true negatives, false positives, and false negatives, respectively; N is the total number of predicted images; t_e denotes the detection end time; and t_s denotes the detection start time. $\hat{x}_{ij}$ is the normalized value of index j for method i, where normalization is performed against the optimal value of each index. The larger the values of precision, recall, mAP, and FPS, the better; the smaller the model size, the better. w_j is the weight parameter of each index. This fastener localization model emphasizes detection speed and accuracy; thus, we set w1 = 0.2, w2 = w5 = 0.3, and w3 = w4 = 0.1. According to Table 6, Tiny-YOLOv3 has a significantly faster detection speed than the other methods, but its precision of 63.72% falls short of the required detection accuracy. Although Faster R-CNN achieves the best detection accuracy, it has the slowest detection speed. Lightweight YOLOv5s performs similarly to Faster R-CNN and YOLOv5s in terms of detection performance, but its detection speed is noticeably faster than Faster R-CNN's, with a 17.12% improvement over the original YOLOv5s. In addition, our model has a low parameter count and a compact size of only 3.8 MB, making it suitable for deployment on resource-constrained detection platforms. The comprehensive evaluation indicator of our proposed method is 0.858, which is clearly the best performance.
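To make the indicator concrete, here is a small NumPy sketch of one plausible reading of λ: each index is normalized against its best value across methods (direction-aware) and then weighted. The placeholder numbers, the column order, and the exact normalization are assumptions, since the original formula was lost in extraction.

```python
import numpy as np

# Columns: precision, recall, mAP, model size (MB), FPS; rows: one per method.
# The values are placeholders, not the paper's Table 6.
X = np.array([[0.95, 0.93, 0.96,   3.8, 120.0],
              [0.97, 0.94, 0.97, 110.0,  20.0]])
w = np.array([0.2, 0.3, 0.1, 0.1, 0.3])          # weight per index (order assumed)
larger_is_better = np.array([True, True, True, False, True])

# Normalize each index against its optimal value across methods, so the
# optimum of every column maps to 1 regardless of direction.
best = np.where(larger_is_better, X.max(axis=0), X.min(axis=0))
X_hat = np.where(larger_is_better, X / best, best / X)

lam = X_hat @ w    # comprehensive indicator, one score per method
print(lam)
```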

Therefore, in this paper, we employ lightweight YOLOv5s to accomplish fast localization and segmentation of railway fastener regions. The localization results are shown in Fig. 16.

    3.4 Abnormal Fastener Sample Generation Based on Saliency Detection

3.4.1 Experimental Setup

In this experiment, 1750 fastener images were resized to 512×512×3 and allocated to the training, validation, and test sets in an 8:1:1 ratio. We chose the Adam optimizer to train F-SDNet for 50 epochs. We set the batch size to 8 because of GPU memory limitations. The learning rate is fixed at 0.001, and we do not use learning rate decay because this strategy can lead to an increase in training error.

Figure 16: Results of fastener positioning. (a) WJ7 fasteners; (b) WJ8 fasteners

3.4.2 Evaluation Metrics

To further quantify and compare the performance of the proposed F-SDNet, we use MAE, max F-measure (max-Fm), mean F-measure (mean-Fm), max E-measure (max-Em), mean E-measure (mean-Em), S-measure (Sm), AP, and AUC in the saliency detection experiment.

MAE is defined as the mean absolute difference between the predicted saliency map S ∈ [0,1]^{W×H} and the binary ground truth G ∈ {0,1}^{W×H}:

$$MAE = \frac{1}{W \times H}\sum_{x=1}^{W}\sum_{y=1}^{H}\left|S(x,y) - G(x,y)\right|$$

A small MAE score corresponds to more accurate predictions and better detection performance.

F-measure, denoted as Fβ, is computed as the weighted harmonic mean of precision and recall:

$$F_\beta = \frac{(1+\beta^2) \cdot Precision \cdot Recall}{\beta^2 \cdot Precision + Recall} \tag{15}$$

where β² is set to 0.3. On the basis of Eq. (15), an F-measure curve can be constructed. The maximum value of this curve is max-Fm, and its average value is mean-Fm.

E-measure combines the global mean of the image with local pixel matching:

$$E_m = \frac{1}{W \times H}\sum_{x=1}^{W}\sum_{y=1}^{H}\phi_S(x,y)$$

where φS is the enhanced alignment matrix, which reflects the correlation between the saliency map S and the ground truth G after subtracting their global mean values. An E-measure curve is constructed by simultaneously considering the global mean of the image and the local pixel matching method. The maximum value of this curve is max-Em, and its average value at the adaptive threshold is mean-Em.

S-measure evaluates the structural similarity between the predicted saliency map and the binary ground truth on the basis of object perception So and region perception Sr:

$$S_m = \alpha S_o + (1-\alpha) S_r$$

where α is set to 0.5.

On the basis of the true positive rate (TPR) and false positive rate (FPR), an ROC curve can be constructed. The area under the ROC curve is the AUC.
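For reference, a small NumPy sketch of two of these metrics (MAE and max-Fm) under the definitions above; the threshold-sweep granularity and the smoothing constants are implementation assumptions.

```python
import numpy as np

def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between a saliency map in [0,1] and a binary GT."""
    return float(np.abs(pred - gt).mean())

def max_f_measure(pred: np.ndarray, gt: np.ndarray, beta2: float = 0.3,
                  steps: int = 255) -> float:
    """Sweep binarization thresholds and return the maximum F-measure."""
    best = 0.0
    for t in np.linspace(0, 1, steps):
        binary = pred >= t
        tp = np.logical_and(binary, gt > 0.5).sum()
        precision = tp / (binary.sum() + 1e-8)
        recall = tp / ((gt > 0.5).sum() + 1e-8)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
        best = max(best, float(f))
    return best
```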

3.4.3 Comparisons with State-of-the-Arts

The superiority of the proposed F-SDNet was demonstrated through visual comparison with seven existing salient object detection methods: BASNet [39], EDRNet [46], EGNet [37], PoolNet [38], R2-Net [47], RESCSFNet [48], and U2-Net [42]. Fig. 17 displays the predicted saliency maps of the fasteners; the first and ninth rows are abnormal fasteners, with the first row showing broken fasteners and the ninth row showing loose fasteners. The remaining rows are normal fasteners. Evidently, the other models exhibit poor performance in detecting normal and broken fasteners and fail to detect loose fasteners. In contrast, the predicted saliency maps of F-SDNet are closest to the ground truth, not only accurately identifying the foreground clip region of fasteners but also effectively suppressing background noise, with better stability and robustness.

Figure 17: Visual comparison of fastener saliency maps. (a) Source image; (b) GT; (c) Ours; (d) BASNet; (e) EDRNet; (f) EGNet; (g) PoolNet; (h) R2-Net; (i) RESCSFNet; (j) U2-Net

Quantitative comparative experiments were conducted using the evaluation metrics introduced in Section 3.4.2 to further evaluate the effectiveness of the proposed F-SDNet for fastener saliency detection. BASNet, EDRNet, EGNet, PoolNet, R2-Net, RESCSFNet, and U2-Net were compared, as shown in Table 7. Our proposed method consistently achieves the highest scores on all evaluation metrics, providing objective evidence of its superior detection accuracy and segmentation performance. As depicted in Fig. 18, the curves of "Ours" consistently remain above those of the other methods in the F-measure, E-measure, and P-R curves, with the largest S-measure score. This finding indicates that our method's predicted saliency maps closely match the ground truth.

Table 7: Comparison of the proposed method with seven other methods based on eight quantitative evaluation metrics


Figure 18: Evaluation curves of the eight models. (a) F-measure curves; (b) E-measure curves; (c) PR curves; (d) S-measure curves

3.4.4 Abnormal Fastener Sample Generation

With the abnormal fastener sample generation method based on F-SDNet, a total of 6330 abnormal WJ7 fasteners and 1275 abnormal WJ8 fasteners were generated. As shown in Fig. 19, the generated abnormal fasteners include broken and loose ones.

Figure 19: Generated abnormal fastener samples

    3.5 Fastener State Inspection

3.5.1 Training Process

From the abnormal fastener samples generated in Section 2.2 and the original fastener samples obtained in Section 2.1, 4000 images were selected as the pre-training dataset, where the ratio of the four classes (normal WJ7, abnormal WJ7, normal WJ8, abnormal WJ8) is 1:1:1:1 to ensure balanced sample numbers. The ratio of training, validation, and test sets is 7:1.5:1.5; i.e., the training set contains 2772 fastener samples, and the validation and test sets each contain 614 fastener samples. The input image size for training, validation, and testing is 224×224×3 in this section. This dataset is fed into Fastener ResNet-8 with the parameter configuration in Table 8 to train the fastener state inspection model; the training loss curve is shown in Fig. 20.

    Table 8: Configuration for pre-training

Figure 20: Training loss function curve of Fastener ResNet-8

3.5.2 Experimental Results

We compared Fastener ResNet-8 with six state-of-the-art models, using detection accuracy (Acc), FPS, and model size as the evaluation indices. The results are shown in Table 9, and Acc is defined as follows:

$$Acc = \frac{N_{correct}}{N_{total}} \times 100\%$$

where N_correct is the number of correctly classified fastener images and N_total is the total number of test images.

Our model significantly outperforms the lightweight networks MobileNetv3 [49] and EfficientNetv2 [50] in terms of accuracy. In addition, our model achieves a classification speed of 86.2 f/s, i.e., a detection time of 11.6 ms per fastener image, surpassing the other classification networks. Although the MobileNetv3 model is the smallest, it achieves an accuracy of only 47.8% and cannot accomplish the fastener inspection task.

    Table 9: Comparative results of fastener inspection using different methods

To verify the detection accuracy of Fastener ResNet-8, we test the 614 fastener samples of the test set. As depicted in Fig. 21, the detection accuracy for abnormal fasteners reaches 100%. However, there are prediction errors for normal fasteners: 2 normal WJ7 fasteners are incorrectly predicted as abnormal WJ7 fasteners, and 6 normal WJ8 fasteners are incorrectly predicted as abnormal WJ8 fasteners. These errors fall within an acceptable range, and Fastener ResNet-8 achieves an average accuracy of 98.7%. Thus, satisfactory results are obtained overall.

Figure 21: Classification confusion matrix of Fastener ResNet-8

To further validate the feasibility of the proposed fastener state inspection model in practical applications, we randomly localize and segment 7505 fastener samples from real railway images. These samples are fed into the trained Fastener ResNet-8 model for detection, and the classification results are shown in Table 10 and Fig. 22.

As observed from the confusion matrices in Figs. 21 and 22, Fastener ResNet-8 classifies WJ8 fasteners accurately, while a minor classification error occurs in identifying normal WJ7 fasteners. Of a total of 6908 normal WJ7 samples, 6907 were correctly classified, while one was misclassified as an abnormal WJ7 fastener. The misclassified fastener is depicted in Fig. 23. Manual inspection found that this normal fastener is located at the end of the railway image. Because the linear CCD camera captures railway images continuously, the saved images are cropped to a fixed length; as a result, incomplete fastener regions appear at the ends of images, visually resembling broken fasteners and leading to the erroneous classification. In addition, fasteners obscured by foreign objects are misclassified as abnormal fasteners but are reclassified after manual review. The proposed method can detect broken, loose, and missing fasteners, as shown in Fig. 24.

Table 10: Results of the real fastener inspection task (7505 fasteners)

Figure 22: Classification confusion matrix of 7505 real fasteners

Figure 23: Misclassified WJ7 fastener

Figure 24: Abnormal fastener detection results. (a, b) Broken fasteners; (c) loose fastener; (d) missing fastener

    4 Conclusion

In real railway scenarios, abnormal fasteners occur far less often than normal fasteners, and the resulting imbalanced datasets can affect the stability and accuracy of inspection models. Therefore, we proposed a novel data augmentation-based method for fastener defect detection. The method uses a saliency detection network to segment the foreground clip region of fasteners; then, on the basis of the segmented clips, random cropping, rotation, and background fusion are performed to generate a large number of abnormal fastener samples. Finally, the augmented fastener dataset is fed into the classification model to classify WJ7 and WJ8 fasteners into normal and abnormal states. Experimental results demonstrate the outstanding performance of our fastener defect detection method on imbalanced datasets, achieving remarkable accuracy and speed with strong robustness. It can be applied to various fastener detection tasks and has significant theoretical and practical value.

The fastener inspection system in this paper adopts an offline inspection approach. First, inspection images are collected using a track inspection vehicle. Then, the collected images are processed on a back-end server. Finally, the inspection results are published after manual review. The limitation of the proposed method is that the fastener state inspection model, Fastener ResNet-8, detects 86 frames per second, whereas a single image acquisition unit can collect around 200 fasteners per second; as a result, the method cannot yet meet the demand for real-time detection.

In the future, we will further optimize the inspection model to improve its accuracy and speed and achieve real-time detection. Meanwhile, we will extend the proposed method to deep learning-based imbalanced-sample defect detection for track surface structures.

Acknowledgement: The authors wish to express sincere appreciation to the reviewers for their valuable comments, which significantly improved this paper. The authors would also like to thank the Shanghai University of Engineering Science for promoting this research and providing the laboratory facilities and support.

Funding Statement: This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 51975347 and 51907117) and in part by the Shanghai Science and Technology Program (Grant No. 22010501600).

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: Shubin Zheng, Yue Wang, Liming Li; data collection: Yue Wang, Liming Li; analysis and interpretation of results: Yue Wang, Xieqi Chen; draft manuscript preparation: Shubin Zheng, Lele Peng, Zhanhao Shang. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: Not applicable.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
