
    A Study on Enhancing Chip Detection Efficiency Using the Lightweight Van-YOLOv8 Network

2024-05-25 14:40:00  Meng Huang, Honglei Wei and Xianyi Zhai
Computers, Materials & Continua, 2024, Issue 4

Meng Huang, Honglei Wei and Xianyi Zhai

School of Mechanical Engineering and Automation, Dalian Polytechnic University, Dalian, 116034, China

ABSTRACT In pursuit of cost-effective manufacturing, enterprises are increasingly adopting the practice of utilizing recycled semiconductor chips. To ensure consistent chip orientation during packaging, a circular marker on the front side is used for pin alignment following successful functional testing. However, recycled chips often exhibit substantial surface wear, which makes the relatively small marker difficult to identify. Moreover, the complexity of generic target detection algorithms hampers seamless deployment. To address these issues, this paper introduces a lightweight YOLOv8s-based network tailored to detecting markings on recycled chips, termed Van-YOLOv8. First, to alleviate the influence of small, low-resolution markings on the precision of deep learning models, we use an upscaling approach for enhanced resolution. This technique relies on the Super-Resolution Generative Adversarial Network with Extended Training (SRGANext) network, which reconstructs high-fidelity images that match the detector's input specifications. Next, we replace the original YOLOv8s backbone feature extraction network with the lightweight Vanilla Network (VanillaNet), simplifying the branch structure to reduce network parameters. Finally, a Hybrid Attention Mechanism (HAM) is implemented to capture essential details from input images, improving feature representation while simultaneously increasing model inference speed. Experimental results demonstrate that the Van-YOLOv8 network outperforms the original YOLOv8s on a recycled chip dataset in various respects. Notably, it surpasses several prevalent algorithms in parameter count, computational complexity, detection precision, and speed. The proposed approach is promising for real-time detection of recycled chips in practical factory settings.

KEYWORDS Lightweight neural networks; attention mechanisms; image super-resolution enhancement; feature extraction; small object detection

    1 Introduction

The escalating volume of discarded electronic devices has created a significant reservoir of reusable semiconductor chips. Compared to their new counterparts, recycled chips come at a markedly lower price, prompting numerous enterprises to adopt chip regenerating equipment for cost reduction. Following successful functional testing, chips undergo tape-and-reel packaging for automated surface mounting on circuit boards. Ensuring precise chip orientation during this process is crucial, as any deviation can have severe consequences, including the scrapping of circuit boards. To maintain consistent chip orientation, a small circular marker is typically placed on one end of the chip to denote the pin arrangement. Detecting and identifying the position of this circular marker is essential for both testing and packaging. However, recycled chips often exhibit substantial surface wear, leaving markers blurred and difficult to detect. Moreover, the computational resources required by detection models are substantial, which complicates algorithm deployment on computing-limited terminal devices. Therefore, a critical challenge in this field is how to minimize the complexity and computational demands of detection models while ensuring precision.

Recycled chip mark detection falls under the category of small object recognition in the field of machine vision. Common methods include conventional image processing and deep learning. Conventional image processing mainly involves image segmentation, texture analysis, image restoration, and feature matching. Feature matching is widely used because of its robustness to grayscale changes, deformation, and occlusion. Cui et al. [1] employed diverse approaches for identifying distinct irregularities on the exterior of mobile phone casings, including the least squares method, an image differential algorithm, and an improved traditional template matching algorithm for detecting IR holes, ink spots, and LOGOs. Yao et al. [2] extracted the HU invariant moment features and template features of target contours for coarse matching to obtain candidate part contours, used an improved Harris corner detection method to obtain corners, and, based on gray-scale fast matching, improved the precision of part image matching. While these conventional methods have made progress in target feature matching, they perform well on clear chip surfaces but are less effective on the severely worn marks of recycled chips.

With the progress of modern information technology and intelligent manufacturing techniques, deep learning-based systems for detecting small targets in industrial products have found widespread application [3]. Given that the hardware employed in industrial settings often consists of edge devices with limited memory and computational power, considerations extend beyond the requirements of the mechanical drive systems to encompass the computational demands of the models. Consequently, reducing the size of the model proves advantageous for seamless integration into computing devices [4]. The prevalent methodologies for object detection include two-stage strategies such as the Region-based Convolutional Neural Network (R-CNN) [5], Fast R-CNN [6], and Mask R-CNN [7]; one-stage methodologies such as the Single Shot MultiBox Detector (SSD) [8] and You Only Look Once (YOLO) [9]; and Transformer-based object detection algorithms such as the Detection Transformer (DETR) [10]. Two-stage algorithms first generate candidate boxes and then extract features from these boxes before regressing the objects, which slows down detection. The Transformer-based DETR performs poorly in small object detection and has extremely long training times, 10-20 times those of two-stage algorithms. In contrast, single-stage, regression-based object detection algorithms combine the localization and classification objectives, leading to higher detection speed and significant advantages for real-time object detection. Moreover, the YOLO algorithm is highly scalable and easily extends to new detection tasks. Building on single-stage algorithms for more accurate detection of specific targets, Li et al. [11] proposed an aviation engine part surface defect detection model, YOLO-KEB. This model incorporates the Efficient Channel Attention Network (ECA-Net) into YOLO's foundational feature extraction network to improve its feature extraction capabilities. Additionally, it integrates the Bi-directional Feature Pyramid Network (BiFPN) module into the feature integration network for comprehensive integration of multi-scale features, thereby amplifying the model's object detection performance. Wang et al. [12] introduced a network for small defect detection, YOLOV4-SA, combining the Spatial Attention Module (SAM) with YOLOV4. SAM corrects feature values and highlights defect areas, thus effectively recognizing small defects. To address the deployment of object detection models on terminal devices with limited computational resources, Zhou [13] and colleagues introduced the YOLOv5s-GCE lightweight model designed to identify surface defects on strip steel. This model incorporates the Ghost module and integrates the Coordinate Attention (CA) strategy, effectively decreasing the model's size and computational demand without compromising detection accuracy. Yang et al. [14] proposed the improved CBAM-MobilenetV2-YOLOv5 model, introducing both the Mobile Network Version 2 (MobilenetV2) module and the Convolutional Block Attention Module (CBAM) for a lighter strip steel surface defect detection model. Following this, Zhang [15] improved YOLOv5 by using the lighter Shuffle Network Version 2 (ShuffleNetv2) as the backbone network, reducing model complexity and gaining an advantage in detection speed. Zhou et al. [16] proposed the YOLOv8-EL object detection method, using the GauGAN generative adversarial network to augment the dataset and rectify the imbalance of different defect types. The method incorporates the Context Aggregation Module (CAM) in the backbone and feature extraction networks to suppress background noise and builds a Multi-Attention Detection Head (MADH) to effectively improve detection accuracy.

The aforementioned studies have made significant efforts toward lightweight processing of computationally intensive object detection models, offering valuable insights. Addressing the challenges of identifying small and heavily worn markings on semiconductor chips, as well as the deployment complexities of generic detection algorithms on resource-limited devices, this paper presents a novel lightweight chip marker detection algorithm. Leveraging the characteristics of chip markings and building upon YOLOv8s as the baseline, our approach enhances the detection performance of the original YOLOv8s method while reducing the computational load, rendering the network more lightweight. The principal contributions of this study are as follows:

(1) Effectively generating high-quality samples using the SRGANext sample generator to meet the input size requirements of the detection model. The significant enhancement of image resolution in this process also provides a superior dataset for the detection model.

(2) Introducing the lightweight VanillaNet [17] as the backbone feature extraction network for YOLOv8s, successfully reducing the number of convolutional layers and computational resources. This adjustment results in a more lightweight detection model.

(3) Integrating the HAM [18] into the foundational structure of YOLOv8s to enhance the network's proficiency in capturing target feature information. This technology elevates the model's predictive capacity, enabling real-time and accurate detection of chip markings.

Section 2 of this paper outlines the chip marker detection approach, encompassing data collection, image preprocessing, and the Van-YOLOv8 detection model. Section 3 expounds the experimental details, covering evaluation metrics, validation of the effectiveness of data preprocessing, ablation experiments, and comparisons with other methods. Section 4 concludes the paper and provides future prospects.

    2 Experimental Setup and Methodology

    2.1 Data Collection

As depicted in Fig. 1, the experimental platform primarily consists of an industrial camera, a light source, a vibrating disk, a fiber optic sensor, and a feed box. Chips are transported along the track of the vibrating disk and, once detected by the fiber optic sensor, trigger the industrial camera to capture an image. The indicator on the front side of the chip identifies the position of the first pin, so accurate detection of this circular mark is essential. Therefore, the YOLOv8 network is used to detect the position of the chip mark. When the indicator is positioned in the lower-left corner, the chip continues to move forward and is placed into the tray by a robotic arm to proceed to the next inspection process. Otherwise, the chip is blown back to the vibrating disk through air holes. Detection continues until all the chips on the vibrating disk have been checked. The company requires an image processing speed of 200 pcs/min (a budget of 300 ms per chip), and to improve the speed of image capture and transmission, the image resolution is set to 304 pixels × 168 pixels.

    Figure 1: Experimental platform for chip symbol detection

    2.2 Image Preprocessing

In practical applications, low-resolution images may hinder deep learning models from effectively capturing and identifying critical features, thereby impacting the ultimate detection performance. To surmount this challenge, the paper introduces SRGANext technology, significantly enhancing the quality and clarity of chip marker images. The specific process is illustrated in Fig. 2, where a 128 × 128 block is extracted from the image's bottom-left corner and the circular marker is magnified. If the template matches the marker in this region, it indicates correct positioning. The YOLOv8 model uses the Letterbox [19] function to process images to fit the model's input size. This function maintains the original aspect ratio of the image by adding padding on one or both sides as necessary to reach the input dimensions. This method introduces additional non-informative areas, reducing the effective resolution. Therefore, to improve recognition accuracy, this paper first enlarges the image to 512 × 512 before inputting it into the YOLOv8 model. To enhance the clarity of the enlarged image, a super-resolution magnification method based on the SRGANext network is used. The images are then annotated to construct a training set for network training, followed by chip detection.
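To make the trade-off concrete, below is a minimal Python/OpenCV sketch of a YOLO-style letterbox (an illustration, not the paper's implementation). A 304 × 168 capture letterboxed to 512 × 512 leaves most of the canvas as uninformative gray padding, which is exactly what motivates super-resolving the image to the input size instead:

```python
import cv2
import numpy as np

def letterbox(img: np.ndarray, new_size: int = 512, pad_value: int = 114) -> np.ndarray:
    """Resize while preserving aspect ratio, padding the remainder (YOLO-style)."""
    h, w = img.shape[:2]
    scale = new_size / max(h, w)
    resized = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    rh, rw = resized.shape[:2]
    top, left = (new_size - rh) // 2, (new_size - rw) // 2
    canvas = np.full((new_size, new_size, 3), pad_value, dtype=img.dtype)
    canvas[top:top + rh, left:left + rw] = resized
    return canvas

frame = np.zeros((168, 304, 3), dtype=np.uint8)  # stand-in for a captured chip image
padded = letterbox(frame)
print(padded.shape)  # (512, 512, 3); roughly 45% of the canvas is padding
```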

The SRGANext architecture represents an enhanced iteration of the Super-Resolution Generative Adversarial Network (SRGAN) [20], with structural improvements facilitated by the Convnext [21] network. It comprises a generative network and a discriminative network. The framework takes in low-resolution images with three channels, and the generator network reconstructs high-resolution images from these. The discriminator serves as a tool to help the generator produce better quality images, and it guides the generator only during the training phase. In the inference stage, the generator independently reconstructs high-resolution images. The specific structure is illustrated in Fig. 3.

    Figure 2: Overall framework for data preprocessing

    Figure 3: SRGANext network architecture

The generative module in the SRGANext architecture is derived from the Convnext network, incorporating internal adjustments in channel quantity, as illustrated in Fig. 3a. The generator network sequentially passes through a stem module, four Stage modules, and an upsample module. Each Stage module contains a number of SRGANext Block modules in the ratio 3:3:9:3. As shown in Fig. 3b, the SRGANext Block is a residual module that includes DW Conv and PW Conv for adjusting the channel count. Additionally, a LayerNorm component is employed for channel-wise normalization, thereby diminishing the model's intricacy and lessening computational demands.
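As the paper does not list the layer code, the following is a hedged PyTorch sketch of a ConvNeXt-style residual block matching this description (a depthwise convolution, channel-wise LayerNorm, and pointwise convolutions that expand and restore the channel count); the 7 × 7 depthwise kernel and 4× expansion ratio are assumptions carried over from ConvNeXt:

```python
import torch
import torch.nn as nn

class SRGANextBlock(nn.Module):
    """Residual block in the ConvNeXt style: depthwise conv, channel-wise
    LayerNorm, then two pointwise convs that expand and restore channels."""
    def __init__(self, dim: int, expand: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)                # normalizes over channels
        self.pwconv1 = nn.Linear(dim, expand * dim)  # pointwise = 1x1 conv
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expand * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)   # NCHW -> NHWC for channel-wise LayerNorm
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)   # back to NCHW
        return shortcut + x

print(SRGANextBlock(64)(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```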

The discriminative component within the SRGANext architecture draws on the SRGANext Block as its foundational module. It incorporates Depthwise Convolution modules that substantially decrease the parameter count and diminish the interdependence among channels, thereby expediting model training, as illustrated in Fig. 3c. Unlike the generator, the discriminator network performs downsampling, reducing the feature map size to half of its original at each step. Ultimately, global average pooling reduces the feature map size to 1 × 1, bringing the reconstructed representation closer to the actual image. Fig. 4 shows the image reconstruction effect of the SRGANext network after 100 training cycles, demonstrating a notable improvement in the resolution of the reconstructed images.

    Figure 4: SRGANext network processing effect

In this study, inputting low-resolution chip marker images into the SRGANext network effectively increases pixel density, thereby enhancing image clarity and detail. This super-resolution enhancement technology not only helps compensate for the information loss that low resolution causes in deep learning models but also strengthens the model's accuracy in chip marker detection tasks.

    2.3 Van-YOLOv8 Detection Model

The challenge of detecting small and heavily worn markings on recycled chips gives rise to false positives and missed detections, underscoring the pressing need for enhanced detection precision. At the same time, prevalent target identification methodologies suffer from high complexity and considerable computational demands, rendering the deployment of algorithms on edge devices notably challenging. Therefore, this paper proposes a lightweight YOLOv8s network structure, namely Van-YOLOv8, as illustrated in Fig. 5, to address these issues.

The primary detection process involves three key steps: image preprocessing, model refinement, and model testing. In the first step, low-resolution images undergo super-resolution enlargement to reconstruct a high-quality dataset. After selecting specific images, the circular markings within them are annotated using the LabelImg annotation software. Notably, some heavily worn chips that fail to exhibit a complete marking even after preprocessing are manually identified, with those meeting the circular marking criteria deemed qualified. In the second step, the categorized images, both preprocessed and in their original state, are used for the training and validation procedures and input into the Van-YOLOv8 model for training. The detection framework consists of a simplified VanillaNet, a Hybrid Attention Mechanism (HAM) element, and the YOLOv8s convolutional neural architecture component. The VanillaNet significantly reduces the model's volume, thus lowering computational resource demands. The backbone network, augmented with the HAM module at the bottom, enhances feature extraction capabilities. The input is processed through the Neck network and detection head, ultimately yielding predicted bounding box coordinates and class labels for the targets. In the third step, the trained model is evaluated on the test set, allowing an analysis of its detection performance.

    Figure 5: Topology of the Van-YOLOv8 model

    2.3.1 Baseline-YOLOv8s Network

In this manuscript, we utilize the single-stage detection algorithm YOLOv8s as the reference model. As illustrated in Fig. 6, this framework comprises three fundamental elements: the Backbone architecture, the Neck module, and the Head component. In the Backbone, in contrast to YOLOv5, YOLOv8s adopts a more lightweight C2f module in lieu of the C3 module. In the Neck network, YOLOv8s omits the 1 × 1 convolutional downsampling unit observed in YOLOv5 and substitutes the C3 with a C2f. In the Head network, YOLOv8s utilizes a decoupled head configuration, segregating the tasks of classification and regression, and shifts from Anchor-Based to Anchor-Free.
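For reference, a hedged PyTorch sketch of the C2f idea follows (based on the publicly documented YOLOv8 design, not this paper's code): the input is split in two, a chain of residual bottlenecks processes one branch, and every intermediate output is concatenated before a final 1 × 1 convolution, which is what makes C2f lighter yet gradient-richer than C3:

```python
import torch
import torch.nn as nn

def conv_bn_silu(c_in: int, c_out: int, k: int = 1) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(c_out),
        nn.SiLU(),
    )

class C2f(nn.Module):
    """Split the input in two, run residual bottlenecks on one branch, and
    concatenate every intermediate output before a final 1x1 fusion conv."""
    def __init__(self, c1: int, c2: int, n: int = 1):
        super().__init__()
        self.c = c2 // 2
        self.cv1 = conv_bn_silu(c1, 2 * self.c, 1)
        self.bottlenecks = nn.ModuleList(
            nn.Sequential(conv_bn_silu(self.c, self.c, 3),
                          conv_bn_silu(self.c, self.c, 3))
            for _ in range(n)
        )
        self.cv2 = conv_bn_silu((2 + n) * self.c, c2, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = list(self.cv1(x).chunk(2, dim=1))
        for m in self.bottlenecks:
            y.append(y[-1] + m(y[-1]))   # residual bottleneck on the newest branch
        return self.cv2(torch.cat(y, dim=1))

print(C2f(64, 64, n=2)(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```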

    2.3.2 Integrated VanillaNet Minimalist Network

Considering the limited computational resources typically found in endpoint devices within conventional enterprises, deploying complex chip marker detection models that demand significant computing power is often infeasible. In response to this challenge, this paper introduces a streamlined neural network module, VanillaNet, into the backbone feature extraction network of the baseline model YOLOv8s. In contrast to intricate residual and attention modules, VanillaNet comprises basic convolutional and pooling layers, eliminating complex connections and skip connections. Such a design streamlines the network structure, significantly reducing the model's volume and parameter count, and consequently lowering computational intricacy.

    Figure 6: YOLOv8s network structure diagram

The architecture of VanillaNet is depicted in Fig. 7 (using a 6-layer structure as an example), and it mainly consists of three parts: a backbone block, which converts the input image from 3 channels to multiple channels and performs downsampling; a main body that extracts useful information; and a densely connected layer for generating classification results. For the backbone block, a 4 × 4 × 3 × C convolution layer with a stride of 4 downsamples the original 3-channel image into a feature map with C channels. In the primary segments (stage 1, stage 2, and stage 3), max-pooling layers with a stride of 2 reduce the size of the feature map and double the number of channels from the preceding layer. In stage 4, an average pooling operation is employed without augmenting the channel quantity. Finally, the fully connected layer outputs the classification result.
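A minimal PyTorch skeleton of this 6-layer layout is sketched below; the base channel width, plain ReLU activations, and BatchNorm placement are illustrative assumptions (the paper pairs the convolutions with the SIAF activation described next):

```python
import torch
import torch.nn as nn

class VanillaNet6(nn.Module):
    """Skeleton of the 6-layer VanillaNet described above: a 4x4/stride-4 stem,
    three stages that halve resolution and double channels via max-pooling,
    one stage with average pooling, then a fully connected classifier."""
    def __init__(self, c: int = 128, num_classes: int = 1000):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, c, 4, stride=4),
                                  nn.BatchNorm2d(c), nn.ReLU())
        dims = [c, 2 * c, 4 * c, 8 * c]
        stages = []
        for i in range(3):  # stages 1-3: 1x1 conv, then stride-2 max-pool doubles channels
            stages += [nn.Conv2d(dims[i], dims[i + 1], 1), nn.BatchNorm2d(dims[i + 1]),
                       nn.ReLU(), nn.MaxPool2d(2)]
        # stage 4: average pooling without increasing the channel count
        stages += [nn.Conv2d(dims[3], dims[3], 1), nn.BatchNorm2d(dims[3]),
                   nn.ReLU(), nn.AvgPool2d(2)]
        self.body = nn.Sequential(*stages)
        self.head = nn.Linear(dims[3], num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.body(self.stem(x))
        x = x.mean(dim=(2, 3))          # global average pool
        return self.head(x)

print(VanillaNet6()(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```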

    Figure 7: VanillaNet network structure

To preserve feature map information at each layer while minimizing computational costs, we opted for 1 × 1 convolutional kernels. Following each 1 × 1 convolutional layer, we applied the Series Informed Activation Function (SIAF), expressed mathematically as shown in Eq. (1). This choice effectively activates the neural network's response, rendering the network more flexible and responsive during information propagation. To further simplify the training process, we introduced batch normalization after each convolutional layer.

$$A_s(x) = \sum_{i=1}^{n} a_i A(x + b_i) \tag{1}$$

Here, n denotes the number of cascaded activation functions, A(·) is the base activation, and a_i, b_i represent the scaling and offset of each activation, preventing mere accumulation.
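A direct, hedged PyTorch rendering of Eq. (1) follows, with ReLU standing in as the base activation A and learnable a_i, b_i; the choice of base activation and n = 4 are assumptions for illustration:

```python
import torch
import torch.nn as nn

class SeriesInformedActivation(nn.Module):
    """Eq. (1) as a module: n shifted-and-scaled copies of a base activation
    summed together, A_s(x) = sum_i a_i * A(x + b_i)."""
    def __init__(self, n: int = 4):
        super().__init__()
        self.a = nn.Parameter(torch.ones(n))   # per-branch scaling a_i
        self.b = nn.Parameter(torch.zeros(n))  # per-branch offset b_i
        self.base = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # broadcast x against the n (a_i, b_i) pairs and sum the branches
        branches = self.base(x.unsqueeze(-1) + self.b) * self.a
        return branches.sum(dim=-1)

act = SeriesInformedActivation()
print(act(torch.randn(2, 8, 16, 16)).shape)  # torch.Size([2, 8, 16, 16])
```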

    2.3.3 Introduction of Hybrid Attention Mechanisms

To concentrate on the regions of the image that contain circular marks and to enhance the network's feature extraction capabilities, a HAM module is utilized. HAM integrates both channel and spatial attention mechanisms to retrieve crucial details from the channel and spatial properties of the input images. In contrast to traditional attention mechanisms, this approach is more flexible and adaptive, offering a balance between adaptability and computational efficiency.

Part A: Channel Attention (CAM). Channel attention chiefly concentrates on modifying the weights of individual channels at each spatial location, distributing weights over the convolutional feature maps. Following the convolution of the original image, global average pooling is executed to derive a vector with dimensions [C, 1, 1]. The resulting tensor undergoes convolution and activation to produce the channel weight vector s. As depicted in Fig. 8, the [C, H, W] input feature X undergoes global average pooling (GAP) for dimension reduction and information condensation. Interactions among neighboring channels are captured by considering each channel alongside its k surrounding channels. Efficient prediction of channel attention is achieved through a Conv1D convolution with a kernel of size k, where the kernel size is proportionate to the channel dimension C. The input feature map X is multiplied by the channel weight vector s to produce the output feature map Y. The equation is expressed as [22]:

$$k = \left|\frac{\log_2 C}{\gamma} + \frac{b}{\gamma}\right|_{odd}, \qquad Y = \sigma\left(\mathrm{Conv1D}_k\left(\mathrm{GAP}(X)\right)\right) \cdot X$$

Here, σ represents the Sigmoid activation operation. The size of the convolutional kernel k adjusts proportionally to the channel dimension, with the model parameters γ = 2 and b = 1. The subscript |·|_odd specifies that k must be a positive odd integer. The channel attention strategy adopted in this investigation uses a local inter-channel communication approach, maintaining optimal effectiveness while diminishing model complexity.

    Figure 8: CAM diagram
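The mechanism described above matches an ECA-style channel attention; under that assumption, a minimal PyTorch sketch follows (GAP, a Conv1D across channels with an adaptive kernel size, a sigmoid, then channel-wise reweighting):

```python
import math
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """ECA-style channel attention: the 1-D kernel size k grows with the
    channel count C (gamma=2, b=1), and is forced to be a positive odd integer."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        k = int(abs(math.log2(channels) / gamma + b / gamma))
        k = k if k % 2 else k + 1                       # force k odd
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = x.mean(dim=(2, 3))                     # GAP -> (N, C)
        s = self.conv(s.unsqueeze(1)).squeeze(1)   # 1-D conv across channels
        s = torch.sigmoid(s)                       # per-channel weights
        return x * s[:, :, None, None]             # reweight the feature map

print(ChannelAttention(64)(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```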

Part B: Spatial Attention (SAM). Spatial attention adjusts the importance of different spatial positions within each channel. The spatial attention mechanism filters out less relevant portions of the background in an image and directs attention toward regions of significance. In Fig. 9, the feature maps are processed through Max Pooling (MaxPool) and Mean Pooling (AvgPool) to produce two arrays with dimensions [1, H, W], aggregating every channel at the same spatial point. The pair of maps are then merged into a unified representation with dimensions [2, H, W], which is subsequently transformed to [1, H, W] through a convolutional layer. The resulting spatial weights modulate the original [C, H, W] feature map for refinement. By backpropagating the effective receptive field to the initial image, the network can dynamically concentrate on crucial portions [23].

    Figure 9: SAM diagram

Here, f^{n×n} denotes a convolutional operation with a kernel size of n × n. X represents the input feature map, while Y indicates the resulting feature map. G(X) denotes the partitioning of the input feature map into a lattice of points.
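Under the assumption that this follows the familiar CBAM-style formulation, a short PyTorch sketch of the spatial branch is given below (the 7 × 7 kernel size is an illustrative default):

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: channel-wise max- and mean-pooling give two
    [1, H, W] maps, which are concatenated, convolved to a single map, passed
    through a sigmoid, and used to reweight every spatial position."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        max_map, _ = x.max(dim=1, keepdim=True)   # [N, 1, H, W]
        avg_map = x.mean(dim=1, keepdim=True)     # [N, 1, H, W]
        attn = torch.sigmoid(self.conv(torch.cat([max_map, avg_map], dim=1)))
        return x * attn                           # broadcast over channels

print(SpatialAttention()(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```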

    3 Experimental Study

The Van-YOLOv8 network is deployed and trained on the TensorRT (version 8.4) framework, using FP16 optimization mode. The experiments employed an NVIDIA GeForce RTX 3060 GPU and an Intel Core i7-12700H CPU with a base frequency of 2.70 GHz and 16 GB of RAM; the operating system was Windows 11. The programming environment was PyCharm 2021.3.1, and the CUDA version was 11.8. The input image dimensions were set to 512 pixels × 512 pixels, the number of iterations (epochs) to 300, the training batch size to 8, and the number of threads (num workers) to 4. During model training, weights pre-trained on the COCO dataset were used. Additionally, 600 images generated by the SRGANext network were used as training samples for the detection model, with a subset of 150 original images employed for testing and validation.
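As a hypothetical illustration of this configuration (not the authors' released code), the stated hyperparameters map onto an Ultralytics-style training call as follows; the file names van-yolov8.yaml and chip_marks.yaml are assumed placeholders:

```python
from ultralytics import YOLO

# Hypothetical files: "van-yolov8.yaml" (a YOLOv8s variant with VanillaNet + HAM)
# and "chip_marks.yaml" (paths and classes for the recycled-chip dataset).
model = YOLO("van-yolov8.yaml").load("yolov8s.pt")  # COCO weights as pre-training
model.train(
    data="chip_marks.yaml",
    imgsz=512,    # 512 x 512 inputs after SRGANext enlargement
    epochs=300,   # iteration count from the paper
    batch=8,      # training batch size
    workers=4,    # number of data-loading threads
)
```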

    3.1 Evaluation Metrics

In this experiment, we introduce evaluation criteria for gauging the efficacy of image reconstruction and object detection. To evaluate the fidelity of reconstructed images, this study uses two metrics: Structural SIMilarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR).

$$\mathrm{SSIM}(x, y) = l(x, y) \cdot c(x, y) \cdot s(x, y), \qquad \mathrm{PSNR} = 10 \log_{10} \frac{\mathrm{MAX}^2}{\mathrm{MSE}}$$

Here, l, c, and s denote the similarity in luminance, contrast, and structure, respectively. MSE stands for Mean Square Error, and MAX is the maximum possible pixel value.
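A small Python helper implementing the PSNR definition above is shown for reference; for SSIM, an off-the-shelf implementation such as skimage.metrics.structural_similarity is typically used:

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR = 10 * log10(MAX^2 / MSE); higher values mean a closer reconstruction."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(max_val ** 2 / mse))
```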

For measuring the performance of object detection, this paper introduces five evaluation metrics: detection precision (Precision), mean Average Precision (mAP), Frames Per Second (FPS), model parameter count (Params), and computational load (GFLOPs).

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}, \qquad \mathrm{mAP} = \frac{1}{k} \sum_{i=1}^{k} AP(i), \qquad \mathrm{FPS} = \frac{1}{ElapsedTime}$$

Here, TP refers to correctly identified positive instances, FN to incorrectly missed positive instances, and FP to incorrectly identified negative instances. k symbolizes the number of categories, and AP(i) denotes the Average Precision of the i-th category. ElapsedTime encompasses the total duration of image preprocessing, inference, and post-processing. Floating Point Operations (FLOPs) correspond to the number of floating-point operations performed, where C_0 signifies the count of output channels, C_i the number of input channels, k_w the convolution kernel width, and k_h the convolution kernel height.
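For concreteness, the detection metrics above reduce to a few lines of Python; these helpers are illustrative, with the AP(i) values assumed to come from the detector's evaluation pipeline:

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of predicted positives that are correct: TP / (TP + FP)."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Fraction of actual positives recovered: TP / (TP + FN)."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def mean_average_precision(ap_per_class: list) -> float:
    """mAP = (1/k) * sum of AP(i) over the k categories."""
    return sum(ap_per_class) / len(ap_per_class)

def fps(elapsed_time_s: float, num_images: int = 1) -> float:
    """FPS over the full pipeline (preprocessing + inference + post-processing)."""
    return num_images / elapsed_time_s
```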

    3.2 Effectiveness of Image Preprocessing

In this part of our study, we examine how well the SRGAN and SRGANext models reconstruct images across various datasets. We tested these models on several datasets: DIV2K, ImageNet100, COCO2017, Set5, and Set14. According to Table 1, the SRGANext framework uniformly surpasses the SRGAN framework in both SSIM and PSNR across all these datasets. These metrics are important for evaluating the quality of image reconstruction.

    Table 1: Comparison of SRGAN and SRGANext model performance

Moreover, this study provides a comprehensive comparison between images preprocessed with the SRGANext network and the original images in the Van-YOLOv8 model for chip datasets. The preprocessed images exhibit sharper and more accurate features in chip marker detection tasks, improving the model's ability to recognize subtle markings. Table 2 clearly illustrates the substantial improvements of preprocessed images over original images across various performance metrics, emphasizing the notable role of SRGANext preprocessing in enhancing chip marker detection effectiveness.

    Table 2: Influence of preprocessed and original images on detection performance

    3.3 Ablation Experiment

In this study, YOLOv8s is employed as the foundational model, and the VanillaNet element is integrated into the core feature extraction network to streamline the model's intricacy. Simultaneously, a HAM is incorporated to focus on local information and enhance feature extraction capabilities. To affirm the efficacy of the enhancements implemented on the foundational model, ablation studies were conducted from two perspectives:

1) Starting from the foundational model, each enhancement module was introduced one at a time to validate the influence of individual modules on model identification accuracy, number of parameters, detection speed, etc.

2) In the final model (Van-YOLOv8), each enhancement module was systematically excluded one by one (except for the VanillaNet module, which was always retained) to evaluate the influence of specific improvement modules on the final model's performance. The outcomes of the experiments are presented in Table 3.

    Table 3: Ablation experimental study

The experimental findings indicate that by exclusively integrating the VanillaNet component within the backbone feature extraction network, the parameter count decreased from 23.9 M in the baseline to 11.1 M, a reduction of 12.8 M. VanillaNet uses a sequence of convolution-pooling structures to extract features, without direct connections between different blocks. The feature map is continuously downsampled through convolution and pooling layers into subsequent blocks, avoiding branched structures and thereby saving a significant amount of computation. Hence, incorporating the simplified VanillaNet module into the backbone for feature extraction efficiently diminishes the model's intricacy and enhances inference speed. Furthermore, integrating the HAM component within the structure explicitly establishes the interrelation among image channels and spatial dimensions. It aggregates information from multiple convolution kernels in a nonlinear manner, focusing more quickly on local information. The HAM module, composed of SAM and CAM, only slightly increases the parameters and computational cost while enhancing the model's inference speed. The final model proposed in this manuscript, incorporating both the VanillaNet and HAM modules, achieves a detection accuracy of 90.2% and an mAP of 91.9%, increases of 4.8% and 4.4%, respectively. Moreover, the final model maintains a lower number of parameters and computational load, only 54.8% and 42.0% of the baseline, with an FPS increase of 11.3 fps over the baseline. In summary, the lightweight chip mark detection model presented in this study efficiently simplifies the model's structure while preserving robust detection accuracy and real-time efficiency.

    3.4 Comparison with Current Advanced Algorithms

In our study, we compared the performance of various advanced object detection algorithms with our Van-YOLOv8 network. This encompassed the two-stage algorithm Faster R-CNN and various single-stage algorithms: SSD, YOLOv4-Tiny [24], YOLOv5s, YOLOv7s [25], and YOLOv8s. The results of these comparative tests are condensed in Table 4.

    Table 4: Experimental comparisons

According to the results in Table 4, the framework in this investigation demonstrated superior performance in detection accuracy, mAP, model parameter count, computational load, and FPS. Compared to the baseline YOLOv8s, it increased detection accuracy by 4.8% and mAP by 4.4%, diminished the number of model parameters by 10.8 M, and lowered the computational load from 50.7 GFLOPs to 21.3 GFLOPs, a decrease of 29.4 GFLOPs, while FPS increased by 11.3 fps. In comparison with YOLOv5s, another one-stage object detection algorithm, our approach showed even more significant advantages in chip mark detection, with notable improvements in detection accuracy and mAP. The model's parameter count was reduced by 14.8 M, to only 46.9% of YOLOv5s, and the computational load decreased by 43.8 GFLOPs, to only 32.7% of YOLOv5s, with an increase of 24.1 fps. Furthermore, Van-YOLOv8 reduces the parameter count by 13.1 M compared to YOLOv7s, 3.2 M compared to YOLOv4-tiny, 17.0 M compared to SSD, and 44.7 M compared to Faster R-CNN. The frames per second (FPS) are enhanced by 30.7 fps over YOLOv7s, 7.2 fps over YOLOv4-tiny, 33.5 fps over SSD, and 33.8 fps over Faster R-CNN. Additionally, Fig. 10 presents the detection accuracy and mAP curves of Van-YOLOv8 training. The curve of detection accuracy over iterations indicates that the Van-YOLOv8 model quickly improves in target detection accuracy and stabilizes at a result close to 1. To more intuitively display the differences between the models, examples of the detection process are illustrated in Fig. 11.

    Figure 10: Van-YOLOv8 model training results

Through detailed results and comparative analysis, the Van-YOLOv8 model significantly curtails computational costs while delivering enhanced detection precision compared to other cutting-edge algorithms. This not only underscores the outstanding performance of Van-YOLOv8 in object detection tasks but also indicates its effective management of computational resources while enhancing accuracy. This balance highlights the unique design and performance advantages of the Van-YOLOv8 model.

Figure 11: Detection results of different models. The boxes indicate the locations of detected marks, and the numbers signify the model's confidence level in the detected objects

    4 Conclusion

The Van-YOLOv8 model, leveraging SRGANext to process the chip dataset and integrating VanillaNet with a hybrid attention mechanism, demonstrates outstanding detection accuracy. Simultaneously, Van-YOLOv8 achieves a significant reduction in computational cost, striking a balance between efficiency and performance. This is particularly crucial for implementing target detection tasks in resource-constrained environments, offering a balanced solution that meets high accuracy requirements while effectively managing computational expense.

While Van-YOLOv8 excels in object detection tasks, its design is tailored specifically to recycled chip detection, and adaptability differences may exist for other types of object detection tasks. Additionally, Van-YOLOv8's performance is sensitive to certain key hyperparameters, which require careful tuning for optimal performance, increasing the difficulty of model optimization. In future work, it is recommended to improve the framework's versatility to ensure broader applicability in diverse object detection scenarios. Moreover, exploring automated methods for hyperparameter optimization, such as reinforcement learning or optimization-algorithm-based auto-tuning tools, can reduce the complexity of model tuning while enhancing performance stability and generalization.

Acknowledgement: We sincerely thank the Scientific Research Funding Project of the Liaoning Provincial Department of Education in 2021, the Comprehensive Reform Project of Undergraduate Education and Teaching in Liaoning in 2021, and the Graduate Innovation Fund of Dalian University of Technology for providing the necessary technical support for this research.

Funding Statement: This work was supported by the Liaoning Provincial Department of Education 2021 Annual Scientific Research Funding Program (Grant Numbers LJKZ0535, LJKZ0526), the 2021 Annual Comprehensive Reform of Undergraduate Education Teaching (Grant Numbers JGLX2021020, JCLX2021008), and the Graduate Innovation Fund of Dalian Polytechnic University (Grant Number 2023CXYJ13).

Author Contributions: Study conception and design: Honglei Wei, Meng Huang, Xianyi Zhai; data collection: Meng Huang; analysis and interpretation of results: Honglei Wei, Meng Huang; draft manuscript preparation: Meng Huang. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data cannot be made publicly available upon publication because they are owned by a third party and the terms of use prevent public distribution. The data that support the findings of this study are available upon reasonable request from the authors.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
