
    Method for detecting dragon fruit based on improved lightweight convolutional neural network

    2020-12-25 07:39:38

    Wang Jinpeng, Gao Kai, Jiang Hongzhe, Zhou Hongping


    (College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China)

    The real-time detection of dragon fruit in the natural environment is one of the necessary conditions for automated dragon fruit picking. This paper proposes the lightweight convolutional neural network YOLOv4-LITE. YOLOv4 integrates multiple optimization strategies, and its detection accuracy is 10% higher than that of the traditional YOLOv3. However, YOLOv4 requires a large amount of memory because of its complex backbone network and heavy computation, so it is not suitable for deployment on embedded devices for real-time detection. The MobileNet-v3 network was selected to replace CSPDarknet-53 as the YOLOv4 backbone network because it significantly improves detection speed. MobileNet-v3 extends depthwise separable convolution and introduces an attention mechanism, which reduces the computation of feature maps and speeds up their propagation through the network. In order to improve the detection accuracy of small targets, up-sampling was carried out at the 39th and 46th layers respectively. The 39th-layer feature map was combined with the feature map of the last bottleneck layer, and up-sampling was applied twice. The fused feature map used a 1×1 convolution to increase its dimension. Then, up-sampling was conducted on the 46th layer to fuse with the 11th-layer feature map, and the fused feature maps were used for multi-scale prediction. Convolution was performed three times to obtain a 52×52 feature map for the detection of small targets. The 51st-layer feature map was combined with the 44th-layer feature map and convolved three times, yielding a 26×26 feature map for the detection of medium-sized targets. The 59th-layer feature map was combined with the 39th-layer feature map and convolved three times, yielding a 13×13 feature map for the detection of large targets.
    A total of 2 513 images of dragon fruit under different occlusion conditions were used as the dataset for training. Results showed that the proposed lightweight YOLOv4-LITE network achieved an Average Precision (AP) of 96.48%, an F1 score (the harmonic mean of precision and recall) of 95%, an average Intersection over Union (IoU) of 81.09%, and a model size of only 2.7 MB. Meanwhile, a comparison of different backbone networks showed that MobileNet-v3 greatly improved detection speed, reducing the average detection time by 160.32 ms compared with CSPDarknet-53. YOLOv4-LITE took only 2.28 ms to detect a 1 200×900 resolution image on the GPU. The YOLOv4-LITE network can effectively identify dragon fruit in the natural environment and has strong robustness. Compared with existing target detection algorithms, the detection speed of YOLOv4-LITE was approximately 9.5 times that of SSD-300 and 14.3 times that of Faster-RCNN. The influence of multi-scale prediction on model performance was further analyzed by fusing four feature maps of different scales for prediction. The AP value improved by 0.81% when four scales were used, but the average detection time increased by 10.33 ms and the model weight increased by 7.4 MB. The overall results show that the lightweight YOLOv4-LITE proposed in this paper has significant advantages in terms of detection speed, detection accuracy, and model size, and can be applied to the detection of dragon fruit in the natural environment.

    models; deep learning; fruit detection; convolutional neural network; YOLOv4-LITE; real-time detection

    0 Introduction

    Recently, object detection technology has undergone continual innovation[1]. Target detection technology based on convolutional neural networks is divided into two main categories: one-stage methods such as YOLO[2] and SSD[3], and two-stage methods such as RCNN[4], Fast-RCNN[5], and Faster-RCNN[6]. Convolutional neural networks have improved recognition and classification capability because they address the problem of incomplete information extraction caused by hand-designed features in traditional machine vision. At present, convolutional neural networks are widely applied in fruit recognition[7], maturity detection[8], hyperspectral analysis[9-10], and crop growth prediction[11] in intelligent agriculture. Automatic harvesting in orchards is a development trend of intelligent agriculture. Some fruits grow in poor environments where picking, done mainly by hand, carries a higher risk. Therefore, to improve labour productivity and reduce production costs, it is necessary to realize automatic fruit recognition and intelligent harvesting.

    The detection of fruit in a natural environment provides theoretical support for the development of intelligent picking[12]. In an unstructured natural environment, several factors affect recognition, such as light changes, leaf occlusion, weather conditions, and fruit growth. Certain clustered fruits, such as citrus[13], grapes[14], and kiwifruit[15], grow at high density with severe occlusion, which makes it difficult to apply traditional image processing technology to these scenes. At the early stage, artificially extracted features such as fruit colour, shape, and outline were used to identify and locate fruit in the natural environment. However, the accuracy of such extraction methods is generally low due to large errors[16]. With the development of deep learning and convolutional neural networks, several researchers have proposed fruit recognition methods[17]. Peng et al.[18] improved the Faster-RCNN algorithm by using the ResNet50 network instead of the VGG16 network to extract weed image features, enriching the output feature information and improving detection accuracy. Liang et al.[19] proposed a method for detecting litchi in the natural environment at night using the YOLOv3 network model that achieved an Average Precision (AP) value of 96.52%. Zabawa et al.[20] proposed an automatic framework based on image analysis for grape recognition that uses a convolutional neural network to perform semantic segmentation and detect single berries in images. They used a connected-component algorithm to count the berries, achieving an accuracy of up to 94% and solving the identification problems caused by clustered grapes.

    Sa et al.[21] used Faster-RCNN to carry out fruit detection through transfer learning. The Faster-RCNN algorithm is an advanced two-stage method that first generates region proposals with a Region Proposal Network (RPN) and then detects the target area. Faster-RCNN's detection speed is slow, and it cannot produce real-time results with high-resolution images. The efficient single-stage methods YOLO and SSD achieve better results than Faster-RCNN in both detection speed and detection accuracy. Tian et al.[22] improved the dense network DenseNet and applied it to process low-resolution feature maps in the YOLO-V3 network to detect densely grown apples in the natural environment, achieving an F1 score (the harmonic mean of precision and recall) of 81.7%. Koirala et al.[23] proposed a network called YOLO-Mango for fruit detection and load estimation that achieves an F1 score of 96.8%. At present, several advanced feature extraction networks are available, such as VGG16, ResNet, Darknet-19, and Darknet-53. However, when a harvesting robot is in operation, visual information is critical, which requires a low-latency, high-precision model. The vision system of a harvesting robot is typically located in an embedded system or mobile device, which imposes strict model-size and memory-storage requirements; thus, calculations cannot be prohibitively large. Existing target detection models have deep networks. As a network deepens, its accuracy gradually increases, but its detection speed decreases, which makes it challenging to meet the real-time requirements of harvesting equipment. Therefore, a deep network must be pruned to realize a lightweight model and further improve detection speed while ensuring detection accuracy. Lu et al.[24] proposed an improved YOLOv3 network for citrus recognition by optimizing the YOLOv3 backbone network and using MobileNet-v2 to extract features, reducing the number of network calculations. They reported a detection speed of 246 frames/s and achieved an AP of 91.13%. Shi et al.[25] optimized the YOLOv3-Tiny algorithm and, through their channel design and space mask, proposed an attribute partition method that removes part of the convolution kernels in the convolutional network. Their model required 68.7% less calculation and achieved 0.4% higher accuracy than the previous method.

    At present, there are no relevant references on the recognition of dragon fruit. An investigation of the growth of dragon fruit in the natural environment revealed that its growth environment is relatively complicated. Dragon fruit branches and leaves are long and cover a large surface at the mature stage, and fruit occlusion is severe, which makes the fruit challenging to identify. Traditional image processing methods are not well suited to dragon fruit recognition. Therefore, the present study uses deep learning, namely the advanced one-stage YOLOv4[26] algorithm, to detect dragon fruit. Based on this algorithm, a lightweight YOLOv4-LITE algorithm is proposed by adjusting the backbone network to integrate MobileNet-v3[29]. This paper is organized as follows. First, the collection of dragon fruit images under different natural environment scenarios for network training is described. Then, several existing networks are analyzed and compared. Finally, the advantages and disadvantages of YOLOv4-LITE are objectively evaluated.

    1 Materials and Methods

    1.1 Data collection

    2 000 dragon fruit images in orchards and greenhouses were obtained by a web crawler, which ensured the diversity of the dragon fruit dataset. From these, 564 dragon fruit images covering occlusion, cloudy, sunny, backlighting, and intense light conditions were selected to form the dataset.

    1.2 Dataset preparation

    The dataset was enhanced with OpenCV. The original images were rotated at a randomly chosen angle from -180° to 180°. The dataset was randomly flipped, and the data were enhanced by clipping and scaling, which prevented any overfitting caused by a single sample. The dataset was expanded by applying image processing techniques such as histogram equalization and adjusting saturation, hue, and brightness. The above methods were used to expand each image in the dataset, as shown in Fig.1.
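The augmentation steps above can be sketched as follows. This is a minimal NumPy-only approximation for illustration: the paper used OpenCV (where an arbitrary -180° to 180° rotation would go through `cv2.getRotationMatrix2D` and `cv2.warpAffine`), so the function name, the 90°-step rotation, and the jitter ranges here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """Random rotation, flips, and brightness/contrast jitter (sketch).

    Arbitrary-angle rotation is approximated by 90-degree steps to keep
    the sketch dependency-free; OpenCV's warpAffine would handle any angle.
    """
    out = np.rot90(image, k=int(rng.integers(0, 4)))  # coarse rotation
    if rng.random() < 0.5:
        out = out[:, ::-1]                            # horizontal flip
    if rng.random() < 0.5:
        out = out[::-1, :]                            # vertical flip
    gain = rng.uniform(0.8, 1.2)                      # contrast jitter
    bias = rng.uniform(-20, 20)                       # brightness jitter
    out = np.clip(out.astype(np.float32) * gain + bias, 0, 255)
    return out.astype(np.uint8)

img = rng.integers(0, 256, size=(90, 120, 3), dtype=np.uint8)
aug = augment(img)
```

Each source image would be passed through `augment` several times to expand the dataset, with histogram equalization and hue/saturation shifts applied analogously on the HSV representation.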

    The dataset was expanded through data enhancement, and 2 513 images were selected as the dataset for this paper. Finally, the LabelImg tool was used to label the dragon fruit in the images, and the PASCAL VOC dataset format was used for storage. The uniformly distributed dataset was divided into a training set (70% of the images), a validation set (10%), and a test set (20%). There are 1 036 occlusion images, 738 images taken on cloudy and sunny days, and 739 images with backlighting and intense light. The number of images in each subset is shown in Table 1.

    Table 1 Number of images in the datasets

    Fig.1 Results of data augmentation

    1.3 YOLO convolutional neural network

    The YOLO network is an end-to-end rapid detection method that converts the target detection problem into a regression problem. Input image prediction can be achieved using the convolutional neural network structure of the YOLO model. The YOLOv1 model uses GoogLeNet as its backbone network[2]. The end of the network uses a fully connected layer to predict the target; however, the detection performance is poor for small and dense targets, and the recall rate is low. The YOLOv2 and YOLOv3 models use Darknet-19 and Darknet-53, respectively, as the backbone network for feature extraction. The YOLOv2 model borrows the RPN idea from Faster-RCNN, introduces anchor boxes, and deletes the fully connected layer and pooling layer of the original network[27]. The YOLOv3 model, which consists of convolutional and residual layers, improves detection accuracy[28]. With the YOLOv3 model, single-label prediction is extended to multi-label classification, and combined with the FPN method for fusion prediction of feature maps at different scales, the detection accuracy of small targets is improved.

    The YOLOv4 model integrates many existing algorithmic tricks, including Mosaic, CIoU, DropBlock, PANet, NMS, and Mish, and introduces an attention mechanism to enhance the feature map[26]. The detection accuracy of YOLOv4 is 10% higher than that of YOLOv3. YOLOv4 proposes a new backbone network, CSPDarknet-53, as shown in Fig.2. The Cross Stage Partial Residual Network (CSPResNet) is added to the CSPDarknet-53 backbone network. CSPResNet divides the feature map into two components, one that passes through the residual block and another that passes through the convolutional layer, and transfers the fused feature map to the next stage. The purpose of these components is to reduce the amount of calculation and achieve a richer feature representation. The gradient combination strengthens the learning ability of the backbone network and avoids the vanishing gradient problem caused by deep networks. At the end of CSPDarknet-53, Spatial Pyramid Pooling (SPP) is utilized to enlarge the receptive field and extract important feature information. The prediction layer in YOLOv3 adopts the Feature Pyramid Network (FPN) structure. FPN is mainly used to improve target detection by fusing high- and low-layer features, so that the final output features better represent the information of the input picture's various dimensions.

    Fig.2 YOLOv4 network structure

    However, in a deep network model, shallow feature information is crucial for semantic segmentation. FPN involves a top-down process; thus, the features of the shallow layers are transmitted through multiple layers, the loss of information is more serious, and detection accuracy decreases. YOLOv4 adds a bottom-up feature fusion layer called the Path Aggregation Network (PANet) after the FPN network. Each layer of the PANet network is composed of five convolutional layers, which retain more shallow feature information and make deep feature information more complete, significantly improving the detection of small targets.

    A 416×416 input image is taken as an illustrative example. After five downsamplings, the output of the backbone network is a 13×13 feature map. This map is upsampled twice and fused with the feature maps output by the residual layers; the fused feature maps input to the FPN network propagate from top to bottom and transmit shallow information to PANet at the bottom of the FPN. PANet fuses the feature maps in the FPN for prediction and outputs three scales: 13×13, 26×26, and 52×52. Large, medium, and small targets are detected using feature maps with different receptive fields.
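The scale arithmetic above can be checked directly: with a 416×416 input, the downsampling strides of 32, 16, and 8 produce the three prediction grids, and with 3 anchor boxes per grid cell (as in the anchor clustering of Sec. 2.2) the total number of predicted boxes follows.

```python
# Output grid sizes for the three YOLO prediction scales on a 416x416 input.
input_size = 416
strides = [32, 16, 8]                 # large-, medium-, small-target scales
grids = [input_size // s for s in strides]
print(grids)                          # [13, 26, 52]

# Each grid cell predicts 3 anchor boxes.
boxes = [g * g * 3 for g in grids]
print(boxes, sum(boxes))              # [507, 2028, 8112] 10647
```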

    1.4 YOLOv4-LITE lightweight neural network

    1.4.1 YOLOv4-LITE backbone network

    YOLOv4 improves detection accuracy but increases the calculation of the backbone network CSPDarknet-53 and has higher GPU memory and model storage requirements. On an embedded platform (Jetson Xavier NX), its video detection speed is only 25 frames/s. The lightweight YOLOv4-LITE network model for real-time detection was proposed to extract features by adjusting the backbone network to use Google's MobileNet, a mobile feature extraction network. MobileNet can realize real-time detection on embedded devices and avoid the insufficient memory and high latency caused by complicated models. MobileNet-v3[29] was obtained through neural architecture search. There are two versions of MobileNet-v3, Small and Large. The Large version is 1.4% more accurate than the Small version on the ImageNet classification task, but its detection speed is 10% lower. In order to ensure real-time dragon fruit detection, the Small version was used as the backbone network of YOLOv4-LITE. MobileNet-v3 combines the depthwise separable convolution of MobileNet-v1[30] with the inverted residual structure of MobileNet-v2[31], and adds an attention mechanism. MobileNet-v3 adds a squeeze-and-excitation structure to the bottleneck layer and reduces the expansion layer channels to 1/4, improving detection accuracy without increasing calculation time. In MobileNet-v2, a 1×1 convolution layer before average pooling increases the dimension of the feature map. In MobileNet-v3, however, the 7×7 feature map is first reduced to 1×1 by average pooling, and then its dimension is increased, reducing the amount of calculation by a factor of 49 and improving the speed of feature map computation.
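To make the saving from depthwise separable convolution concrete, the sketch below compares the parameter count of a standard 3×3 convolution with its depthwise-plus-pointwise factorization. The layer sizes are illustrative assumptions, not MobileNet-v3's actual configuration.

```python
def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    # k x k kernel over all input channels, for every output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    # one k x k kernel per input channel, then a 1x1 pointwise projection
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 128, 128)        # 147 456 parameters
dws = depthwise_separable_params(3, 128, 128)  # 17 536 parameters
print(std, dws, round(std / dws, 1))           # roughly 8x fewer parameters
```

The ratio approaches 1/c_out + 1/k², which is why the saving grows with kernel size and channel count.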

    1.4.2 YOLOv4-LITE prediction network

    YOLOv4 utilizes the same multi-scale prediction method as YOLOv3; however, YOLOv4 incorporates PANet at the prediction layer. In order to improve the detection accuracy of MobileNet-v3 on small targets, MobileNet-v3 was combined with PANet to realize multi-scale prediction. In this paper, upsampling feature-map fusion was performed at the 39th and 46th layers. Taking a 416×416 image as input, the 39th-layer feature map was combined with the feature map of the last bottleneck layer, and upsampling was applied twice. The fused feature map used a 1×1 convolution to increase its dimension. Then, upsampling was conducted on the 46th layer to fuse with the 11th-layer feature map. Convolution was performed three times to obtain the 52×52 feature map for the detection of small targets. The 51st-layer feature map was combined with the 44th-layer feature map, convolution was applied three times, and the 26×26 feature map was obtained for the detection of medium-sized targets. The 59th-layer feature map was combined with the 39th-layer feature map, convolution was applied three times, and the 13×13 feature map was obtained for the detection of large targets. The YOLOv4-LITE backbone network structure and parameters are shown in Fig.3.

    Note: The numbers in brackets are the image resolution, size/filters, and stride, respectively.

    2 Experiment

    2.1 Experimental platform

    The training and test environments of the model were the same. The experimental setup used in this paper consisted of the Ubuntu 18.04 operating system, the Darknet framework, an Intel Core i5-9600KF CPU (3.7 GHz, six cores), 16 GB of memory, an NVIDIA GeForce GTX 1660 Ti graphics card with 6 GB of video memory, and CUDA 10.2 with cuDNN 7.6 acceleration.

    2.2 Experimental parameters

    In order to compare different networks, the batch size, learning rate, momentum, iteration count, and initial weight parameters were kept the same. Considering the memory of the computer, the batch size was set to 64 and the learning rate to 0.001. The learning rate decreased to 0.000 1 after 35 000 iterations and to 0.000 01 after 40 000 iterations. Momentum was set to 0.95, decay to 0.000 5, and the number of iterations to 50 000.
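The step schedule described above can be written as a simple lookup. This is a sketch of the schedule's shape only, not the Darknet configuration syntax.

```python
def learning_rate(iteration: int) -> float:
    """Step decay: 0.001, then 0.0001 after 35 000 iterations,
    then 0.00001 after 40 000 iterations."""
    if iteration < 35_000:
        return 0.001
    if iteration < 40_000:
        return 0.0001
    return 0.00001

print(learning_rate(10_000), learning_rate(36_000), learning_rate(50_000))
# 0.001 0.0001 1e-05
```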

    Anchor boxes were clustered on the dragon fruit dataset using K-means clustering, yielding nine anchor boxes of different sizes: (19, 23), (34, 38), (57, 60), (68, 93), (115, 81), (94, 135), (127, 164), (185, 167), and (216, 265). These were evenly distributed across the three differently scaled feature maps for prediction.
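Anchor clustering of this kind is customarily run with K-means under a 1−IoU distance on the (width, height) pairs of the labelled boxes. The sketch below shows that procedure on synthetic boxes; the data, seed, and helper names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def iou_wh(boxes: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between (w, h) boxes and (w, h) anchors assumed to share a
    common top-left corner, as is standard for anchor clustering."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    area_b = boxes[:, 0] * boxes[:, 1]
    area_a = anchors[:, 0] * anchors[:, 1]
    return inter / (area_b[:, None] + area_a[None, :] - inter)

def kmeans_anchors(boxes: np.ndarray, k: int = 9,
                   iters: int = 100, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each box to the anchor with maximal IoU (minimal 1-IoU)
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        for j in range(k):
            if np.any(assign == j):          # keep old anchor if cluster empties
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]  # sort by area

boxes = np.random.default_rng(0).uniform(10, 300, size=(200, 2))
anchors = kmeans_anchors(boxes, k=9)
print(anchors.shape)  # (9, 2)
```

Sorting the nine anchors by area lets them be split three-per-scale across the 52×52, 26×26, and 13×13 feature maps.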

    2.3 Model evaluation

    After training the model on the training set, the F1 score, AP value, average Intersection over Union (IoU), average detection time, and model weight were used as evaluation indicators. The F1 score and AP value were defined as follows:

    P = TP / (TP + FP)
    R = TP / (TP + FN)
    F1 = 2 × P × R / (P + R)
    AP = ∫ P(R) dR (integrated over R from 0 to 1)

    where P represents the accuracy (precision) rate, R represents the recall rate, TP, FP, and FN represent the numbers of true positives, false positives, and false negatives, F1 represents the harmonic mean of the accuracy and recall rates, and AP represents the average value of the positive-sample recognition accuracy.
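For a quick numeric check of the F1 definition, the sketch below computes precision, recall, and F1 from hypothetical true-positive, false-positive, and false-negative counts (the counts are illustrative only).

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    p = tp / (tp + fp)            # precision
    r = tp / (tp + fn)            # recall
    f1 = 2 * p * r / (p + r)      # harmonic mean of p and r
    return p, r, f1

# Hypothetical counts, for illustration only.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
print(p, r, round(f1, 4))  # 0.9 0.9 0.9
```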

    The IoU evaluates the performance of the model by calculating the overlap ratio of the predicted and true bounding boxes. The IoU was defined as follows:

    IoU = area(Bp ∩ Bgt) / area(Bp ∪ Bgt)

    where Bp represents the predicted bounding box and Bgt represents the true bounding box.
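A minimal implementation of this IoU definition on corner coordinates is sketched below; the (x1, y1, x2, y2) coordinate convention is an assumption for illustration.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)        # overlap / union

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```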

    3 Results and analysis

    3.1 Analysis of dragon fruit recognition results

    The YOLOv4-LITE model allowed for the rapid identification of dragon fruit in cloudy (Fig.4a), hard light (Fig.4b), and backlight environments (Fig.4c). Fig.4 shows that fruits were occluded in all three environments. Fruit surfaces occluded by 1/2 or even 2/3 could still be identified. Due to the complex environment, some fruit surfaces were not recognized. Overall, the YOLOv4-LITE model exhibited strong performance and robustness and allowed for dragon fruit recognition in the natural environment.

    3.2 Analysis of different backbone networks

    The experimental results from training CSPDarknet-53, Darknet-19, Darknet-53, Tiny, MobileNet-v2, and MobileNet-v3 are shown in Table 3. The detection accuracy of CSPDarknet-53 was 6.57% higher than that of Darknet-53, reaching 97.91%. However, the CSPDarknet-53 backbone network was excessively complicated, with a high model weight of 256 MB and a long average detection time of 162.60 ms. Therefore, it is difficult for deeper backbone networks to meet high-speed requirements on embedded devices. MobileNet-v3 simplifies the backbone network and achieved an AP value of 96.48%, slightly lower than that of CSPDarknet-53, but its detection speed was greatly improved. It took only 2.28 ms to process a 1 200×900 resolution image on the GPU, reducing the average detection time by 160.32 ms compared with CSPDarknet-53. With detection accuracy guaranteed, MobileNet-v3's detection speed was approximately 71 times that of CSPDarknet-53. Furthermore, the model weight was only 2.7 MB, 253.3 MB less than that of CSPDarknet-53, which significantly reduces the operating cost of embedded devices. In summary, MobileNet-v3 offers high detection accuracy, high detection speed, and low model memory consumption, making it distinctly advantageous for robotics and embedded devices.

    Note: A represents occluded fruit, B represents shadowing on the surface of the fruit, and C represents undetected fruit.

    Table 3 Performance of different backbone networks through 50 000 iterations

    Note: F1 score represents the harmonic mean of the accuracy and recall rates. Same as below.

    3.3 Analysis of multi-scale prediction results

    In order to further improve the detection accuracy of MobileNet-v3, YOLOv4-LITE used feature maps of different scales for prediction. Table 4 shows that the detection accuracy of the four-scale model (YOLOv4-LITE-4L) was 0.81% higher than that of the three-scale model (YOLOv4-LITE), but the average detection time increased by 10.33 ms and the model weight increased by 7.4 MB. Multi-scale prediction increased the computation of the feature maps, improving the detection accuracy but reducing the detection speed. Thus, the detection speed of YOLOv4-LITE was more suitable for real-time detection.

    Table 4 Comparison of network models with different scales

    3.4 Analysis of different network models

    YOLOv4-LITE has a faster convergence rate. At 10 000 iterations, the loss of YOLOv4-LITE was approximately 0.2, that of SSD-300 was approximately 0.5, and that of Faster-RCNN was approximately 0.8. At 35 000 iterations, the learning rate decayed to 0.000 1 and the losses of all three models decreased. After 40 000 iterations, all three models converged, and YOLOv4-LITE had the lowest loss. The experimental results are shown in Table 5: the AP value of the YOLOv4-LITE algorithm proposed in this paper was 8.29% higher than that of Faster-RCNN and 6.36% higher than that of SSD-300. YOLOv4-LITE also has significant advantages in terms of detection speed, which was approximately 9.5 times that of SSD-300 and 14.3 times that of Faster-RCNN. The detection results of the three models are shown in Fig.5. To further verify the performance of the proposed algorithm, it was compared with the fruit detection methods of YOLOv3[24], YOLOV3-dense[22], and R-FCN[32]. YOLOv4-LITE's F1 score showed 2%, 13%, and 5% improvement over YOLOv3[24], YOLOV3-dense[22], and R-FCN[32], respectively. Besides, YOLOv4-LITE's AP value was 5.35% higher than that of YOLOv3[24] and 1.38% higher than that of R-FCN[32]. Remarkably, YOLOv4-LITE's average detection time was 327.72 ms faster than that of YOLOV3-dense[22] and 197.72 ms faster than that of R-FCN[32]. Based on the comparison with existing target detection models, the lightweight YOLOv4-LITE network model delivers a significant advantage in terms of detection speed and accuracy.

    Table 5 Performances of different network models

    Fig.5 Detection results of the three models

    4 Conclusions

    This study explored the detection of dragon fruit in the natural environment. The YOLOv4-LITE model proposed in this paper used the MobileNet-v3 feature extraction network to improve YOLOv4's detection speed. YOLOv4-LITE significantly reduced the calculation of the feature extraction network and accelerated the detection speed of the model while ensuring detection accuracy. Experimental results showed that, compared with YOLOv4, the detection speed of YOLOv4-LITE improved by a factor of 71; it took only 2.28 ms to process a 1 200×900 resolution image on the GPU. The YOLOv4-LITE AP value was 96.48%, and the F1 score was 95%. Furthermore, the robustness of YOLOv4-LITE was stronger than that of Faster-RCNN and SSD-300. The small weight of the YOLOv4-LITE model makes it well suited for use in embedded devices and mobile terminals. In general, YOLOv4-LITE can better solve the problem of recognizing dragon fruit in the natural environment, and the model can be applied in the field of fruit detection.

    [1] Rehman T U, Mahmud M S, Chang Y K, et al. Current and future applications of statistical machine learning algorithms for agricultural machine vision systems[J]. Computers and Electronics in Agriculture, 2019, 156: 585-605.

    [2] Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real-time object detection[C]//IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 779-788.

    [3] Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector[C]//European Conference on Computer Vision. Cham: Springer, 2016: 21-37.

    [4] Girshick R, Donahue J, Darrell T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//IEEE Conference on Computer Vision and Pattern Recognition. Columbus: IEEE, 2014: 580-587.

    [5] Girshick R. Fast R-CNN[C]//IEEE International Conference on Computer Vision. Santiago: IEEE, 2015: 1440-1448.

    [6] Ren S, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks[C]//Advances in Neural Information Processing Systems. The United States: IEEE, 2017, 39: 1137-1149.

    [7] Bargoti S, Underwood J. Deep fruit detection in orchards[C]//IEEE International Conference on Robotics and Automation. Sydney: IEEE, 2017: 3626-3633.

    [8] Caladcad J A, Cabahug S, Catamco M R, et al. Determining philippine coconut maturity level using machine learning algorithms based on acoustic signal[J]. Computers and Electronics in Agriculture, 2020, 172: 105327.

    [9] Hong G, Abd El-Hamid H T. Hyperspectral imaging using multivariate analysis for simulation and prediction of agricultural crops in Ningxia, China[J]. Computers and Electronics in Agriculture, 2020, 172: 105355.

    [10] Ni C, Li Z, Zhang X, et al. Film Sorting Algorithm in seed cotton based on near-infrared hyperspectral image and deep learning[J]. Transactions of the Chinese Society of Agricultural Machinery, 2019, 50(12): 170-179. (in Chinese with English abstract)

    [11] Zhu F, Zheng Z. Image-based assessment of growth vigor for phalaenopsis aphrodite seedlings using convolutional neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2020, 36(9): 185-194. (in Chinese with English abstract)

    [12] Kamilaris A, Prenafeta-Boldú F X. Deep learning in agriculture: A survey[J]. Computers and Electronics in Agriculture, 2018, 147: 70-90.

    [13] Deng Y, Wu H, Zhu H. Recognition and counting of citrus flowers based on instance segmentation[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2020, 36(7): 200-207.(in Chinese with English abstract)

    [14] Liu P, Zhu Y, Zhang T, et al. Algorithm for recognition and image segmentation of overlapping grape cluster in natural environment[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2020, 36(6): 161-169. (in Chinese with English abstract)

    [15] Fu L, Feng Y, Eikamil T, et al. Image recognition method of multi-cluster kiwifruit in field based on convolutional neural networks[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(2): 205-211. (in Chinese with English abstract)

    [16] Sun J, Tan W, Wu X, et al. Real-time recognition of sugar beet and weeds in complex backgrounds using multi-channel depth-wise separable convolution model[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2019, 35(12): 184-190. (in Chinese with English abstract)

    [17] Liu X, Fan C, Li J, et al. Identification method of strawberry based on convolutional neural network[J]. Transactions of the Chinese Society for Agricultural Machinery, 2020, 51(2): 237-244. (in Chinese with English abstract)

    [18] Peng M X, Xia J F, Peng H. Efficient recognition of cotton and weed in field based on Faster R-CNN by integrating FPN[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2019, 35(20): 202-209. (in Chinese with English abstract)

    [19] Liang C, Xiong J, Zheng Z, et al. A visual detection method for nighttime litchi fruits and fruiting stems[J]. Computers and Electronics in Agriculture, 2020, 169: 105192.

    [20] Zabawa L, Kicherer A, Klingbeil L, et al. Counting of grapevine berries in images via semantic segmentation using convolutional neural networks[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 164: 73-83.

    [21] Sa I, Ge Z, Dayoub F, et al. Deepfruits: A fruit detection system using deep neural networks[J]. Sensors, 2016, 16: 1-8.

    [22] Tian Y, Yang G, Wang Z, et al. Apple detection during different growth stages in orchards using the improved YOLO-V3 model[J]. Computers and Electronics in Agriculture, 2019, 157: 417-426.

    [23] Koirala A, Walsh K B, Wang Z, et al. Deep learning for real-time fruit detection and orchard fruit load estimation: Benchmarking of ‘MangoYOLO.’[J]. Precision Agriculture, 2019, 20: 1107–1135.

    [24] Lu A E, Iou A. Orange recognition method using improved YOLOv3-LITE lightweight neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2019, 35(17): 205–214. (in Chinese with English abstract)

    [25] Shi R, Li T, Yamaguchi Y. An attribution-based pruning method for real-time mango detection with YOLO network[J]. Computers and Electronics in Agriculture, 2020, 169: 105214.

    [26] Bochkovskiy A, Wang C Y, Liao H. YOLOv4: Optimal speed and accuracy of object detection[Z]. [2020-07-03], https://arxiv.org/abs/2004.10934.

    [27] Redmon J, Farhadi A. YOLO9000: Better, faster, stronger[C]//IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 7263-7271.

    [28] Redmon J, Farhadi A. YOLOv3: An incremental improvement[Z]. [2020-07-03], https://arxiv.org/abs/1804.02767.

    [29] Howard A, Sandler M, Chen B, et al. Searching for MobileNetV3[C]//IEEE International Conference on Computer Vision. Seoul: IEEE, 2019: 1314-1324.

    [30] Howard A G, Zhu M, Chen B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[Z]. [2020-07-03], https://arxiv.org/abs/1704.04861.

    [31] Sandler M, Howard A, Zhu M, et al. MobileNetV2: Inverted residuals and linear bottlenecks[Z]. [2020-07-05], https://arxiv.org/abs/1801.04381.

    [32] Wang D, He D. Recognition of apple targets before fruits thinning by robot based on R-FCN deep convolution neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2019, 35(3): 156-163. (in Chinese with English abstract)

    Method for detecting dragon fruit based on improved lightweight convolutional neural network

    Wang Jinpeng, Gao Kai, Jiang Hongzhe, Zhou Hongping

    (College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China)

    Real-time detection of dragon fruit in the natural environment is one of the prerequisites for automated dragon fruit picking. This study proposed a dragon fruit detection method based on a lightweight convolutional neural network, YOLOv4-LITE. YOLOv4 integrates multiple optimization strategies, and its detection accuracy is 10% higher than that of the traditional YOLOv3. However, the backbone network of YOLOv4 is complex and computationally expensive, and the model is large, so it is not suitable for deployment on embedded devices for real-time detection. The backbone CSPDarknet-53 of YOLOv4 was therefore replaced with MobileNet-v3; extracting features with MobileNet-v3 significantly improves the detection speed of YOLOv4. To improve the detection accuracy of small targets, up-sampling feature fusion was performed at the 39th and 46th layers of the network. A dataset of 2 513 dragon fruit images under different occlusion conditions was used for training and testing. Experimental results show that the proposed lightweight YOLOv4-LITE model achieves an Average Precision (AP) of 96.48%, an F1 score of 95%, and a mean intersection over union (IoU) of 81.09%, with a model size of only 2.7 MB. A comparison of different backbone networks shows that MobileNet-v3 greatly improves detection speed, reducing the average detection time by 160.32 ms relative to the original CSPDarknet-53 of YOLOv4. YOLOv4-LITE takes only 2.28 ms to detect a 1 200×900 image on a GPU, enabling real-time detection in the natural environment with strong robustness. Compared with existing object detection algorithms, the detection speed of YOLOv4-LITE is 9.5 times that of SSD-300 and 14.3 times that of Faster-RCNN. The effect of multi-scale prediction on model performance was further analyzed: fusing four feature maps of different scales for prediction improved the average detection precision by 0.81% over YOLOv4-LITE, but increased the average detection time by 10.33 ms and the model size by 7.4 MB. Thus, adding a prediction scale improves detection accuracy at the cost of increased detection time. Overall, the proposed lightweight YOLOv4-LITE has significant advantages in detection speed, detection accuracy, and model size, and can be applied to dragon fruit detection in the natural environment.
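    The abstract reports a mean intersection over union (IoU) of 81.09% between predicted and ground-truth boxes. As a minimal illustrative sketch (not the authors' code), the IoU of two axis-aligned boxes in (x1, y1, x2, y2) form can be computed as:

    ```python
    def iou(box_a, box_b):
        """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
        # Coordinates of the intersection rectangle.
        ix1 = max(box_a[0], box_b[0])
        iy1 = max(box_a[1], box_b[1])
        ix2 = min(box_a[2], box_b[2])
        iy2 = min(box_a[3], box_b[3])
        # Clamp to zero when the boxes do not overlap.
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0
    ```

    Averaging this value over all matched detections in the test set yields the mean IoU quoted above.
    
    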

    Keywords: models; deep learning; fruit detection; convolutional neural network; YOLOv4-LITE; real-time detection

    Wang Jinpeng, Gao Kai, Jiang Hongzhe, et al. Method for detecting dragon fruit based on improved lightweight convolutional neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2020, 36(20): 218-225. doi:10.11975/j.issn.1002-6819.2020.20.026 http://www.tcsae.org

    Wang Jinpeng, Gao Kai, Jiang Hongzhe, et al. Method for detecting dragon fruit based on improved lightweight convolutional neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2020, 36(20): 218-225. (in English with Chinese abstract) doi:10.11975/j.issn.1002-6819.2020.20.026 http://www.tcsae.org

    Received date: 2020-07-24

    Revised date: 2020-10-12

    Jiangsu Science and Technology Project (BE2018364); National Natural Science Foundation of China (51408311)

    Wang Jinpeng, associate professor, engaged in intelligent agricultural research. Email: jpwang@njfu.edu.cn

    10.11975/j.issn.1002-6819.2020.20.026

    TP301.6;TP181

    A

    1002-6819(2020)-20-0218-08
