
    Fine-grained Ship Image Recognition Based on BCNN with Inception and AM-Softmax

    Computers, Materials & Continua, 2022, Issue 10

    Zhilin Zhang, Ting Zhang, Zhaoying Liu*, Peijie Zhang, Shanshan Tu, Yujian Li and Muhammad Waqas

    1 Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China

    2 School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China

    3 School of Engineering, Edith Cowan University, Perth WA 6027, Australia

    Abstract: The fine-grained ship image recognition task aims to identify various classes of ships. However, small inter-class differences, large intra-class differences, and a lack of training samples make the task difficult. Therefore, to enhance the accuracy of fine-grained ship image recognition, we design a fine-grained ship image recognition network based on the bilinear convolutional neural network (BCNN) with Inception and additive margin Softmax (AM-Softmax). This network improves the BCNN in two aspects. Firstly, introducing Inception branches into the BCNN helps the network extract more comprehensive features from ships. Secondly, by adding margin values to the decision boundary, the AM-Softmax function can better enlarge the inter-class differences and reduce the intra-class differences. In addition, as there are few publicly available datasets for fine-grained ship image recognition, we construct a Ship-43 dataset containing 47,300 ship images belonging to 43 categories. Experimental results on the constructed Ship-43 dataset demonstrate that our method can effectively improve the accuracy of ship image recognition, which is 4.08% higher than the BCNN model. Moreover, comparison results on three other public fine-grained datasets (Cub, Cars, and Aircraft) further validate the effectiveness of the proposed method.

    Keywords: Fine-grained ship image recognition; Inception; AM-Softmax; BCNN

    1 Introduction

    Fine-grained image recognition (FGIR) refers to the recognition of different subclasses of the same category [1], for example, distinguishing "freighters" from "merchant ships". Traditional image recognition tasks have achieved great success, but due to small inter-class and large intra-class differences, the performance of fine-grained image recognition is still unsatisfactory. Since ships are a major carrier of maritime traffic and transport, fine-grained ship image recognition has attracted more and more attention. It has been widely applied to maintaining maritime safety, such as maritime traffic monitoring and maritime search, thereby improving the capability of coastal defense and early warning [2,3]. However, for ship targets, the shapes and structures are similar from one category to another, while there is also rich component diversity within the same class, making fine-grained ship recognition a very challenging task.

    Traditional methods of fine-grained ship image recognition mainly use manually designed feature extraction algorithms for feature matching [4,5]. They cannot fully utilize the information contained in the dataset to extract the distinctive features of the objects, which limits the performance of fine-grained recognition. Furthermore, all of these methods have low generalization capacity. With the development of deep learning techniques, many deep models based on convolutional neural networks (CNN) [6] have been developed to improve accuracy by automatically learning better feature representations from the data. Among these deep models, the bilinear convolutional neural network (BCNN) [7] demonstrates satisfying performance for fine-grained image recognition. The BCNN typically utilizes two parallel branches of the VGGNet network [8] to extract features at each image position, integrates the features with an outer product operation, and trains the network end-to-end. However, the BCNN has two deficiencies. 1) The two branches of the network only consist of 3×3 convolutional kernels, and such small convolutional kernels generally ignore certain global information [9]. 2) The BCNN uses the Softmax loss function, which has a weak ability to activate subtle features and is likely to misclassify images with particularly small inter-class differences [10,11].

    To enhance the performance of fine-grained ship image recognition, we develop a fine-grained image recognition network for ships based on the BCNN with Inception and AM-Softmax, which improves the BCNN from two perspectives. First, to gather global information, we replace one branch of the BCNN with an Inception module, which helps aggregate feature information at a larger scale and increases the ability to extract global information. Second, to activate the distinctive characteristics between different classes, and to extend the inter-class distance while reducing the intra-class distance, we introduce the AM-Softmax function, which effectively activates the differences between ship classes by adding an additive margin to the decision boundaries. Moreover, we construct a fine-grained ship image dataset containing 47,300 images belonging to 43 categories. The key advantages and major contributions of the proposed method are:

    • To extract global information, we design Inception modules and use them to replace a branch of the BCNN network.

    • To better activate the features between fine-grained images, we introduce AM-Softmax, which enlarges the inter-class margin by adding an additive margin to the decision boundaries.

    • Based on an existing dataset, we construct a richer ship dataset.

    The rest of the paper is organized as follows. Section 2 summarizes related work. The proposed method is described in Section 3. Detailed experiments and analysis are conducted in Section 4. Section 5 concludes the paper.

    2 Related Work

    In recent years, many fine-grained image recognition methods have been developed, and they can be roughly classified into three main paradigms: fine-grained recognition with localization-classification subnetworks, with end-to-end feature encoding, and with external information. Localization-classification approaches design a localization subnetwork to locate key parts [12], followed by a classification subnetwork that recognizes those parts, such as Part-based CNN [13] and Mask-CNN [14]. These approaches are more likely to find discriminative parts [15,16], but they require more annotation information. End-to-end feature encoding methods learn a more discriminative feature representation by designing powerful models; the most representative of these is the BCNN. Beyond these two paradigms, another paradigm leverages external information, such as web data and multi-modality data, to further assist fine-grained recognition [17,18].

    The BCNN extracts features via a network of two parallel branches, each of which is a VGG16, and performs an outer product operation on the two outputs. The outer product completes the feature fusion at each location, which can capture discriminative features. The structure of the VGG16 network is relatively simple, and each layer uses small convolutional kernels. By increasing the depth of the network, rich feature information can be obtained and the overall performance can be improved. However, small convolutional kernels ignore some global information when extracting features layer by layer, and merely increasing the depth of the network introduces problems such as overfitting, gradient vanishing, and training difficulties. The Inception network [19] proposed by Szegedy is wider and more efficient; it uses larger-scale convolutional kernels to extract global information and reduces the number of parameters by factorizing the convolutional kernels.

    In recent years, besides the commonly used Softmax, various loss functions [20] have been proposed to optimize the distance between classes. The L-Softmax [21] was the first angle-based loss function; it reduces the angle between the feature vector and the corresponding weight vector by introducing a parameter m. The A-Softmax [22] normalizes the weights and, by adding a large angular margin, makes the network focus more on optimizing the angles between features and weight vectors. Cosface [23] reformulates the Softmax as a cosine loss, removing radial variations by L2-normalizing both features and weight vectors, and further maximizes the decision margin in the angular space by introducing a cosine margin term. By introducing additive angles to the decision boundary, Arc-Softmax [24] maximizes the classification margin in the angle space.

    3 The Proposed Method

    In this paper, based on the BCNN framework, we design a fine-grained ship image recognition network by introducing Inception and AM-Softmax. Adding the Inception module to one branch of the BCNN helps enhance the ability of the whole network to extract global information. Meanwhile, the network uses the AM-Softmax function to learn decision boundaries among the different classes, which increases the inter-class distance and reduces the intra-class distance.

    The architecture of the proposed method is illustrated in Fig. 1. There are two parallel branches: one uses VGG16 to extract features carrying local information, and the other introduces the Inception module to extract features carrying global information. The outputs of both branches are combined using the outer product and average pooled to get the bilinear feature representation. Then the bilinear vector is passed through a linear classifier and an AM-Softmax layer to obtain class predictions. Finally, the cross-entropy loss function is used to guide and optimize the training of the network.
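    As a rough illustration (not the authors' released code), the following PyTorch sketch combines a VGG16 branch with a placeholder Inception-style branch via an outer product, average pooling, signed square root, and L2 normalization. The constructor argument `inception_branch` and the assumption that both branches output 512-channel maps of the same spatial size are ours.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class BilinearTwoBranch(nn.Module):
    """Sketch of the two-branch bilinear feature extractor (illustrative only)."""
    def __init__(self, inception_branch: nn.Module):
        super().__init__()
        # Branch A: VGG16 convolutional layers (local information), 512 output channels.
        self.branch_a = models.vgg16(weights="IMAGENET1K_V1").features
        # Branch B: an Inception-style branch (global information), assumed to
        # produce feature maps with the same spatial size and 512 channels.
        self.branch_b = inception_branch

    def forward(self, x):
        fa = self.branch_a(x)                      # (N, 512, H, W)
        fb = self.branch_b(x)                      # (N, 512, H, W)
        n, c, h, w = fa.shape
        fa = fa.reshape(n, c, h * w)
        fb = fb.reshape(n, c, h * w)
        # Outer product at every spatial location, averaged over all locations.
        bilinear = torch.bmm(fa, fb.transpose(1, 2)) / (h * w)   # (N, 512, 512)
        bilinear = bilinear.reshape(n, -1)
        # Signed square root and L2 normalization of the bilinear vector.
        bilinear = torch.sign(bilinear) * torch.sqrt(torch.abs(bilinear) + 1e-10)
        return nn.functional.normalize(bilinear)   # bilinear features for the classifier
```

    The resulting 512×512-dimensional bilinear vector would then be fed to the classification layer; the AM-Softmax layer that plays this role is sketched in Section 3.2.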

    3.1 The Inception Branch

    In the VGG16 network, small-scale convolutional kernels make it easy to capture local feature information, but difficult to extract global features. Meanwhile, a network with larger-scale convolutional kernels usually requires a large amount of computation. According to the literature, the Inception network can extract global information and reduce the computation cost even while using larger-scale convolutional kernels. Inspired by the Inception network, we design three modules, IncepA, IncepB, and IncepC, to extract global information. As shown in Fig. 2, these modules have equivalent convolutional kernel sizes of 3×3, 5×5, and 7×7, respectively.

    In all three modules, a 1×1 convolutional kernel and a pooling operation are used, which help reduce the amount of computation. Then the 3×3 convolutional kernel is decomposed into 1×3 and 3×1 vector kernels; in IncepB and IncepC, the 1×5 or 1×7 vector kernels are stacked twice. Finally, all three components are concatenated. These cascaded vector kernels can roughly achieve the effect of large-scale convolutional kernels, and decomposing the large-scale kernels effectively reduces the total number of parameters without increasing the computation cost.
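    To make the factorization concrete, here is a hedged sketch of an IncepB-like block: a 5×5 receptive field built from two stacked 1×5/5×1 pairs, alongside a 1×1 branch and a pooling branch, with the three outputs concatenated. The channel widths are arbitrary illustrative choices, not values taken from the paper.

```python
import torch
import torch.nn as nn

class IncepBLike(nn.Module):
    """Illustrative factorized Inception-style block (channel widths are assumptions)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        mid = out_ch // 4
        # Branch 1: 1x1 convolution, a cheap local view.
        self.b1 = nn.Conv2d(in_ch, mid, kernel_size=1)
        # Branch 2: 5x5 receptive field factorized into stacked 1x5 and 5x1 kernels.
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, mid, kernel_size=1),
            nn.Conv2d(mid, mid, kernel_size=(1, 5), padding=(0, 2)),
            nn.Conv2d(mid, mid, kernel_size=(5, 1), padding=(2, 0)),
            nn.Conv2d(mid, mid, kernel_size=(1, 5), padding=(0, 2)),
            nn.Conv2d(mid, mid, kernel_size=(5, 1), padding=(2, 0)),
        )
        # Branch 3: pooling followed by a 1x1 projection.
        self.b3 = nn.Sequential(
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, out_ch - 2 * mid, kernel_size=1),
        )

    def forward(self, x):
        # Concatenate the three branches along the channel dimension.
        return torch.cat([self.b1(x), self.b2(x), self.b3(x)], dim=1)
```

    An IncepC-like block would follow the same pattern with 1×7 and 7×1 kernels; the factorization keeps the spatial size unchanged while covering a larger receptive field with fewer parameters than a full 5×5 or 7×7 kernel.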

    Once the three modules have been designed, the Inception branch network is built as shown in Fig. 3. The VGG16 branch network is shown in Fig. 4; it consists of 13 convolutional layers and 3 fully connected layers. The Inception branch network also uses 13 convolutional layers, but replaces 9 of the VGG16 convolutional layers with the 3 modules.

    By using Inception modules in the Inception branch,large-scale convolutional kernels are added to this network.Furthermore,the decomposed kernels help the network to extract much richer global features without increasing the overall computational effort.

    3.2 Additive Margin Softmax

    The BCNN uses the original Softmax loss function. Ignoring the bias term, the original Softmax loss is defined as

    $$L_{S}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{\|W_{y_i}\|\|x_i\|\cos\theta_{y_i}}}{\sum_{j=1}^{C}e^{\|W_j\|\|x_i\|\cos\theta_j}}$$

    where $N$ is the number of samples, $C$ is the number of classes, $x_i$ is the feature of the $i$-th sample with label $y_i$, $W_j$ is the weight vector of class $j$, and $\theta_j$ is the angle between $x_i$ and $W_j$.

    If a two-dimensional feature is used as an example and the feature is represented on a circle, a geometric interpretation of the above equation can be clearly illustrated as shown in Fig. 5, where W1 and W2 can be considered as the center vectors of the two classes, and θ1 and θ2 represent the angles between the sample vector x and the two center vectors. P0 represents the decision boundary generated by the Softmax function for the two classes, and accordingly, P1 and P2 are generated by AM-Softmax. If cos(θ1) > cos(θ2), the feature is identified as category 1. As a result, the decision boundary between the two classes is the single surface P0, i.e., cos(θ1) = cos(θ2). With only one decision boundary, special samples whose intra-class distance is larger than the inter-class distance can easily be misclassified.

    Ship images are characterized by small differences between classes and large differences within classes. It is therefore necessary to design a loss function that increases the inter-class distance and decreases the intra-class distance.

    To increase the inter-class distance and decrease the intra-class distance, a margin m can be explicitly added to the decision boundaries of the categories. That is, based on the Softmax loss function, the decision boundary is split into two decision surfaces P1 and P2: the boundary for category 1 is cos θ1 − m = cos θ2, and the boundary for category 2 is cos θ1 = cos θ2 − m. Assuming the norms of both W_{y_i} and x_i are normalized to 1, the Additive Margin Softmax (AM-Softmax) loss function can be designed as

    $$L_{AMS}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s(\cos\theta_{y_i}-m)}}{e^{s(\cos\theta_{y_i}-m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}$$

    where $m$ is the additive margin and $s$ is a scaling factor.
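    A minimal PyTorch sketch of this loss is shown below, assuming L2-normalized features and class weights; the scale factor s is a common implementation detail of AM-Softmax that we assume here, while the paper's experiments only discuss the margin m.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxLoss(nn.Module):
    """Additive-margin Softmax: subtract margin m from the target-class cosine."""
    def __init__(self, feat_dim: int, num_classes: int, m: float = 0.5, s: float = 30.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.m, self.s = m, s

    def forward(self, features, labels):
        # Cosine similarity between normalized features and normalized class weights.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        # Subtract the additive margin only at the ground-truth class positions.
        one_hot = F.one_hot(labels, num_classes=cos.size(1)).float()
        logits = self.s * (cos - self.m * one_hot)
        # Cross-entropy over the scaled, margin-adjusted cosine logits.
        return F.cross_entropy(logits, labels)
```

    Because both the features and the class weights are normalized, the margin acts purely in the angular (cosine) space, which is what pushes the two decision surfaces P1 and P2 apart.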

    3.3 The Overall Procedure of the Proposed Method

    By adding the Inception branch network and the AM-Softmax loss function, the network can extract features with both local and global information, and can optimize the differences between different classes of ships. The whole procedure of the proposed method is described in detail below, and a hedged training sketch follows the list.

    (1) The input image I is cropped to 448×448, and is horizontally flipped, randomly rotated, randomly cropped, etc. The processed image is denoted as X.

    (2) The processed images are input to the proposed network. The feature extraction process is denoted as W*X, where * represents a series of convolutional, ReLU, and pooling operations, and WA and WB represent all the parameters of the two branches. fA and fB are the extracted feature maps, each of shape 28×28×512.

    (3) At the same position l of the two feature maps, fA and fB each have a 1×512 vector, i.e., fA(l, X) and fB(l, X); the outer product operation yields a 512×512 matrix b(l, X).

    (4) An average pooling operation is performed on the matrices b(l, X) over all positions to obtain bX.

    (5) The bilinear vector BX is obtained by vectorizing bX.

    (6) The obtained bilinear vector BX is normalized with a signed square root, y = sign(BX)·√|BX|.

    (7) Then L2 normalization is applied, z = y/‖y‖2, and z is the input of the next layer.

    (8) z is input into the fully connected layer and the prediction sj is obtained using the AM-Softmax function.

    (9) The loss is calculated using the AM-Softmax loss function, then the loss is back-propagated for network optimization and the network parameters are updated.
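    Tying the steps together, the following hedged sketch shows one training step using the illustrative `BilinearTwoBranch` and `AMSoftmaxLoss` modules sketched earlier (not the authors' implementation); the augmentation parameters for step (1), such as the resize size and rotation angle, are assumptions beyond the 448×448 crop, flipping, rotation, and cropping stated above.

```python
import torch
from torchvision import transforms

# Step (1): crop to 448x448 with flips and random rotation (parameter values are assumptions).
train_tf = transforms.Compose([
    transforms.Resize(512),
    transforms.RandomCrop(448),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

def train_step(model, am_softmax_loss, optimizer, images, labels):
    """Steps (2)-(9): forward through both branches, AM-Softmax loss, back-propagation."""
    optimizer.zero_grad()
    features = model(images)                 # steps (2)-(7): normalized bilinear vector
    loss = am_softmax_loss(features, labels) # step (8): margin-adjusted class scores + loss
    loss.backward()                          # step (9): back-propagate
    optimizer.step()                         # update the network parameters
    return loss.item()
```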

    4 Experimental Results

    To validate the performance of the proposed method, we conduct experiments on the constructed dataset and three other public datasets. Comparison experiments with four popular methods are also performed to further verify the effectiveness of the proposed method. In the following parts, we present the datasets, details of the training process, the ablation experiments, and the comparison results.

    4.1 Dataset

    The Ship-43 dataset is a fine-grained image dataset constructed independently by our group. Some of its images and labels come from the website CNSS (www.cnss.com.cn). The Ship-43 dataset contains 43 categories, each containing approximately 1,100 images; some examples are shown in Fig. 6. In each category, 1,000 images were used for training and the other 100 images were used for testing. In addition, to validate the generalization capacity of the proposed method, three commonly used public datasets for fine-grained image recognition are also used: the Cub dataset [25], the Car dataset [26], and the Aircraft dataset [27]. The Cub dataset contains 11,788 images of 200 bird species, where each category contains a relatively balanced set of about 30 training images and 29 test images. The Car dataset contains 16,185 images of 196 categories of cars, whose key attributes include vehicle manufacturer, make, and model. The Aircraft dataset contains 102 categories with 100 images per class, of which two-thirds are used for training and the rest for testing.
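    Assuming the Ship-43 images are stored in a class-per-folder layout (a hypothetical directory structure; the paper does not describe how the dataset is stored on disk), the train/test split above could be loaded roughly as follows; only the 448×448 image size and the batch size of 256 come from the paper, the paths and loader settings are placeholders.

```python
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Hypothetical layout: ship43/train/<class_name>/*.jpg and ship43/test/<class_name>/*.jpg
eval_tf = transforms.Compose([
    transforms.Resize((448, 448)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("ship43/train", transform=eval_tf)  # ~1,000 images per class
test_set = datasets.ImageFolder("ship43/test", transform=eval_tf)    # ~100 images per class

train_loader = DataLoader(train_set, batch_size=256, shuffle=True, num_workers=8)
test_loader = DataLoader(test_set, batch_size=256, shuffle=False, num_workers=8)
```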

    4.2 Training Details

    Experimental frameworks and devices. This paper uses the PyTorch framework for the experiments. The experiments use four NVIDIA Tesla V100 GPUs, each with 32 GB of memory.

    Figure 6:Examples of ship images in Ship-43

    Network training. This paper adopts a transfer learning approach for training the model, and the network is pre-trained on ImageNet. In the first stage, all parameters of the network except the fully connected layer are frozen, and the parameters of the fully connected layer are learned on the fine-grained dataset with a larger learning rate. In the second stage, the entire network is fine-tuned with a smaller learning rate.
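    A hedged sketch of this two-stage schedule is given below; the optimizer, momentum, and epoch counts are assumptions, and the "fully connected layer" of stage one is mapped onto the AM-Softmax classification weight from the earlier sketch. Only the learning rates (1e-2, then 1e-3) are taken from the paper.

```python
import torch

def two_stage_training(model, loss_fn, train_loader, epochs_stage1=20, epochs_stage2=30):
    """Stage 1: train only the classification head; Stage 2: fine-tune the whole network."""
    # Stage 1: freeze all backbone parameters, learn the classification weight with a larger LR.
    for p in model.parameters():
        p.requires_grad = False
    for p in loss_fn.parameters():          # AM-Softmax weight acts as the classifier here
        p.requires_grad = True
    opt1 = torch.optim.SGD(loss_fn.parameters(), lr=1e-2, momentum=0.9)
    run_epochs(model, loss_fn, opt1, train_loader, epochs_stage1)

    # Stage 2: unfreeze everything and fine-tune with a smaller LR.
    for p in model.parameters():
        p.requires_grad = True
    opt2 = torch.optim.SGD(
        list(model.parameters()) + list(loss_fn.parameters()), lr=1e-3, momentum=0.9
    )
    run_epochs(model, loss_fn, opt2, train_loader, epochs_stage2)

def run_epochs(model, loss_fn, optimizer, loader, epochs):
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
```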

    Image size. The image size affects the final accuracy of the experiment. Taking the machine memory into account, the image size is set to 448×448.

    Learning rate.The learning rate is set to 1e-2 in the first stage,then it is set to 1e-3 when the network is fine-tuned.

    Batch Size.When defining this parameter,we consider the size of the dataset and the computer’s memory,so the batch size is 256 in this paper.

    4.3 Fine-grained Ship Recognition Results

    To evaluate the performance of the proposed model, ablation experiments and comparison experiments are carried out on the above datasets. Firstly, ablation experiments are performed to verify the influence of the Inception branch and the AM-Softmax on fine-grained ship image recognition, respectively. Then, comparison experiments with four well-known methods are performed to validate the effectiveness of the proposed method.

    4.3.1 Ablation Experiment for Inception Branch

    Based on the Softmax loss function, to verify the efficiency of the Inception branch network, we design three different networks: (1) BCNN: both branch networks use the VGG16 network. (2) BCNN-I: both branches use the Inception branch network. (3) BCNN-II: one branch uses the VGG16 network and the other uses the Inception branch network. The experimental results are presented in Fig. 7.

    From the experimental results, the network merging the Inception branch and the VGG branch achieves the highest accuracy on all datasets. On the Ship-43 dataset, our method improves by 2.06% over the BCNN network, and by 1.14% over the network with two Inception branches. This indicates that a network extracting global and local information simultaneously is more appropriate. Meanwhile, our proposed network also improves the accuracy on the three general fine-grained image datasets.

    Figure 7:Accuracy of different networks on different datasets

    4.3.2 Ablation Experiment for AM-Softmax

    To properly assess the influence of the AM-Softmax, the benchmark network for all experiments in this section is the BCNN. The influence of the AM-Softmax function on fine-grained ship recognition is analyzed in two parts: the influence of different additive margin values m, and the comparison of accuracy between different loss functions.

    A. The influence of different additive margin m values

    To explore how a manually added margin can help the network achieve better accuracy, the m values are set to 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6 in this section, and the accuracy is shown in Tab. 1.

    Table 1:Results of different m values

    From Tab. 1, we can see that the margin value is a hyperparameter, and different margin values result in different accuracy. Moreover, the best margin value differs across datasets. For example, when m = 0.5, three datasets, Ship-43, Cub, and Aircraft, obtain the best result compared with other margin values. Compared with the Cub and Car datasets, different margin values have less effect on the Ship-43 dataset, which may be because the Ship-43 dataset has a relatively small number of categories and a large number of images per category.

    B.The comparison of accuracy between different loss functions

    Based on the analysis of different margin values in AM-Softmax, m = 0.5 is selected for the following experiments. Meanwhile, the default optimal hyperparameters are used for A-Softmax and Arc-Softmax, respectively. To demonstrate the advantages of the AM-Softmax loss function, comparison experiments are conducted with other commonly used loss functions, and the results are shown in Tab. 2.

    Table 2:Comparison results of different loss functions

    On the Ship-43 dataset, compared to the Softmax function, these modified functions (A-Softmax, Arc-Softmax and AM-Softmax) improve the recognition accuracy, and especially the model with the AM-Softmax function improves the accuracy by 2.78%. Moreover, AM-Softmax achieves the highest accuracy on both the Cub and Aircraft datasets.

    Fig. 8 shows the trend of the loss values of AM-Softmax and Softmax.

    We can see that the loss value of the AM-Softmax is always much smaller than that of the Softmax during training. In addition, using the AM-Softmax loss function, the network not only converges faster but also achieves higher accuracy. Meanwhile, during validation, the AM-Softmax loss value is also smaller than the Softmax loss value, which further indicates that AM-Softmax is more suitable for fine-grained recognition. Due to the two-stage training method, learning rate decay is used to fine-tune the network in the later stage, which further decreases the loss value.

    4.3.3 Comparison Results

    To further verify the effectiveness of the proposed method, we conducted comparison experiments with four popular models for fine-grained image recognition: the compact bilinear pooling network (CBP) [28], the low-rank bilinear pooling network (LRBP) [29], the BCNN with the Softmax function, and the BCNN with the AM-Softmax function. The CBP obtains its feature representation by designing novel convolutional kernels on top of the BCNN. The LRBP compresses the model through a co-decomposition of the larger classifiers. These networks are frequently used for fine-grained recognition tasks. Because the benchmark framework of this paper is the BCNN, for a fair comparison we compare our method with variants of the BCNN under the same framework, which reflects how improvements to different components have significantly different effects.

    As shown in Tab. 3, our method achieves the highest accuracy on the Ship-43 dataset, exceeding CBP and LRBP by 2.29% and 0.88%, respectively. Compared to the BCNN with the Softmax or the AM-Softmax, the maximum improvement is 4.08%. Our method also achieves the highest accuracy on the Cub and Aircraft datasets. In addition, our method only modifies the backbone network and the loss function, so it has a computational cost similar to the BCNN. Overall, the effectiveness and generalizability of the proposed method for fine-grained recognition are further validated.

    Table 3:Recognition accuracy of different models

    5 Conclusion

    In this paper, to improve the performance of fine-grained ship image recognition, we modify the BCNN network in two aspects. Firstly, by adding an Inception branch to the feature extraction network, the network can merge local and global feature information from kernels of different scales. Secondly, by adding margin values to the decision boundary, the AM-Softmax function can optimize the differences between ship classes and better separate different categories. Moreover, we construct a fine-grained ship image dataset. Ablation experiments and comparison results on the fine-grained ship dataset and three other fine-grained datasets demonstrate that our method is effective and has high generalization ability. The proposed method can be applied to many fine-grained applications, such as bird species identification, car identification, aircraft type identification, and online plant identification. Our future work will focus on designing end-to-end models that can extract more distinguishable details to further improve the accuracy of fine-grained ship image recognition.

    Acknowledgement:We express our thanks to Professor Li Yujian for providing devices.

    Funding Statement: This work is supported by the National Natural Science Foundation of China (61806013, 61876010, 62176009, and 61906005), the General Project of the Science and Technology Plan of the Beijing Municipal Education Commission (KM202110005028), the Beijing Municipal Education Commission Project (KZ201910005008), the Project of the Interdisciplinary Research Institute of Beijing University of Technology (2021020101), and the International Research Cooperation Seed Fund of Beijing University of Technology (2021A01).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
