
Deep Learning Applications Based on WISE Infrared Data: Classification of Stars, Galaxies and Quasars


Guiyu Zhao, Bo Qiu*, A-Li Luo, Xiaoyu Guo, Lin Yao, Kun Wang, and Yuanbo Liu

1 School of Electronic and Information Engineering, Hebei University of Technology, Tianjin 300401, China; 1263730840@qq.com, qiubo@hebut.edu.cn, 1799507446@qq.com, 1286789387@qq.com, 1848896968@qq.com, 1220617881@qq.com

2 CAS Key Laboratory of Optical Astronomy, National Astronomical Observatories, Beijing 100101, China; lal@bao.ac.cn

3 University of Chinese Academy of Sciences, Beijing 100049, China

Abstract The Wide-field Infrared Survey Explorer (WISE) has detected hundreds of millions of sources over the entire sky. However, classifying them reliably is a great challenge due to degeneracies in WISE multicolor space and low detection levels in its two longest-wavelength bandpasses. In this paper, a deep learning classification network, IICnet (Infrared Image Classification network), is designed to classify sources from WISE images more accurately. IICnet shows good ability in feature extraction from WISE sources. Experiments demonstrate that the classification results of IICnet are superior to those of several other methods: it obtains 96.2% accuracy for galaxies, 97.9% accuracy for quasars, and 96.4% accuracy for stars, and the area under the curve (AUC) of the IICnet classifier exceeds 99%. In addition, comparisons with VGG16, GoogleNet, ResNet34, MobileNet, EfficientNetV2, and RepVGG demonstrate the superiority of IICnet in processing infrared images, with fewer parameters and faster inference. The above proves that IICnet is an effective method to classify infrared sources.

Key words: methods: data analysis – techniques: image processing – infrared: general

1. Introduction

Infrared astronomical observation is one of the most important branches of observational astronomy today. It mainly focuses on studying various types of celestial sources in the universe through observations in the infrared band (Glass 1999), and objects that are too dim in the visible band can also be detected in the infrared.

The Earth is surrounded by a thick layer of atmosphere that contains many substances, such as water vapor, carbon dioxide, oxygen, and ozone. These substances strongly scatter and absorb celestial radiation from outer space at infrared wavelengths (Liou 2002), which limits ground-based infrared astronomical observations. Early airborne facilities, such as the Kuiper Airborne Observatory (Erickson et al. 1985) and the Stratospheric Observatory for Infrared Astronomy (Erickson 1992), developed into infrared space telescopes, such as the Infrared Astronomical Satellite (Duxbury & Soifer 1980), the Infrared Space Observatory (Kessler et al. 1996), and the Wide-field Infrared Survey Explorer (WISE) (Wright et al. 2010).

Classification is an essential means for humans to acquire knowledge, and the problem of classifying celestial targets has been studied for a long time (Lintott et al. 2008). The classification scheme of galaxies, quasars, and stars is one of the most fundamental classification tasks in astronomy (Kim & Brunner 2016; Ethiraj & Bolla 2022). The classification of celestial objects usually includes spectral classification and morphological image classification.

Spectral classification is very popular and there are many reported works. The classification of stars, galaxies, and quasars by spectroscopy has been studied widely, but it generally requires a large workload, since the observed spectra must be compared with templates. Later, a random forest method was also used for the same task, but the classification accuracy for quasars was only 94% (Bai et al. 2018).

Morphological classification is also common. A self-supervised learning method was used to classify the three classes based on photometric images, and the accuracy could only reach 88% (Martinazzo et al. 2021). Some researchers have classified sources into stars, galaxies, and quasars with high accuracy based on Sloan Digital Sky Survey (SDSS) photometric images using deep learning methods, which is instructive for our work (He et al. 2021).

A support vector machine (SVM) (Steinwart & Christmann 2008) method was used to classify the three classes based on WISE and SDSS with information from the W1 band (Kurcz et al. 2016). Classification of galaxy morphology based on WISE infrared images has been investigated previously (Guo et al. 2022), and we take the classification of infrared images a step further.

In this paper, the data used and their pre-processing details are introduced in Section 2; the Infrared Image Classification Network (IICnet) and its modules are described in Section 3; the classification results are presented and some comparison experiments are performed in Section 4; the experimental results are analyzed in Section 5; and a summary is given in Section 6.

2. Data

The data set is constructed from selected infrared image data from WISE (https://irsa.ipac.caltech.edu/applications/wise/).

2.1. Data Preparation

WISE has four bands, W1, W2, W3, and W4, at wavelengths of 3.4 μm, 4.6 μm, 12 μm, and 22 μm, respectively (Wright et al. 2010). The WISE all-sky images and source catalog, released in 2012 March, contain over 563 million objects and provide a massive amount of information on the mid-infrared (MIR) properties of many different types of celestial objects and their related phenomena (Wright et al. 2010; Tu & Wang 2013). By 2013, WISE had detected over 747 million objects with SNR > 5, publicly released in the AllWISE source catalog (Cutri et al. 2013).

When acquiring raw data from WISE, if the image size is set to 600″ (the default value), there are too many sources in the image, as shown in Figure 1(a). To find the specific source corresponding to a given R.A. and decl., the image size is set to 50″, as shown in Figure 1(b). The data corresponding to each R.A. and decl. in this paper were obtained from the NASA/IPAC Infrared Science Archive (IRSA; https://irsa.ipac.caltech.edu/frontpage/). The W1, W2, W3, and W4 band information of the corresponding sources is obtained from WISE after cross-matching SDSS (http://skyserver.sdss.org/CasJobs/SubmitJob.aspx) with WISE, forming the experimental database of this project.

Figure 1. Images corresponding to different angular sizes. We chose 50″ for processing, while the WISE website defaults to 600″.

2.2. Image Pre-processing

WISE image classification can be adversely affected by excessive dust around the sources. The W4 band contains more dust and has a lower signal-to-noise ratio (SNR) than the other three bands, as shown in Figure 2. Since W4 exhibits a significantly lower SNR than the other three bands, in this paper the W1, W2, and W3 bands are used as the three channels of an RGB image to synthesize the infrared image, as shown in Figure 3.
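A minimal sketch of this band-stacking step is given below. It is not the authors' exact pipeline: the file names, the band-to-channel mapping (W3 to red, W2 to green, W1 to blue), and the percentile stretch are illustrative assumptions.

```python
# Stack W1/W2/W3 cutouts of one source into a 3-channel RGB array.
# File names, channel mapping, and the percentile stretch are assumptions.
import numpy as np
from astropy.io import fits

def normalize(band, lo=1.0, hi=99.0):
    """Clip to percentiles and scale to [0, 1] so the bands are comparable."""
    vmin, vmax = np.percentile(band, [lo, hi])
    return np.clip((band - vmin) / (vmax - vmin + 1e-8), 0.0, 1.0)

# Hypothetical cutout files for a single source.
w1 = fits.getdata("source_w1.fits").astype(np.float32)
w2 = fits.getdata("source_w2.fits").astype(np.float32)
w3 = fits.getdata("source_w3.fits").astype(np.float32)

# Longest wavelength mapped to red: W3 -> R, W2 -> G, W1 -> B.
rgb = np.dstack([normalize(w3), normalize(w2), normalize(w1)])  # H x W x 3
```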

Figure 2. Statistical and probability distribution figures of SNR for the four bands. (a) Statistical figure of SNR. (b) Probability distribution of SNR.

Figure 3. A galaxy image in the W1, W2, W3, and W4 bands, and an RGB infrared image synthesized from W1, W2, and W3.

Furthermore, 7298 galaxy images, 7215 quasar images, and 7223 star images are finally chosen to form the data set. Their numbers are approximately equal to ensure data balance between classes, which satisfies the demands of deep learning algorithms. The data set is randomly divided into training, validation, and test sets with a ratio of 8:1:1, as shown in Table 1.
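The 8:1:1 random split can be sketched as follows. The directory layout (class-named folders), the 80×80 resize, and the fixed seed are assumptions made only for illustration.

```python
# Sketch of the 8:1:1 split, assuming the RGB images are stored in class-named
# folders (galaxy/quasar/star) under a hypothetical directory `wise_rgb_images`.
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize((80, 80)), transforms.ToTensor()])
full_set = datasets.ImageFolder("wise_rgb_images", transform=transform)

n_total = len(full_set)
n_train = int(0.8 * n_total)
n_val = int(0.1 * n_total)
n_test = n_total - n_train - n_val
train_set, val_set, test_set = random_split(
    full_set, [n_train, n_val, n_test],
    generator=torch.Generator().manual_seed(0))  # fixed seed for reproducibility
```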

Table 1. Data Set Division for the Three Types of Celestial Bodies

One of the difficulties of the classification is that some infrared images of galaxies, quasars, and stars look highly similar. As shown in Figure 4, they all have a bright source at the image center and lack obvious image features that would allow the human eye to distinguish them clearly. This paper introduces the IICnet method to perform the classification automatically. The basis of this method is that convolutional neural networks can extract image features that human eyes cannot distinguish (Egmont-Petersen et al. 2002).

Figure 4. Sample images for each type. The three types of objects have confusing features. (a) A galaxy. (b) A star. (c) A quasar.

Figure 5. 3D waterfall of galaxies, stars and quasars. On the left are RGB histograms of sample images of a star, a quasar, and a galaxy, respectively; on the right is a 3D waterfall combining them.

When the RGB histogram is used to distinguish the three images in Figure 4, the results are shown in Figure 5. The three histograms are similar to each other, so simple image features such as histograms cannot distinguish the three types; a deep learning method is therefore designed for the classification.
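For reference, a per-channel histogram of the kind compared in Figure 5 can be computed as in the short sketch below; the file name is an illustrative assumption.

```python
# Per-channel (R, G, B) histograms of one synthesized image, as used for the
# comparison in Figure 5. The file name is a hypothetical example.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("galaxy_example.png").convert("RGB"))
hists = {c: np.histogram(img[..., i], bins=256, range=(0, 255))[0]
         for i, c in enumerate("RGB")}
```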

In the low-redshift universe, stars and galaxies exhibit very similar W1−W2 colors (Kurcz et al. 2016). If the color–color diagram composed of W1, W2, and W3 is used to analyze the distributions of stars, galaxies, and quasars (Wright et al. 2010) (Figure 6), large overlap regions are found among the three types, and the overlap between stars and galaxies is especially obvious. This illustrates that it is difficult to accomplish the infrared image classification task by conventional means.

Figure 6. Color–color diagram showing the locations of the three types. There are large areas of overlap between the three types of objects.

3. Methods

In this paper, a new deep learning algorithm, IICnet, is designed to accomplish the task of infrared image classification. Experiments are conducted with the PyTorch framework and the Python programming language. An NVIDIA TESLA V100 GPU (5120 CUDA cores and 32 GB of video memory) is used for training.

3.1. Infrared Image Classification Network: IICnet

The structure of IICnet is shown in Figure 7. The network includes five convolutional layers, three down-sampling (pooling) layers, one feature extraction module (Receptive Field Block, RFB) (Liu et al. 2018), and two convolutional block attention modules (CBAM) (Woo et al. 2018), one at the beginning and one at the end.

In IICnet, the first block uses a large 5×5 convolutional kernel; several researchers have demonstrated that large convolutional kernels are more capable of extracting semantic information (Peng et al. 2017). It extracts information from a more extensive neighborhood of the image to ensure its relative integrity once convolution starts. The subsequent BN layer and ReLU suppress gradient explosion and help extract deeper semantic information. The experiments demonstrate that the 5×5 convolutional kernel outperforms the 3×3 kernel for this task. As shown in Figure 8, the validation accuracy of the network using the 5×5 convolutional kernel is significantly higher than that of the 3×3 kernel.
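A sketch of this kind of 5×5 Conv–BN–ReLU stem is shown below; the channel counts (3 to 64) are assumptions, since the paper does not list them.

```python
# Sketch of a 5x5 Conv-BN-ReLU stem of the kind described above.
# The channel counts (3 -> 64) are illustrative assumptions.
import torch.nn as nn

stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=5, stride=1, padding=2),  # 5x5 kernel, spatial size preserved
    nn.BatchNorm2d(64),    # BN layer to stabilize training
    nn.ReLU(inplace=True)  # ReLU non-linearity
)
```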

Figure 8. Validation accuracy of different convolution kernels. The accuracy of the 5×5 convolution kernel is significantly higher than that of the 3×3 kernel.

After the first convolutional layer, the raw feature map is generated and then fed to the Receptive Field Block (RFB) (the first module) for further processing. As shown in Figure 9, RFB is a feature extraction module that can enhance the feature extraction capability of the network by simulating the receptive field of human vision. The first half of the module is similar to GoogleNet in that it simulates group receptive fields of various sizes and adds dilated convolutions to enlarge the receptive field effectively. The latter half reproduces the relationship between the size and eccentricity of the population receptive field (pRF) (Wandell & Winawer 2015) in the human visual system, increasing the distinguishability and robustness of the features.
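The multi-branch, dilated-convolution idea can be illustrated with the simplified sketch below. It is not the exact RFB of Liu et al. (2018) or of IICnet: channel numbers, branch kernel sizes, and dilation rates are assumptions chosen only to show the pattern (parallel branches with growing receptive fields, concatenation, 1×1 fusion, and a shortcut).

```python
# Simplified RFB-style block: branches with different kernel sizes feed dilated
# 3x3 convolutions, outputs are concatenated, fused by 1x1 conv, and added to a
# shortcut. All sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class SimpleRFB(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        mid = out_ch // 4
        self.branch1 = nn.Sequential(  # small receptive field, dilation 1
            nn.Conv2d(in_ch, mid, 1),
            nn.Conv2d(mid, mid, 3, padding=1, dilation=1))
        self.branch2 = nn.Sequential(  # medium receptive field, dilation 3
            nn.Conv2d(in_ch, mid, 1),
            nn.Conv2d(mid, mid, 3, padding=1),
            nn.Conv2d(mid, mid, 3, padding=3, dilation=3))
        self.branch3 = nn.Sequential(  # large receptive field, dilation 5
            nn.Conv2d(in_ch, mid, 1),
            nn.Conv2d(mid, mid, 5, padding=2),
            nn.Conv2d(mid, mid, 3, padding=5, dilation=5))
        self.fuse = nn.Conv2d(3 * mid, out_ch, 1)    # 1x1 fusion of all branches
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1)  # residual path
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = torch.cat([self.branch1(x), self.branch2(x), self.branch3(x)], dim=1)
        return self.relu(self.fuse(out) + self.shortcut(x))
```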

Figure 9. The architecture of RFB.

An attention module, the Convolutional Block Attention Module (CBAM) (the second module), is connected after the RFB and at the last layer of the network. CBAM not only indicates the direction of attention but also improves the representation of regions of interest. IICnet aims to improve feature representation by focusing on essential features and suppressing unnecessary ones. CBAM combines channel and spatial attention modules, as shown in Figure 10. The Channel Attention Module (CAM) is shown in Figure 10(a). After the feature map is input, one-dimensional channel descriptors are first obtained through global MaxPool and global AvgPool; the two descriptors are passed through a shared Multi-Layer Perceptron (MLP) and added element-wise. Finally, the channel attention vector is obtained through sigmoid activation. Through the above process, the CAM can focus on the meaningful information in the image. The Spatial Attention Module (SAM), shown in Figure 10(b), is complementary to channel attention, as it focuses on the target's location information. SAM first applies MaxPool and AvgPool along the channel axis to the channel-refined features from CAM, concatenates them to generate a feature descriptor, and finally applies a sigmoid activation to obtain the spatial attention map. The joint use of the two modules achieves better results. The equations for CAM and SAM are expressed as follows:

Figure 10. Diagram of each attention sub-module. CAM uses average and maximum pooling simultaneously. SAM concatenates two feature maps into one.

M_c(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)))

M_s(F) = σ(f^{7×7}([AvgPool(F); MaxPool(F)]))

where σ denotes the sigmoid function and f^{7×7} represents a convolution operation with a filter size of 7×7.
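A compact PyTorch sketch of these two attention operations, following the description above (after Woo et al. 2018), is given below; the channel-reduction ratio r = 16 is an assumption.

```python
# Compact sketch of CBAM's channel and spatial attention (after Woo et al. 2018).
# The reduction ratio r = 16 is an illustrative assumption.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.mlp = nn.Sequential(                        # shared MLP
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels))

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))               # global AvgPool branch
        mx = self.mlp(x.amax(dim=(2, 3)))                # global MaxPool branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)     # channel attention M_c
        return x * w

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                # channel-wise AvgPool
        mx = x.amax(dim=1, keepdim=True)                 # channel-wise MaxPool
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # M_s via 7x7 conv
        return x * w

class CBAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.cam = ChannelAttention(channels)
        self.sam = SpatialAttention()

    def forward(self, x):
        return self.sam(self.cam(x))                     # channel attention, then spatial
```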

A softmax function is used at the end of the network to calculate the probability distribution over the classes (Liu et al. 2016), which ultimately classifies the targets into stars, galaxies, and quasars.

IICnet plays an essential role in improving the classification accuracy by performing feature extraction through each convolutional layer and down-sampling layer. The RFB and CBAM modules improve attention to the key positions of the image, and the performance is significantly improved. Adam (Kingma & Ba 2014) is an optimizer that uses hyperparameter computation efficiently, usually requires no tuning, and is simple to implement; it is used during training. Training is run for 200 epochs; the initial learning rate is set to 10^-4 and, after 50 epochs, it is halved to 5×10^-5 to ensure reasonable convergence.
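The training schedule described above can be sketched as follows; `model` and `train_loader` are placeholders for the network and the training data loader, and the loss choice (cross-entropy) is an assumption consistent with the softmax output.

```python
# Sketch of the training configuration: Adam with lr = 1e-4, halved to 5e-5
# after 50 epochs, for 200 epochs in total. `model` and `train_loader` are
# placeholders; cross-entropy loss is assumed for the three-class softmax output.
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# MultiStepLR halves the learning rate once epoch 50 is reached.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50], gamma=0.5)

for epoch in range(200):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # update the learning rate at the end of each epoch
```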

3.2. Feature Visualization of Network Layers

For our data set, the central source of the image is the most important part to focus on. Different information surrounds different sources: red predominates around stars, black and red around galaxies, and more complex colors appear around quasars, with some blue and green mixed in. The regions of interest generated by IICnet can be observed by visualizing the features of the middle layers of the network, as shown in Figure 11. As the feature maps pass through the first convolutional layer, the RFB, and the CBAM, the regions of interest become more and more concentrated, which demonstrates the importance of the feature extraction capability of RFB and the attention mechanism of CBAM for classification.

Figure 11. Middle-layer visualization of the IICnet model. After the image passes through RFB and CBAM, the middle layers focus on the central source.

4. Results

4.1. Influence of Image Size

In a convolutional neural network (CNN), the input image size is an essential factor affecting the network's performance (Touvron et al. 2019). To obtain the optimal input size, this paper tests the accuracy for sizes from 64×64 to 128×128 in steps of 8, using 64×64 as the starting size. The relationship between input size and accuracy is shown in Figure 12. The highest accuracy is achieved at 80×80, and the accuracy gradually decreases as the image size increases further, so 80×80 is the most suitable size for IICnet.
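The size scan amounts to the simple loop sketched below; `train_and_validate` is a placeholder for one full training run that returns the best validation accuracy.

```python
# Sketch of the input-size scan: sizes from 64x64 to 128x128 in steps of 8.
# `train_and_validate` is a placeholder for a full training/validation run.
results = {}
for size in range(64, 129, 8):               # 64, 72, ..., 128
    results[size] = train_and_validate(input_size=(size, size))

best_size = max(results, key=results.get)    # 80x80 in our experiments
```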

Figure 12. The relationship between input image size and IICnet accuracy. The accuracy reaches a maximum value of 0.9521 when the input image size is 80×80 pixels.

4.2. Influence of Epoch

In this paper, the pre-processed infrared images of galaxies, stars, and quasars are input into IICnet, and the accuracy and loss obtained through the experiments are shown in Figure 13. In this experiment, accuracy and loss are analyzed over 200 epochs. The accuracy increases with the number of epochs and then levels off, while the loss decreases and then levels off. The accuracy on the validation set can reach 95% or more, which proves that IICnet can obtain good results on infrared image classification.

Figure 13. (a) The curve of IICnet's loss on the training and validation sets with epoch. (b) The curve of IICnet's accuracy on the training and validation sets with epoch.

4.3. Evaluation Indices

For the classification task, the following statistical metrics are used in this paper: precision, recall (Harrington 2012), specificity, F1-score (Chinchor & Sundheim 1993), and accuracy; the specific values are shown in Table 2. Precision is the number of correctly classified positive samples as a proportion of all samples predicted to be positive, and recall is the number of correctly classified positive samples as a proportion of all actual positive samples. The higher these two metrics are, the better, but they are a pair of contradictory metrics, so we use the F1-score, the harmonic mean of precision and recall, to evaluate the classification results:

F1 = 2 × precision × recall / (precision + recall)
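These metrics can be computed with scikit-learn as sketched below; `y_true` and `y_pred` are placeholder arrays of true and predicted class labels for the test set.

```python
# Sketch of the evaluation metrics. `y_true` and `y_pred` are placeholders.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred, average=None)  # one value per class
recall = recall_score(y_true, y_pred, average=None)
f1 = f1_score(y_true, y_pred, average=None)                # 2PR / (P + R)

# Per-class specificity from the confusion matrix: TN / (TN + FP).
cm = confusion_matrix(y_true, y_pred)
tp = np.diag(cm)
fp = cm.sum(axis=0) - tp
fn = cm.sum(axis=1) - tp
tn = cm.sum() - (tp + fp + fn)
specificity = tn / (tn + fp)
```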

Table 2. The Classification Indices of IICnet, Including Precision, Recall, Specificity, F1-score, and Accuracy

Specificity measures the classifier's ability to recognize negative examples, while sensitivity measures its ability to recognize positive ones and is calculated in the same way as recall. The Receiver Operating Characteristic (ROC) curve (Chawla et al. 2002) can also demonstrate the superiority of the classifier in this paper, as shown in Figure 14. The ROC curves of galaxies, quasars, and stars all rise rapidly toward 1, effectively demonstrating that the algorithm in this paper gives good classification results for all types of objects.
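The one-vs-rest ROC curves and AUC values of Figure 14 can be obtained as sketched below; `y_true` (integer labels) and `y_score` (softmax probabilities of shape [n_samples, 3]) are placeholders, and the label order is assumed.

```python
# Sketch of one-vs-rest ROC curves and AUC values (as in Figure 14).
# `y_true` and `y_score` are placeholders; the class ordering is assumed.
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

classes = [0, 1, 2]                                   # galaxy, quasar, star (assumed order)
y_bin = label_binarize(y_true, classes=classes)
for i, name in enumerate(["galaxy", "quasar", "star"]):
    fpr, tpr, _ = roc_curve(y_bin[:, i], y_score[:, i])
    print(name, "AUC =", auc(fpr, tpr))
```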

Figure 14. ROC curves for galaxies, quasars and stars.

4.4. Comparative Experiments

This section compares IICnet with some classic and novel classification networks, including VGG16 (2014) (Simonyan & Zisserman 2014), GoogleNet (2015) (Szegedy et al. 2015), ResNet34 (2016) (He et al. 2016), MobileNet (2017) (Howard et al. 2017), EfficientNetV2 (2021) (Tan & Le 2021), and RepVGG (2021) (Ding et al. 2021); EfficientNetV2 and RepVGG are the latest CNN-based networks we could find. The accuracy curves on the validation set for each network are shown in Figure 15(a). In addition to the comparison experiments with the seven models, experiments are also carried out on different data sets (infrared images, spectra, color–color, and "infrared images + color–color").

Figure 15. (a) Comparison of the validation accuracy of IICnet and other image classification networks. (b) Comparison of the validation accuracy using 3-channel (W1, W2, W3) and 4-channel (W1, W2, W3, W4) images.

The results of IICnet are better than those of the other mainstream classification networks. As shown in Figure 15(a), only IICnet achieves more than 95% accuracy. Besides this, it maintains a small computational and parameter budget while improving accuracy, as shown in Table 3. IICnet reduces the amount of computation by more than half and the number of parameters by 1.47 M compared with MobileNet, the least computationally intensive network in Table 3.

Table 3. Comparison of FLOPs and Parameters of the Seven Networks

As mentioned in Section 2.2, only the W1, W2, and W3 bands are used to synthesize the images, due to the lower SNR of the W4 band. Experiments with 3-channel and 4-channel images are conducted, which show that the former is slightly better than the latter, as shown in Figure 15(b).

Color–color classification and "infrared image + color–color" classification are based on a revised IICnet, as shown in Figure 16, where the upper part covered with blue shading is the color–color classification network, and the combination of the upper and lower parts forms the "infrared image + color–color" classification network.

Figure 16. The "infrared image + color–color" classification network. The upper part, covered by the blue shading, is the color–color classification network.

The accuracy curves of the validation sets obtained by IICnet and the revised IICnet are shown in Figure 17. Spectral classification has the highest accuracy, but spectra are difficult to obtain. The image classification accuracy can exceed 95%, so image classification will be a more common approach. The color–color classification results are the worst, which corresponds to the results shown in Figure 6. The results of "infrared image + color–color" classification are about 1% higher than the infrared image classification results. The reason is that some color information is lost when extracting features from infrared images, which can be alleviated by adding the magnitude information. The fused features will be investigated further in subsequent work.

Figure 17. The accuracy curves of the validation sets corresponding to different data sets (infrared images, spectra, color–color, and "infrared images + color–color").

4.5. Confusion Matrix

The confusion matrix can be used to demonstrate the classification effect. The confusion matrix drawn for the test set in this paper is shown in Figure 18. The number of misclassified samples is tiny, with the vast majority concentrated on the diagonal.

Figure 18. Confusion matrix of IICnet. Each column of the confusion matrix represents the number of true labels for each class, and each row represents the number of predicted labels for each class.

The histograms in Section 2.2 (Figure 5) cannot distinguish the types to which the three images in Figure 4 belong, but inputting the three images into the IICnet model gives clear classification confidence, as shown in Table 4. All three images are classified correctly, with confidence levels close to 1.

Table 4. The Confidence of the Three Samples in Figure 4

5. Discussion

5.1. Analysis of Misclassified Samples

In Figure 18, there are 104 misclassified images, which are divided into four classes, namely Class 1 (37 images), Class 2 (13 images), Class 3 (45 images), and Class 4 (nine images). Some examples of misclassified images are shown in Figure 19, and the analysis is as follows.

Figure 19. A few misclassified images. Classes 1, 2, and 3 are the three types obtained by K-means. Class 4 involves images in which the source is obscured or absent entirely from the center.

K-means clustering is applied to the misclassified samples to obtain three classes of images: Class 1, Class 2, and Class 3. Visually, the images in Class 1 are darker, mainly showing confusion between galaxies and quasars; in Class 2, the colors are complex, so the misclassification is more complicated; and in Class 3, the colors are brighter, mainly showing confusion between galaxies and stars. Further distinguishing these images requires more effort in future work. Class 4 is a particular type found among the misclassified samples: there is no source at the image center, which is unfavorable for feature extraction in IICnet. IICnet is more concerned with central sources, as evidenced in Section 3.2, so how to handle such images will be considered in subsequent work.
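A sketch of this clustering step is shown below. Using flattened RGB pixel values as the clustering features is an assumption, since the paper does not state which features were clustered; `misclassified` is a placeholder array holding the 104 misclassified images.

```python
# Sketch of grouping the misclassified images into three clusters with K-means.
# The feature choice (flattened pixels) is an assumption; `misclassified` is a
# placeholder array of shape [n_images, H, W, 3].
import numpy as np
from sklearn.cluster import KMeans

features = misclassified.reshape(len(misclassified), -1).astype(np.float32)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
cluster_labels = kmeans.labels_   # 0, 1, 2 -> Class 1, Class 2, Class 3
```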

5.2. Analysis of Outlier Samples

In addition to the misclassified samples, some images are correctly classified but with low classification confidence; they are called outlier samples in this paper. These samples have features easily confused with other types, so it is necessary to analyze them.

When the test set is input into IICnet, the classification confidence for each image is obtained. The confidences are then filtered, the condition being images that are classified correctly but with a confidence below 0.6. A total of 14 images are selected, as shown in Figure 20, and combined with Figure 6 to facilitate viewing their distribution. According to the image characteristics, the analysis of these samples is presented in Figure 21 and is divided into six cases, each with its own characteristics. The classifier obtains a lower confidence level when distinguishing images whose features are ambiguous but still obtains correct classification results, which demonstrates the capability of IICnet.
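The selection rule (correct prediction, maximum softmax probability below 0.6) can be sketched as follows; `logits` and `labels` are placeholder tensors for the whole test set.

```python
# Sketch of selecting "outlier samples": correctly classified test images whose
# maximum softmax probability is below 0.6. `logits` and `labels` are placeholders.
import torch

probs = torch.softmax(logits, dim=1)
confidence, predicted = probs.max(dim=1)
outlier_mask = (predicted == labels) & (confidence < 0.6)
outlier_indices = outlier_mask.nonzero(as_tuple=True)[0]  # 14 images in our test set
```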

Figure 20. Samples with correct classification but low confidence. The blue triangles represent the outlier samples.

Figure 21. Analysis of outlier samples. The outlier samples are inspected and labeled manually; the characteristics of each image are described based on color distribution and texture and summarized for each case.

6. Conclusions

The task of infrared image classification of galaxies, quasars, and stars has rarely been reported in the literature. For many images it is extremely difficult owing to the complexity of the images and the similarities between different types. This paper uses the W1, W2, and W3 bands from WISE to synthesize RGB images and specifically designs IICnet to classify infrared images into galaxies, quasars, and stars. IICnet integrates RFB and CBAM (Section 3.1), which improve feature extraction for the sources and enable higher classification accuracy. In the experiments, comparing IICnet with VGG16, GoogleNet, ResNet34, MobileNet, EfficientNetV2, and RepVGG shows that IICnet outperforms all the other methods for the classification of infrared images.

For the analysis of misclassified samples, K-means clustering is used and four cases are discussed. Cases 1, 2, and 3 are misclassified because the images' features are highly similar. Case 4 is misclassified because the source is off-center and its features cannot be extracted efficiently.

Outlier samples, i.e., correctly classified images with low confidence, are also analyzed. Outliers lie at the borders between types. Because their confidence is low, it is somewhat fortunate that they are classified correctly by the current method, IICnet. In the future, an SVM mechanism may be considered, because the outliers here behave like support vectors.

In summary, experiments have proven that IICnet is very effective in classifying infrared images, and it may provide a new tool for astronomers. It can be further enhanced by a better feature extraction block, a new post-processing block such as an SVM, etc.

    Acknowledgments

This work is supported by the Natural Science Foundation of Tianjin (22JCYBJC00410) and the Joint Research Fund in Astronomy, National Natural Science Foundation of China (U1931134). We are grateful to the Sloan Digital Sky Survey (SDSS) and the Wide-field Infrared Survey Explorer (WISE) for providing open data.

    ORCID iDs
