
    Classifying Galaxy Morphologies with Few-shot Learning

    2022-05-24 06:33:46

    Zhirui Zhang, Zhiqiang Zou, Nan Li, and Yanli Chen

    1 College of Computer, Nanjing University of Posts and Telecommunications, Nanjing 210023, China

    2 Jiangsu Key Laboratory of Big Data Security and Intelligent Processing, Nanjing 210023, China

    3 Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China; nan.li@nao.cas.cn

    4 University of Chinese Academy of Sciences, Beijing 100049, China

    Abstract The taxonomy of galaxy morphology is critical in astrophysics, as morphological properties are powerful tracers of galaxy evolution. With the upcoming large-scale imaging surveys, billions of galaxy images challenge astronomers to accomplish the classification task with traditional methods or human inspection. Consequently, machine learning, in particular supervised deep learning, has recently been widely employed to classify galaxy morphologies due to its exceptional automation, efficiency, and accuracy. However, supervised deep learning requires extensive training sets, which causes a considerable workload; also, the results are strongly dependent on the characteristics of the training sets, which potentially leads to biased outcomes. In this study, we attempt few-shot learning to bypass these two issues. Our research adopts the data set from the Galaxy Zoo Challenge Project on Kaggle, and we divide it into five categories according to the corresponding truth table. By classifying the above data set with few-shot learning based on Siamese networks and with supervised deep learning based on AlexNet, VGG_16, and ResNet_50 trained on different volumes of training sets, we find that few-shot learning achieves the highest accuracy in most cases, and the most significant improvement is 21% compared to AlexNet when the training set contains 1000 images. In addition, to guarantee an accuracy of no less than 90%, few-shot learning needs ~6300 images for training, while ResNet_50 requires ~13,000 images. Considering the advantages stated above, few-shot learning is foreseeably suitable for the taxonomy of galaxy morphology and even for identifying rare astrophysical objects, despite limited training sets consisting of observational data only.

    Key words: Galaxies – Galaxy: morphological classification – Method: neural networks

    1.Introduction

    Galaxy morphology is considered a powerful tracer to infer the formation history and evolution of galaxies, and it is correlated with many physical properties of galaxies, such as stellar populations, mass distribution, and dynamics. Hubble invented a morphological classification scheme for galaxies (Hubble 1926) and pioneeringly revealed the correlation between galaxy evolutionary stages and their appearance in optical bands. The Hubble sequence principally includes early-type galaxies (ETGs) and late-type galaxies (LTGs); ETGs mostly contain older stellar populations and have few spiral structures, while LTGs hold younger stellar populations and usually present spiral-arm-like features. The above correlation has been studied widely and deeply in the past decades with increasing observational data of galaxies. Predictably, relevant investigations will be significantly advanced with enormous data from the upcoming large-scale imaging surveys, such as the LSST (https://www.lsst.org), Euclid (https://www.euclid-ec.org/), and CSST (http://www.bao.ac.cn/csst/).

    Galaxy morphology classification started with visual assessment (de Vaucouleurs 1959, 1964; Sandage 1961; Fukugita et al. 2007; Nair & Abraham 2010; Baillard et al. 2011), which lasted for decades as the mainstream approach in the field. In the 21st century, the volume and complexity of astronomical imaging data have increased significantly with the capability of new observational instruments, such as the Sloan Digital Sky Survey (SDSS, https://www.sdss.org/) and the Hubble Space Telescope (HST, https://hubblesite.org/). To make the classification more efficient and accurate, astronomers developed non-parametric methods to extract morphological features of galaxies, such as the concentration-asymmetry-smoothness/clumpiness (CAS) system, the Gini coefficient, and the M20 parameter (Abraham et al. 2003; Conselice 2003; Lotz et al. 2004; Law et al. 2007). Sets of evidence demonstrate the success of utilizing these approaches to represent galaxy morphologies, outperforming traditional human inspection because they eliminate subjective biases. However, when encountering hundreds of millions or even billions of galaxy images from future surveys, the performance of the above CPU-based algorithms is inefficient. Hence, more effective techniques for galaxy morphology classification in an automated manner, e.g., machine learning, are necessary.

    Machine learning algorithms have been widely used to classify galaxy morphology in the past years, for instance, artificial neural networks (Naim et al. 1995), NN + locally weighted regression (De la Calleja & Fuentes 2004), random forests (Gauci et al. 2010), and linear discriminant analysis (LDA, Ferrari et al. 2015). Recently, deep learning has become more and more popular for classifying galaxy morphology (Lukic et al. 2019; Zhu et al. 2019; Cheng et al. 2020; Gupta et al. 2022), as its success has been proved adequately in industry, especially for pattern recognition, image description, and anomaly detection. Most cases for classifying galaxy morphologies are based on supervised deep learning due to its high efficiency and accuracy. Successful cases include generating the catalogs of galaxy morphologies for the SDSS, the Dark Energy Survey (DES, https://www.darkenergysurvey.org/), and the Hyper Suprime-Cam (HSC, https://www.naoj.org/Projects/HSC/) surveys (Dieleman et al. 2015; Flaugher et al. 2015; Aihara et al. 2018). However, the results of supervised deep learning are strongly dependent on the volume and characteristics of the training set. First, the need for a large volume of training data is determined by the complexity of convolutional neural networks, which typically comprise millions of trainable parameters. Hence, to make the training procedure converge correctly, one has to provide data points in an amount comparable to the number of parameters of the Convolutional Neural Network (CNN). Second, the best-trained CNN model reflects the properties of the feature space covered by the training set. Thus, if the training set (simulated or selected by astronomers) is considerably biased from the real universe, supervised deep learning may consequently give biased results. Unsupervised learning has been adopted to avoid these disadvantages, but the corresponding classification accuracy is ~10% worse than that of supervised manners (Cheng et al. 2020, 2021).

    In this study, we attempt few-shot learning (Wang et al. 2019) to classify galaxy morphologies by proposing a model named SC-Net, inspired by CNNs and the Siamese network model (Chopra et al. 2005). Concisely speaking, our method pairs images and compares the metric between the features of the input images, which expands the sample size of the training set compared to feeding images directly into a CNN. Furthermore, the region of feature space covered by the training set can be enlarged more effectively by involving rare objects and pairing them with other objects. Thus, in principle, the SC-Net model simultaneously addresses the two drawbacks of traditional supervised deep learning. To quantify the improvements, we designed an experiment adopting galaxy images from the Galaxy Zoo Data Challenge Project on Kaggle (https://www.kaggle.com/c/galaxy-zoo-the-galaxy-challenge), based on the Galaxy Zoo 2 Project (Willett et al. 2013), then compared the classification results to those with AlexNet (Krizhevsky et al. 2012), VGG_16 (Simonyan & Zisserman 2014), and ResNet_50 (He et al. 2015). The outcomes show that our method achieves the highest accuracy in most cases and requires the smallest training set to satisfy a given accuracy threshold (see Section 5 for more details). Therefore, the SC-Net model is foreseeably suitable for classifying galaxy morphology and even for identifying rare astrophysical objects in the upcoming gigantic astronomical data sets. The code and data set used in this study are publicly available online (https://github.com/JavaBirda/Galaxy-Morphologies-).

    The paper is organized as follows: Section 2 introduces the data sets and data enhancement. Deep learning models, including CNNs and the Siamese network, are described in Section 3. Section 4 presents the experimental process of this study. Results of this work are analyzed and summarized in Section 5. Finally, we draw discussion and conclusions in Section 6.

    2.Data Sets

    The SDSS captured around one million galaxy images. To classify the galaxy morphology, the Galaxy Zoo Project was launched (Lintott et al. 2008), a crowd-sourced astronomy project inviting people to assist in the morphological classification of large numbers of galaxies. The data set we adopted is one of the legacies of the Galaxy Zoo Project, and it is publicly available online for the Galaxy Zoo Data Challenge Project.

    The data set provides 28,793 galaxy morphology images with the middle filters available in SDSS (g, r, and i) and a truth table including 37 parameters describing the morphology of each galaxy. The 37 parameters range between 0 and 1 to represent the probability distribution of galaxy morphology in 11 tasks and 37 responses (Willett et al. 2013). Higher response values indicate that more people recognize the corresponding features in the images of given galaxies. The catalog is further debiased to match a more consistent question tree of galaxy morphology classification (Hart et al. 2016).

    To simplify the classification problem, we reorganize the 28,793 images into five categories: completely round smooth, in-between smooth, cigar-shaped smooth, edge-on, and spiral, according to the 37 parameters in the truth table. The filtering method refers to the threshold discrimination criteria in Zhu et al. (2019). For instance, when selecting the completely round smooth category, values are chosen as follows: f_smooth greater than 0.469 and f_completely,round greater than 0.50, as shown in Table 1.
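    The threshold selection above can be sketched as a simple filter over the truth-table rows. This is a minimal illustration with hypothetical dictionary keys (the actual truth table uses the task/response naming of Willett et al. 2013), showing only the completely-round-smooth branch quoted in the text:

```python
# Hypothetical truth-table row: keys are stand-ins for the actual
# Galaxy Zoo 2 response columns (Willett et al. 2013).
def select_category(row):
    """Apply the completely-round-smooth thresholds quoted in the text;
    the other four classes would be tested analogously with the
    thresholds of Zhu et al. (2019)."""
    if row["f_smooth"] > 0.469 and row["f_completely_round"] > 0.50:
        return "completely round smooth"
    return None  # remaining classes omitted in this sketch

sample = {"f_smooth": 0.62, "f_completely_round": 0.71}
print(select_category(sample))  # → completely round smooth
```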

    We then build six training sets with different numbers of images to test the dependence of the performance of the classification algorithms on the volume of the training sets; details of the training sets are shown in Table 2. In Section 4, we will train all the deep learning models with 28,793, 20,000, 15,000, 10,000, 5000, and 1000 images, respectively, and compare their performances thoroughly.

    Table 1 The Classification of 28,793 Samples

    Table 2 The Amount of Data in Each Category Under Different Data Sizes

    3.Methodology

    The few-shot learning proposed in this study is based on a model named SC-Net, including a CNN and a Siamese network. We use CNNs to extract features and then train the model according to the idea of the Siamese network for classifying galaxy morphologies. Explicitly, the CNNs section introduces the feature extraction process and several traditional CNNs (LeCun et al. 1998; Krizhevsky et al. 2012; Simonyan & Zisserman 2014; He et al. 2015) for classification; the Siamese network section describes the few-shot learning method and the structure of the Siamese network.

    3.1.Convolution Neural Networks

    A CNN is a feed-forward neural network that includes convolutional computation and a deep structure, and it is one of the representative algorithms of deep learning. A CNN is essentially an input-to-output mapping that learns the relationship between inputs and outputs without requiring any precise mathematical expression, which is why CNNs have been widely used in the field of computer vision in recent years.

    The schematic of image feature extraction with a CNN mainly consists of the following three layers: the convolution layer, the pooling layer, and the fully connected layer. The convolution layer, for feature extraction of the image, is built by a dot-multiplication operation between the image and the convolution kernel. Each pixel of the image and the weights of the convolutional kernel are computed through the convolution layer, and the calculation process is shown in Equation (1):

    a_{i,j} = f( Σ_m Σ_n w_{m,n} x_{i+m,j+n} + w_b ),  (1)

    where f is an activation function. We usually use the Rectified Linear Unit (ReLU) (Glorot et al. 2011) as the activation function, defined in Equation (2):

    f(x) = max(0, x).  (2)

    Here w_{m,n} means the weight, x_{i+m,j+n} means the input data of the current layer, w_b represents the bias, and a_{i,j} describes the output data of the current layer.

    ReLU turns a negative value into zero. The image size obtained by the convolution operation is related to several factors, such as the size of the convolution kernel, the convolution stride, the padding method, and the image size before convolution. The formula description of the convolution operation is shown in Equation (3):

    W_2 = (W_1 − F + 2P) / S + 1,  H_2 = (H_1 − F + 2P) / S + 1,  (3)

    where W_1 means the width of the input data, H_1 means the height of the input data, F represents the size of the convolution kernels, P describes the padding size, S means the stride, and W_2 and H_2 denote the values of W_1 and H_1 after being calculated. The pooling layer is applied to reduce the image size while retaining important information. Max-pooling retains the maximum value of the feature map as the resulting pixel value, while average-pooling retains the average value of the feature map as the resulting pixel value. The fully connected layer acts as a "classifier" for the entire CNN after the convolution, activation-function, pooling, and other deep layers. The classification results are identified by the fully connected layer.
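    Equation (3) can be checked with a few lines of Python. This is a small sketch (not from the paper) that computes the spatial output size of a convolution for the 64×64 galaxy cutouts used later in the text:

```python
# Output spatial size of a convolution: W2 = (W1 - F + 2P) / S + 1.
# Floor division handles the case where the stride does not divide evenly.
def conv_output_size(w1: int, f: int, p: int, s: int) -> int:
    return (w1 - f + 2 * p) // s + 1

# A 64x64 input through a 3x3 kernel with padding 1 keeps its size at
# stride 1, and is halved at stride 2.
print(conv_output_size(64, 3, 1, 1))  # → 64
print(conv_output_size(64, 3, 1, 2))  # → 32
```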

    In the past 20 yr, traditional CNN algorithms for image classification have made breakthroughs (LeCun et al. 1998). AlexNet, proposed for the ImageNet competition by Hinton's student Alex Krizhevsky (Krizhevsky et al. 2012), established the status of CNNs in computer vision. VGGNet (Simonyan & Zisserman 2014) was proposed by the Oxford Visual Geometry Group in 2014; it has good generalization ability and can easily be migrated to other image recognition projects. Kaiming He et al. proposed ResNet (He et al. 2015) in 2015, which alleviates the gradient problems caused by the depth of the model layers.

    Although the development of deep learning has made great achievements, deep learning models are strongly dependent on the size and quality of the data set. Traditional deep learning models cannot obtain good results when lacking plenty of samples. To solve this problem, some researchers introduced data augmentation methods and generated simulated samples, e.g., with GANs (Goodfellow et al. 2014), which alleviates the difficulty of insufficient samples to a certain extent. However, the result is not ideal because of the deviation between real-world data and simulated samples. Therefore, a new method is needed to solve this problem.

    3.2.Siamese Network

    To solve the problem of lacking enormous samples with high quality mentioned in Section 3.1, this study introduces few-shot learning (Wang et al. 2019). Few-shot learning is an application of meta-learning (Schweighofer & Doya 2003) in the field of supervised learning, which is mainly used to solve the problem of model training with a small number of classified samples. Few-shot learning is divided into three categories: model-based methods, optimization-based methods (Wang et al. 2019), and metric-based methods.

    The model-based methods aim to learn the parameters quickly over a small number of samples through the design of the model structure, and directly establish the mapping function between the input value and the predicted value, such as memory-augmented neural networks (Santoro et al. 2016) and meta networks (Munkhdalai & Yu 2017). The optimization-based methods consider that ordinary gradient descent is inappropriate under few-shot scenarios, so they optimize learning strategies to complete the task of small-sample classification, such as the LSTM-based meta-learner model (Ravi & Larochelle 2016). The metric-based methods measure the distance between samples in the batch set and samples in the support set by using the idea of the nearest neighbor, such as the Siamese network (Koch et al. 2015). Considering the universality and conciseness of metric distance, this study chooses the metric-based method.

    The Siamese network is a metric-based model in few-shot learning, first proposed in 2005 (Chopra et al. 2005) for face recognition. The basic idea of the Siamese network is to map the original images to a low-dimensional space to obtain feature vectors, and the distance between the feature vectors is then calculated via the Euclidean distance. In our study, the distance between feature vectors from the same galaxy morphology should be as small as possible, while the distance between feature vectors from different galaxy morphologies should be as large as possible. The framework of the Siamese network is shown in Figure 1 (Chopra et al. 2005).

    Figure 1. Siamese architecture. The left and right branches take different input data and calculate the similarity between them after feature extraction.

    In the Siamese network, the structures of the two networks on the left and right share the same weights (W). The input data, denoted (X_1, X_2, Y), are two galaxy morphology images and the label that measures the difference between them. The label Y is set to 0 when X_1 and X_2 belong to the same galaxy morphology, and it is set to 1 when X_1 and X_2 belong to different galaxy morphologies. The feature vectors G_W(X_1) and G_W(X_2) of the low-dimensional space are generated by mapping X_1 and X_2, and then their similarity is computed by Equation (5):

    E_W(X_1, X_2) = || G_W(X_1) − G_W(X_2) ||.  (5)

    The SC-Net makes E_W(X_1, X_2) as small as possible when Y = 0 and as large as possible when Y = 1. The contrastive loss (Hadsell et al. 2006) is selected in SC-Net as the loss function, which ensures that originally similar samples remain similar after dimensionality reduction and that originally dissimilar samples remain dissimilar. The formula of the contrastive loss function is shown in Equation (6):

    L(W) = (1 − Y) L_G(E_W) + Y L_I(E_W).  (6)

    When the input images belong to the same galaxy morphology, the final loss function depends only on L_G(E_W), and when the input images belong to different galaxy morphologies, the final loss function depends on L_I(E_W). L_G(E_W) and L_I(E_W) are defined in Equations (7) and (8):

    L_G(E_W) = (2/Q) (E_W)^2,  (7)

    L_I(E_W) = 2Q exp(−(2.77/Q) E_W),  (8)

    where the constant Q is set to the upper bound of E_W.
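    The loss above can be sketched in a few lines of NumPy. This is a minimal illustration of the contrastive loss of Equations (6)–(8) in the form given by Chopra et al. (2005); the value of Q is chosen arbitrarily here for demonstration, not taken from the paper:

```python
import numpy as np

Q = 5.0  # assumed upper bound of E_W (illustrative choice)

def contrastive_loss(e_w: float, y: int) -> float:
    """y = 0: similar pair, quadratic penalty on distance (Eq. 7);
    y = 1: dissimilar pair, exponential penalty on closeness (Eq. 8)."""
    l_g = (2.0 / Q) * e_w ** 2
    l_i = 2.0 * Q * np.exp(-(2.77 / Q) * e_w)
    return (1 - y) * l_g + y * l_i

# A similar pair at zero distance costs nothing; a dissimilar pair at
# zero distance costs the full 2Q.
print(contrastive_loss(0.0, 0))  # → 0.0
print(contrastive_loss(0.0, 1))  # → 10.0
```

    Note that increasing E_W raises the loss for similar pairs and lowers it for dissimilar pairs, which is exactly the behavior the text describes.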

    Thus far, we can train the SC-Net model according to the architecture and loss function described above, after which the classification results are obtained. The advantage of this method is that it fades the labels, giving the network good extensibility. Moreover, this approach increases the size of the data set through the pairing operation, so that the deep learning network achieves a better effect with a small amount of data. For the above reasons, we adopt the Siamese network and put forward the SC-Net model.

    4.Experiments

    The workflow of our SC-Net model is shown in Figure 2. The whole procedure includes four stages: the first stage is to preprocess the data with the method introduced in Section 4.1; the second stage is to generate the training set via re-sampling or sub-sampling the preprocessed data; the third stage is to train the model based on the networks described in Section 4.2; the last stage is to classify the images using the trained model. Section 4.3 describes the details of the implementation of these experiments.

    4.1.Data Pre-processing

    The experimental data sets consist of 28,793 images of 424×424×3 pixels in size. The training time is sensitive to the size of the images, and the features of galaxies are primarily concentrated at the centers of the original images. Therefore, we crop and scale the original images first, then arrange them into training sets. The workflow of the data preprocessing is shown in Figure 3, which is the same as shown by Zhu et al. (2019). We first crop the original images from 424×424 pixels to 170×170 pixels, considering the image centers as origins. Then, the images with 170×170 pixels are resized to 80×80 pixels. Finally, we repeat the first step to crop the images with 80×80 pixels to 64×64 pixels.
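    The crop–resize–crop pipeline above can be sketched with NumPy alone. The nearest-neighbor resize here is a stand-in for whatever interpolation the authors used (their environment lists OpenCV, where cv2.resize would be the natural choice); only the shapes are meant to match the text:

```python
import numpy as np

def center_crop(img: np.ndarray, size: int) -> np.ndarray:
    """Crop a square of the given size around the image center."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def nn_resize(img: np.ndarray, size: int) -> np.ndarray:
    """Nearest-neighbor resize via index sampling (illustrative only)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

img = np.zeros((424, 424, 3), dtype=np.uint8)  # stand-in galaxy image
out = center_crop(nn_resize(center_crop(img, 170), 80), 64)
print(out.shape)  # → (64, 64, 3)
```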

    As mentioned in Section 2, we have divided the 28,793 images into five categories according to the truth table with the approach used in Zhu et al. (2019) and organized six data sets to implement comparative experiments for quantifying the advantages of the SC-Net over traditional CNNs. The six data sets contain 1000, 5000, 10,000, 15,000, 20,000, and 28,793 images. The preprocessed data sets have the same organization but with images of 64×64×3 pixels, and examples from each category are shown in Figure 4.

    The data form that the SC-Net model takes is (X_1, X_2, Y), where X_1 and X_2 represent a pair of images, and Y is the label of the correlation between X_1 and X_2. For example, if X_1 is in the edge-on category and X_2 is selected from the same category, then Y is set to 0. To balance positives and negatives in the training sets, every time we create a positive data point, we create a negative data point by randomly selecting an image from the other categories.
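    The balanced pairing described above can be sketched as follows. This is a minimal illustration (not the authors' code): for every positive, same-category pair it emits one negative pair drawn from another category, and the per-category image lists are stand-ins:

```python
import random

def make_pairs(images_by_cat: dict, rng: random.Random):
    """Build (X1, X2, Y) tuples with equal numbers of positives (Y=0)
    and negatives (Y=1), as described in the text."""
    pairs = []
    cats = list(images_by_cat)
    for cat, imgs in images_by_cat.items():
        for i in range(len(imgs) - 1):
            pairs.append((imgs[i], imgs[i + 1], 0))  # positive pair
            other = rng.choice([c for c in cats if c != cat])
            pairs.append((imgs[i], rng.choice(images_by_cat[other]), 1))  # negative pair
    return pairs

data = {"edge-on": ["e1", "e2", "e3"], "spiral": ["s1", "s2"]}
pairs = make_pairs(data, random.Random(0))
n_pos = sum(1 for _, _, y in pairs if y == 0)
n_neg = sum(1 for _, _, y in pairs if y == 1)
print(n_pos == n_neg)  # → True
```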

    4.2.Deep Learning Models

    For comparison, we first build three approaches based on traditional CNNs: (1) AlexNet (Krizhevsky et al. 2012), (2) VGG_16 (Simonyan & Zisserman 2014), and (3) ResNet_50 (He et al. 2015). (1) AlexNet consists of five convolutional layers and three fully connected layers. The network structure is, successively, conv11-96, max-pool, conv5-256, max-pool, conv3-384, conv3-384, conv3-256, max-pool. (2) The network structure of VGG_16 is constructed by modularization. The first and second modules each comprise two convolutional layers and a max-pooling layer, and the last three modules are each composed of three convolutional layers and a max-pooling layer. The number of channels in the convolutional kernels increases from 64 to 512, and finally, three fully connected layers are added, with the numbers of neurons being 4096, 4096, and 1000 successively. There are 13 convolution layers and three fully connected layers in total. (3) ResNet_50 consists of a convolutional layer, 16 residual modules, and a fully connected layer. The residual modules comprise identity blocks and convolutional blocks, each composed of three convolutional layers and a shortcut; the difference is that the identity block keeps the input and output dimensions consistent. The input images of all CNN models are 64×64 pixels in three channels, and the outputs are vectors of 1×5. The remaining parameters, such as the network hierarchy and hyperparameters, follow the original papers.

    The architecture of SC-Net is shown in Figure 5, which consists of two parts. The first part extracts features with a CNN, and the second part calculates the similarity between the feature vectors obtained from the first part. The outputs of the SC-Net are the Euclidean distances between the feature vectors of the input images in the feature space, which are used in the classification stage of Figure 2.

    Figure 2. The workflow of the SC-Net model, including data preprocessing, sample generating, model training, and image classifying.

    Figure 3. Data processing for the image (ID 11,244) from 424×424 to 64×64.

    Figure 4. Image samples from the five categories.

    The feature extraction module consists of six convolutional layers and two fully connected layers; details are shown in Table 3. The convolutional layers all use 3×3 convolutional kernels. To avoid overfitting, we inserted batch-normalization layers following each convolution layer, which normalize the neuron inputs to a distribution with a mean of 0 and a variance of 1, rather than a wider arbitrary distribution. After every two convolution layers, max-pooling and dropout layers are added to reduce the input data size for the next block. Details of the max-pooling layers are given in Section 3.1, and the essence of the dropout layers is to randomly discard a certain number of neurons to improve the generalization ability of the model. The output of the fully connected layer is a feature vector in the form of 128×1, which is passed to the second part for the distance calculation.

    The Euclidean distance of two feature vectors is given by

    D_v = sqrt( Σ_i (x_i − y_i)^2 ),

    where x_i is the ith element of the first feature vector x, and y_i is the ith element of the second feature vector y. When D_v is less than 0.5, the two images are identified as sufficiently similar and classified as "from the same category." Otherwise, the images are classified as "from different categories." Overall, one needs to train about 9 million parameters in the entire SC-Net model, including the modules of feature extraction and classification.
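    The distance-threshold rule above amounts to a few lines of NumPy. This sketch uses random stand-ins for the 128-dimensional SC-Net embeddings; only the 0.5 threshold comes from the text:

```python
import numpy as np

def same_category(x: np.ndarray, y: np.ndarray, threshold: float = 0.5) -> bool:
    """Euclidean distance D_v between two feature vectors,
    compared against the 0.5 similarity threshold."""
    d_v = np.sqrt(np.sum((x - y) ** 2))
    return bool(d_v < threshold)

x = np.zeros(128)
print(same_category(x, x))        # → True  (distance 0)
print(same_category(x, x + 0.1))  # → False (distance ≈ 1.13 > 0.5)
```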

    4.3.Implementation Details

    The hardware system utilized in this study contains an Intel(R) Core(TM) i5-9300H CPU @ 2.40 GHz and an NVIDIA GeForce RTX 2060 with 6 GB of memory. The software environment comprises Python 3.7.3, Keras 2.3.1, NumPy 1.16.2, Matplotlib 3.0.3, and OpenCV 3.4.2.16. The total runtime is about 128 h for ten replicates of 30 experiments.

    In each epoch, the batch size is set to 32; the loss function is the contrastive loss introduced in Section 3.2; the optimizer adopted in the methods based on CNNs is Adam, while we use both Adam and RMSprop (rms) for the SC-Net; the initial learning rate is 0.01, which decreases by a factor of ten every ten iterations. Each group experiment was iterated 100 times, and the chosen model was selected according to the ACC and Loss curves. Figure 6 shows the ACC and Loss curves of the SC-Net model under the Adam optimizer with 20,000 samples in the data set; we choose a model between 40 and 50 epochs because, at that point, the distance between the validation loss and the training loss begins to grow, and the validation loss becomes stable, as described in Figure 6(a). Likewise, the chosen models based on deep CNNs are selected between 30 and 40 epochs, and the chosen model based on the SC-Net with the rms optimizer is selected between 50 and 60 epochs.
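    The learning-rate schedule above can be sketched as a standard step decay. Interpreting "decreases by a factor of ten every ten iterations" as a step function is an assumption on our part; the paper does not spell out the exact rule:

```python
# Step decay: lr = initial_lr * drop^(epoch // every).
# Parameter values follow the text (initial 0.01, factor 10, period 10).
def step_lr(epoch: int, initial_lr: float = 0.01,
            drop: float = 0.1, every: int = 10) -> float:
    return initial_lr * drop ** (epoch // every)

print(step_lr(0))             # → 0.01
print(round(step_lr(10), 6))  # → 0.001
```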

    Figure 5.Architecture of the SC-Net model.The meaning of each icon is explained at the bottom of the figure.

    Figure 6. The ACC (a) and Loss (b) curves of the SC-Net model under the Adam optimizer with 20,000 samples in the data set. In the 30 groups of experiments, the iteration number of each experiment was determined according to this figure. We chose the position between 40 and 50 epochs, where the validation loss gradually flattens as the iteration number increases.

    5.Results

    The experiment is performed under six data sets and five models, including three traditional CNNs (AlexNet, VGG_16, ResNet_50) and two SC-Net models, i.e., 30 experiments in total. Specifically, the sizes of the training sets are 1000, 5000, 10,000, 15,000, 20,000, and 28,793, respectively. The details of organizing the data sets with different sizes are introduced in Section 2. The five methods are AlexNet, VGG_16, ResNet_50, SC-Net rms, and SC-Net Adam. We adopt accuracy (ACC) as the metric for quantifying the classification performance, which is defined as

    ACC = (N_TP + N_TN) / (N_TP + N_TN + N_FP + N_FN),

    where N_TP stands for the number of true positives, N_TN stands for the number of true negatives, N_FP denotes the number of false positives, and N_FN denotes the number of false negatives.
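    The ACC definition above is straightforward to compute from the four counts; this small sketch uses illustrative numbers only:

```python
# ACC = (N_TP + N_TN) / (N_TP + N_TN + N_FP + N_FN)
def accuracy(n_tp: int, n_tn: int, n_fp: int, n_fn: int) -> float:
    return (n_tp + n_tn) / (n_tp + n_tn + n_fp + n_fn)

print(accuracy(45, 45, 5, 5))  # → 0.9
```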

    As shown in Table 4 and Figure 7, the SC-Net model achieves the highest accuracy with all experimental data sets. The most significant gap is 21% compared to AlexNet when the training set contains 1000 images. Considering the results displayed in Figure 8, when the training set size is 28,793, the ACC of SC-Net is 6% higher than that of AlexNet; one can conclude that less training data leads to a more significant advantage for the SC-Net model. This reveals the superiority of the SC-Net model compared to traditional CNNs: the SC-Net model takes paired images and labels, while the CNNs (Krizhevsky et al. 2012; Simonyan & Zisserman 2014; He et al. 2015) take images and labels directly. Taking paired images and labels enlarges the size of the data sets and magnifies the difference between images from different morphological categories. In addition, the ACC given by the SC-Net rms method trained with 10,000 images is as high as that given by ResNet_50 trained with 28,793 images. If one plans to acquire a classification ACC of no less than 90%, the SC-Net model needs ~6300 images for training, while ResNet_50 requires ~13,000 images. The reduction in the requirements on training sets potentially enables the SC-Net model to be used to detect rare objects (such as strong lenses).

    Table 3 SC-Net Structure in Feature Extraction Process

    Table 4 30 Groups of Experiments. Each Group of Experiments was Carried out 10 Times, and the Median and Standard Deviation were Taken as the Final Results

    We additionally explore the dependence of the classification performance of the SC-Net model on the characteristics of the data. Figure 8 presents the confusion matrix of the SC-Net model, which shows that the SC-Net model can achieve 97.85%, 97.34%, and 98.50% ACC in the three categories of completely round smooth, in-between smooth, and spiral, because these galaxies have well-identified features. However, the ACC decreases to 78.33% and 82.59% for cigar-shaped and edge-on galaxies because of their similarity given the point-spread functions (PSFs) of SDSS. As mentioned above, the SC-Net model takes paired images and labels to measure their similarity. Thus, its performance may be suppressed when the features of categories are alike. Expectedly, this issue will be less noteworthy when the image quality is improved. For instance, with data from space-borne telescopes, smeared substructures in galaxies, such as bulges, disks, and clumps, will be well resolved. Then, the SC-Net model can still separate the cigar-shaped and edge-on galaxies.

    Figure 7. Comparison of the ACC of the five methods. The vertical axis represents the classification performance, the horizontal axis represents the size of the data set, and the broken lines with different colors represent different methods.

    Moreover, we draw Figure 9 to analyze the correlation between the classification predictions and the distance in feature space between testing images and those of different categories in the training sets. The five panels, (a), (b), (c), (d), and (e), indicate that the images listed along the column are completely round smooth, in-between smooth, cigar-shaped, edge-on, and spiral, respectively. The values describe the similarity between the images in the columns and rows; the smaller the value is, the more similar the two images are. In each panel, the first row shows an example of correct classification, and the second row shows an example of incorrect classification. Blue boxes denote the ground truth, and fonts in red represent the predicted labels. When the classification is correct in the three categories with apparent features, the similarity between the testing and training images in the corresponding category is quite different from that between the testing images and the training images in other categories. For instance, the similarity score in Figure 9(a) is 0.06 in the case of correct classification, while the other feature-space distance scores are above 0.80. However, for the cigar-shaped and edge-on types, the differences are only 0.005 and 0.036, which can be calculated as 0.116−0.111 and 0.465−0.429; see panel (c). These outcomes further prove that similarity between the data points of different categories in the training set considerably influences the accuracy of galaxy morphology classification. Hence, it is critical to organize training sets sensibly to avoid such similarities as much as possible when one plans to adopt the SC-Net to solve their problems.

    Figure 8. Confusion matrix of the classification results. The ordinate is the real category of the data, and the abscissa is the category predicted by the model. The diagonal represents the percentage of correctly predicted data in each category; the remaining values represent the percentage of wrong predictions.

    Figure 9. Illustration of the feature-space distances between images measured by the SC-Net. Panels (a)–(e) indicate that the images listed along the column are completely round smooth, in-between smooth, cigar-shaped, edge-on, and spiral, respectively. The values describe the similarity between the images in the columns and rows; the smaller the value, the more similar the two images. In each image matrix, the first row shows an example of correct classification and the second row an example of incorrect classification. Blue boxes denote the ground-truth category, and red fonts mark the category predicted by the SC-Net model.

    6.Discussion and Conclusions

    Traditional supervised deep learning methods are currently the mainstream for the morphological classification of galaxies, but they require a considerable volume of training data. If simulations are needed to create sufficiently large training sets, model-dependence problems may be introduced. Thus, we introduce few-shot learning based on the SC-Net model to avoid these drawbacks. Our results show that few-shot learning reduces the required size of training sets and provides an efficient way to extend the coverage of the training sets in latent space, which helps avoid the model-dependence problem.

    To illustrate the improvements of our method, we conduct comparative experiments between few-shot learning and approaches based on traditional CNNs, namely AlexNet, VGG_16, and ResNet_50. The results show that few-shot learning achieves the highest accuracy in most cases, with the most significant improvement being 21% over AlexNet when the training sets contain 1000 images. In addition, to guarantee an accuracy of no less than 90%, few-shot learning needs ~6300 images for training, while ResNet_50 requires ~13,000. Requiring fewer training data minimizes the reliance on simulations when constructing training sets, which bypasses the model-dependence problem. Furthermore, suppose we design a recursive strategy that enlarges the training set for galaxy morphology classification starting from a small seed set. Few-shot learning can then begin with far fewer labeled data points than methods based on traditional CNNs, which remarkably decreases the workload of creating the primary training set, especially when images are labeled by human inspection.

    Notably, the performance of few-shot learning is sensitive to the similarity between images with different labels, though it still outperforms the CNN-based methods. For instance, the classification accuracies of completely round smooth, in-between smooth, and spiral galaxies are higher than those of cigar-shaped and edge-on galaxies. Specifically, the accuracy reaches 97.85%, 97.34%, and 98.50% for completely round smooth, in-between smooth, and spiral, respectively, but only 78.33% and 82.59% for cigar-shaped and edge-on. This is reasonable because the SC-Net adopts the Euclidean distances between images in latent space as the classification metric; higher similarity leads to shorter distances, which causes misclassification. The issue stems primarily from the limitation of Galaxy Zoo images observed by ground-based telescopes, which present few small-scale structures because of large PSFs. After all, the difference between cigar-shaped and edge-on galaxies is hard to identify even by human inspection. Expectedly, future high-quality images with detailed structures captured by space-borne telescopes can improve the classification performance significantly.
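    The per-class accuracies quoted above correspond to the diagonal of a row-normalized confusion matrix such as the one in Figure 8. A minimal sketch of that normalization is shown below; the counts are made-up placeholders, not the paper's actual results.

```python
import numpy as np

def row_normalize(cm):
    """Convert a confusion matrix of counts into per-class percentages.

    Row i holds the true class, so cm_norm[i, i] is the recall
    (per-class accuracy) plotted on the diagonal of a confusion matrix.
    """
    cm = np.asarray(cm, dtype=float)
    return 100.0 * cm / cm.sum(axis=1, keepdims=True)

# Hypothetical counts for two easily confused classes
counts = [[78, 22],   # true cigar-shaped
          [17, 83]]   # true edge-on
norm = row_normalize(counts)
# Diagonal entries: 78.0% and 83.0%; off-diagonals are the confusion rates.
```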

    In summary, this study demonstrates the feasibility of few-shot learning for galaxy morphology classification and its advantages over traditional CNNs. Next, we plan to apply the method to observations such as the DESI Legacy Imaging Surveys (https://www.legacysurvey.org/), the Dark Energy Survey, and the Kilo-Degree Survey (https://kids.strw.leidenuniv.nl/). Also, to further improve the performance of this approach, we will optimize its architecture and hyperparameters while implementing the above applications. Besides, considering the characteristics of the SC-Net, few-shot learning can also be utilized to identify rare objects, e.g., merging galaxies, ring galaxies, and strong lensing systems, which intensively draws our interest as well.

    Acknowledgments

    The data set used in this work is collected from the Galaxy-Zoo-Challenge-Project posted on the Kaggle platform. We acknowledge the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A01. Z.R.Z., Z.Q.Z., and Y.L.C. are thankful for the funding and technical support from the Jiangsu Key Laboratory of Big Data Security and Intelligent Processing. The authors are also highly grateful for the constructive suggestions given by Han Yang and Yang Wenyu for improving the manuscript.
