
Cluster Analysis for IR and NIR Spectroscopy: Current Practices to Future Perspectives

Computers, Materials & Continua, 2021, Issue 11

Simon Crase, Benjamin Hall and Suresh N. Thennadil

1 College of Engineering, IT & Environment, Charles Darwin University, Casuarina, NT 0810, Australia

2 Defence Science and Technology Group, Edinburgh, 5111, Australia

3 Energy and Resources Institute, Charles Darwin University, Casuarina, NT 0810, Australia

Abstract: Supervised machine learning techniques have become well established in the study of spectroscopy data. However, the unsupervised learning technique of cluster analysis has not reached the same level of maturity in chemometric analysis. This paper surveys recent studies which apply cluster analysis to NIR and IR spectroscopy data. In addition, we summarize the current practices in cluster analysis of spectroscopy and contrast these with cluster analysis literature from the machine learning and pattern recognition domain. This includes practices in data pre-processing, feature extraction, clustering distance metrics, clustering algorithms and validation techniques. Special consideration is given to the specific characteristics of IR and NIR spectroscopy data, which typically include high dimensionality and relatively low sample size. The findings highlighted a lack of quantitative analysis and evaluation in current practices for cluster analysis of IR and NIR spectroscopy data. With this in mind, we propose an analysis model or workflow with techniques specifically suited for cluster analysis of IR and NIR spectroscopy data, along with a pragmatic application strategy.

Keywords: Chemometrics; cluster analysis; Fourier transform infrared spectroscopy; machine learning; near infrared spectroscopy; unsupervised learning

    1 Introduction

In the study of IR and NIR spectroscopy in the field of chemometrics, there is a well-established range of multivariate analysis techniques based on machine learning that have proved well suited to chemical spectroscopy data [1]. These mature techniques include the supervised learning techniques of partial least squares discriminant analysis (PLS-DA) and linear discriminant analysis (LDA) for classification, and calibration using partial least squares regression (PLSR).

Cluster analysis is a technique that offers potential value in the analysis of spectroscopy but has not reached the same level of maturity in its application to this domain. Cluster analysis is an unsupervised machine learning technique aimed at generating knowledge from unlabelled data [2]. While cluster analysis is commonly used for data exploration, there are other circumstances where it is valuable. These include applications where the class structure of the data is known to vary with time, or where the cost of acquiring classified (labeled) samples is too great, making the large data sets required for some supervised learning techniques infeasible to obtain [1]. The latter is often the case for data from spectroscopic chemical analysis.

While cluster analysis is a well-established domain and widely used across diverse disciplines, it would be wrong to assume its application would be clear-cut and simply procedural. It is a highly subjective domain, with many potential techniques whose success will vary depending on the characteristics of the data and the purpose of the analysis. Clustering is very much a human construct; hence, mathematical definitions are challenging and even the definition of good clustering is subjective [3]. Fundamental challenges for cluster analysis include [4]:

    (1) What features should be used for clustering?

    (2) How is similarity defined and measured?

    (3) How many clusters are present?

    (4) Which clustering algorithms should be used?

    (5) Does the data actually have any clustering tendency?

    (6) Are the discovered clusters valid?

These challenges and the data-specific characteristics of clustering contribute to the reason why there is no universal "best" clustering algorithm [5]. However, those challenges do not mean that there cannot be suggestions for better practice in conducting cluster analysis on IR and NIR spectroscopy data.

In this paper, we quantitatively survey 50 papers where cluster analysis is applied to IR and NIR spectroscopy data to understand current practice in this form of analysis. In reviewing the current approaches in clustering IR and NIR spectroscopy data, consideration and commentary are given to highlight potential issues in current practice.

We also draw on more than 25 papers and texts we have cited from the machine learning associated domains to identify techniques that could contribute towards an improved future practice in cluster analysis of spectroscopy. Special consideration is given to two important characteristics of the spectroscopy data:

(1) High dimensionality: A large number of measurements are taken at intervals across a spectrum for each sample. From the data analysis perspective, these form the variables or features. Depending on the type of spectroscopy and the specifics of the instrumentation, the number of features is typically in the hundreds or thousands for each sample. In other cluster analysis literature [6], 50 dimensions is referred to as high-dimension data, yet spectroscopy data typically has significantly higher dimensionality than that. Hence, this high dimensionality needs special consideration in determining analytical approaches.

(2) Low sample size: Spectroscopy and the associated instrumentation are typically used in laboratory situations. Collecting and processing samples can be an expensive process from the perspective of cost, time, and expertise. Hence, the number of samples is often relatively small, particularly from a machine learning perspective. This precludes the use of some cutting-edge cluster analysis techniques such as clustering with deep neural networks (deep clustering).

These characteristics present unique challenges that focus and somewhat limit the techniques suitable for cluster analysis of spectroscopy data. Hence, this paper presents a novel perspective specific to the needs of cluster analysis in IR and NIR spectroscopy while drawing on strong practices from the machine learning community. This culminates in a proposed analysis model or workflow to assist practitioners in ensuring rigor and validity in their cluster analysis.

    2 Methodology

An exhaustive review was conducted to collate 50 papers published between 2002 and 2020 where a form of cluster analysis is applied to data from IR and NIR spectroscopy. 44 journal papers [7-50] and six peer-reviewed conference papers [51-56] were surveyed. 38 of the papers utilize FTIR spectroscopy (in the mid-IR or IR band) and 15 of the papers utilize NIR spectroscopy.

The papers surveyed cover a range of application domains including food and agriculture (30 papers) [7-9,12-14,16-18,20-22,24-26,28,34-37,39,41,44,45,47,48,54], biomedical (15 papers) [10,11,15,19,24,25,30,32,38,40,42,51-53,56], industrial (3 papers) [29,43,49], and forensics (3 papers) [23,31,55].

The purpose of the majority of the papers is to demonstrate that an analytical testing technique, such as FTIR spectroscopy, paired with cluster analysis can discriminate between different classes of materials. Examples of these classes include cancerous and non-cancerous cells, provenancing of biological species such as tea varieties, fungi or bacteria, and contaminated materials such as counterfeit drugs or adulterated olive oil. Many of the papers subsequently extended this capability beyond cluster analysis through the application of other techniques such as supervised learning to develop models for classification of future unlabelled samples. Thirteen of the papers included comparison of multiple clustering techniques [14,19,26,30-32,35,40,41,44,51,52,56], with three papers presenting and evaluating a new clustering algorithm [19,35,41]. One paper presented an overview of spectral pre- and post-processing techniques and includes cluster analysis [10].

Several of the papers include analysis of additional types of analytical techniques such as Raman spectroscopy or gas chromatography-mass spectrometry (GC-MS); however, we will only include the NIR and IR aspects of those papers in this review.

In reviewing the 50 surveyed papers to understand the current state of cluster analysis of IR and NIR spectroscopy data, we focus on the aspects of analysis that we consider important for successful cluster analysis. Firstly, the traditional chemometrics aspects of pre-processing, feature selection, and principal component analysis are reviewed. Then the cluster analysis aspects are reviewed. In this part, we include techniques covering each of these aspects which are found in the classic and contemporary pattern recognition and machine learning literature but may not have been considered in the chemometrics literature. These include evaluating the data's tendency to cluster, the similarity measure used for the clustering, the clustering algorithm itself, how the number of clusters was selected, and how the results were evaluated and quantified. For each aspect, justification for including the step in the analysis is presented, along with the potential pitfalls of omitting it. This is then compared to the analysis within the surveyed papers to understand current practice and to highlight potential shortcomings.

In reflecting on these findings, potential reasons for this current practice are discussed. Finally, a proposed analysis model or workflow is presented for clustering of NIR and IR spectroscopy data that aims to ensure rigor and validity for future practitioners conducting cluster analysis.

    3 Survey Results and Discussion

    3.1 Pre-Processing and Feature Selection

Initially, we review the early steps in the analysis process, where traditional chemometric techniques are applied before the cluster analysis. The aim of these traditional chemometric analysis stages is to improve the suitability of the data for clustering, hence improving the clustering outcomes. These include data pre-processing, feature selection, and principal component analysis. While the primary focus of this paper is on the cluster analysis, these traditional chemometric analysis components are crucial to the clustering outcomes and warrant investigation.

    3.1.1 Data Pre-Processing

Data pre-processing methods are used to remove or reduce unwanted signals, such as instrumental and experimental artefacts, from data prior to cluster analysis. If not performed in the right way, pre-processing can also introduce or emphasize unwanted variation. Hence, proper pre-processing is a critical first step that directly influences the follow-on analysis in the workflow [57].

In reviewing the surveyed papers (summarized in Tab. 1), normalization (scaling and centering), baseline correction, vector normalization, Savitzky-Golay smoothing (with or without derivatives), and various forms of multiplicative scatter correction were commonly applied. However, no one technique was applied in more than half the papers. There is no clear favored technique within the chemometrics community that is applicable to all datasets or applications.
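To make these pre-processing steps concrete, the following is a minimal Python sketch of two of the techniques named above: standard normal variate (SNV) style normalization and Savitzky-Golay smoothing with a derivative, using SciPy. The function names and parameter defaults are illustrative choices of ours, not taken from the surveyed papers.

```python
import numpy as np
from scipy.signal import savgol_filter


def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum (row)
    to zero mean and unit standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std


def smooth_derivative(spectra, window=11, polyorder=2, deriv=1):
    """Savitzky-Golay smoothing with an optional derivative,
    applied along the wavenumber axis of each spectrum."""
    return savgol_filter(np.asarray(spectra, dtype=float),
                         window_length=window, polyorder=polyorder,
                         deriv=deriv, axis=1)
```

Here rows are individual spectra and columns are wavenumbers; as argued above, the window length, polynomial order, and choice of correction would normally be evaluated and justified against the dataset at hand rather than fixed by convention.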

Table 1: Data pre-processing approach

Lee et al. [58], in their review of contemporary data pre-processing practice for ATR-FTIR spectra, highlight that careful, justified selection of pre-processing practices is often ignored, and hypothesize that users tend to follow conventional choices in the literature or practices they are familiar with, without supporting evidence. With this practice, researchers could potentially miss the most appropriate pre-processing methods for their specific data. If multiple pre-processing techniques are evaluated, this allows identification of the optimum technique for the specific characteristics of the researcher's dataset [58]. During our review of the surveyed cluster analysis papers, it was observed that it was rare to include the evaluation of multiple pre-processing techniques, a reason for the application of a particular technique, or even justification for the use of one technique over another. Despite the challenges and workload associated with evaluating pre-processing, this is clearly an area where practice could be improved.

It was also noted that 12 papers did not use any pre-processing, and some explicitly stated that they were choosing to use no pre-processing, without giving a justified reason. This is generally discouraged as it forgoes the opportunity to correct the data for variations in equipment and measurement technique that may adversely impact the success of the later cluster analysis.

    3.1.2 Feature Selection

Feature selection, also known as variable selection or variable reduction, refers to selection of the useful variables that convey the relevant information within the data, and removal of those that may include noise or non-valuable information. Within NIR and IR spectroscopy data, the wavenumbers (or wavelengths) are the variables (or features). Hence, feature selection works to remove wavenumbers containing irrelevant data or noise from the dataset. This reduces the dimensionality of the data and focuses the analysis on the information of value. In one of the surveyed papers, Gierlinger et al. [48] found that feature selection was essential in their application: they were unable to achieve class separation (discrimination according to species) when analyzing the full spectra.

The summary of feature selection approaches from the 50 surveyed papers is shown in Tab. 2. Of some concern was that 24 papers conducted their analysis on the full IR or NIR spectrum data with no feature selection applied to reduce the number of variables. Some papers did this as part of a comparison to clustering performance when implementing variable selection, but many papers only conducted analysis on the full spectra. While this simplifies the analysis process, it misses an opportunity to improve the data for the cluster analysis.

Table 2: Feature selection approach

15 of the papers selected windows in the spectra based on a priori knowledge. This was typically knowledge of where in the spectra the "fingerprint" wavenumbers lay that separate the spectra of the materials they were investigating.

Nine of the papers selected windows of the spectra through visual evaluation of labelled spectra, to see at which wavenumbers there was the maximum separation between the different samples' spectra.

Only six of the papers used quantitative techniques for feature selection. One used a novel method based on an iterative variable elimination algorithm and a clustering quality index to select variables that maximize clustering quality [31]. Similarly, two of the papers used genetic algorithms as a computational heuristic search method to identify the wavenumbers to select to maximize the quality of the data [7,10]. The remaining quantitative selection techniques used PCA [41], wavelets [43], and maximum variance [48] metrics. Ten of the papers compared multiple feature selections to evaluate and identify those which were the most beneficial for their cluster analysis [7,8,13,15,27,31,39,41,43,48].

With only six papers using quantitative methods, this highlights an opportunity to exploit techniques from the machine learning research domain. Within the machine learning community, feature selection is a significant domain of research. However, it is predominantly focused on supervised learning techniques which may not be applicable to unsupervised cluster analysis. Hence, care must be taken when choosing techniques to implement. The challenges of unsupervised feature selection are well explained by Dy et al. [59], with potential unsupervised feature selection techniques presented by Covões et al. [60], Boutsidis et al. [61], Dash et al. [62], Cai et al. [63], and Tang et al. [64]. Many of these techniques are included in the review by Alelyani et al. [65] of feature selection for clustering, and in a 2019 review and evaluation of unsupervised feature selection techniques by Solorio-Fernandez et al. [66]. Note, however, that these studies do not address the specific characteristics of spectroscopy data and either demonstrate techniques on relatively low-dimensionality datasets [59,66], or are focused on domains such as social media data where large sample sizes are common [64]. This means some of these techniques may not be suitable for the small sample sizes and high-dimensionality data that are typical in spectroscopy. Hence, spectroscopy-specific feature selection for cluster analysis is an area requiring further research.
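As one concrete example of a simple, label-free quantitative criterion in the spirit of the maximum-variance selection used in [48], the following sketch ranks wavenumbers by their variance across samples and keeps the top-ranked subset. The function name and the default cut-off of 200 features are our illustrative assumptions, not values from the surveyed papers.

```python
import numpy as np


def top_variance_features(spectra, n_features=200):
    """Return the (sorted) indices of the n_features wavenumbers with the
    highest variance across samples. Rows are samples, columns are
    wavenumbers; no class labels are required."""
    spectra = np.asarray(spectra, dtype=float)
    variances = np.var(spectra, axis=0)
    # argsort ascending; take the last n_features (highest variance)
    return np.sort(np.argsort(variances)[-n_features:])
```

In practice, the retained subset (or the cut-off itself) would be evaluated against a clustering quality index, as in the iterative elimination approach of [31], rather than fixed a priori.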

3.1.3 Principal Component Analysis (PCA) Usage

PCA is one of the classic dimension reduction techniques of chemometrics and was used in the majority of the surveyed papers. Its dimension-reducing capabilities can be used for multiple purposes. One particularly applicable to cluster analysis is to reduce the data to two or three principal components to enable visualization of the data points in two or three dimensions. This enables easy visualization of the clusters that form and visual validation of the clustering. As shown in Tab. 3, 21 of the papers surveyed utilized PCA for this purpose. This was the most common use of PCA within the surveyed papers. This visualization aspect of PCA was also used in the evaluation of the results of the cluster analysis, where clusters were visualized and compared to known class labels to evaluate the level of success in the clustering.

Table 3: Principal component analysis usage

19 of the papers used PCA for its general dimension-reducing capabilities. Applying PCA can dramatically reduce the number of dimensions in IR or NIR spectroscopy data while still retaining a high percentage of the information. This is effectively a form of feature extraction where the principal components from the PCA form the new variables. It was commonly observed for the typical 3500 dimensions (variables) in FTIR data to be reduced to 10 to 14 principal components while still retaining more than 99% of the original information. While this can speed analysis times and is an enabler for other analytical processes such as linear discriminant analysis, it is not clear from the works found in the literature whether dimension reduction using PCA leads to improved cluster analysis [67,68].
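The dimension reduction described above can be sketched with scikit-learn as follows; passing a fractional `n_components` asks for the smallest number of components that together explain the requested share of variance. The 99% target mirrors the retention level commonly observed in the surveyed papers, while the function name is ours.

```python
import numpy as np
from sklearn.decomposition import PCA


def reduce_spectra(spectra, variance_retained=0.99):
    """Project spectra onto the smallest number of principal components
    explaining at least `variance_retained` of the total variance.
    Returns the PCA scores (the new, lower-dimensional features) and the
    fitted PCA object for inspection or inverse transformation."""
    pca = PCA(n_components=variance_retained, svd_solver="full")
    scores = pca.fit_transform(np.asarray(spectra, dtype=float))
    return scores, pca
```

The fitted object's `explained_variance_ratio_` can be reported to document how much information the retained components carry.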

Of note, t-SNE (t-Distributed Stochastic Neighbor Embedding) [69] is an alternative dimension reduction and visualization technique which has been shown to produce better results than PCA [69-72]. While none of the papers reviewed used t-SNE in their cluster analysis, it could be worthwhile for practitioners to consider this technique.
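A minimal t-SNE sketch for visual cluster inspection, again with scikit-learn; the perplexity must be smaller than the number of samples and typically needs tuning for the small sample sizes common in spectroscopy (the values here are illustrative, not recommendations).

```python
from sklearn.manifold import TSNE


def embed_2d(spectra, perplexity=10, random_state=0):
    """Embed high-dimensional spectra in 2-D for visual inspection of
    cluster structure. t-SNE preserves local neighbourhoods rather than
    global distances, so it is a visualization aid, not a metric space."""
    tsne = TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=random_state)
    return tsne.fit_transform(spectra)
```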

    3.2 Cluster Analysis

The cluster analysis techniques used in the 50 surveyed IR and NIR analysis papers are now evaluated. In the domain of cluster analysis, there are common steps documented across the cited machine learning references that are typically applied to ensure validity and confidence in the outcomes of the cluster analysis. These form the sections of the following review.

    3.2.1 Clustering Tendency

As a starting point, before any clustering is conducted, it is prudent to evaluate the data's tendency to cluster (also referred to as clusterability). That is, whether the data exhibits a predisposition to cluster into natural groups [4]. The goal of clustering is to uncover real groupings within the data. However, clustering algorithms will divide data into any requested number of clusters, regardless of whether these clusters naturally exist in the data. Hence, evaluating the data's clustering tendency is a valuable indicator that the follow-on cluster analysis will be valid, and that the clusters aren't purely random artefacts of the clustering process [6].

In reviewing the surveyed papers on cluster analysis of IR and NIR spectroscopy data, only one of the papers assessed their data's clustering tendency. Zhang et al. [44] used the Hopkins statistic [73] to evaluate their data, finding it had a very low tendency to cluster, indicating that it does not contain natural clustering. This finding was supported when they applied the DBSCAN clustering algorithm [74] and found no logical reasoning or commonality of features within the clusters it produced.

Reasons for the lack of clustering tendency testing within the other papers may include the often simplistic and self-validating nature of the clustering being applied within many of the surveyed papers. Typically, the subjects being clustered were known groupings of materials, such as different varieties of tea. Hence, clusterability may have been assumed and validated when cluster analysis delivered the expected results and correct clustering.

To have high confidence in the results of the cluster analysis and remove the possibility of delivering correct results by random chance, we recommend that a clustering tendency test is conducted. As with most aspects of clustering, there are multiple potential tests for clustering tendency, and their effectiveness can be influenced by the characteristics of the data. Common techniques include the Dip test [75], the Silverman test [76] and the Hopkins statistic [73]. The Dip test and Silverman test are based on clusterability via multimodality, where the null hypothesis of unimodality indicates that the data does not have evidence of cluster structure and should not be clustered. The Hopkins statistic tests clusterability via spatial randomness, where the null hypothesis is randomly distributed data that should not be clustered. In considering which technique to use, Adolfsson, Ackerman and Brownstein found that "methods relying on the Hopkins statistics or the Silverman tests may be preferred when small clusters are of interest, while techniques using the Dip test may be desired when the application calls for robustness to outliers." [6]. These are typical considerations and differentiating factors for many aspects of clustering practice.
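A possible implementation of the Hopkins statistic, following the spatial-randomness formulation described above: values near 0.5 are consistent with spatial randomness, while values approaching 1 suggest genuine cluster structure. The sampling fraction and the use of the data's bounding box for the null reference are our assumptions; published variants of the statistic differ in such details.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors


def hopkins_statistic(X, sample_frac=0.5, random_state=0):
    """Hopkins statistic H = sum(u) / (sum(u) + sum(w)), where u are
    nearest-neighbour distances from uniform random points (drawn in the
    data's bounding box) to the data, and w are nearest-neighbour
    distances from sampled data points to the rest of the data."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(random_state)
    n, d = X.shape
    m = max(1, int(sample_frac * n))

    nn = NearestNeighbors(n_neighbors=2).fit(X)

    # u: distances from m uniform points in the bounding box to the data
    uniform = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, d))
    u = nn.kneighbors(uniform, n_neighbors=1)[0].ravel()

    # w: distances from m sampled data points to their nearest *other*
    # data point (column 0 is the point itself, at distance zero)
    sample = X[rng.choice(n, size=m, replace=False)]
    w = nn.kneighbors(sample, n_neighbors=2)[0][:, 1]

    return u.sum() / (u.sum() + w.sum())
```

For high-dimensional spectra, it may be preferable to compute the statistic on PCA scores rather than the raw spectrum, since nearest-neighbour distances lose contrast in very high dimensions.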

    3.2.2 Similarity Measures for Clustering

Since the goal of clustering is to identify clusters of objects that are similar, some measure of similarity is required. The similarity measure defines how the similarity of two elements is calculated. A similarity measure may also be referred to as a distance measure, although similarity measures can include correlation-based metrics.

Within the papers surveyed, Euclidean distance was the most common metric used for comparing similarity, followed by Pearson's correlation coefficient (Tab. 4). Many of the examples of Euclidean distance utilized squared Euclidean distance, where the clustering algorithms use the Sum of Squared Error (SSE), i.e., K-Means and Ward's method hierarchical cluster analysis. Thirteen papers did not describe the similarity measure used for their clustering.
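The two most common similarity measures above can be computed directly with SciPy, where the 'correlation' metric is defined as one minus Pearson's correlation coefficient; the wrapper function is ours.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform


def pairwise_distances(spectra, metric="euclidean"):
    """Square matrix of pairwise distances between spectra (rows).
    Supported metrics include 'euclidean', 'sqeuclidean' (for SSE-based
    algorithms) and 'correlation' (1 - Pearson correlation)."""
    return squareform(pdist(np.asarray(spectra, dtype=float), metric=metric))
```

Note the practical difference: correlation distance is invariant to the overall intensity of a spectrum (two spectra of identical shape but different scale have zero correlation distance), whereas Euclidean distance is not. This can matter when scatter effects inflate intensity differences between otherwise similar spectra.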

Table 4: Similarity measure usage

    3.2.3 Clustering Algorithm Selection

Numerous clustering algorithms have been proposed in the literature, with new clustering algorithms continuing to appear. However, clustering algorithms can generally be divided into two forms: hierarchical and partitional [5]. They both work to minimize the within-cluster distances or maximize the between-cluster distances. Hierarchical techniques structure the data into a nested series of groups which can be displayed in the form of a dendrogram or a tree. Compared to hierarchical algorithms, partitional algorithms find all the clusters simultaneously and do not generate a hierarchical structure. Partitioning techniques include density-based, model-based, grid-based, and soft-computing methods [77].

In reviewing the clustering techniques used in the surveyed papers (Tab. 5), hierarchical cluster analysis was the most prominent, with 38 instances. One potential reason for the extensive use of hierarchical clustering is that it is well matched to the nature of the materials being tested. Biological samples such as the bacteria or plant varieties tested in many of the surveyed papers form a natural hierarchy in their biological classification, which may contain their species, genus and family. This allows for easy comparison to the dendrogram produced from hierarchical clustering and visual validation of the results.

Variants of hierarchical clustering algorithms are differentiated by the rules they use to form the links between datapoints and, hence, the clusters. Single link, complete link, average link and Ward's method are four of the most popular [78], with Ward's method being repeatedly demonstrated as the most effective [26,30,79] (in the context of those applications). Ward's method was the most common technique used in the surveyed papers. We recommend that if alternative techniques to Ward's method are being used, justification or a comparison should be included to explain their usage. Of note, six of the papers did not detail which linkage method was used. Defining the linkage method is crucial for hierarchical cluster analysis as different linkage types can deliver differing results. Not defining this impacts the reproducibility of the published results.
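A sketch of Ward's method hierarchical clustering with SciPy, cutting the tree at a requested number of flat clusters; the helper name is ours. In practice the linkage matrix would also be passed to SciPy's `dendrogram` for the visual inspection and comparison to known hierarchies discussed above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster


def ward_clusters(spectra, n_clusters):
    """Agglomerative clustering with Ward's linkage, returning flat
    integer cluster labels (1..n_clusters). Ward's method implicitly
    assumes Euclidean distance between observations."""
    Z = linkage(np.asarray(spectra, dtype=float), method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```

Swapping `method` to "single", "complete" or "average" reproduces the other popular linkages; reporting which was used, as noted above, is essential for reproducibility.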

Table 5: Clustering algorithm usage

The fuzzy clustering techniques of Fuzzy C-Means, Allied Gustafson-Kessel, Possibilistic C-Means, Allied Fuzzy C-Means, Variable String Length Simulated Annealing, and Simulated Annealing Fuzzy Clustering were the next most common technique group. K-Means clustering was also regularly applied within the surveyed papers.

Nine of the papers surveyed made comparisons between various clustering techniques. One paper reviewed the linkage techniques for hierarchical clustering, concluding that Ward's method gave the best results for their application [26]. Eight papers compared hierarchical, K-Means and Fuzzy C-Means clustering, with one recommending hierarchical [30], one recommending K-Means [31], three recommending Fuzzy C-Means [32,51,56] and three inconclusive [40,52,53] for their applications. Hence, no clear conclusion can be drawn as to a best general technique. Three papers [14,19,35] compared various means of fuzzy clustering, but with no overlap between the fuzzy clustering variants they compare, no conclusions can be drawn.

Based on these conflicting findings, it is clear that choosing a clustering algorithm for clustering IR and NIR spectroscopy data is not a simple decision. Yet, in reviewing the justifications provided in the papers for their choice of clustering algorithms (Tab. 6), twenty of the papers provided no justification for their choice of algorithm. A potential reason for the lack of justification is that many of the papers achieved satisfactory clustering with their chosen technique, negating the need for further investigation or evaluation. This is a pragmatic approach, although some explanation of the initial choice of algorithm would be of benefit to the reader. If satisfactory clustering is not achieved with the chosen algorithm, this then presents a strong driver for consideration and evaluation of alternative algorithms.

Table 6: Justification for choosing a clustering algorithm

In looking to techniques prominent in other machine learning domains, clustering using deep neural networks (deep clustering) is emerging in prominence. As surveyed by Min et al. [80], these techniques may seem attractive for clustering of spectroscopy due to their ability to deal with the sparseness associated with high dimensionality. However, the typically low relative sample size of spectroscopy data is unlikely to meet the needs of deep neural networks, which are best suited to 'big data', and caution should be used unless large datasets are available. Typically, the hierarchical, K-Means, and Fuzzy C-Means clustering techniques found in our survey are well suited to spectroscopy data, and evaluation of those techniques can highlight which is best suited to the specifics of the subject dataset. Alternatively, to address the variability between these clustering algorithms due to data characteristics, clustering ensemble techniques as described by Strehl et al. [81] could be considered to fuse results from multiple clustering algorithms for robust clustering.

    3.2.4 Predicting the Number of Clusters

One of the major challenges in cluster analysis is predicting the number of clusters (k) [4]. Clustering algorithms require an input that affects the number of clusters generated; either directly, such as setting the k value for the number of clusters in the k-means algorithm, or indirectly, such as setting the minimum density in a density-based algorithm, which in turn affects the number of clusters. Hence, predicting the number of clusters correctly is important for correct and meaningful clustering and the associated analytical outcomes.

In reviewing the clustering techniques used within the 50 NIR and IR spectroscopy papers (Tab. 7), 30 of the papers did not address the issue of predicting the number of clusters. It is assumed that these authors knew the number of classes of objects in their sample set (e.g., two for cancerous vs. non-cancerous) and therefore felt it was not worth addressing. Similarly, 13 of the papers explicitly used a priori knowledge of their data set to manually set the number of clusters (e.g., knowledge that their data set contains 4 varieties of olive tree for olive-oil production). While this may have been sufficient for the purposes of their research (typically, that IR or NIR testing and clustering can correctly separate or differentiate samples), it can limit the scope of their findings and the confidence in what can be drawn from them. To highlight this, if a random subset of their data was analyzed which happened to contain fewer classes of materials than the original set, using the original value of k clusters would result in incorrect clustering. A better practice would be to include a quantified prediction of the number of clusters as part of the analysis workflow. Being able to correctly predict the number of clusters (against a known value of k) is a good indicator of clearly separated clusters and provides confidence in the validity of the cluster analysis performed in the study.

Table 7: Method for selecting the number of clusters

Of the remaining seven papers surveyed, four used qualitative analysis and three used quantitative analysis to predict the number of clusters. The qualitative analysis papers visualized the clustering results for various values of k and used the analyst's subjective judgement as to which produced the better clustering. This is particularly common for IR spectroscopic imaging applications. Since clustering is a very human-centric concept and good clustering is somewhat subjective (particularly for FTIR imaging), this is a valid approach. However, quantitative approaches are preferred to minimize that subjectivity.

Three common quantitative techniques for predicting the number of clusters include the "elbow" method, the gap statistic, and the use of internal cluster validation indices (such as the Silhouette score method).

In the elbow method, the total within-cluster sum-of-squares variation is calculated and plotted vs. different values of k (e.g., k = 1, ..., 10). Where the slope of the plot changes from steep to shallow (an elbow) is the predicted number of clusters. While this method is simple, it can be inexact and sometimes ambiguous. The elbow method was used in one of the reviewed papers [34].
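The elbow curve can be sketched as follows with scikit-learn, using K-Means inertia as the total within-cluster sum of squares; the choice of K-Means and the k range are illustrative assumptions.

```python
from sklearn.cluster import KMeans


def elbow_curve(X, k_max=10, random_state=0):
    """Total within-cluster sum of squares (K-Means inertia) for
    k = 1..k_max. The 'elbow', where the curve flattens, suggests the
    number of clusters; interpretation of the plot remains subjective."""
    return [KMeans(n_clusters=k, n_init=10, random_state=random_state)
            .fit(X).inertia_
            for k in range(1, k_max + 1)]
```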

The gap statistic method aims to provide a statistical procedure that formalizes the heuristic of the elbow method [82]. The gap statistic compares the total within-cluster sum-of-squares variation for different values of k with their expected values under a null reference distribution of the data. The estimate of the optimal number of clusters is the value that maximizes the gap statistic. This means that the resulting clustering structure is far away from the random uniform distribution of points.
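A simplified gap statistic sketch, under the assumptions that K-Means supplies the within-cluster sum of squares and that the null reference is uniform over the data's bounding box; the full procedure in [82] also describes a PCA-aligned reference and a standard-error-based selection rule, omitted here for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans


def gap_statistic(X, k_max=10, n_refs=10, random_state=0):
    """Gap(k) = E[log W_k(reference)] - log W_k(data), where W_k is the
    within-cluster sum of squares and the references are uniform samples
    from the data's bounding box. argmax(gaps) + 1 estimates k."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(random_state)
    lo, hi = X.min(axis=0), X.max(axis=0)

    def log_wk(data, k):
        km = KMeans(n_clusters=k, n_init=10,
                    random_state=random_state).fit(data)
        return np.log(km.inertia_)

    gaps = []
    for k in range(1, k_max + 1):
        ref = np.mean([log_wk(rng.uniform(lo, hi, size=X.shape), k)
                       for _ in range(n_refs)])
        gaps.append(ref - log_wk(X, k))
    return gaps
```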

    A third technique to predict the number of clusters is the use of internal cluster validation indices. This was the quantitative approach used in two of the reviewed papers (i.e., the Xie-Beni cluster validity measure for fuzzy clustering [56] and the Silhouette score for K-Means clustering [32]). Internal cluster validation indices calculate the "goodness" of clustering, typically based on the tightness of the clusters and the separation between clusters. One of the most common indices is the Silhouette score. The Silhouette score is calculated using the mean intra-cluster distance (a) and the mean nearest-cluster distance (b) for each sample [83]. The Silhouette score for a sample i is

    s(i) = (b - a) / max(a, b)    (1)

    The overall Silhouette score for a set of data points is then the mean of the individual Silhouette scores. To use this to determine the correct number of clusters, the Silhouette score is calculated for varying values of k (i.e., k = 2, ..., 10; the score is undefined for k = 1). The number of clusters k that produces the highest Silhouette score is taken as giving the best clustering and hence the correct number of clusters.
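    This selection procedure can be sketched as follows (the synthetic four-blob data set is an illustrative assumption; `silhouette_score` is scikit-learn's implementation of the overall mean Silhouette):

```python
# Predicting the number of clusters with the Silhouette score.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=150, centers=4, random_state=1)

scores = {}
for k in range(2, 9):  # the Silhouette score is undefined for k = 1
    labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

# The k with the highest mean Silhouette is the predicted cluster count.
k_best = max(scores, key=scores.get)
print(k_best, round(scores[k_best], 3))
```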

    3.2.5 Cluster Evaluation and Validation

    Since cluster analysis is an unsupervised learning task, it can be challenging to validate the goodness of the clustering and gain confidence in the clustering results [84]. Cluster validation is the formal process that evaluates the results of the cluster analysis in a quantitative and objective fashion [4].

    There are two main types of validity criteria that can be applied: internal and external. External validation measures use data external to the data used for clustering, typically in the form of 'true labels' such as classification labels against which the clustering is evaluated. Internal measures evaluate the goodness of the cluster structure, without external labels, to judge the quality and validity of the clustering.

    External validation was the dominant approach used in the reviewed papers, and it fits the purpose of the majority of papers: demonstrating that IR or NIR testing can correctly separate samples into classes where true labels for the samples are known. As summarized in Tab. 8, the external validation approaches used included comparing true labels to the clustering labels in cluster plots, dendrograms, biomedical images, and numerical tables. When the clustering algorithm separates the samples into clusters that perfectly match the true labels, validation becomes a largely trivial task, as was the case for most of the papers.

    Table 8:Method of validating clustering results

    When clustering is only partially correct, measuring this level of correctness is less trivial. Concerningly, five of the papers reported their results as a "percentage correct" against known labels, a notion that does not match the concept of clustering. The labels generated by cluster analysis (unsupervised learning) are symbolic and based on similarity, so directly matching them to classification labels ignores a correspondence problem [81]. To highlight this, consider an example where data points from one class are spread across two clusters: are the points in one cluster correct and the other incorrect, and if so, which is the "true" cluster? Or are they all incorrect? This is why external validation indices are required which employ notions such as homogeneity, completeness, purity and the like.
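    The correspondence problem is easy to demonstrate numerically. In the hypothetical sketch below, the clustering recovers the true partition perfectly but assigns the opposite symbols, so a naive "percentage correct" reports total failure:

```python
# Why "percentage correct" breaks for cluster labels: the clustering
# below is perfect, but its arbitrary symbols are swapped.
true_labels    = [0, 0, 0, 1, 1, 1]
cluster_labels = [1, 1, 1, 0, 0, 0]  # identical partition, swapped symbols

matches = sum(t == c for t, c in zip(true_labels, cluster_labels))
naive_accuracy = matches / len(true_labels)
print(naive_accuracy)  # 0.0, despite a perfect clustering
```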

    As with most aspects of clustering, many potential validation indices have been proposed. Desgraupes et al. [85] detail 38 different internal and external evaluation indices, and Xiong and Li review sixteen of them [84]. Two of the most common external indices are the V-measure [86] and the Adjusted Rand Index [87].

    The V-measure or 'Validity measure' is the harmonic mean between the homogeneity (h) and completeness (c) of clusters, i.e.,

    v = ((1 + β)hc) / (βh + c)    (2)

    where a β value of 1 is used to place equal importance on homogeneity and completeness. The result is a score between 0.0 and 1.0, where 1.0 represents perfectly correct labelling.

    The Adjusted Rand Index (ARI) [87] is a corrected-for-chance version of the Rand Index [88], which determines the similarity between two partitions as a function of positive and negative agreements in pairwise cluster assignments. As described in [89], given a partition U and a reference partition V,

    ARI = 2(ad - bc) / [(a + b)(b + d) + (a + c)(c + d)]    (3)

    where (a) accounts for the total number of object pairs belonging to the same cluster in both U and V; (b) represents the total number of object pairs in the same cluster in U and in different clusters in V; (c) is the total number of object pairs that are in different clusters in U and in the same cluster in V; and (d) is the total number of object pairs that are in different clusters in both U and V.
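    Both indices are implemented in scikit-learn, which makes them straightforward to apply; the toy label vectors below are illustrative assumptions. Because both scores are invariant to the arbitrary symbols a clustering assigns, they sidestep the correspondence problem discussed above:

```python
# External validation with the V-measure and Adjusted Rand Index.
from sklearn.metrics import adjusted_rand_score, v_measure_score

true_labels    = [0, 0, 0, 1, 1, 1]
cluster_labels = [1, 1, 1, 0, 0, 0]  # same partition, permuted symbols

print(v_measure_score(true_labels, cluster_labels))      # 1.0
print(adjusted_rand_score(true_labels, cluster_labels))  # 1.0

imperfect = [0, 0, 1, 1, 1, 1]  # one object moved to the other cluster
print(round(v_measure_score(true_labels, imperfect), 3))
print(round(adjusted_rand_score(true_labels, imperfect), 3))
```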

    Internal cluster validation is used where the true labels are not known for evaluation, or where there is a desire to compare the quality of clustering generated by different clustering techniques [90]. Two common internal validation indices are the previously described Silhouette score (also called the Silhouette index or SI) for traditional clustering, and the Xie-Beni index (XB) for fuzzy clustering techniques. In the papers reviewed, there was one instance of the use of SI, when evaluating which selected wavenumbers resulted in the best quality clustering [31], and one use of XB, where it was used to compare the quality of clustering from various fuzzy clustering algorithms [19].

    3.3 Reflection on Findings

    In reflecting on the findings, we will primarily focus on the clustering aspects of the analysis presented in the surveyed papers. Here, shortcomings were observed (as previously highlighted) that may indicate a lack of familiarity with some of the complexities of clustering practice among some researchers using spectroscopy. These indicators include a lack of clarity in the explanation of the cluster analysis process, missing details such as the type of linkage used in hierarchical cluster analysis or the distance metric used, and cluster validation indices not being used for validation. This scarcity of cluster validation indices is a significant difference from the machine learning literature, where quantified cluster analysis is more prominent.

    This is not unexpected: while clustering is certainly not a new field, it is one with challenges, complexities, uncertainties and ambiguities that may not be appreciated by researchers for whom cluster analysis is not their primary area of research. There is limited conclusive literature available on clustering of spectroscopy data to support practitioners, and the choice of the best techniques can depend on the specific characteristics of the data being analyzed.

    An additional potential contributor to the observed shortcomings is the chemometric software that is commonly used for IR or NIR data analysis. Many practitioners look for off-the-shelf solutions for their chemometric analysis [91], as was seen in our survey. Software packages such as OPUS (Bruker), Unscrambler (Camo) and PLS Toolbox now include the capability to perform cluster analysis such as hierarchical clustering or k-means. Hence this capability is becoming available to the many users who use this software for chemometric analysis. However, the clustering capabilities in these software packages are currently limited to a small set of clustering algorithms and are missing the quantitative evaluation components such as tendency-to-cluster metrics, techniques for predicting the number of clusters, or internal and external validation indices. These quantitative metrics remain niche analytical capabilities which are typically implemented in code developed in MATLAB, R or Python.

    Finally, the applications to which clustering was applied were simplistic in many of the surveyed papers. For example, where the aim of the research was to demonstrate that IR or NIR spectroscopy can separate samples into k known classes, cluster analysis was used with the number of clusters set to k and evaluation was done manually against the known labels. Hence, the use of quantitative clustering metrics was not essential. While this cluster analysis does meet the researchers' goals and demonstrates the capability of IR or NIR spectroscopy in that instance, the limitations of such simplistic analysis are not made explicit. For example, without predicting the number of clusters k, the analysis is only valid for that number of classes, and the findings cannot be assumed valid for a subset of the data where the number of classes may differ.

    4 A Proposed Analysis Model

    In order to add rigor to future cluster analysis conducted on IR and NIR spectroscopy data, an analysis model or workflow is now proposed. As presented in Fig. 1, the analysis model consists of four stages: (1) IR or NIR measurements of samples, (2) the early stages of the traditional chemometric process to improve the data, (3) cluster analysis utilizing quantified metrics or indices to support analysis decision making, and (4) evaluation and validation of the clustering utilizing appropriate qualitative or quantitative techniques. Core to the application of each stage is the use of clustering indices to enable quantified evaluation and selection of appropriate techniques. Details of these techniques were covered earlier when reviewing current practice within the surveyed papers.
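    As a minimal end-to-end sketch of stages 2-4, the toy example below stands in for real spectra with synthetic profiles; the data dimensions, pre-processing choices (column scaling plus PCA) and parameter values are illustrative assumptions, not a prescription.

```python
# Sketch of the proposed workflow on synthetic stand-in "spectra".
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stage 1 stand-in: 60 noisy spectra of 600 variables from two groups
# that differ by a baseline offset.
base = np.sin(np.linspace(0, 6, 600))
offsets = [0.0] * 30 + [1.0] * 30
spectra = np.vstack([base + off + rng.normal(0, 0.05, 600) for off in offsets])

# Stage 2: data improvement -- scaling and PCA feature extraction.
X = StandardScaler().fit_transform(spectra)
pcs = PCA(n_components=5, random_state=0).fit_transform(X)

# Stage 3: cluster analysis with a quantified prediction of k.
sil = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pcs)
    sil[k] = silhouette_score(pcs, labels)
k_best = max(sil, key=sil.get)

# Stage 4: evaluation -- here an internal index; external indices
# (e.g., V-measure, ARI) apply when true labels are available.
print(k_best, round(sil[k_best], 3))
```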

    At this point, it is worth discussing the depth of analysis conducted at each stage of this analysis model. If full quantitative analysis and evaluation were conducted at each stage of the workflow, it could become a substantial and time-consuming package of analysis: application and evaluation of multiple pre-processing techniques, application and evaluation of multiple variable or feature selection techniques, PCA analysis, testing for the tendency to cluster, application and evaluation of multiple similarity measures, application and evaluation of multiple clustering algorithms, application of quantitative clustering indices to predict the number of clusters, and application of clustering indices to evaluate the final results of the cluster analysis.

    A pragmatic approach is suggested. Consideration should be given to the purpose of the analysis and its importance; e.g., early exploratory analysis may not warrant as much effort as a conclusive demonstration of a cancer detection technique aimed at widespread publication. Similarly, consideration should be given to the data itself and the challenge it presents to cluster analysis. If the data can be visually seen to be well separated and sufficiently accurate clustering can be easily achieved, then it may not warrant the evaluation of multiple techniques to achieve improved data and clustering characteristics.

    Figure 1:A proposed analysis model for cluster analysis of NIR and IR spectroscopy data

    A streamlined approach may be to select common or familiar approaches for data pre-processing, variable selection, similarity measure, and clustering algorithm, and then evaluate the results. If sufficiently accurate clustering is achieved with these selected approaches, it may not warrant further refinement and evaluation in these areas. It is, however, recommended that clustering tendency is tested and the number of clusters is predicted, as these are valuable indicators of the confidence in the clustering results and their applicability. Additionally, if this streamlined approach is pursued, we encourage analysts and authors to be explicit about it when publishing their results and to detail why those decisions were made.

    If sufficiently accurate clustering is not achieved utilizing this streamlined approach, then that is a driver for more detailed analysis and evaluation at each of the stages of the analysis model, with the final clustering index scores as the metric against which results can be assessed. Similarly, if true labelled data is not available for evaluating the results of the cluster analysis, internal clustering indices will be the metric used for assessing the outcome of the overall analysis.

    Of note, this potentially significant volume of analysis will impose the greatest burden the first time the analysis model is implemented. If the workflow is implemented in an analysis environment such as MATLAB, R or Python, the time required for subsequent applications of this analysis model will be significantly less. Hence, if practitioners regularly intend to conduct cluster analysis and desire a rigorous methodology that delivers quantifiable results, establishing an extensive workflow with multiple stages of evaluation is likely to be worthwhile.

    5 Concluding Remarks

    We have surveyed and reviewed 50 papers from 2002 to 2020 which apply cluster analysis to IR and NIR spectroscopy data. The analysis process used in these papers was compared to the extensive literature from the machine learning domain. The findings highlighted a lack of quantitative analysis and evaluation in NIR and IR cluster analysis. Of specific concern were the lack of testing for the data's tendency to cluster and the lack of prediction of the number of clusters. These are key tests that can provide increased rigor and confidence, and widen the applicability of the cluster analysis.

    In a bid to improve on current practice and support researchers conducting cluster analysis on IR and NIR spectroscopy data, an analysis model has been presented to highlight potential future perspectives for cluster analysis. The proposed analysis model or workflow incorporates quantitative techniques drawn from the machine learning literature to provide rigor and ensure validity of the clustering outcomes when analyzing IR and NIR spectroscopy data.

    Funding Statement:This research is supported by the Commonwealth of Australia as represented by the Defence Science and Technology Group of the Department of Defence,and by an Australian Government Research Training Program (RTP) Scholarship.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
