
    Melanoma Identification Through X-ray Modality Using Inception-v3 Based Convolutional Neural Network

Computers, Materials & Continua, 2022, Issue 7

    Saad Awadh Alanazi

    College of Computer and Information Sciences, Jouf University, Sakaka, Aljouf, 72341, Saudi Arabia

Abstract: Melanoma, also called malignant melanoma, is a form of skin cancer triggered by an abnormal proliferation of the pigment-producing cells that give the skin its color. It is one of the most dangerous skin diseases worldwide, and skin lesions in general constitute a serious health problem. A dermoscopy-based procedure for early recognition and detection is fundamental to melanoma treatment, and early detection of melanoma from dermoscopy images improves survival rates significantly. At the same time, the precision of diagnosis is dominated by well-experienced dermatologists. Precise melanoma recognition remains incredibly hard for several reasons: low contrast between lesions and the surrounding skin, visual similarity between melanoma and non-melanoma lesions, and so on. Reliable automatic detection of skin tumors is therefore critical for pathologists' effectiveness and precision, and numerous research centers around the world are developing autonomous image-processing frameworks to address this issue. In this article, we apply deep learning methods to the significant tasks that have emerged in skin lesion image processing: we provide a Convolutional Neural Network (CNN) framework based on an Inception-v3 (INCP-v3) melanoma detection scheme and achieve very high precision (98.96%) for melanoma detection. The CNN classification framework is built with TensorFlow and Keras in the backend (in Python) and likewise utilizes a Transfer-Learning (TL) approach. It is trained on data gathered from the "International Skin Imaging Collaboration (ISIC)" repositories. The experiments show that the suggested technique outperforms state-of-the-art methods in terms of predictive performance.

    Keywords: Deep learning; chronic obstructive pulmonary disease; chronic bronchitis; convolutional neural network; X-ray images

    1 Introduction

Compared with other types of tumors, skin cancer is extremely widespread, which makes it one of the most severe medical problems worldwide. Historically, melanoma was an uncommon malignancy, yet over the previous fifty years its incidence has increased drastically. The financial burden of melanoma treatment is likewise high: of the roughly $8.1 billion spent on treating all skin cancers in the USA, about $3.3 billion is spent on melanoma alone. Basal cell carcinoma and squamous cell carcinoma are readily curable when diagnosed and treated in their early stages, and the 5-year survival rate for melanoma diagnosed at an early stage is roughly ninety-nine percent [1].

Human skin is the largest organ of the body; it protects against injury, infection, light, and heat, and additionally stores nutrients, fat, and water. Melanoma is the deadliest kind of skin cancer because it spreads throughout the body if it is not diagnosed and treated in its early phase. Three main types of skin tumor exist, namely melanoma, squamous cell carcinoma, and basal cell carcinoma. Among all skin malignancies, melanoma is the most dangerous, and the number of melanoma cases is rising very fast.

Melanoma must be identified and treated at the earliest possible stage for the patient to have a high chance of recovery. On the skin, it appears as marks and moles. The skin has two distinct layers: the inner layer (the dermis) and the outermost layer (the epidermis). Melanocytes are located in the lower part of the epidermis; they are pigmented cells that generate melanin, the pigment that gives skin its color. Under sun exposure, melanocytes produce more pigment and the skin darkens; abnormal growth of these melanocytes results in melanoma [2].

Melanoma (ME) risk factors include excessive exposure to ultraviolet rays, a weakened immune system, family or genetic history, sunburn, and a fair skin tone. If ME is not identified in its early phases, it may grow and spread beyond the epidermis (the skin's primary layer) and eventually reach the blood and lymph vessels.

The essential parameters for ME detection are size, color, and shape; these are the most significant characteristics for detecting skin malignancy, and suspicious moles typically show a diameter greater than six millimeters, changing color, an irregular shape, and irregular borders. Various non-invasive procedures have been proposed to classify such growths as melanoma or benign. Along these lines, the key factor in decreasing the death rate is the earliest possible recognition of skin cancer [3-5].

The practice of dermatology has recently been enhanced by Artificial Intelligence (AIN), Machine Learning (ML), and Deep Learning (DL) in distinct ways (treatment, diagnosis, detection, etc.) [6-8]. Recent progress in access to large datasets, faster processing, and cheaper data storage has energized ML approaches in the dermatological domain. ML has commonly been utilized in dermatology from five distinct perspectives: (i) disease characterization based on clinical images; (ii) disease characterization based on dermatopathology images; (iii) evaluation of skin illnesses using personal monitoring devices or mobile apps; (iv) support for wide-scope epidemiological research; and (v) achieving the best possible accuracy in disease classification and detection [9]. Accordingly, this research provides image-processing, ML, and more specifically DL-oriented ME detection.

Despite its high mortality rate, melanoma that is detected early is usually curable. Meanwhile, even seasoned dermatologists find it challenging to distinguish melanoma from benign moles during the initial stages of development, which motivates the development of computerized algorithms. Certain low-complexity methods are designed for tablets and smartphones and can assist non-specialists. On the other hand, automated professional decision-making requires sophisticated, intelligent algorithms and equipment with improved accuracy and precision.

The objective of this research is to use deep learning to develop an autonomous melanoma detection mechanism. To accomplish this, the input digital images, which are typically affected by noise and illumination artifacts, are preprocessed. This preprocessing helps the Inception-v3-based CNN extract discriminative features from the images. The images are then fed into the CNN architecture, which classifies the input as benign or malignant.

The remainder of this manuscript is organized as follows. Section 2 reviews the literature on melanoma detection. Section 3 elucidates the materials, methodology, and experimental results of this research. Section 4 gives the relevant discussion. Lastly, the manuscript is concluded in Section 5.

    2 Literature Review

The exploration of image-processing-based melanoma detection has progressed considerably over the years, and various procedures have been attempted to date.

The research in [10] presented a strategy that applies six distinct data-mining-based detection procedures and then builds a hybrid scheme utilizing Gradient Boosting, AdaBoost, and Bagging classifiers to predict the skin disease class. Moreover, a feature-selection strategy is used to choose the fifteen most salient parameters, i.e., those most significant for skin disease classification [10].

The author of [11] presents a model-driven architecture in the cloud environment that utilizes a Deep-Learning Scheme (DLS) at its core. It was used to build models that help predict skin cancers with an efficient precision rate. That manuscript aimed to construct a robust scheme and apply it to the classification of dermal cells. The DLS employed there is tested on a standard dataset for skin-cancer detection and dermal-cell classification [11].

For better performance, efficiency, and accuracy, autonomous recognition of skin lesions is performed in [12]. For early identification of skin lesions (SL), the proposed algorithm applies feature extraction using the Asymmetry, Borders, Colors, Dermatoscopic structures (ABCD) rule, HOG features, and GLCM. Preprocessing is applied to enhance the quality of the SL images. Using an SVM classifier, an area under the curve (AUC) of 0.94 and a precision rate of 97.8% are achieved.

For melanoma classification, a hybrid approach is utilized by the author of [13]. It is helpful for the inspection of any suspicious lesion (SL). Their proposed framework relies on classification through three diverse stages: two classical AI schemes and a CNN trained with an extensive feature set describing the SL's color, texture, and borders. Using a voting approach, these strategies are then consolidated to improve performance.

In the manuscript [14], the author proposed two DL models to address three issues, i.e., dermoscopic feature extraction, segmentation, and detection of SL. A DLS comprising two fully convolutional residual networks is utilized for segmentation and for generating the classification results.

In the manuscript [15], the author presented an autonomous skin cancer detection model that uses images of SL and the intervention of Artificial Intelligence and Machine Learning (ML) rather than identification by regular clinical staff. The research is organized into three stages: collecting the data, performing data augmentation, and framework construction and classification. Using image processing, the author applied different ML approaches such as Support Vector Machines and CNNs to frame a superior design, reaching an 85% precision rate.

The framework presented by [16] depends heavily on a data-generation unit and a processing unit for occlusion removal in images. It is based on a generative adversarial network for SL classification. The authors empirically verified their methodology and showed that combining the two units mentioned above for melanoma classification yields superior results over fundamental baselines.

For the classification and detection of SL (more specifically ME), the manuscript [17] provided a "Computer-Aided Detection" (CADN) framework (utilizing Mutual Information estimations) through a combination of CNN features and handcrafted features related to the clinical scheme (i.e., the ABCD rule).

Due to the high ME-related mortality rate, an approach to detect SL is required in order to save a large number of lives. CNN is a class of DLS and a fundamentally more specialized form of the multi-layer perceptron. Through the research [18], the author developed a deep-learning-oriented CNN framework for the early detection of SL and ME. For speedy processing, the model was also equipped with Transfer Learning approaches.

In the manuscript [19], the author provided a "faster, region-oriented CNN" (FRCNN) using pigmented SL images for both training and testing purposes. Moreover, the same test was performed by ten dermatologic trainees (TDLs) and ten board-certified dermatologists (BADs), and their precision rates were compared with FRCNN. The experiments showed that the classification precision of the dermatologist-oriented solutions was lower than that of FRCNN.

From the above studies, it can be concluded that numerous works already address skin cancer detection using different image-processing approaches. Distinct models have been created and tested with diverse network designs by varying the number of layers used for ME detection, including but not limited to Dense layers, Pooling layers, Dropout layers, and Convolutional layers (Convl). The literature provides a comprehensive discussion of this research area, and various challenges still exist in this domain [20].

    3 Materials and Methods

    This section provides a comprehensive description of the proposed work that we use for ME Classification.

    3.1 Processing of Filters

The primary image processing depends on the use of filters, for example, to obtain the horizontal and vertical edges of an object in an image using a combination of filters. The filter-oriented processing is shown in Fig. 1. Mathematically, the Horizontal Edge Filter (HEFL) can be defined as provided in Eq. (1), where VEFL refers to the Vertical Edge Filter.

For simplicity, a 6×6 grayscale image of melanoma is considered and represented as a 2-dimensional matrix in which each element's value encodes the amount of light in the respective pixel. A convolutional product is used for the extraction of the horizontal edges of the image; in other words, for each block it is the summation of the element-wise products.

    Figure 1: Processing through filters

For the first 3×3 block of the image, the element-wise product is computed and summed; the window is then moved one block to the right, and this is repeated until all blocks have been covered. This process is summarized in Fig. 2:

    Figure 2: Edge detection
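As a concrete illustration of this convolutional product, the following minimal NumPy sketch slides an assumed 3×3 horizontal-edge filter over a 6×6 grayscale image and sums the element-wise products per block; the filter coefficients are an illustrative assumption, not the exact filter used in the paper.

import numpy as np

# Assumed 6x6 grayscale melanoma patch (values = amount of light per pixel).
img = np.random.randint(0, 256, size=(6, 6)).astype(float)

# Assumed horizontal-edge filter (illustrative choice).
hefl = np.array([[ 1,  1,  1],
                 [ 0,  0,  0],
                 [-1, -1, -1]], dtype=float)

def conv_product(image, filt):
    """Slide the filter over the image (stride 1, no padding) and
    sum the element-wise products for every 3x3 block."""
    fh, fw = filt.shape
    out_h = image.shape[0] - fh + 1
    out_w = image.shape[1] - fw + 1
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            out[r, c] = np.sum(image[r:r + fh, c:c + fw] * filt)
    return out

edges = conv_product(img, hefl)   # 6x6 input with a 3x3 filter -> 4x4 output
print(edges.shape)                # (4, 4)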

From this point of view, the way a neural network can be used becomes apparent: the filter itself can be learned (see Fig. 3).

    Figure 3: Use of learned filter

The principal idea is to set up a neural network into which the input ME images are passed and which yields a classified target state. The learning of the parameters is based on backpropagation.

In a CNN, a sequence of pooling and convolutional layers is used, which makes it possible to extract the image's principal features (those best suited to the classification task). In the following parts, every constituent is explained alongside its mathematics: the Convolutional Product (CPT), Padding (PD), Stride (ST), Convolution (CN), and Pooling (PLN). Instead of characterizing the CPT directly, we first describe a few fundamental operations such as ST and PD, beginning with the procedure of PD.

From the CPT it can be observed that, when the VEFL is applied, the image's middle pixels are used much more heavily than the corner pixels, which implies that information at the edges is discarded. To tackle this issue, PD is commonly added around the picture so that the edge pixels are taken into account. Conventionally, zero-based padding is used and denoted by pd (the padding parameter), which gives the number of elements added on each of the four sides of the image. The padding procedure for a 2D grayscale image is shown below (see Fig. 4), here with pd = 1:

    Figure 4: Padding procedure
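A minimal sketch of the zero-padding step, assuming NumPy and pd = 1 as in Fig. 4; the padded image keeps the original edge pixels in play during the convolutional product.

import numpy as np

img = np.arange(36, dtype=float).reshape(6, 6)   # illustrative 6x6 grayscale image
pd = 1                                           # padding parameter, as in Fig. 4

# Zero-based padding: pd rows/columns of zeros are added on all four sides.
padded = np.pad(img, pad_width=pd, mode="constant", constant_values=0)
print(img.shape, "->", padded.shape)             # (6, 6) -> (8, 8)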

In the CPT, the ST is carried out as a sub-process. A large ST shrinks the size of the result, and vice versa. Here st represents the stride parameter. The CPT (per block, the summation of element-wise products) is shown in Fig. 5, here with st = 1:

    Figure 5: Stride process
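The effect of the stride parameter on the output size can be checked with a short sketch; the floor-based size formula used here is the standard convolution output-size relation, stated as an assumption consistent with the notation above rather than quoted from the paper.

import math

def conv_output_size(n, fl, pd, st):
    """Standard output size of a convolution along one dimension:
    floor((n + 2*pd - fl) / st) + 1."""
    return math.floor((n + 2 * pd - fl) / st) + 1

# A larger stride shrinks the result, as described in the text.
print(conv_output_size(n=6, fl=3, pd=0, st=1))   # 4
print(conv_output_size(n=6, fl=3, pd=0, st=2))   # 2 -- bigger st, smaller output
print(conv_output_size(n=6, fl=3, pd=1, st=1))   # 6 -- "same" convolution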

The CPT is defined between a filter and a tensor, but first the PD and ST had to be characterized. Having defined the CPT on a 2D matrix (i.e., the summation of element-wise products), the CPT on a volume can now be formally defined. Generally, an image can be represented numerically as a tensor with the following measurements; the dimension (DIMN) of an image (img) can be described as in Eq. (2), i.e., $n_{HT} \times n_{WT} \times n_{CNL}$ (height × width × channels):

For example, a colored image has Blue, Green, and Red channels (here $n_{CNL} = 3$). Conventionally, K filters with odd dimensions and a square shape are considered; their size is denoted fl, so that every pixel can be centered under the filter and all surrounding components are taken into account. While computing the CPT, the number of channels of the image must be identical to the number of channels of each of the K filters/kernels; in this manner, each image channel has its own filter slice applied. Consequently, the filter dimension can be described as in Eq. (3), i.e., $fl \times fl \times n_{CNL}$:

The CPT between a filter and the image is a 2D matrix in which every component is the summation of the element-wise product of the filter with the corresponding sub-cube of the picture, as outlined in Fig. 6. Mathematically, the convolution of the image and the filter can be written as in Eq. (4).

    Figure 6: Convolution procedure
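For the convolution over a volume with K filters, the sketch below (an illustrative NumPy version, assuming stride 1 and no padding) applies one fl×fl×n_CNL filter per output channel, so that the channel count of each filter matches the channel count of the image, as required above.

import numpy as np

def conv_volume(image, filters, st=1):
    """image: (H, W, nC); filters: (K, fl, fl, nC). Returns (outH, outW, K).
    Each output channel is the sum of element-wise products of one filter
    with the corresponding sub-cube of the image (stride st, no padding)."""
    H, W, nC = image.shape
    K, fl, _, fc = filters.shape
    assert fc == nC, "filter channels must equal image channels"
    out_h = (H - fl) // st + 1
    out_w = (W - fl) // st + 1
    out = np.zeros((out_h, out_w, K))
    for k in range(K):
        for r in range(out_h):
            for c in range(out_w):
                block = image[r * st:r * st + fl, c * st:c * st + fl, :]
                out[r, c, k] = np.sum(block * filters[k])
    return out

rgb = np.random.rand(6, 6, 3)                 # nCNL = 3 (Red, Green, Blue)
k_filters = np.random.rand(4, 3, 3, 3)        # K = 4 filters of size 3x3x3
print(conv_volume(rgb, k_filters).shape)      # (4, 4, 4)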

(1) In 1×1 convolution (fl = 1), the CPT can be used to reduce $n_{CNL}$ without any variation in $n_{WT}$ and $n_{HT}$.

(2) In "same" convolution, the output size is equal to the input size, which requires pd = (fl − 1)/2.

(3) In "valid" convolution, pd = 0.

In PLN, the image features are down-sampled by summarizing the data. This procedure is carried out per channel: it keeps $n_{CNL}$ as-is and only influences $n_{WT}$ and $n_{HT}$. When the input melanoma image is provided, a filter is slid over it (following a particular st; no parameters are learned here), and a function is then applied to the chosen components.

Conventionally, a square filter of size fl is considered. We took st = 2 and fl = 2 and used average pooling here, as depicted in Fig. 7.

    Figure 7: Pooling scheme
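A minimal average-pooling sketch with fl = 2 and st = 2, as in Fig. 7; no parameters are learned and the channel count is unchanged.

import numpy as np

def average_pool(image, fl=2, st=2):
    """Downsample (H, W) features by averaging each fl x fl block with stride st."""
    H, W = image.shape
    out_h = (H - fl) // st + 1
    out_w = (W - fl) // st + 1
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            out[r, c] = image[r * st:r * st + fl, c * st:c * st + fl].mean()
    return out

feat = np.arange(16, dtype=float).reshape(4, 4)
print(average_pool(feat))   # 4x4 input -> 2x2 averaged output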

To develop the CNN, the steps mentioned above are combined layer by layer, as described in this section. A single layer in a CNN can be a fully connected layer (FCL), i.e., identical to a layer in a feed-forward neural network, a pooling layer (PLL), or a convolutional layer (Convl), each using some activation function (AFN).

In a Convl, the CPT is applied with many filters, and the result is passed through an AFN δ. For the l-th layer, the following notation is used:

(1) Input image of melanoma: $i^{0}$ is the input image, and $i^{l-1}$ is the layer input, having the size $n_{HT}^{l-1} \times n_{WT}^{l-1} \times n_{CNL}^{l-1}$.

(2) PD is denoted as $pd^{l}$.

(3) ST is denoted as $st^{l}$.

(4) The k-th filter/kernel is denoted as $K_{k}^{l}$ and has the size $fl^{l} \times fl^{l} \times n_{CNL}^{l-1}$.

(5) For the k-th convolution, the bias is denoted as $b_{k}^{l}$.

(6) AFN is denoted as $\delta^{l}$.

(7) Result: $i^{l}$, having the size $n_{HT}^{l} \times n_{WT}^{l} \times n_{CNL}^{l}$.

Next, for every $k \in [1, 2, 3, \ldots, n_{CNL}^{l}]$, the k-th output channel can be defined as

$$i^{l}_{[:,:,k]} = \delta^{l}\big( \mathrm{CPT}(i^{l-1}, K_{k}^{l}) + b_{k}^{l} \big).$$

Consequently,

$$n_{HT}^{l} = \left\lfloor \frac{n_{HT}^{l-1} + 2\,pd^{l} - fl^{l}}{st^{l}} \right\rfloor + 1, \qquad n_{WT}^{l} = \left\lfloor \frac{n_{WT}^{l-1} + 2\,pd^{l} - fl^{l}}{st^{l}} \right\rfloor + 1.$$

8. Here, the quantity of filters is represented by $n_{CNL}^{l}$.

9. For the l-th layer, the parameters that are learned are given below:

10. The filters have $(fl^{l} \times fl^{l} \times n_{CNL}^{l-1}) \times n_{CNL}^{l}$ parameters.

11. For broadcasting, the bias has $(1 \times 1 \times 1) \times n_{CNL}^{l}$ parameters.
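These parameter counts can be checked with a small Keras sketch (the layer and input sizes below are illustrative assumptions, not taken from the paper): a Conv2D layer with 8 filters of size 3×3 on a 3-channel input reports (3·3·3)·8 weights plus 8 biases.

from tensorflow import keras

# Assumed sizes for illustration: 64x64 RGB input, 8 filters of size 3x3.
layer = keras.layers.Conv2D(filters=8, kernel_size=3, activation="relu",
                            input_shape=(64, 64, 3))
model = keras.Sequential([layer])

# (3 * 3 * 3) * 8 weights + 8 biases = 224 learned parameters.
print(model.count_params())   # 224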

The following paragraphs describe the PLN layer. The input features are down-sampled using the PLN layer without any influence on the number of channels. In this procedure, the following notation is used.

12. Input image of melanoma: $i^{0}$ is the input image, and $i^{l-1}$ is the layer input, having the size $n_{HT}^{l-1} \times n_{WT}^{l-1} \times n_{CNL}^{l-1}$.

13. PD is denoted as $pd^{l}$.

14. ST is denoted as $st^{l}$.

15. Pooling filter size: $fl^{l}$.

16. The PLN function (e.g., average or max) is denoted as $\delta^{l}$.

17. Result: $i^{l}$, having the size $n_{HT}^{l} \times n_{WT}^{l} \times n_{CNL}^{l}$.

It can be written as

$$i^{l}_{[x,y,k]} = \delta^{l}\big( i^{l-1}_{[x\,st^{l} : x\,st^{l}+fl^{l},\; y\,st^{l} : y\,st^{l}+fl^{l},\; k]} \big),$$

where

$$n_{HT}^{l} = \left\lfloor \frac{n_{HT}^{l-1} + 2\,pd^{l} - fl^{l}}{st^{l}} \right\rfloor + 1, \qquad n_{WT}^{l} = \left\lfloor \frac{n_{WT}^{l-1} + 2\,pd^{l} - fl^{l}}{st^{l}} \right\rfloor + 1, \qquad n_{CNL}^{l} = n_{CNL}^{l-1},$$

when $x$ and $y$ range over the output spatial positions and $k$ over the channels.

There are no parameters to learn in the PLN layer. The following paragraphs describe the FCL. A finite number of neurons are present in an FCL; a vector is passed as input, and it also generates a vector as output. Generally, considering the n-th node of the m-th layer, the following equation can be derived:

$$i^{m}_{n} = \delta^{m}\Big( \sum_{j=1}^{k^{m-1}} w^{m}_{n,j}\, i^{m-1}_{j} + b^{m}_{n} \Big).$$

The result of the pooling or convolution layer is the input $i^{m-1}$, and it has the dimension $n_{HT}^{m-1} \times n_{WT}^{m-1} \times n_{CNL}^{m-1}$. The tensor is flattened to a 1D vector to plug the input into the FCL; the dimension of the flattened vector is, consequently, $k^{m-1} = n_{HT}^{m-1} \cdot n_{WT}^{m-1} \cdot n_{CNL}^{m-1}$.

For the l-th layer, the learned parameters would be:

• the weights $w^{l}$, having $k^{l-1} \times k^{l}$ parameters;

• $k^{l}$ parameters for the bias.

    Following is the pictorial illustration of FCL (see Fig.8).

    Figure 8: Fully connected layers

All the procedures explained above are combined as presented below (see Fig. 9). A sequence of Convl layers (with AFNs) is applied, followed by a PLN layer, and this pattern is repeated several times; through this procedure, feature extraction from the melanoma image is performed. Next, the extracted features are passed to a neural network of FCLs, again with AFNs. While moving deeper into the network, it is desirable to reduce $n_{WT}$ and $n_{HT}$ and to increase $n_{CNL}$. The shape of the CNN is given in Fig. 10.

    Figure 9: The schematic view of CNN(a)

    Figure 10: The overall view of CNN(b)
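The Conv→Pool stacking followed by flattening and fully connected layers can be written compactly in Keras. The sketch below is a generic illustration of that shape only; the layer sizes are assumptions, not the configuration used in this work. Width and height shrink, the channel count grows as we go deeper, and the final sigmoid node gives the benign/malignant output.

from tensorflow import keras
from tensorflow.keras import layers

# Illustrative stack only; sizes are assumed, not the paper's configuration.
model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),                      # nWT, nHT shrink
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),                      # nCNL grows as we go deeper
    layers.Flatten(),                            # tensor -> 1D vector for the FCL
    layers.Dense(64, activation="relu"),         # fully connected layer
    layers.Dense(1, activation="sigmoid"),       # benign vs. malignant
])
model.summary()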

    3.2 Overview of Architecture

INCP-v3 is a widely acclaimed image-processing architecture that different researchers have built up over time. It is originally based on the article by Szegedy et al., "Rethinking the Inception Architecture for Computer Vision". The model has accomplished a precision rate higher than 98 percent (using preprocessing filters) on the ImageNet dataset.

Before applying INCP-v3, image filtering is applied as a preprocessing phase. It is essential to detect the ME edges, eliminate noise, and sharpen and smooth the ME input images. A comparison of different standard imaging filters is provided in Fig. 11 to analyze their performance. The Gaussian filter obtains the best results here, so we utilized this filter for the preprocessing of ME images. In the figure, the x-axis shows the size of the filter and the y-axis shows the BRISQUE value, which represents the quality of the image after applying that specific filter.

    Figure 11: Comparative results of distinct imaging filters
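The Gaussian preprocessing step can be reproduced with OpenCV; the file path and the kernel size below are illustrative assumptions (the paper selects the filter size by comparing BRISQUE scores, as plotted in Fig. 11).

import cv2

img = cv2.imread("melanoma_sample.jpg")            # path is illustrative
# Gaussian smoothing; a 5x5 kernel is assumed here, sigma derived automatically.
denoised = cv2.GaussianBlur(img, (5, 5), 0)
cv2.imwrite("melanoma_sample_gaussian.jpg", denoised)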

A pictorial representation of INCP-v3 is given in Fig. 12. INCP-v3, proposed by Google, is the third version of its CNN series. It is trained on one thousand classes from the ImageNet dataset, which itself comprises around one million images.

    Figure 12: Overview of INCP-v3

It comprises inception modules (IM), which apply filters of multiple sizes to the inputs of the same level. The computational cost involved in an IM is addressed by applying a 1×1 convolution to truncate the input to a smaller intermediate block known as the bottleneck layer (BTNKL); the various filters are then applied on the BTNKL, which considerably reduces the computational cost. For regularization, the auxiliary classifiers in INCP-v3 contribute to the loss function. To use the pre-trained ImageNet weights with INCP-v3, the size of the input images needs to be 299×299.
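A minimal transfer-learning sketch of the INCP-v3 backbone in Keras, assuming the ImageNet weights and the required 299×299 input; the new classification head and the choice to freeze the backbone are illustrative assumptions, not the exact configuration reported in this work.

from tensorflow import keras
from tensorflow.keras import layers

# Pre-trained INCP-v3 backbone (ImageNet weights) without its original classifier.
base = keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False                      # transfer learning: reuse learned filters

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),                    # assumed regularization
    layers.Dense(1, activation="sigmoid"),  # melanoma vs. benign
])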

Algorithm 1: Melanoma Detection

Initialization:
1. LD = Labeled Dataset
2. ULD = Unlabeled Dataset
3. TIT = Total Iterations
4. LM = Learning Model
5. TR = Threshold-Rate

Procedure:
6. Dataset-Augmentation ← image filtering using the Gaussian filter, transformation of color space, noise injection, flipping, rotation, cropping
7. TrainingDataset ← DStrain
8. TrainingPhase ← Ptrain
9. TunedModel ← MDtuned
10. Obtain MDtuned using Ptrain
11. DSsynthetic ← { }
12. While {mi, nj} ∈ Ptrain do
13.   Synthesize smp samples {mi, nj}P1 using MDtuned
14.   DSsynthetic ← DSsynthetic ∪ {mi, nj}P1
15. End While
16. TML ← Training (LD, LM, Params)
17. For j = 1 : TIT
18.   Forward_Propagation
19.   Prediction = Classify: (output-label, error-rate) ← TML(ULD)
20.   For instances of ULD
21.     If output-label == Prediction
22.       // True-Positives, True-Negatives
23.     Else
24.       // False-Positives, False-Negatives
25.   End For
26.   Evaluation of the loss function
27.   Back_Propagation
28.   While error-rate > TR do
29.     Parameter update
30.   End While
31. End For

Outcome:
32. Trained CNN Framework: TML

    Fig.13 shows the overall flow of the proposed work.

    Figure 13: The flow of proposed work

    3.2.1 Dataset

The dataset utilized for this research is acquired from the "International Skin Imaging Collaboration (ISIC)" repository. We used a data augmentation approach to increase the number of melanoma images. The following techniques are used for data augmentation (a minimal sketch follows the list):

• Transformation of color space

• Noise injection

• Flipping

• Rotation

• Cropping
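A hedged Keras sketch of these augmentation operations is given below; the specific parameter values (rotation range, noise scale, zoom factor) are illustrative assumptions rather than the settings used in the paper.

import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def add_noise(image):
    """Noise injection: add mild Gaussian noise (the scale is an assumed value)."""
    return image + np.random.normal(0.0, 10.0, image.shape)

augmenter = ImageDataGenerator(
    rotation_range=40,            # rotation
    horizontal_flip=True,         # flipping
    vertical_flip=True,
    channel_shift_range=30.0,     # rough stand-in for color-space transformation
    zoom_range=0.2,               # zoom/crop-like effect
    preprocessing_function=add_noise,
)
# flow_from_directory would then stream augmented ISIC images during training.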

Thanks to the extensive training data, a better learning rate is achieved. Once the data is prepared, it is split into three parts: 1) the training dataset, used for batch construction and model training; 2) the dev dataset, used to evaluate bias and variance and to fine-tune the model; and 3) the test dataset, used to estimate the generalization precision/error of the trained model.

    3.2.2 Training Methodology

A set of labeled melanoma images is used for training the CNN. When the input images are provided, they are propagated through the distinct layers of the CNN, which results in the classification of the melanoma state.
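In Keras terms, this training phase amounts to compiling the model with a loss and an optimizer and fitting it on the labeled batches. The optimizer, loss, and epoch count below are placeholders; the actual hyper-parameters used in this work are listed in Tab. 1, and model here refers to the transfer-learning sketch above.

model.compile(optimizer="adam",                # assumed optimizer
              loss="binary_crossentropy",      # benign vs. malignant
              metrics=["accuracy"])

# train_ds / dev_ds are assumed tf.data (or generator) pipelines built from
# the augmented ISIC training and dev splits described above.
history = model.fit(train_ds, validation_data=dev_ds, epochs=30)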

    3.2.3 Experimental Results

For melanoma classification, a single metric (i.e., accuracy) is not enough to analyze the model's overall effectiveness. Thus, Support, F1 Score, Recall, and Accuracy are computed for melanoma classification. The confusion matrix (CM) is additionally plotted to analyze the performance of the model. The confusion matrix has four components: TNV = True Negative; FPV = False Positive; FNV = False Negative; and TPV = True Positive.

For instance, when the input image is labeled as ME and the model likewise categorizes it as ME, it is counted as a TPV. When the input image carries the ME label but is categorized as any of the other six categories, it is an FNV case. An FPV happens when the input image is classified as a positive (ME) case by the system although it is actually associated with one of the other six diseases. When a negative (non-ME) image is also predicted as negative by the system, it is counted as a TNV.
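Given the model's predicted labels on the test split, the confusion-matrix components and the reported metrics can be computed with scikit-learn; the label arrays below are illustrative placeholders, not the paper's results.

import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             classification_report)

# Illustrative labels: 1 = melanoma, 0 = benign (replace with real test results).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()   # TNV, FPV, FNV, TPV
print("TPV:", tp, "TNV:", tn, "FPV:", fp, "FNV:", fn)
print("Accuracy:", accuracy_score(y_true, y_pred))
# Precision, Recall, F1 Score, and Support for each class:
print(classification_report(y_true, y_pred, target_names=["benign", "melanoma"]))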

The hyper-parameters used in the experiments of the proposed work are given in Tab. 1 below. The images in the dataset are resized to the 299×299 input size required by this model. The acquired results are presented in Tab. 2. Fig. 14 provides an overview of the ME classification, and Fig. 15 shows the accuracy and loss curves for the ME classification while testing the proposed work. Lastly, Tab. 3 compares the proposed work with prior research.

    Figure 14: Classification overview

    Figure 15: Accuracy and loss curve

    4 Discussion

CNN, one of the most successful algorithms for image processing, is used in this research for melanoma detection; its parameter weights are distributed over multiple layers. To select the parameters that give the best prediction results, we start from the real-valued input image and use an objective function known as the loss function (LFN). On the training dataset, it quantifies the distance between the predicted and real values. We minimized the loss rate using two operations, described below.
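For a binary benign/malignant output, a standard choice of LFN (an assumption consistent with a sigmoid output node, since the formula is not spelled out here) is the binary cross-entropy averaged over the m training images:

$$\mathcal{L} = -\frac{1}{m}\sum_{i=1}^{m}\Big[y_i\log \hat{y}_i + (1-y_i)\log\big(1-\hat{y}_i\big)\Big],$$

where $y_i$ is the real label and $\hat{y}_i$ the predicted probability; minimizing $\mathcal{L}$ over the parameters quantifies exactly the distance between predicted and real values described above.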

    Table 1: Hyperparameters

    Table 2: Experimental results for melanoma detection

    Table 3: Comparative view with prior studies

During the backward propagation phase, the gradient of the cost function is calculated with respect to the distinct parameters, and the parameters are then updated.

Also, during forward propagation, the data is propagated through the network (in batches or through the entire network). Next, the LFN is calculated on the batch as the summation of the error for the classified results. The procedure is repeated for the number of times specified by the epoch number.

    INCP-v3 outperformed due to the following reasons:

Connection sparsity: the output of each layer is used as an input for the next layer.

Sharing of parameters: in a Convl, if one feature detector is valuable for one part of an image, it can be valuable for other parts as well. INCP-v3 (with Gaussian-filter preprocessing) is utilized in this research for melanoma classification and achieves an accuracy rate of 98.96%.

    5 Conclusion

To summarize, the purpose of this research is to develop a CNN model (i.e., INCP-v3 with a Gaussian filter for image preprocessing) to analyze and identify ME using image processing. Additionally, it investigated the data-augmentation approach, as a preprocessing measure, for enhancing the robustness of the INCP-v3 framework's characterization. TL is used to leverage a pre-trained model rather than training a new model from scratch. The INCP-v3 model achieved a precision rate greater than 98 percent, outperforming several previous studies.

    Acknowledgement:Thanks to my family and colleagues, who provided moral support.

    Funding Statement:The author received no specific funding for this study.

    Conflicts of Interest:The author declares that he has no conflicts of interest to report regarding the present study.
