
    An Improved Convolutional Neural Network Model for DNA Classification

    Computers, Materials & Continua, 2022, Issue 3

    Naglaa F. Soliman, Samia M. Abd-Alhalem, Walid El-Shafai, Salah Eldin S. E. Abdulrahman, N. Ismaiel, El-Sayed M. El-Rabaie, Abeer D. Algarni and Fathi E. Abd El-Samie

    1 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia

    2 Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menoufia, 32952, Egypt

    3 Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menoufia, 32952, Egypt

    Abstract: Recently, deep learning (DL) has become one of the essential tools in bioinformatics. A modified convolutional neural network (CNN) is employed in this paper to build an integrated model for deoxyribonucleic acid (DNA) classification. In any CNN model, convolutional layers are used to extract features, followed by max-pooling layers to reduce the dimensionality of the features. A novel method based on downsampling and CNNs is introduced for feature reduction. The downsampling is an improved form of the existing pooling layer that yields better classification accuracy. The two-dimensional discrete transform (2D DT) and two-dimensional random projection (2D RP) methods are applied for downsampling. They convert the high-dimensional data to low-dimensional data and transform the data into the most significant feature vectors. However, there are parameters that directly affect how a CNN model is trained. In this paper, some issues concerned with the training of CNNs are handled. The CNNs are examined by changing some hyperparameters such as the learning rate, the mini-batch size, and the number of epochs. Training and assessment of the performance of the CNNs are carried out on 16S rRNA bacterial sequences. Simulation results indicate that a CNN based on wavelet subsampling yields the best trade-off between processing time and accuracy with a learning rate of 0.0001, a mini-batch size of 64, and 20 epochs.

    Keywords: DNA classification; CNN; downsampling; hyperparameters; DL; 2D DT; 2D RP

    1 Introduction

    Technological advances in DNA sequencing have allowed sequencing of the genome at a low cost within a reasonable period. These advances induced a huge increase in the available genomic data. Bioinformatics addresses the need to manage and interpret the data that is massively generated by genomic research. Computational DNA classification is among the main challenges, and it plays a vital role in the early diagnosis of serious diseases. Advances in machine learning techniques are expected to improve the classification of DNA sequences [1]. Recently, survey studies have been presented by Leung et al. [2], Mamoshina et al. [3], and Greenspan et al. [4]. These studies discussed bioinformatic applications based on DL. The first two are limited to applications in genomic medicine and the latter to medical imaging. DL is a relatively new field of artificial intelligence, which achieves good results in big data processing areas such as speech recognition, image recognition, text comprehension, translation, and genomics.

    There are several contributions based on DL in the fields of medical imaging and genomic medicine. However, the DNA sequence classification problem has received little attention. For an in-depth study of DL in bioinformatics, we can consider the review conducted by Seonwoo et al. [5]. In addition, several studies have been devoted to the utilization of CNNs and recurrent neural networks (RNNs) in the field of bioinformatics and DNA classification [6,7].

    1.1 The CNNs

    The classification task based on CNNs depends on several layers. Tab. 1 provides a list of the basic functions of a variety of CNN layers [5].

    Rizzo et al. [8] presented a DNA classification approach that depends on a CNN and the spectral representation of DNA sequences. From the results, they found that their approach provided consistently good results, between 95% and 99%, at each taxonomic level. Moreover, Rizzo et al. [1] suggested a novel algorithm that depends on CNNs with frequency chaos game representation (FCGR). The FCGR was utilized to convert the original DNA sequence into an image before feeding it to the CNN model. This method is considered an expansion of the spectral representation that was reported to be efficient. This work is a continuation of the work of Rizzo et al. [1] for the classification of DNA sequences using a deep neural network and chaos game representation, except for the addition of downsampling layers that can achieve the best trade-off between performance and processing time, which is the main contribution of this work. The proposed approach is an improved form of the CNN that obtains better classification accuracy.

    1.2 Data Reduction Step

    A weakness of the convolutional layer is that it records the exact position of features in the input. Slight shifts of the features in the input image therefore lead to different feature maps. The pooling layer is used to resize the feature maps to overcome this problem. A simplified representation of the features observed in the input is the outcome of using a pooling layer. In practice, max-pooling works better than average pooling in computer vision fields such as image recognition [9]. In signal processing, this issue can be handled using downsampling methods such as 2D RP, two-directional two-dimensional random projection ((2D)²RP), and 2D DT. As a result, a lower-resolution representation of the input signal is produced, including the significant structural components without fine details that might not be helpful. The main purpose of the RP is to reduce the high dimensionality while preserving the geometrical relationships of the data.

    Table 1: A list of the basic layers used in CNNs

    Dimensionality reduction methods can be briefly categorized into two classes, namely subspace and feature selection methods. Subspace methods include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Random Projection (RP), etc. The RP can be training-free and much faster. Some extensions of one-dimensional RP (1D RP), including two-dimensional RP (2D RP) [10], two-directional two-dimensional RP ((2D)²RP) [11,12], and sparse RP [13], require far lower computational complexity and storage cost than traditional 1D RP.

    The authors of [13] used 2D schemes instead of 1D ones to reduce computational complexity and storage costs. In addition, in [10], the authors proposed (2D)²PSRP methods to generate 2D cancelable faces and palmprints. The authors in [12] showed that the verification performance of 1D cancelable palmprint codes cannot meet the accuracy requirements, and that their computational and storage costs are large. So, 1D cancelable palmprint codes were extended to 2D cancelable palmprint codes. Moreover, the authors in [11] proposed a novel method called (2D)²RP for feature extraction from biometrics, where they employed (2D)²RP and its variations on face and palmprint databases.

    Feature selection methods depend on different spectral transformations, such as the two-dimensional Discrete Cosine Transform (2D DCT) and the two-dimensional Discrete Wavelet Transform (2D DWT), to extract the features and reduce the amount of data, thereby simplifying the subsequent classification problem and hence decision-making. Adaptive selection/weighting of features/coefficients is typically used for dimensionality reduction and performance improvement. The features that achieve high discrimination [14], high accuracy [15], and low correlation [12] should be selected and given high weights. The number of selected features is less than that of the original features. Feature selection methods have several advantages compared with subspace methods such as PCA. Feature selection methods can be fast and training-free, while being comparable to the subspace methods in terms of accuracy. Furthermore, the selected features maintain their original forms, so it is easy to observe their true values. The authors in [16] proposed a novel approach for face and palmprint recognition in the DCT domain. In addition, the utilization of fusion rules is also an important tool to reduce computational complexity and storage costs [17].

    The rest of this paper is organized as follows. Section 2 presents the proposed CNN models based on different downsampling layers. Max-pooling, DT, and RP are explained in Sections 3-5, respectively. Section 6 introduces the dataset. The results and discussions are given in Section 7. Finally, Section 8 gives the concluding remarks.

    2 The Proposed CNNs Based on Different Downsampling Layers

    We designed the proposed architecture, inspired by the architecture of Rizzo et al. [1], which has been reported as efficient for bacteria classification. Compared with the original architecture of Rizzo et al. [1], we have added one convolutional layer followed by a DT, 2D RP, or (2D)²RP-variant layer. Fig. 1 shows the proposed model. First, the input DNA sequences are preprocessed using the FCGR algorithm with k = 6, 7, and 8. Thus, the output image is of dimension 2^k × 2^k. For more details about FCGR, see [1,18]. Then, the normalized output is processed to make the input images suitable for the proposed CNN. The proposed CNN model consists of seven layers. The first four layers (from l1 to l4) are convolutional layers, each followed by a max-pooling layer. Additionally, the layers l5 to l6 are convolutional layers followed by various downsampling layers, which are applied to reduce the dimensionality of training. Several downsampling methods are implemented, such as DT, 2D RP, and variations of (2D)²RP. Simulation parameters are specified in Tab. 2.
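
    To make the preprocessing step concrete, the following MATLAB sketch builds an FCGR image by counting k-mers into a 2^k × 2^k grid. It is an illustrative reconstruction, not the authors' code: the corner assignment (A, C, G, T mapped to the four corners of the unit square) and the normalization are assumed conventions; the paper follows the FCGR of [1,18].

    function img = fcgr(seq, k)
        % Frequency chaos game representation of a DNA string (sketch).
        seq = upper(seq);
        corners = containers.Map({'A','C','G','T'}, {[0 0], [0 1], [1 1], [1 0]});
        n = 2^k;
        img = zeros(n, n);
        for i = 1:(length(seq) - k + 1)
            kmer = seq(i:i+k-1);
            if ~all(isKey(corners, num2cell(kmer))), continue; end   % skip ambiguous bases
            x = 0.5; y = 0.5;
            for j = 1:k
                c = corners(kmer(j));
                x = (x + c(1)) / 2;      % move halfway toward the corner of base j
                y = (y + c(2)) / 2;
            end
            row = floor(y * n) + 1;      % cell of the 2^k x 2^k grid hit by this k-mer
            col = floor(x * n) + 1;
            img(row, col) = img(row, col) + 1;
        end
        img = img / (max(img(:)) + eps); % simple normalization before the CNN
    end

    Each DNA sequence then yields one normalized 2^k × 2^k image that is fed to the proposed CNN.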

    Figure 1: The architecture of the proposed model

    Table 2: Simulation parameters

    After the convolutional layers, a set of output images is generated, each of dimension (2b_i − 5 + 1), where 2b_i × 2b_i is the input size of convolutional layer l_i and 5 × 5 is the filter size. After pooling by a factor of 2, the output dimension becomes (2b_i − 4)/2 = b_i − 2.

    For example, let k = 6. Hence, the FCGR image is of dimension 2^6 × 2^6 = 64 × 64, and the first convolutional layer (layer l1) produces 20 output images of dimension (64 − 5 + 1) = 60. Then, the pooling layer is applied, which produces 20 output images of dimension 60/2 = 30. The proposed CNNs are trained for five different classification tasks, as illustrated in Fig. 2, and the simulation parameters are presented in Tab. 2.
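
    As a quick check of this size bookkeeping for the three FCGR resolutions used here (k = 6, 7, 8), the short MATLAB sketch below repeats the same arithmetic for the first convolution-plus-pooling stage (5 × 5 filters, valid convolution, pooling by 2); the parameters of the later layers are those of Tab. 2.

    for k = 6:8
        in   = 2^k;               % FCGR image dimension
        conv = in - 5 + 1;        % output of convolutional layer l1 (valid mode)
        pool = conv / 2;          % output of the following pooling layer
        fprintf('k = %d: %d -> %d -> %g\n', k, in, conv, pool);
    end
    % k = 6: 64 -> 60 -> 30,  k = 7: 128 -> 124 -> 62,  k = 8: 256 -> 252 -> 126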

    Figure 2: The architecture of the classifier

    3 The Max-Pooling

    The downsampling layer is another name for the pooling layer. It reduces the dimensionality of the data by dividing the input into rectangular pooling regions. Max-pooling computes the maximum of each region R_ij and consequently reduces the number of outputs. The max-pooling function is expressed as:

    y_ij = max_{(p,q) ∈ R_ij} a_pq,

    while the average pooling function can be expressed as:

    y_ij = (1 / |R_ij|) Σ_{(p,q) ∈ R_ij} a_pq,

    where a_pq is the input at (p,q) within R_ij, and |R_ij| is the size of the pooling region.

    Let us examine the effect of max-pooling when a 4 × 4 matrix input image is used, as shown in Fig. 3.
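
    A minimal MATLAB sketch of this operation is given below; the 4 × 4 values are illustrative and are not the entries of Fig. 3.

    A = [1 3 2 4; 5 6 1 2; 7 2 9 0; 3 4 1 8];   % a 4 x 4 input feature map
    Pmax = zeros(2, 2);
    Pavg = zeros(2, 2);
    for i = 1:2
        for j = 1:2
            block = A(2*i-1:2*i, 2*j-1:2*j);    % 2 x 2 pooling region R_ij
            Pmax(i, j) = max(block(:));         % max-pooling output
            Pavg(i, j) = mean(block(:));        % average pooling output
        end
    end
    disp(Pmax)   % [6 4; 7 9]
    disp(Pavg)   % [3.75 2.25; 4 4.5]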

    Given the irregular nature of DNA sequences and the k-mer recognition task, an effective downsampling layer increases the ability of the CNN to achieve high performance. However, the classification results do not depend critically on the feature extraction stage; they depend strongly on how these features are reduced.

    Figure 3: The pooling of a 4 × 4 matrix (a) The 4 × 4 matrix (b) The effect of max-pooling

    4 Discrete Transform (DT)

    Since the FCGR converts the DNA sequences into images, we can apply spectral transformations (the Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT), and Discrete Wavelet Transform (DWT)) for downsampling and feature extraction from the DNA images. The motivation for applying these transformations emerges from their wide and effective use for feature extraction, decorrelation, ordering, and dimensionality reduction in the fields of speech, image, and bio-signal processing [19]. In signal processing, the DCT [20] can reveal the discriminative characteristics of a signal, namely its frequency components. It is considered a separable linear transformation. The basic idea of the DT is to select a certain sub-band after implementing the transformation. For example, the DCT can be implemented on the numerical sequence representing the DNA, and certain coefficients from the DCT can be selected to represent the whole sequence. The two-dimensional DCT of an M × N input image A is given by:

    B_pq = α_p α_q Σ_{m=0..M−1} Σ_{n=0..N−1} A_mn cos(π(2m+1)p / (2M)) cos(π(2n+1)q / (2N)), 0 ≤ p ≤ M−1, 0 ≤ q ≤ N−1,

    where

    α_p = 1/√M for p = 0, and α_p = √(2/M) for 1 ≤ p ≤ M−1,

    and

    α_q = 1/√N for q = 0, and α_q = √(2/N) for 1 ≤ q ≤ N−1,

    while M and N are the row and column lengths of A, respectively.
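
    A minimal sketch of this DCT-based downsampling, under the assumption that the low-frequency (upper-left) block of coefficients is the retained sub-band, could look as follows in MATLAB (dct2 requires the Image Processing Toolbox; the block size is illustrative).

    X  = rand(60, 60);        % a feature map produced by a convolutional layer
    B  = dct2(X);             % 2D DCT coefficients of the map
    Xs = B(1:30, 1:30);       % keep the upper-left (low-frequency) 30 x 30 block
    % Xs replaces the max-pooled map as the reduced feature representation.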

    The wavelet transform is faster and more efficient than the Fourier transform in capturing the essence of data [20]. Therefore, there is a growing interest in utilizing the wavelet transform to analyze biological sequences. The DWT is investigated to predict similarity accurately and to reduce computational complexity compared with the DCT and DFT techniques.

    The wavelet transform is a powerful method for analyzing and processing non-stationary signals, such as bio-signals, in which both time-domain and frequency-domain information are required. Wavelet analysis is often used for compression and de-noising of signals without appreciable degradation. The wavelet transform can be used to analyze sequences in different frequency bands. In the 2D DWT, the image is decomposed into four sub-bands, and after filtering, the signal is downsampled by 2. In this work, the DWT is employed to reduce the dimensionality of features by performing a single-level 2D wavelet decomposition. The decomposition is conducted using a particular wavelet filter. Then, the approximation coefficients (LL) are selected. For example, let the first convolutional layer (layer l1) produce 20 output images of dimension 2b_1. Then, a DWT pooling layer is applied, which produces 20 output images of dimension b_1. Fig. 4 displays an example of the proposed DWT pooling.
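
    A minimal MATLAB sketch of this DWT pooling step is shown below (dwt2 requires the Wavelet Toolbox; the Haar filter is an assumed choice of wavelet filter).

    X = rand(60, 60);                     % a 2b_1 x 2b_1 feature map, here 2b_1 = 60
    [LL, LH, HL, HH] = dwt2(X, 'haar');   % single-level 2D wavelet decomposition
    size(LL)                              % 30 x 30, i.e., b_1 x b_1: the retained sub-band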

    Figure 4: The proposed DWT pooling

    5 Two-Dimensional Random Projection (2D RP)

    This method achieves dimensionality reduction with low computational cost [21,22]. If the original dataset is represented by the matrix X of size d × n, then the projection of the data onto a lower k-dimensional space gives the matrix Y of size k × n as follows:

    Y = R X,

    where R is the k × d RP matrix and k ≪ d.

    5.1 Implementation of RP

    The following stages of the RP are written using Matlab 2018a:

    • Set the input as the feature map X of size m × m (the multilayer CNN features).

    • Reshape the input to X of size d × n.

    • Create a k × d random matrix R, where k ≪ d.

    • for j = 1:n

    •     Y(:, j) = R × X(:, j);

    • end for

    • Output = Y of size k × n.
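
    A runnable MATLAB version of these steps, with purely illustrative sizes, might look as follows; the scaling of the random matrix by 1/√k is a common convention and is an assumption here.

    m = 30; k = 100;                   % assumed feature-map size and target dimension
    Xmaps = rand(m, m, 20);            % 20 multilayer CNN feature maps
    X = reshape(Xmaps, m*m, []);       % X of size d x n, with d = m*m and n = 20
    [d, n] = size(X);
    R = randn(k, d) / sqrt(k);         % k x d random projection matrix, k << d
    Y = zeros(k, n);
    for j = 1:n
        Y(:, j) = R * X(:, j);         % project each column, as in the listing above
    end
    % Equivalently, Y = R * X in a single matrix product.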

    5.2 Two-Directional Two-Dimensional Random Projection ((2D)²RP)

    The 2D RP can be implemented simultaneously in two directions, which is called (2D)²RP. In this method, the input matrix is projected in the row direction and the column direction as follows:

    Y = R X C,    (8)

    where R (of size k × d) and C (of size n × h) are the left mapping matrix for the column direction and the right mapping matrix for the row direction, respectively, and h ≪ n, k ≪ d. The details of (2D)²RP were explained in [12]. With Eq. (8), the projection of the data onto a lower (k × h)-dimensional subspace is implemented.
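
    Under this reading of Eq. (8), a minimal MATLAB sketch of (2D)²RP is given below; the sizes and the 1/√k and 1/√h scalings are illustrative assumptions.

    d = 60; n = 60; k = 20; h = 20;    % illustrative sizes with k << d and h << n
    X = rand(d, n);                    % input feature map
    R = randn(k, d) / sqrt(k);         % left (column-direction) mapping matrix
    C = randn(n, h) / sqrt(h);         % right (row-direction) mapping matrix
    Y = R * X * C;                     % projected feature map of size k x h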

    5.3 Variations of (2D)²RP

    Dimensionality reduction is the main purpose of pooling layers, as introduced in the previous sections. In this work, the DWT and DCT are proposed to make the pooling layer satisfy this purpose and add more detail to the feature maps. Hybrid methods that combine (2D)²RP with the DWT or DCT have been proposed. These methods, namely (2D)²RP DWT and (2D)²RP DCT, are based on the matrices R and C as indicated in Tab. 3.

    Table 3: Variations of (2D)²RP

    6 Dataset Descriptions

    Data were obtained from the Ribosomal Database Project (RDP) [23], Release 11. A file in the FASTA format was obtained from the repository, which includes data on 1,423,984 bacterial gene sequences. For each bacterium, we have data on which taxonomic categories the genetic sequences belong to. In addition, we have information on the phylum, class, order, family, and genus of a given 16S rRNA gene sequence. The bacterial genome contains the small-subunit ribosomal RNA transcript, which is useful as a general genetic marker. It is often used to determine bacterial diversity, identification, and genetic similarity, and it is the basis for molecular taxonomy [24]. Two different sequence sets were used for comparison: (a) full-length sequences with a length of approximately 1200-1500 nucleotides and (b) 500 bp DNA sequence fragments. The total dataset includes sequences of the 16S rRNA gene of bacteria belonging to 3 different phyla, 5 different classes, 19 different orders, 65 different families, and 100 different genera, as shown in Tab. 4.

    Table 4: 16S Bacteria dataset composition

    7 Results and Discussions

    One of the key issues that affects DNA classification based on CNNs is avoiding the dimensionality problem and the sensitivity to the positions of the features. Even though the complex nature of DNA sequences is handled by the convolutional layers, it is still necessary to ensure that the multi-layer CNN feature map has suitable dimensions. Therefore, there is a strong need for a downsampling layer that improves the generalization ability of the original features. In this work, the CNN is utilized as the deep learning choice, FCGR is applied as the data preprocessing method, and different types of downsampling layers are introduced, such as DCT, DWT, 2D RP, (2D)²RP, (2D)²RP DCT, and (2D)²RP DWT. A comparison is presented for the performance of the CNN based on the different downsampling layers. Finally, a random search method is applied to optimize the hyperparameters.
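
    As a rough illustration of such a random search (not the authors' exact procedure), the MATLAB sketch below samples candidate settings from the ranges examined in Section 7.2 and keeps the best one; trainAndEvaluate is a hypothetical helper standing in for training the chosen CNN and returning its validation accuracy.

    lrGrid    = [1e-2 1e-3 1e-4 1e-5];   % learning rates examined in Section 7.2.1
    batchGrid = [32 64 128];             % mini-batch sizes examined in Section 7.2.2
    epochGrid = [10 20 30];              % numbers of epochs examined in Section 7.2.2
    best = struct('acc', 0, 'lr', [], 'bs', [], 'ep', []);
    for trial = 1:10                                      % number of random trials (assumed)
        lr = lrGrid(randi(numel(lrGrid)));
        bs = batchGrid(randi(numel(batchGrid)));
        ep = epochGrid(randi(numel(epochGrid)));
        acc = trainAndEvaluate(lr, bs, ep);               % hypothetical training/evaluation helper
        if acc > best.acc
            best = struct('acc', acc, 'lr', lr, 'bs', bs, 'ep', ep);
        end
    end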

    7.1 Comparison between Different Types of Downsampling Based on CNN

    The effectiveness of different downsampling layers has been investigated for classifying bacterial sequences with the highest possible accuracy. First, the given DNA sequences have been mapped using the FCGR algorithm with k = 6, 7, and 8. Then, the proposed CNN models based on different downsampling layers have been trained for each taxon. These models are:

    • Model_1 (Max-CNN): the model of Rizzo et al. [1].

    • Model_2 (RP-CNN): CNN classification followed by max-pooling or 2D RP.

    • Model_3 (DWT-CNN): CNN classification followed by max-pooling or DWT.

    • Model_4 (DCT-CNN): CNN classification followed by max-pooling or DCT.

    • Model_5 ((2D)²RP-CNN): CNN classification followed by max-pooling or (2D)²RP.

    • Model_6 ((2D)²RP DCT-CNN): CNN classification followed by max-pooling or (2D)²RP DCT.

    • Model_7 ((2D)²RP DWT-CNN): CNN classification followed by max-pooling or (2D)²RP DWT.

    To demonstrate the effectiveness of the proposed models, two simulation experiments are conducted. In the first case, the efficiency of the prediction at each taxonomic level is measured separately, taking into account the whole bacterial sequence. In the second case, instead of the whole sequence, we consider only the 500 bp long fragments. The simulation results are demonstrated in Tabs. 5-7, and Fig. 5 introduces the experimental results for the full-length DNA sequences, while Tabs. 8-10 and Fig. 6 present the results for the 500 bp-length sequences. The classification is obtained for the same sequences with image representations at different values of k. From these tables and figures, it is clear that the proposed CNN models based on DWT and (2D)²RP DWT always achieve the best performance. Furthermore, the (2D)²RP DWT-CNN model consumes less running time. The best choice for mapping is k = 8, because it improves the accuracy and F-score compared with those achieved at k = 6 and 7. Moreover, the proposed CNN based on (2D)²RP DWT has a processing time that is less than that of the Max-CNN by about 135 sec on average. From the mentioned results, the proposed (2D)²RP DWT-CNN model with k equal to 8 provides superior results compared with the other models.

    Table 5: Comparison of accuracy scores between the created models based on different pooling layers considering full length at k = 6

    Table 6: Comparison of accuracy scores between the created models based on different pooling layers considering full length at k = 7

    Table 7: Comparison of accuracy scores between the created models based on different pooling layers considering full length at k = 8

    Figure 5: F-scores of the proposed model at k = 8, for the full-length case

    Table 8: Comparison of accuracy scores between the created models based on different pooling layers for 500 bp-length sequences at k = 6

    Table 9: Comparison of accuracy scores between the created models based on different pooling layers for 500 bp-length sequences at k = 7

    Table 10: Comparison of accuracy scores between the created models based on different pooling layers for 500 bp-length sequences at k = 8

    Figure 6: F-scores of the proposed model at k = 8, for 500 bp-length sequences

    Tabs. 11 and 12 present comparisons between the performance of the proposed (2D)²RP DWT-CNN and the state-of-the-art models VGG16, VGG19, and ResNet-50 at k = 8 for the full-length and 500 bp-length sequences, respectively. The results indicate that the proposed (2D)²RP DWT-CNN achieves better accuracies at the genus level, by about 4.23% and 7.34% compared with the VGG16 model for the full-length and 500 bp-length sequences, respectively. The proposed model consumes 53 min, which is the lowest computational time compared with VGG16, VGG19, and ResNet-50, whose computational times were recorded as 62, 87, and 134 min, respectively, with lower classification accuracies. Finally, a comparison is conducted among the proposed (2D)²RP DWT-CNN model and the mentioned state-of-the-art models based on different datasets for the three most popular taxonomic trees (RDP, SILVA, and Greengenes) [24].

    Table 11: Comparison of the proposed (2D)²RP DWT-CNN and the state-of-the-art CNNs for the genus level at k = 8 and full-length sequences

    Table 12: Comparison of the proposed (2D)²RP DWT-CNN and the state-of-the-art CNNs for the genus level at k = 8 and 500 bp-length sequences

    Tab. 13 indicates the different datasets used for the full-length implementation. Tab. 14 summarizes the experimental results for the proposed model and the state-of-the-art models. It is shown that the proposed model is superior, and it achieves a classification accuracy equal to 97.94% against 97.14%, 96.27%, and 96.27% for RDP 11, the SILVA dataset [25], and the Greengenes dataset [26], respectively.

    Table 13: The input datasets for the full-length implementation

    Table 14: Comparison results between the proposed (2D)²RP DWT-CNN and the state-of-the-art CNNs for different datasets considering the full-length implementation

    7.2 Hyperparameter Tuning

    The training process may be quite difficult due to the large number of initial variables called hyperparameters. These values are defined before the start of the learning process. Some examples of hyperparameters include the learning rate, the mini-batch size, and the number of epochs. In this paper, some changes in hyperparameters are applied to iteratively configure and train the proposed model. This section is divided into subsections as follows:

    7.2.1 Learning Rate Results

    In this subsection, the effect of the learning rate on the CNNs with different downsampling layers at the genus level is investigated for the full-length and 500 bp-length sequences. These models include Max-CNN, RP-CNN, DCT-CNN, DWT-CNN, (2D)²RP DCT-CNN, (2D)²RP-CNN, and (2D)²RP DWT-CNN. The parameters used in the simulation are a mini-batch size of 64 and 20 training epochs. The comparison among the mentioned models at different learning rates is shown in Tabs. 15-18 for the full-length sequences.

    Table 15: CNN metrics with different downsampling layers at learning rate = 0.01 considering full-length implementation at the genus level

    Table 16: CNN metrics with different downsampling layers at the learning rate = 0.001

    Table 17: CNN metrics with different downsampling layers at a learning rate = 0.0001

    Table 18: CNN metrics with different downsampling layers at a learning rate = 0.00001

    It can be noted that the highest accuracy is obtained at learning rates of 0.0001 and 0.00001, but the processing time increases, where the 0.0001 learning rate has a processing time less than that of the 0.00001 learning rate. The same comparison is conducted for the 500 bp-length sequences to confirm the achieved results, as demonstrated in Tabs. 19-21. Therefore, at a learning rate of 0.0001, superior accuracy for the training set can be attained for any length of the DNA sequences.

    Table 19: CNN metrics with different downsampling layers at a learning rate = 0.01

    Table 20: CNN metrics with different downsampling layers at a learning rate = 0.001

    Table 21: CNN metrics with different downsampling layers at a learning rate = 0.0001

    7.2.2 Mini-batch Size and Number of Epochs

    In this subsection, the evaluation with different mini-batch sizes is investigated in the training process against different iterations for the proposed (2D)²RP DWT-CNN model (at the genus level considering the full-length implementation) with the number of epochs equal to 20 and the learning rate equal to 0.0001. The experimental results are illustrated in Fig. 7. It is clear that at a mini-batch size of 128, the proposed (2D)²RP DWT-CNN achieves lower accuracy, while at mini-batch sizes of 32 and 64, the proposed model gives a better trade-off between the accuracy score and the processing time.

    From the mentioned results, we can conclude that the best performance of the proposed DWT-CNN model is achieved at a learning rate of 0.0001 and a mini-batch size of 64. We can select a suitable number of epochs considering these values. Fig. 8 reveals the training progress of the (2D)²RP DWT-CNN model at k equal to 6, considering the full-length implementation at different numbers of epochs. It can be observed that the best accuracy is obtained at 20 epochs. Finally, after several experiments, we give the best hyperparameters in Tab. 22.

    Figure 7: Training progress of the (2D)²RP DWT-CNN model (k = 6) considering the full-length implementation at different mini-batch sizes (a) 32, (b) 64, and (c) 128

    Figure 8: Training progress of the (2D)²RP DWT-CNN model (k = 6) considering the full-length implementation at different numbers of epochs (a) 10 and (b) 30

    Table 22: The best hyperparameters used

    8 Conclusions and Future Research Directions

    This paper presented two contributions to the bacterial classification of DNA sequences. The first one is represented by the proposed models for bacterial classification using an improved CNN. In these models, the 2D RP, (2D)²RP, (2D)²RP DCT, (2D)²RP DWT, and DT methods are applied to reduce the dimensionality of the feature maps, while preserving the structural information. The proposed models make the data reduction process faster and more reliable. The simulation results revealed that selecting the appropriate downsampling layer when training the CNN can greatly influence the accuracy with an optimized computational time. According to the obtained results, it can be concluded that the CNN based on (2D)²RP DWT gives high accuracy. Furthermore, this model can achieve a good trade-off between the accuracy score and the processing time for a suitable size k of the k-length words (k-mers) whose frequencies are represented in the DNA sequences. Finally, the experimental results on different datasets reveal that the proposed (2D)²RP DWT model outperforms the state-of-the-art CNN models. The second contribution lies in evaluating the effectiveness of the hyperparameters through the created CNNs based on different downsampling layers to select the best results. It is possible to say that the best accuracy is provided by using (2D)²RP DWT as a downsampling layer with k = 6. This study confirms that a learning rate equal to 0.0001, a mini-batch size equal to 64, and a number of epochs equal to 20 are suitable to achieve the best performance on the given DNA dataset. For future work, the performance of different frequency-domain transforms for DNA classification can be investigated. In addition, deep CNN models developed from scratch can be designed to improve the DNA classification efficiency.

    Acknowledgement: The authors would like to thank the support of the Deanship of Scientific Research at Princess Nourah Bint Abdulrahman University.

    Funding Statement: This research was funded by the Deanship of Scientific Research at Princess Nourah Bint Abdulrahman University through the Fast-track Research Funding Program.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
