
AntiFlamPred: An Anti-Inflammatory Peptide Predictor for Drug Selection Strategies

Computers, Materials & Continua, 2021, Issue 10

Fahad Alotaibi, Muhammad Attique and Yaser Daanial Khan

1Department of Information System, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia

2Department of Computer Science, University of Management and Technology, Lahore, 54000, Pakistan

3Department of Information Technology, University of Gujrat, Gujrat, 50700, Pakistan

Abstract: Several autoimmune ailments and inflammation-related diseases emphasize the need for peptide-based therapeutics and have attracted substantial attention. However, wet-lab experiments for the investigation of anti-inflammatory proteins/peptides ("AIPs") are usually very costly and time-consuming. Therefore, before wet-lab investigation, it is essential to develop in-silico identification models that classify prospective anti-inflammatory candidates and thereby facilitate the drug development process. Several anti-inflammatory prediction tools have been proposed in the recent past, yet there is room to improve prediction performance in terms of precision and efficiency. An exceedingly accurate anti-inflammatory prediction model, named AntiFlamPred ("Anti-inflammatory Peptide Predictor"), is proposed by incorporating encoded features and probing machine learning algorithms, including deep learning. The proposed model performs best in conjunction with deep learning. Rigorous testing and validation were applied, including cross-validation, self-consistency, jackknife, and independent set testing. The proposed model yielded an area under the curve (AUC) of 0.919 and a Matthews correlation coefficient (MCC) of 0.735, demonstrating its effectiveness and stability. Subsequently, the proposed model was also extensively compared with other existing models, all of which it outperforms. These outcomes establish that the proposed model is a robust predictor for identifying AIPs and may contribute well to extensive lab-based examinations. Consequently, it has the potential to assiduously support medical and bioinformatics research.

Keywords: Prediction; feature extraction; machine learning; bootstrap aggregation; deep learning; bioinformatics; computational intelligence; anti-inflammatory peptides

    1 Introduction

Inflammation occurs as a reaction to several diverse causes; one such cause is an irregular response of the body's immune system to some kind of physical injury or damage [1–4]. Under normal conditions it is self-controlled, while in some disorders the inflammatory process becomes pathological, subsequently causing chronic autoimmune and inflammatory disorders, e.g., multiple sclerosis, rheumatoid arthritis, cancer, psoriasis, diabetes, and neurodegenerative disease. Maintaining immune homeostasis and preventing the onset of increased inflammation and autoimmunity essentially requires the initiation of immune tolerance [5–8]. Non-specific immunosuppressants and anti-inflammatory medications are currently in practice for the treatment of autoimmunity and inflammation syndromes. Such treatments are usually ineffective against inflammatory syndromes and, moreover, may cause further infectious diseases [8].

Various mechanisms are employed and considered necessary to preserve the state of tolerance against inflammation [9,10]. Endogenous peptides recurrently identified as anti-inflammatory agents during inflammatory reactions can be utilized for inflammatory and autoimmune therapies [11,12]. Their immunotherapeutic capability makes AIPs clinically applicable, especially owing to their specificity in generating regulatory T-cells and suppressing antigen-specific Th-1 driven reactions [13].

AIPs are currently used to treat various inflammatory diseases [14,15]. Compared to protein-based conventional biopharmaceutical drugs, their production complexity and cost are lower [16], and their high specificity and low toxicity make them potential therapeutic agents [17,18]. Besides natural peptides, synthetic peptides also have the potential to constrain the signal transduction pathways ("STD") responsible for the manifestation of inflammatory cytokines [19]. For example, chronic nasal treatment of mice with amyloid-beta ("A-beta") peptide, a pathological marker of Alzheimer's disease, results in a reduced A-beta plaque load together with anti-inflammatory cytokines [20–23]. Vasoactive intestinal peptide ("VIP"), a neuropeptide, is useful for decreasing the inflammatory components of rheumatoid arthritis by experimentally modulating the immune response [24]. In recent years, numerous active peptides have been identified by experimental methods. However, experimental identification and empirical development of new peptide-based drugs are particularly expensive, time-consuming, and laborious. Overall, the availability of experimental data makes it possible to evaluate the relationships between amino acid sequences and their properties and to computationally predict prospective candidates before synthesis. Up till now, three methods have been suggested specifically for the computational prediction of potential AIPs [25], and an effort has been made to propose a generic predictor for several therapeutic peptides, including anti-angiogenic peptides ("AAP"), anti-bacterial peptides ("ABP"), anti-cancer peptides ("ACP"), AIPs, anti-viral peptides ("AVP"), cell-penetrating peptides ("CPP"), quorum-sensing peptides ("QSP"), and surface-binding peptides ("SBP").

Gupta et al. (2017) developed an anti-inflammatory predictor using a support vector machine ("SVM") classifier and hybrid peptide features; performance analysis revealed an area under the curve (AUC) of 0.781 and a Matthews correlation coefficient (MCC) of 0.58 using tripeptide hybrid features. AIPpred, an AIP predictor proposed by Manavalan et al. utilizing a random forest (RF) classifier and sequence-encoding features, exhibited a prediction performance of AUC = 0.814 and MCC = 0.479. PreAIP [25] was developed by Khatun et al. with a random forest classifier incorporating manifold features such as primary sequence and structural information; its performance evaluation showed an AUC of 0.840 and an MCC of 0.512 on the test dataset. Wei et al. used hybrid sequence-based features, further optimized to select widely discriminative features, and trained eight random forest models to predict eight functionally different peptide types, yielding an AUC of 0.75. Consequently, the accuracy of the above-discussed existing AIP predictors is insufficient and demands further improvement for precise AIP prediction. In pursuance of this purpose, an improved AIP predictor, termed AntiFlamPred ("Anti-inflammatory Peptide Predictor"), has been proposed.

The rest of this article is organized as follows: Section 2 describes the materials and methodology. The prediction algorithms and the proposed approach used in this study are described in Section 3. Section 4 details the experiments and result-acquisition methods, and Section 5 presents the obtained results with a detailed discussion. Finally, Section 6 concludes the paper.

    2 Material and Methodology

To build the prediction model, the proposed methodology follows "Chou's 5-step rule" [26]; the flow process is depicted in Fig. 1 and is quite similar to the methodology adopted in recent research papers [27] for predicting proteomic attributes. The stepwise methodology involves (i) collection of a benchmark dataset (to be used for training and testing the prediction model); (ii) formulation of sequence samples; (iii) training of the prediction algorithm; (iv) validation and testing; and (v) an easily manageable webserver. Studies that develop sequence analysis or prediction methods by adopting Chou's 5-step rule have the following noticeable advantages [28–32]: (1) clearer logic development, (2) fully operational transparency, (3) ease for other investigators to replicate the experiments and obtain the reported findings, (4) a strong potential to stimulate other sequence analysis methods, and (5) quite convenient experimental usage by scientists. The implementation of these steps is specified hereunder.

Figure 1: Flow process of the proposed methodology

    2.1 Benchmark Dataset Collection

To construct AntiFlamPred, the dataset was collected from recently published research papers of Manavalan et al., Khatun et al., and Wei et al., and from the Immune Epitope Database and Analysis Resource (IEDB) [33]. Peptides that induce any one of the cytokines interferon-alpha/beta (IFN-a/b), transforming growth factor-beta (TGF-b), interleukin-4 (IL-4), interleukin-10 (IL-10), interleukin-13 (IL-13), or interleukin-22 (IL-22) in mouse or human T-cell assays were considered positive AIPs ("pAIPs").

Similarly, peptides examined as inflammatory, pro-inflammatory, or found adverse for anti-inflammation were rated negative ("nAIPs"). The dataset obtained from IEDB and the other published papers contains 2549 positive and 4516 negative samples. To remove redundancy from the dataset, CD-HIT [34] was applied at a 0.6 sequence identity threshold. A stricter criterion at a 0.3 or 0.4 threshold could lead to more credible performance, as practiced in [3,15,25], but data limitations restrict the use of such criteria. As a preprocessing step to enhance the pAIP and nAIP datasets, non-amino-acid letters ('B', 'J', 'O', 'U', 'X', and 'Z') were cleaned out. Finally, after preprocessing and applying CD-HIT, we obtained a dataset consisting of 1911 pAIPs and 4240 nAIPs. Eq. (1) represents the general formulation of an arbitrary peptide sample [35–38]:

S = α1 α2 α3 … αn (1)

where α denotes a residue and the subscripts 1, 2, 3, …, n represent its sequential order in the peptide sequence S. Further, the benchmark dataset used in this study is formally described as:

DS = DS+ ∪ DS− (2)

where DS denotes the complete dataset, DS+ represents the positive sample set, DS− represents the negative sample set, and ∪ denotes the union of both.
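As a rough illustration of the preprocessing step described above (dropping sequences that contain the non-standard letters before redundancy reduction), a small sketch is given below; the file names are hypothetical and CD-HIT itself is run externally.

```python
# Sketch only: filter FASTA records containing non-standard residue letters,
# then reduce redundancy externally, e.g.  cd-hit -i clean.fasta -o nr.fasta -c 0.6
NON_STANDARD = set("BJOUXZ")

def is_clean(seq: str) -> bool:
    """True if the sequence contains only the 20 standard amino acid letters."""
    return not (set(seq.upper()) & NON_STANDARD)

def filter_fasta(in_path: str, out_path: str) -> None:
    """Copy FASTA records whose sequences pass the cleanliness check."""
    with open(in_path) as fin, open(out_path, "w") as fout:
        header, seq = None, []
        def flush():
            if header is not None and is_clean("".join(seq)):
                fout.write(header + "\n" + "".join(seq) + "\n")
        for line in fin:
            line = line.strip()
            if line.startswith(">"):
                flush()
                header, seq = line, []
            else:
                seq.append(line)
        flush()

# filter_fasta("pAIPs_raw.fasta", "pAIPs_clean.fasta")   # hypothetical paths
```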

In conventional prediction models, a relatively small or medium benchmark dataset is usually divided into two subsets: a training set and a testing set [39,40]. The prediction model can instead be tested using validation techniques such as the jackknife or k-fold subsampling, where the outcome is assessed with different groupings of independent data. In that case, dividing the benchmark dataset into subsets is not required [41].

    2.2 Feature Encoding

Biological sequences emerge day by day and have gained importance due to their therapeutic activity. The use of graphical methodologies for the study of medical and biological structures can offer intuitive insight and useful information for analyzing the complex relationships within them, as shown by the eight masterpieces of Sture Forsen, chairman of the Nobel Prize Committee (see, e.g., [42]), and numerous follow-up articles (see, e.g., [43] and the long list of articles cited in the review [44]). Further, computational prediction of such biological sequences is the need of the day to support medicine, and it is a challenging task to convert these sequences into discrete or vector models while maintaining the sequence-order information, as required by all high-performing machine learning algorithms [45,46]. However, during the conversion from a sequential to a discrete representation of a protein, there is a chance of losing the essential information a sequence pattern may carry. To retain this important information, PseAAC, or "pseudo amino acid composition", a fixed-size transformation, was proposed by Chou [47] and is now widely practiced in bioinformatics [48–51]. As it became more and more widely used, four powerful open-source software packages, called "PseAAC", "PseAAC-Builder" [52], "propy" [53], and "PseAAC-General" [54], were established: the first three generate different modes of Chou's special PseAAC, while the fourth covers the general Chou PseAAC, which contains not only all distinct protein encodings but also higher-level feature encodings, such as the "functional domains" mode (see Eqs. (9)–(10) of [26]), the "Gene Ontology" mode (see Eqs. (11)–(12) of [26]), and the "PSSM" or "sequential evolution" mode (see Eqs. (13)–(14) of [26]). Stimulated by the successful usage of PseAAC, PseKNC (pseudo K-tuple nucleotide composition) was established to encode several features for DNA/RNA sequences and proved very successful. Specifically, a powerful and generic webserver, named "Pse-in-One", together with its modernized version "Pse-in-One 2.0", was developed in 2015, capable of generating several types of feature encodings for protein/peptide as well as DNA/RNA sequences. The discrete fixed-size representation of an arbitrary-length protein/peptide sequence based on the composition of amino acids can be expressed as:

P = [Φ1 Φ2 Φ3 … Φω]^T (3)

where P is the transformed fixed-size form of S (Eq. (1)), T is the transpose operator applied to the α of Eq. (1) to obtain the discrete component coefficients Φi [55], and i = 1, 2, 3, …, ω, where ω represents the length of the sequence S. These components are further utilized to extract features. From this discrete representation, a 2-dimensional ("2D") matrix P′ with k × k dimensions is formed to accommodate all amino acid residues in a peptide P; it can be represented as:

where each component of P′ is a residue of the sequence S. The detailed derivation of this matrix is described in [35,36].

    2.3 Determination of PRIM and RPRIM

The primary sequence is the key to assessing unknown peptide properties. The model's key mathematical criterion is based on the position-relative information of residues in the peptides of the benchmark dataset. A 20 × 20 matrix, called the position relative incidence matrix ("PRIM"), was formed to quantify the relative locations of residues in the native sequences. The reverse position relative incidence matrix ("RPRIM") has the same specification as PRIM but is calculated on the reversed variant of each sequence. The PRIM is determined as:

where every component of the matrix describes the sum of the relative positions of the jth residue with respect to the ith residue. Likewise, the reversed sequences were used to determine RPRIM as:

The size of both matrices, represented by Eqs. (5) and (6), is 20 × 20, resulting in 400 elements each.
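To make the construction concrete, the following sketch computes PRIM and RPRIM under the interpretation described above (each entry accumulates the relative positions of residue j with respect to occurrences of residue i); it is an illustration rather than the authors' exact formulation.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def prim(seq: str) -> np.ndarray:
    """20 x 20 position relative incidence matrix: entry (i, j) accumulates the
    relative positions of residue j occurring after residue i in the sequence."""
    m = np.zeros((20, 20))
    seq = seq.upper()
    for p, a in enumerate(seq):
        for q in range(p + 1, len(seq)):
            m[AA_INDEX[a], AA_INDEX[seq[q]]] += q - p
    return m

def rprim(seq: str) -> np.ndarray:
    """RPRIM: the same computation applied to the reversed sequence."""
    return prim(seq[::-1])

# prim("ACDKWL").shape  ->  (20, 20), i.e., 400 elements each, as noted above
```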

2.4 Frequency Vector (FV) Encoding

The frequency vector depicts the frequency distribution, i.e., how many times each amino acid occurs in a peptide sequence, and can be expressed as:

FV = (v1, v2, v3, …, v20) (7)

where vi is the occurrence frequency of the ith residue in a peptide sequence, purposely determined to retrieve important compositional information from the sequence. The feature vector FV has 20 dimensions.
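A minimal sketch of the frequency vector, assuming the 20 standard residues in a fixed alphabetical order:

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def frequency_vector(seq: str) -> list:
    """20-dimensional vector of residue occurrence counts in a fixed order."""
    counts = Counter(seq.upper())
    return [counts.get(aa, 0) for aa in AMINO_ACIDS]

# frequency_vector("ACCA")  ->  [2, 2, 0, ..., 0]   (A = 2, C = 2, rest 0)
```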

    2.5 Determination of AAPIV and RAAPIV

Compositional information can be assessed using the frequency vector, but it cannot provide the position-relative information of a residue. To extract position-relative information, an accumulative absolute position incidence vector ("AAPIV") of 20 components is determined. For each amino acid, AAPIV contains the sum of the position indices of all its occurrences in a sequence. As with PRIM and RPRIM, the computation of AAPIV is based on the originally ordered sequence, whereas a reversed version of the sequence is used to compute the reverse accumulative absolute position incidence vector ("RAAPIV"). AAPIV is computed as:

AAPIV = (μ1, μ2, μ3, …, μ20) (8)

where μi, the ith component of AAPIV, is determined by summing the positions of all occurrences of the ith residue in the sequence, as described in [41,54].

RAAPIV is assessed in the same way as AAPIV, except that the reversed sequences are used. Both AAPIV and RAAPIV are feature vectors of 20 dimensions.
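A corresponding sketch of AAPIV and RAAPIV, again under the interpretation given above (each component is the sum of the 1-based positions at which a residue occurs):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aapiv(seq: str) -> list:
    """20-dimensional vector: for each amino acid, the sum of the (1-based)
    positions at which it occurs in the sequence."""
    sums = {aa: 0 for aa in AMINO_ACIDS}
    for pos, aa in enumerate(seq.upper(), start=1):
        if aa in sums:
            sums[aa] += pos
    return [sums[aa] for aa in AMINO_ACIDS]

def raapiv(seq: str) -> list:
    """RAAPIV: the same computation applied to the reversed sequence."""
    return aapiv(seq[::-1])
```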

3 Prediction Algorithm (Proposed Approach)

The next phase in the development of a prediction model is to incorporate a prediction algorithm. Numerous investigations in the fields of bioinformatics and pattern recognition have employed ensemble approaches such as bootstrap aggregation (bagging) and boosting [55] for the solution of classification or regression problems. Among these approaches, tree-based ensemble methods such as the decision tree, extra-trees classifier, and random forest have shown excellent performance [56,57]. Random forests ("RF") use a randomization mechanism to create a group of separate trees that act as individual classifiers. Bagging is another approach used in the random forest to train each tree with a different copy of the training samples (subsampling), also known as the bootstrap. The bootstrap is a randomization approach for subsampling the training data with replacement, combined with a random feature selection approach to train each tree node on a different subspace [58]; it outperforms several other competitive classifiers, such as SVM, linear discriminant analysis ("LDA"), logistic regression ("LR"), etc.

In this study, a deep neural network ("DNN") has been utilized for the development of the prediction model. Among several deep learning algorithms, we consider the convolutional neural network ("CNN") for its capability to recognize numerous obscure patterns that may otherwise remain hidden [59]. In the proposed model, the DNN uses a convolutional layer to generate the feature map and a batch-normalization layer to normalize it, thereby generalizing the patterns, and finally uses fully connected (dense) layers to characterize the potentially very complex order in which these patterns may appear [60]. Fig. 2 represents the complete workflow of the proposed model, and an outline of the DNN architecture is shown in Fig. 3.

As described in the "Materials and Methods" section, the AAPIV, RAAPIV, FV, P′, PRIM, and RPRIM feature vectors were created from the benchmark dataset, which contains both positive and negative peptide sequences. The finalized Feature Input Vector ("FIV") was formed from these feature vectors and comprises 880 features in total; each row of the FIV corresponds to one sample of the dataset. Similarly, the Expected Output Vector ("EOV") was formed for each sample according to its class. The FIV was further divided into a training set and an independent test set and used to train, evaluate, and test several machine learning algorithms, with significant results obtained using the DNN. According to Fig. 3, these encoded features are fed to the convolutional layer, followed by batch normalization to normalize the output feature map of the convolutional layer batch-wise, and a flatten layer to convert these outputs into a form compatible with the fully connected layer for final recognition.
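Assembling these encodings into one flat row per peptide, as described above, can be sketched as follows (reusing the helper functions sketched in Sections 2.3 to 2.5; the additional components that bring the FIV to its full 880 dimensions are not reproduced here):

```python
import numpy as np

def feature_input_vector(seq: str) -> np.ndarray:
    """Concatenate the described encodings into a single feature row."""
    parts = [
        np.asarray(frequency_vector(seq), dtype=float),  # FV, 20
        np.asarray(aapiv(seq), dtype=float),             # AAPIV, 20
        np.asarray(raapiv(seq), dtype=float),            # RAAPIV, 20
        prim(seq).ravel(),                               # PRIM, 400
        rprim(seq).ravel(),                              # RPRIM, 400
    ]
    return np.concatenate(parts)

# Stacking one such row per peptide yields the FIV matrix; the EOV is the
# matching vector of class labels (1 = pAIP, 0 = nAIP).
```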

Figure 2: Block diagram of the proposed prediction model

Figure 3: Architectural framework of the proposed model

    4 Experiments and Results

The assessment of the algorithms was carried out using 10-fold cross-validation. The area under the receiver operating characteristic curve (AUC) and the accuracy of each model were calculated on each fold and combined to evaluate the models. The algorithm performing best in terms of AUC and accuracy, i.e., the DNN, was selected to develop the finalized model owing to its excellent performance on the given feature set.

One of the most important processes in the development of a new classification model is to empirically assess its expected success rate [55]. To address this, we need to consider two matters: (1) What performance metrics should be utilized to quantitatively represent the quality of the predictor? (2) What type of test approach should be applied to obtain the scoring metrics?

    4.1 Metrics Formulation

The following metrics are generally used to measure prediction quality from four different angles: (1) the predictor's overall accuracy (ACC); (2) the capacity to correctly predict the positive class (true positive rate), known as sensitivity (SENS); (3) the capacity to correctly predict the negative class (true negative rate), known as specificity (SPEC); and (4) the stability and quality of classification (MCC) [45,60]. With TP, TN, FP, and FN denoting true positives, true negatives, false positives, and false negatives, respectively, these metrics can be expressed as:

ACC = (TP + TN) / (TP + TN + FP + FN)

SENS = TP / (TP + FN)

SPEC = TN / (TN + FP)

MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
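For readers who prefer to compute these measures programmatically, a brief sketch using scikit-learn is given below (an assumption; the paper does not state how the metrics were computed).

```python
from sklearn.metrics import confusion_matrix, matthews_corrcoef

def evaluate(y_true, y_pred):
    """Compute ACC, SENS, SPEC, and MCC from binary labels and predictions."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)   # true positive rate
    spec = tn / (tn + fp)   # true negative rate
    mcc = matthews_corrcoef(y_true, y_pred)
    return {"ACC": acc, "SENS": sens, "SPEC": spec, "MCC": mcc}

# evaluate([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])
```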

    4.2 Cross-validation Testing

Three cross-validation tests are usually used for the performance evaluation of a classification model: the "leave-one-out" (jackknife) test, the "k-fold" test, also known as sub-sampling, and the independent set test [45]. In this study, we used all three tests for the performance evaluation of the proposed classifier.
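As an illustration of how the sub-sampling and jackknife tests can be wired up with scikit-learn (a sketch under assumed variable names, not the authors' code):

```python
from sklearn.model_selection import StratifiedKFold, LeaveOneOut, cross_val_score

# X: FIV feature matrix (n_samples x n_features), y: EOV class labels (1 = pAIP, 0 = nAIP)
def ten_fold_auc(model, X, y):
    """10-fold cross-validated AUC, i.e., the k-fold (sub-sampling) test with k = 10."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()

def jackknife_accuracy(model, X, y):
    """Leave-one-out (jackknife) test: hold out one sample at a time,
    train on the remainder, and average the per-sample accuracy."""
    return cross_val_score(model, X, y, cv=LeaveOneOut(), scoring="accuracy").mean()
```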

    5 Results and Discussions

To build, train, and evaluate the classification model, we used the Python language; experiments were carried out using the TensorFlow package with the Keras framework for the DNN, and the PyCaret package for the other machine learning models used for comparison in this study. PyCaret is a Python package that wraps several frameworks, such as scikit-learn, and machine learning models including XGBoost ("XGB"), Gradient Boosting Machine ("GBM"), LightGBM ("LGBM"), AdaBoost ("ADA"), Decision Tree ("DT"), RF, etc. It is a low-code, easy-to-use library that provides the simplest way to compare several models with k-fold cross-validation. For experimentation, out of the total dataset, 4305 samples (1329 pAIPs and 2976 nAIPs) were selected for training and validation of the model, and the remaining 1846 samples (582 pAIPs and 1264 nAIPs) were kept as the independent test set.
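A minimal sketch of such a PyCaret comparison is shown below; the DataFrame layout and the "label" column name are assumptions, and the study's actual training script is not reproduced here.

```python
import pandas as pd
from pycaret.classification import setup, compare_models

# df: one row per peptide with its FIV feature columns plus a binary "label"
# column (1 = pAIP, 0 = nAIP); both the frame and the column name are assumed.
def compare_classical_models(df: pd.DataFrame):
    setup(data=df, target="label", fold=10, session_id=0)
    # Ranks RF, XGBoost, LightGBM, AdaBoost, decision tree, etc. by 10-fold AUC.
    return compare_models(sort="AUC")
```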

The performance of any machine learning algorithm significantly depends on the parameters used while developing the model. For this purpose, we utilized the grid-search module of scikit-learn, providing ranges for several parameters to obtain the values that best fit the DNN and yield significant results [45]. The parameters acquired and used in this study are as follows: the convolutional layer was utilized with three main influential parameters (filters: 32, kernel size (convolutional window): 3, and activation function: ReLU). The batch-normalization and flatten layers were used with default parameters. In the proposed model, following the convolutional process, we utilized two fully connected (dense) layers (hidden and output) to achieve the expected outputs. In the hidden layer, 256 neurons were used with the ReLU activation function, and in the final output layer a single neuron was used with a sigmoid activation function to constrain the output to the range between 0 and 1. For model generalization and overfitting prevention, a dropout layer with a 0.2 neuron dropout rate was adopted between the hidden and output layers.
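The architecture described above can be sketched in Keras roughly as follows. The layer sizes mirror the stated parameters (32 filters, kernel size 3, ReLU, batch normalization, flatten, a 256-unit hidden layer, 0.2 dropout, sigmoid output), while the input reshaping, optimizer, loss, and training settings are assumptions rather than the authors' released code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dnn(n_features: int = 880) -> models.Model:
    """Conv1D -> BatchNorm -> Flatten -> Dense(256, relu) -> Dropout(0.2) -> Dense(1, sigmoid).
    Each 880-feature FIV row is treated as a one-dimensional, single-channel signal."""
    model = models.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Conv1D(filters=32, kernel_size=3, activation="relu"),
        layers.BatchNormalization(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(1, activation="sigmoid"),
    ])
    # Optimizer and loss are assumptions; the paper does not specify them.
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

# model = build_dnn()
# model.fit(X_train.reshape(-1, 880, 1), y_train, epochs=50, batch_size=32)
```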

Compared to existing AIP prediction models, our proposed classification model performs better, achieving an AUC of 0.919 and an MCC of 0.735 using the FIV and the DNN-based classifier. Initially, the results of the self-consistency test are represented in Fig. 4 as a confusion matrix, with the performance metrics in Tab. 1. In the self-consistency test, the model is trained and tested on the same benchmark dataset [35].

Figure 4: Self-consistency testing results of the proposed model

Table 1: Performance metrics of the self-consistency test

The performance of the current prediction model using a 10-fold cross-validation test on the benchmark dataset is depicted in Fig. 5, and the performance metrics are listed in Tab. 2. Additionally, the jackknife test was also conducted with the DNN classifier to evaluate model performance. The jackknife is an extensive test generally used to assess the accuracy and stability of a classification model when acquiring a new experimentally validated dataset might not be possible, or when the available dataset is inadequate to draw conclusions. Jackknife testing is also known as "leave-one-out" cross-validation, in which one protein/peptide sequence is kept out for testing and the model is trained on the rest of the dataset; in this way each sequence is tested. The jackknife test results are shown in Tab. 3.

Figure 5: ROC curve of 10-fold cross-validation using the DNN classifier

Table 2: Performance metrics of 10-fold cross-validation using the DNN classifier

Table 3: Results of jackknife testing

In this study, we compared the classification performance of seven well-known classifiers, i.e., ADA, DT, XGB, GBM, RF, bagging (BAG), and the DNN from deep learning, where the DNN performed the best among them. The performance metrics of these classifiers are listed in Tab. 4, and the ROC curves of the classifiers are compared in Fig. 6. These results demonstrate that the proposed DNN performs best among the other machine learning models tried, with a 0.914 AUC and 0.706 MCC.

A performance comparison with the existing prediction models for anti-inflammatory peptides was carried out using the publicly available web-servers of three state-of-the-art computational models. This comparison was performed on the independent test set, comparing the performance of the proposed model with the existing state-of-the-art models AIPpred [3], PEPred-Suite [10], and PreAIP [25].

Table 4: Performance comparison of the DNN with other ML models

Figure 6: ROC curves comparing the DNN with other ML models

AIPpred was developed with an RF classifier by exploring protein/peptide sequence-based features, such as amino acid composition (AAC), dipeptide composition (DC), composition-transition-distribution (CTD), amino acid index (AAI), and physicochemical properties (PCP), but was finally built with DC. The PreAIP model was built with a random forest (RF) classifier by combining k-spaced amino acid pairs (KSAAP), AAI, and KSAAP acquired from the position-specific scoring matrix (pKSAAP). In the PEPred-Suite, several physicochemical and composition-based discrete representations of peptide sequences were used along with RF to develop the prediction model; Wei et al. [10] developed this model to predict a total of eight different types of peptide sequences, including AIPs.

The results of the independent test of our proposed model and the existing state-of-the-art predictors are listed in Tab. 5, and their AUCs are compared in Fig. 7. The results demonstrate that our proposed model outperforms the existing classifiers by an extensive margin. The openly accessible webservers of the existing predictors were used to acquire results on the same independent test set discussed earlier. On the independent test set, the AIPpred predictor achieved an AUC of 0.664, PEPred-Suite achieved an AUC of 0.799, and PreAIP achieved an AUC of 0.695, while the proposed model achieved a much higher AUC of 0.907 and an MCC of 0.681, showing its outstanding performance and stability.

Table 5: Performance comparison of the proposed model with existing AIP prediction models

A few more points allow further comparison between the proposed and existing models with respect to approach and methodology. AIPpred uses simple composition features such as AAC and DC, and such composition-based features may lose the information hidden in the ordered sequence, whereas the moment-based features used in the proposed model are capable of extracting such recurrent patterns. PreAIP also has some limitations: it only utilizes sequences of maximum length 25 and pads shorter sequences with "-" to a length of 25 residues [25], which may also cause loss of order information; secondly, PreAIP is time-intensive, taking up to approximately 3 minutes to predict a single peptide/protein.

Furthermore, the existing predictors were only cross-validated with 5-fold or 10-fold cross-validation techniques; none of them uses the jackknife test, while we perform both the 10-fold cross-validation test and the extensive jackknife test to precisely estimate the performance of the model. Moreover, the proposed model, using the DNN classifier with the encoded features, fairly outperforms the existing models, demonstrating that the employed feature encoding technique is capable of extracting the necessary and obscure information from the given anti-inflammatory peptide sequences, which was otherwise not possible. Likewise, as shown in a series of profound publications demonstrating new findings or approaches, user-friendly and publicly accessible web-servers significantly enhance their impact, driving medicinal chemistry into an unprecedented revolution; in our future work we shall therefore make efforts to provide a web-server presenting these findings so that users can apply them according to their needs.

Figure 7: ROC curves of the independent test comparing the proposed and existing AIP prediction models

    6 Conclusion

By jointly utilizing the FIV and deep learning, a reliable, effective, and efficient classification model has been designed to predict AIPs. The proposed classification model outperforms the present AIP prediction models: compared to these models, it attained the largest AUC of 0.919 and MCC of 0.735 using the 10-fold cross-validation test on the benchmark dataset, and achieved an AUC of 0.907 and an MCC of 0.681 on the independent test set, which proves it to be a cost-effective and powerful classification model. Therefore, it may provide comprehensive support for large-scale AIP classification and facilitate the design of extensive hypothesis-based examinations or experiments.

Acknowledgement: This project was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University (https://www.kau.edu.sa/), Jeddah, under Grant No. D-49-611-1441. The authors, therefore, gratefully acknowledge DSR technical and financial support.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
