
RNAGCN: RNA tertiary structure assessment with a graph convolutional network

Chinese Physics B, 2022, Issue 11
    關(guān)鍵詞:王駿張建

Chengwei Deng(鄧成偉), Yunxin Tang(唐蘊芯), Jian Zhang(張建),

    Wenfei Li(李文飛)1,2, Jun Wang(王駿)1,2, and Wei Wang(王煒)1,2,?

1 Collaborative Innovation Center of Advanced Microstructures, School of Physics, Nanjing University, Nanjing 210008, China

2 Institute for Brain Sciences, Nanjing University, Nanjing 210008, China

RNAs play crucial and versatile roles in cellular biochemical reactions. Since experimental approaches of determining their three-dimensional (3D) structures are costly and less efficient, it is greatly advantageous to develop computational methods to predict RNA 3D structures. For these methods, designing a model or scoring function for structure quality assessment is an essential step, but this step poses challenges. In this study, we designed and trained a deep learning model to tackle this problem. The model was based on a graph convolutional network (GCN) and named RNAGCN. The model provided a natural way of representing RNA structures, avoided complex algorithms to preserve atomic rotational equivalence, and was capable of extracting features automatically out of structural patterns. Testing results on two datasets convincingly demonstrated that RNAGCN performs similarly to or better than four leading scoring functions. Our approach provides an alternative way of RNA tertiary structure assessment and may facilitate RNA structure predictions. RNAGCN can be downloaded from https://gitee.com/dcw-RNAGCN/rnagcn.

Keywords: RNA structure predictions, scoring function, graph convolutional network, deep learning, RNA-puzzles

    1. Introduction

RNAs play crucial and versatile roles in cellular biochemical reactions, such as encoding, decoding, catalysis,[1] gene regulation,[2] and others. These functions are closely related to the three-dimensional (3D) structures of the RNAs. To determine RNA 3D structures, experimental techniques, including cryo-electron microscopy, x-ray crystallography, and nuclear magnetic resonance (NMR) spectroscopy, are usually employed. Since these techniques are costly and inefficient, computational methods have been developed to predict RNA structures.[3–19] Computational algorithms usually include two steps: (i) generating structural candidates and (ii) selecting the structures most likely to be native. The second step requires a good scoring function that can assess the quality of the candidates.

Traditional scoring functions[20,21] are physics-based or knowledge-based; examples include Rosetta,[4,22] 3dRNAscore,[23] RASP,[24] the RNA KB potential,[25] DFIRE-RNA,[26] and rsRNASP.[27] Among them, 3dRNAscore is an all-atom statistical potential that combines distance-dependent and dihedral-dependent energies;[23] it is more efficient than other potentials in recognizing and ranking the native state from a pool of near-native decoys. The rsRNASP potential is composed of short- and long-ranged energies distinguished by residue separation along the sequence;[27] extensive tests showed that it has higher or comparable performance relative to other leading potentials, depending on the specific testing dataset. In recent years, machine learning approaches have achieved great success in many fields, including computer vision, natural language modelling,[28,29] medical diagnosis,[30] physics, chemistry, computational biology, and so on.[31–34] Inspired by these successes, our group developed a scoring function for the assessment of RNA tertiary structures based on a three-dimensional convolutional network, named RNA3DCNN.[35]

Applications of graph convolutional networks (GCNs)[36–41] to represent molecular structures have been quite successful.[42–47] For example, Fout et al.[43] modelled proteins as graphs at the residue level and then predicted protein interfaces. Federico et al.[44] introduced GCNs (GraphQA) to tackle the problem of quality assessment of protein structures and showed that using only a few features yields state-of-the-art results. Soumya et al.[45] developed ProteinGCN for protein structure assessment; they built graphs to model spatial and chemical relations between pairs of atoms. Zhe et al.[46] built a GCN network, named GraphCPI, which aggregated the chemical context of protein sequences and the structural information of compounds to tackle the problem of compound–protein interaction. Huang et al.[47] introduced a model called GCLMI, based on GCN and an auto-encoder, to predict lncRNA–miRNA interactions.

Inspired by the notion that a graph provides a more natural representation of 3D RNA structures and hence may bring better assessment performance, in this study we upgraded our previous model by changing the convolutional network to a GCN. Specifically, we developed a network model based on GCN, named RNA graph convolutional network (RNAGCN), to perform quality assessment of RNA structures. We trained and tested the model and compared the results with several leading scoring functions.

    2. Materials and methods

In this section, we first introduce the input and output of RNAGCN, and then the architecture of the model. Next, we present the datasets used for training and evaluation, followed by a description of the metrics used to evaluate the performance of the model. Finally, we describe the loss function and the training procedures.

    2.1. Graph representation of RNA structures

RNA structures can be naturally represented as graphs, with nodes modeling atoms and edges modeling their relative spatial positions in three-dimensional space. Since RNA sizes vary, we split an input RNA into many "local environments" and converted each into a graph. Specifically, the i-th nucleotide along the sequence was defined as the central nucleotide of its "local environment", constructed by including all its nucleotide neighbors. A nucleotide was defined to be a neighbor if any of its heavy atoms (i.e., non-hydrogen atoms) was within a threshold spatial distance of any heavy atom of the central nucleotide. The distance threshold was set to 14 Å, which yields a sufficiently large local environment for the central nucleotide. With this design, the model can be scaled to handle RNAs of arbitrary size. Thus, for an RNA of length Ns, we constructed Ns local environments represented by Ns graphs.
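
As an illustration of this construction (a minimal sketch, not the authors' code), the neighbor search can be written as follows, assuming the heavy-atom coordinates of each nucleotide are stored as numpy arrays; the function name and data layout are illustrative:

import numpy as np

def local_environment(heavy_coords, i, cutoff=14.0):
    """Return the indices of nucleotide i and of every nucleotide that has at
    least one heavy atom within `cutoff` angstroms of any heavy atom of i."""
    center = heavy_coords[i]                      # (n_atoms_i, 3) array
    members = [i]
    for j, atoms in enumerate(heavy_coords):
        if j == i:
            continue
        # all pairwise heavy-atom distances between nucleotides i and j
        d = np.linalg.norm(center[:, None, :] - atoms[None, :, :], axis=-1)
        if d.min() < cutoff:
            members.append(j)
    return sorted(members)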

In general, a graph is defined as a set

G = {V, E, X, U},

where V denotes the set of nodes and E the set of edges, with |V| and |E| being the numbers of nodes and edges, respectively. X ∈ R^{|V|×N} and U ∈ R^{|E|×M} are the feature matrices of V and E, respectively, where N and M are the numbers of node and edge features.

All heavy (non-hydrogen) atoms in a local environment were treated as nodes. A node was represented as a one-hot vector of length Nt, where Nt is the total number of atom types, which is 54 according to the AMBER99SB force field.
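
A one-hot node encoding of this kind can be sketched as follows; the list of 54 AMBER99SB atom types is passed in as `type_vocab` rather than reproduced here:

import numpy as np

def node_features(atom_types, type_vocab):
    """One-hot encode atom-type names; `type_vocab` is the ordered list of the
    Nt = 54 AMBER99SB atom types (its order defines the one-hot index)."""
    index = {t: k for k, t in enumerate(type_vocab)}
    X = np.zeros((len(atom_types), len(type_vocab)), dtype=np.float32)
    for row, t in enumerate(atom_types):
        X[row, index[t]] = 1.0
    return X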

Edges were defined between neighboring atoms. Each atom was connected by an edge to its K nearest atoms in space, where K was set to 14. Edges were directed and each carried five features. The first was the spatial distance between the source atom S and the target atom T, one of its neighbors. The second, third, and fourth were direction features, namely the three projections of the unit vector pointing from S to T onto the axes of an internal coordinate system centered on S and defined by atoms A and B, where A denotes the C1' atom and B the C5' atom of the nucleotide. The fifth feature had a value of 1 or 0, depending on whether there is a chemical bond between S and T.
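
The five edge features can be sketched as below. The exact construction of the internal coordinate system is not fully specified above, so the orthonormal frame built from the vectors S→A (C1') and S→B (C5') used here is an assumption for illustration:

import numpy as np

def edge_features(r_s, r_t, r_a, r_b, bonded):
    """r_s, r_t: coordinates of the source atom S and target atom T;
    r_a, r_b: coordinates of the C1' and C5' atoms of the source nucleotide;
    bonded: True if S and T share a chemical bond."""
    v = r_t - r_s
    dist = np.linalg.norm(v)
    u = v / dist                                   # unit vector from S to T
    e1 = (r_a - r_s) / np.linalg.norm(r_a - r_s)   # axis toward C1' (assumed)
    w = r_b - r_s
    e2 = w - np.dot(w, e1) * e1                    # Gram-Schmidt against e1
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)                          # completes the right-handed frame
    return np.array([dist, np.dot(u, e1), np.dot(u, e2), np.dot(u, e3),
                     1.0 if bonded else 0.0], dtype=np.float32)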

2.2. Output of RNAGCN: quality score

The graphs representing the local nucleotide environments, constructed for each nucleotide from its neighbors, were used as input to the GCN model. The model output was a scalar score indicating the quality of the input. During training, the output score measured the difference of the local environment from the ground truth, i.e., the RMSD of the input structure with respect to the experimental one. During inference, the network output was the predicted score indicating the RMSD of the input relative to the experimental structure, which in this case is unknown to the network. The scores obtained for the Ns input environments of an RNA were then averaged to get the final score, which reflects the overall difference of the input structure from the experimental one, with a lower score corresponding to higher input quality (a better approximation of the unknown experimental structure).

    2.3. Architecture of RNAGCN

RNAGCN is a deep learning model based on GCN that extracts features from the 3D structures of RNAs. The model architecture is shown in Fig. 1. The input to the model is an RNA local graph, converted from an RNA local environment. Below the input layer, there are five serially connected graph convolution layers that operate on the graph sequentially. Residual modules and skip connections were adopted to mitigate the gradient vanishing problem often seen in deep neural networks.[38,40] Serially connected below the GCN layers, there is a convolution layer with a 1×1 kernel, followed by a graph global max pooling layer, and then a final fully-connected network that outputs a score as the prediction. The model contains about 378k parameters in total.

In the five GCN layers, graph convolution operations were used to update the nodes' representations iteratively. The details of the operations are shown on the right side of Fig. 1.

Fig. 1. Overview of the RNAGCN model. The input is an RNA local graph. The network contains five graph convolution layers, a convolution layer with a 1×1 kernel, a graph global max pooling layer, and a fully-connected network that outputs a score as the prediction. The right panel presents details of the feature operations using graph convolution with residual connections.

Following the GCN layers, a convolution layer with a 1×1 kernel was used to aggregate the information from the outputs of the L GCN layers with different weights. These weights were learned from the data during training. As a result, X was reduced from R^{L×|V|×N} to R^{|V|×N}.

A global max pooling layer was used to mix the node representations. The maximum values of X ∈ R^{|V|×N} along the |V| dimension were computed feature by feature. As a result, X was further reduced from R^{|V|×N} to R^N.

Finally, a fully-connected network was used to reduce X from R^N to R^1. It included a layer of size 64 with a ReLU activation function and a layer of size 1 without an activation function.
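
The data flow described above can be summarized by the following simplified PyTorch sketch. It is not the released implementation (which is built with the Deep Graph Library): edge features are omitted, the convolution is reduced to a plain adjacency-matrix aggregation, and the hidden width of 128 is an assumed, illustrative value:

import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, adj, x):                  # adj: (|V|, |V|), x: (|V|, N)
        h = torch.relu(self.linear(adj @ x))    # aggregate neighbors, then transform
        return x + h                            # residual connection

class RNAGCNSketch(nn.Module):
    def __init__(self, n_feat=54, hidden=128, n_layers=5):
        super().__init__()
        self.embed = nn.Linear(n_feat, hidden)
        self.layers = nn.ModuleList([SimpleGCNLayer(hidden) for _ in range(n_layers)])
        self.mix = nn.Conv1d(n_layers, 1, kernel_size=1)  # 1x1 conv over the L layer outputs
        self.head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, adj, x):
        h = self.embed(x)
        outs = []
        for layer in self.layers:
            h = layer(adj, h)
            outs.append(h)                      # keep all L outputs (skip connections)
        stacked = torch.stack(outs)             # (L, |V|, N)
        mixed = self.mix(stacked.permute(1, 0, 2)).squeeze(1)  # -> (|V|, N)
        pooled, _ = mixed.max(dim=0)            # global max pool over nodes -> (N,)
        return self.head(pooled)                # scalar quality score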

Notably, the representation of edges was kept static during graph convolution, following the generally accepted notion that updating node features alone is usually sufficient to achieve reasonable model performance.

    2.4. Datasets

We built our datasets based on the non-redundant set of RNA 3D Hub[48] (http://rna.bgsu.edu/rna3dhub/nrlist, Release 3.102, 2019-11-27). RNAs forming complexes with proteins or other molecules were removed; only pure RNA structures were kept. Short RNAs with fewer than eight nucleotides were also removed. The remaining 610 RNAs were split into training, validation, and testing datasets. The Infernal program[49] was used to ensure that no RNA in the testing dataset belonged to the same RFAM family as any RNA in the training and validation datasets. The resulting three datasets contained 426, 92, and 92 RNAs, respectively. Hereafter, the testing dataset of 92 RNAs is referred to as Test-I.
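
A family-aware split of this kind can be sketched as follows, assuming the RFAM family of each RNA has already been assigned (e.g., with Infernal); the function and variable names are illustrative:

import random
from collections import defaultdict

def family_split(rna_to_family, n_test=92, seed=0):
    """Group RNAs by RFAM family and assign whole families to the test set
    until it holds roughly n_test RNAs, so no family spans both sides."""
    families = defaultdict(list)
    for rna, fam in rna_to_family.items():
        families[fam].append(rna)
    fam_list = list(families)
    random.Random(seed).shuffle(fam_list)
    test, rest = [], []
    for fam in fam_list:
        (test if len(test) < n_test else rest).extend(families[fam])
    return rest, test    # `rest` is later divided into training and validation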

RNA decoys are needed to train and test scoring functions. From the experimental structure of each RNA, we generated the corresponding decoys with high-temperature molecular dynamics (MD) simulations carried out using Gromacs.[50] In brief, each RNA molecule was solvated in a TIP3P water box, and metal ions were added to neutralize the system. The system was then subjected sequentially to energy minimization, NVT equilibration, and NPT equilibration at 300 K. Finally, high-temperature MD was used to denature the structure by gradually increasing the temperature from 300 K to 660 K. The force field was AMBER99SB. Each simulation lasted for 10 ns. For each trajectory, we calculated the RMSD of each frame with respect to the corresponding experimental structure and randomly selected 200 decoys in the RMSD range from 0 to 20 Å. The decoys, together with all the experimental structures, constituted the training, validation, and testing datasets, as summarized in Table 1.
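
The decoy selection from each trajectory can be sketched as follows, given the per-frame RMSDs with respect to the experimental structure (the frame-selection details are illustrative):

import numpy as np

def select_decoys(frame_rmsds, n_decoys=200, rmsd_max=20.0, seed=0):
    """Randomly pick n_decoys frame indices with RMSD in (0, rmsd_max] angstroms;
    assumes the trajectory contains at least n_decoys eligible frames."""
    rng = np.random.default_rng(seed)
    eligible = np.where((frame_rmsds > 0.0) & (frame_rmsds <= rmsd_max))[0]
    chosen = rng.choice(eligible, size=n_decoys, replace=False)
    return np.sort(chosen)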

Fig. 2. Distributions of (a) RMSD and (b) length for the four datasets. Note that RNA 6qkl in the training set is not shown because it is too long (1158 nucleotides).

We prepared another testing dataset based on the RNA-puzzles-standardized dataset,[51] which included 22 RNAs. Again, for each RNA in this set, the same denaturing procedure was carried out by high-temperature MD to generate decoys, and 200 decoys were randomly selected. The resulting testing dataset, referred to as Test-II hereafter, contained 4422 RNA structures.

The distributions of RMSDs and lengths for the above-mentioned datasets are given in Fig. 2.

    Table 1. Information of datasets.

    2.5. Evaluation metrics

The enrichment score (ES) and the Pearson correlation coefficient (PCC) were used to evaluate the performance of the scoring functions.

The enrichment score (ES)[23,25–27,35,52] was defined as

ES = |S_top10% ∩ R_top10%| / (0.01 × N_decoys),

where S_top10% and R_top10% denote the best-scored 10% of decoys and the 10% of decoys with the lowest RMSDs, respectively, and N_decoys is the number of decoys associated with the RNA of concern. ES values range from 0 to 10, with larger values indicating better performance.
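
The computation of ES for one RNA can be sketched as follows (a minimal sketch consistent with the definition above; lower scores and lower RMSDs are taken as better):

import numpy as np

def enrichment_score(scores, rmsds):
    """ES for one RNA: overlap of the 10% best-scored decoys with the 10%
    lowest-RMSD decoys, normalized so that perfect overlap gives 10."""
    n = len(scores)
    k = max(1, int(0.1 * n))
    top_by_score = set(np.argsort(scores)[:k])   # lower score = better
    top_by_rmsd = set(np.argsort(rmsds)[:k])     # lower RMSD = closer to native
    return len(top_by_score & top_by_rmsd) / (0.01 * n)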

The Pearson correlation coefficient (PCC)[24,27,52] indicates the magnitude of the linear correlation between two variables and is defined as

PCC = Σ_i (S_i − S̄)(R_i − R̄) / sqrt( Σ_i (S_i − S̄)² · Σ_i (R_i − R̄)² ),

where S_i and R_i are the score given by the scoring function and the RMSD of the i-th structure, respectively, and S̄ and R̄ are their mean values. PCC takes values between −1 and 1, with values closer to 1 indicating a stronger positive correlation between scores and RMSDs.
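
In practice the PCC can be computed directly, e.g. with numpy:

import numpy as np

def pearson(scores, rmsds):
    """Pearson correlation between predicted scores and RMSDs for one RNA."""
    s = np.asarray(scores, dtype=float)
    r = np.asarray(rmsds, dtype=float)
    return float(np.corrcoef(s, r)[0, 1])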

    2.6. Loss function

The loss function, used as the optimization target of the model, was defined as the deviation between the predicted score S_i and the ground-truth RMSD R_i of the i-th local environment, accumulated over all N_envs local environments in the training dataset. In this study, N_envs amounts to about 10^7.
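
The exact functional form of the loss is not reproduced above; a mean-squared error between the predicted score and the RMSD of each local environment is one natural reading and is sketched below purely as an assumption:

import torch

def score_loss(pred_scores, rmsds):
    """Assumed MSE-style loss: pred_scores and rmsds are 1D tensors over the
    local environments in a mini-batch."""
    return torch.mean((pred_scores - rmsds) ** 2)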

    2.7. Training strategies

All the RNAs were partitioned into local environments, which were then randomly shuffled and fed to the network in batches. The batch size per GPU was set to 64 and the total batch size was 384, since 6 GPUs were used for parallel computation. The Python packages Deep Graph Library and PyTorch Distributed were used for model construction and computation parallelization, respectively. The mini-batch gradient descent algorithm was employed to train the model. Specifically, the Adam optimizer with parameters β1 = 0.9 and β2 = 0.98 was used. The initial learning rate was 0.004, and it was halved if the training loss no longer dropped for 4 epochs.
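
The optimizer and learning-rate schedule described above map onto standard PyTorch components; the sketch below uses a stand-in model and ReduceLROnPlateau to halve the learning rate after 4 epochs without improvement (the training-loop details are omitted):

import torch

model = torch.nn.Linear(54, 1)          # stand-in for the RNAGCN model
optimizer = torch.optim.Adam(model.parameters(), lr=0.004, betas=(0.9, 0.98))
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=4)

# inside the training loop, after each epoch:
# scheduler.step(epoch_training_loss)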

    3. Results

In this section, we present the results of our model experiments and compare the performance of our model with four statistical potentials, namely RASP, Rosetta, 3dRNAscore, and rsRNASP, which are currently among the most popular in the field.

    3.1. Performance on Test-I

Table 2 shows the performance of the five models in assessing the quality of the structures in the Test-I dataset, which contains 92 RNAs with 200 decoys associated with each. We used two criteria, labeled Top-1 and Top-5, to reflect whether the experimental RNA was ranked first or among the best five, respectively. As judged by Top-1, 3dRNAscore and rsRNASP performed best; they identified 91 and 86 out of 92 RNAs, respectively. RNAGCN (our model) performed third (79/92). Judged by the Top-5 criterion, all models performed similarly. According to comparisons based on these two criteria, our model is the third best among all. This outcome is tentatively attributed to the strict criteria we used to construct the testing set: we made sure there is no overlap of RFAM families between the testing set and the training and validation datasets.

    Table 2. Performance of five models on Test-I.

Fig. 3. (a) Experimental structure of RNA 5dcv. (b) Score-RMSD plots showing the correlation between the scores predicted by the models and the RMSDs of the structures. The purple crosses mark the experimental structures, orange crosses indicate the structures that were scored better than the experimental ones, and red crosses mark the best-scored structures when they were not the experimental ones.

Table 2 also presents the average values of ES and PCC, which indicate the strength of the correlations between the ground truth and the predictions. Our model is superior to the others by the ES measure, a close second behind RASP by the average PCC, and clearly superior by the measure sensitive to near-native structures (PCC with RMSD < 4 Å). The latter result indicates that our model excels at discriminating small structural changes with respect to the native structure.

In Fig. 3(b) we present the score-RMSD plots for each of the five models, computed using RNA 5dcv as an example (Fig. 3(a)), which is a 95-nucleotide-long fragment from P. horikoshii RNase.[53] The data show that RASP and Rosetta failed to identify the experimental structure as the best one. In contrast, 3dRNAscore, rsRNASP, and our model ranked the experimental structure as the best one (as indicated by overlapping symbols for the best score and the native structure). Moreover, our model appeared to show a better score-RMSD correlation, particularly in the small RMSD range. As further evidence, we calculated the PCC values over the decoys within the 0 to 4 Å RMSD range and found them to be 0.77, 0.47, −0.65, 0.61, and 0.70 for RNAGCN, RASP, Rosetta, 3dRNAscore, and rsRNASP, respectively. By this measure, our model clearly exhibited a significantly higher correlation between scores and RMSDs. This result is also consistent with the bottom line of Table 2, which shows that our model gives the best correlation in the small RMSD range. Moreover, for this specific RNA, the ES values of RNAGCN, RASP, Rosetta, 3dRNAscore, and rsRNASP are 5.0, 3.0, 1.0, 0.5, and 5.0, respectively, also consistent with the ES results presented in Table 2.

The ES values, PCC values, and score-RMSD plots for each RNA in Test-I are provided in the supplementary material.

    3.2. Performance on Test-II

We also analyzed the performance of the five models on the Test-II dataset, which is composed of 22 RNAs taken from the RNA-puzzles-standardized dataset. The results are summarized in Table 3.

Table 3 shows that 3dRNAscore identified all 22 experimental structures correctly and that our model identified just one less, 21 out of 22 RNAs, a close second. Judged by the ES values, our model scores 6.5, which is very close to the best value of 6.7 (rsRNASP). The average PCC of our model is 0.88, also a very close second to the best value of 0.90 (rsRNASP). Within the small RMSD range (< 4 Å), the PCC value of our model is again close to the best value of 0.76 (rsRNASP). Detailed results for each RNA in Test-II can be found in the supplementary material.

    Table 3. Performance of five models on Test-II.

Fig. 4. (a) Experimental structure of puzzle-2 in the RNA-puzzles dataset. (b) Score-RMSD plots showing the correlation between the predicted scores and the RMSDs of the structures. The purple, orange, and red crosses are defined in the same way as in Fig. 3.

Figure 4(b) shows the score-RMSD plots for puzzle-2 in Test-II. The pattern of performance is similar to that for Test-I in Fig. 3. Three models, namely 3dRNAscore, rsRNASP, and ours, correctly identified the experimental structure among the decoys. For this specific RNA, the ES value of our model is 9.5, very close to the perfect value of 10, and the PCC value is 0.95. Our model and RASP appear to show a good correlation between scores and RMSDs, while Rosetta, 3dRNAscore, and rsRNASP feature a big gap in scores between the experimental structure and the decoys. The score-RMSD plots for the other RNAs in Test-II are generally similar to Fig. 4 and are listed in full in the supplementary material.

We compiled a detailed ranking of the five models for Test-II and present the results as a colormap in Fig. 5. For each RNA in Test-II and each scoring function, we selected the Nb best-scored decoys (Nb is a pre-set variable; "b" indicates best) and computed their average RMSD. We then ranked the five scoring functions based on these RMSDs and represented the ranks with different colors, with lighter colors indicating higher ranks. It can be seen that RNAGCN has the largest area of light colors (light yellow and gold), indicating that our model gives the lowest RMSDs more often than the others.
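
The ranking behind Fig. 5 can be sketched as follows for one RNA and one value of Nb (the data layout is illustrative):

import numpy as np

def rank_models(scores_by_model, rmsds, nb):
    """scores_by_model: dict name -> array of decoy scores (lower is better);
    rmsds: array of decoy RMSDs. Returns dict name -> rank (1 = lowest average
    RMSD of the nb best-scored decoys)."""
    avg = {name: rmsds[np.argsort(s)[:nb]].mean()
           for name, s in scores_by_model.items()}
    order = sorted(avg, key=avg.get)
    return {name: order.index(name) + 1 for name in order}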

Following the usual practice in the field of machine learning, we plotted receiver operating characteristic (ROC) curves for different choices of Nb, which was used as a variable threshold to select structures as the best predicted ones. For example, if Nb was 10, we trusted the 10 best-scored structures ranked by the model as the predicted native structures. To test the model performance under different choices of Nb, we first defined a native ensemble as all structures with RMSD smaller than 2 Å, and then defined all structures within the native ensemble as positive samples and the others as negative ones. These structures and the associated binary classification labels were used to construct the ground-truth dataset, following the conventions of the machine learning field. During prediction, the Nb best-scored structures given by a model were treated as the predicted positives, while the remainder were treated as the predicted negatives. For varying choices of Nb, we calculated the true-positive rate (TPR) and the false-positive rate (FPR) and plotted their relationship as ROC curves in Fig. 6. It can be seen that the ROC curve of our model almost envelopes (i.e., lies above) the other four curves. Quantitatively, the areas under the ROC curves (AUC) of our model for Test-I and Test-II are 0.97 and 0.89, respectively; both represent the best performance among all scoring functions.
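
The ROC construction described above can be sketched as follows: structures with RMSD below 2 Å are the positives, and each choice of Nb yields one (FPR, TPR) point (a minimal sketch, not the evaluation script used for Fig. 6):

import numpy as np

def roc_points(scores, rmsds, rmsd_native=2.0):
    """Sweep Nb from 1 to the number of structures; the Nb best-scored
    structures are the predicted positives at each step."""
    scores, rmsds = np.asarray(scores), np.asarray(rmsds)
    positives = rmsds < rmsd_native
    n_pos, n_neg = positives.sum(), (~positives).sum()
    order = np.argsort(scores)                  # best-scored first
    fpr, tpr = [0.0], [0.0]
    for nb in range(1, len(scores) + 1):
        predicted = np.zeros(len(scores), dtype=bool)
        predicted[order[:nb]] = True
        tpr.append((predicted & positives).sum() / n_pos)
        fpr.append((predicted & ~positives).sum() / n_neg)
    return np.array(fpr), np.array(tpr)

# The area under the curve then follows from the trapezoidal rule:
# fpr, tpr = roc_points(scores, rmsds); auc = np.trapz(tpr, fpr)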

Fig. 5. Ranking of the five models based on the average RMSD of the best-scored Nb structures, where Nb is a variable plotted on the x-axis. Each row corresponds to one RNA in Test-II (22 in total; note that their labels are not consecutive in the standardized dataset). The ranks are indicated by the color bar on the right; the lighter the color, the higher the rank.

Fig. 6. ROC curves of the five scoring functions for different choices of the threshold Nb. Panels (a) and (b) show the results obtained on the datasets Test-I and Test-II, respectively.

    4. Discussion and conclusions

Applications of deep learning technology to molecular representation learning have achieved impressive progress in recent years. Inspired by previous works, we explored the application of a graph convolutional network to RNA 3D structure assessment tasks. We compiled two datasets with MD simulations and trained a GCN neural network. We tested the model and compared its performance with four leading scoring functions. The testing dataset Test-I contained 92 RNAs; dataset Test-II contained 22 RNAs, a subset of the RNA-puzzles-standardized dataset.

For both testing datasets, the ability of our GCN model to identify experimental structures closely approached that of the best model (3dRNAscore on Test-I and rsRNASP on Test-II, respectively). The ES metric, which measures the overlap between the near-native structures and the best-scored structures, indicated that our model is the best at distinguishing near-native structures (< 4 Å) on Test-I and the second best on Test-II. Additionally, we ranked the five models based on the average RMSD of their best-scored structures and compared their ROC curves and AUCs; both analyses ranked our model as superior to the others.

It is interesting to compare our machine learning model with the statistical potentials based on the inverse Boltzmann equation, particularly 3dRNAscore and rsRNASP. According to the presented results, our model performs slightly worse in identifying native structures (Tables 2 and 3) and slightly better in the comparison of AUCs. This outcome may be tentatively attributed to the strict criteria we used to construct the testing set: we made sure there is no overlap of RFAM families between the testing set and the training datasets. Besides, we did not train the model over and over to boost the metrics, since too much training might lead to over-fitting and thus decrease the generalization ability, even though the training and testing datasets were strictly separated.

In general, the advantages of statistical potentials based on the inverse Boltzmann equation are their clear physical picture, good generalization ability to unseen structures, and fast computation speed. In contrast, machine-learning-based potentials are black boxes with unclear generalization ability and arguably slower speed. However, the advantages of machine learning approaches are also prominent. First, they do not require years of study searching for proper energy forms and choices of reference states.[52] For RNA structures, it has taken the community a dozen years to achieve the current performance, whereas a similar performance can be achieved by simply taking a general deep network (with slight modifications of the input and output layers) and training it properly. This advantage becomes even more prominent when one needs to migrate the model to new molecules. Second, deep neural networks have found physical patterns similar to those found by humans, and have the potential to find new patterns not yet known. Machine-learning-based scoring models therefore deserve to be studied.

A recently introduced geometric deep learning approach named ARES[54] showed good performance at scoring RNA structures, but unfortunately it is not yet possible to compare it directly with ours. The authors of ARES have not released their trained model, and we failed to reproduce their results by attempting to rebuild the model from the downloaded code and training the same network with their datasets. Moreover, ARES was trained on datasets generated with FARFAR2, which differ from the datasets we generated from MD simulations. A fair comparison of the two models would require identical data, or data sampled from the same distribution, for training and testing.

Our model has several advantages. First, it is natural to represent the geometric and topological information of RNA tertiary structures as a graph. Second, because a graph automatically provides a representation that is invariant with respect to atomic translations and rotations, there is no need to design complex convolution operations such as those used in ARES. Third, because our model can directly learn from the spatial patterns of atoms on its own, it requires, unlike physics-based scoring functions, no prior physical knowledge. Therefore, our approach can be easily extended to other molecular systems with little modification. Fourth, the design of splitting structures into local environments makes the model scalable, enabling us to treat RNAs of arbitrary size. Finally, this study showed that our model performs particularly well for near-native structures. We tentatively attribute this feature to the graph representation of tertiary structures, since both the graph topology and the edge features are sensitive to changes in the positions of atoms.

There are two notable limitations of the present work. One is computational, stemming from the large memory consumption of graphs and the massive computations involved in graph convolution operations. Since both factors limit the size of the graph and the number of neighbor atoms around the central nucleotide, they may cause problems in handling relevant very-long-range interactions. The second is algorithmic: in the current version of the model, only node features were updated in the network while edge features were kept static. Presumably, updating both in an upgraded version may slightly improve the model performance.

    Acknowledgements

This study was funded by the National Natural Science Foundation of China (Grant Nos. 11774158 to JZ, 11934008 to WW, and 11974173 to WFL). The authors acknowledge the High Performance Computing Center of Advanced Microstructures, Nanjing University, for the computational support.
