
    APTAnet: an atom-level peptide-TCR interaction affinity prediction model

    Biophysics Reports, 2024, Issue 1

    Peng Xiong, Anyi Liang, Xunhui Cai, Tian Xia

    1 School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China

    2 Institute of Pathology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China

    Abstract  The prediction of affinity between TCRs and peptides is crucial for the further development of TIL (Tumor-Infiltrating Lymphocyte) immunotherapy. Inspired by the broader research on drug-protein interaction (DPI), we propose APTAnet, an atom-level peptide-TCR interaction (PTI) affinity prediction model built on natural language processing methods. APTAnet achieved an average ROC-AUC and PR-AUC of 0.893 and 0.877, respectively, in ten-fold cross-validation on 25,675 pairs of PTI data. Furthermore, experimental results on an independent test set from the McPAS database showed that APTAnet outperformed the current mainstream models. Finally, through validation on data from 11 real tumor patients, we found that APTAnet can effectively identify tumor peptides and screen tumor-specific TCRs.

    Keywords  Immunotherapy, TCR, Antigen, Natural language processing, Transfer learning

    INTRODUCTION

    Tumor-Infiltrating Lymphocyte (TIL) immunotherapy (Paijens et al. 2021) is one of the methods for treating cancer. It involves isolating and purifying T cells from tumor tissue, expanding them through in vitro stimulation and cultivation, and then reintroducing them into the patient's body. This process amplifies the immune response, leveraging the patient's own immune capabilities to combat the tumor. However, there are as many as 10^15 different types of T cells in the human body. Different T cells express distinct T Cell Receptors (TCRs) on their surfaces, and only T cells with specific TCRs can selectively recognize tumor antigens and carry out targeted cytotoxic activities. These T cells are referred to as tumor-specific T cells. Therefore, the key challenge in TIL tumor immunotherapy is how to select tumor-specific T cells from a large pool of TIL cells and massively expand their numbers.

    In the immune response, tumor antigens are proteolytically cleaved into small peptide molecules (Schumacher and Schreiber 2015). These peptides subsequently bind to the Major Histocompatibility Complex (MHC) and are presented on the cell membrane. The presented peptides are then recognized by the receptor protein TCR on the surface of T cells. Tumor cells recognized by TCRs are subsequently eliminated by the immune system. Developing an algorithm to predict the affinity between MHC, peptides, and TCRs is crucial for the rapid and precise selection of tumor-specific T cells, further advancing TIL immunotherapy. This will significantly reduce the time and cost required for immune testing experiments and greatly complement experimental methods (Desai and Kulkarni-Kale 2014).

    Early methods for predicting TCR-peptide affinity were primarily based on TCR sequence similarity. TCRGP (Jokinen et al. 2021) used Gaussian process regression for prediction, while GLIPH (Glanville et al. 2017) and TCRdist (Dash et al. 2017) measured similarity by defining weighted distances between different TCRs based on different similarity metrics. TCRtopo (Bi et al. 2019) constructed a TCR similarity network using DeepWalk (Perozzi et al. 2014), and DeepTCR (Sidhom et al. 2021) utilized a variational autoencoder to learn implicit representations of TCR sequences and perform TCR clustering. Subsequently, with the introduction of CNN (Jurtz et al. 2018) and LSTM (Springer et al. 2020) methods, ImRex (Luu et al. 2021) transformed sequence features into interaction graphs and applied CNNs for prediction. BiAttCNN (Bi et al. 2022) combined bidirectional LSTMs and attention mechanisms to comprehensively extract key information from amino acid sequences. pMTnet (Lu et al. 2021), on the other hand, utilized extensive PMI data and unsupervised TCR data for pre-training, followed by fine-tuning with limited PTI data to enhance generalization. TITAN (Weber et al. 2021) proposed a bimodal attention network that extensively learned the binding of protein receptors and ligands and significantly improved performance through transfer learning with PTI data. However, these methods did not fundamentally address the issue of data scarcity and did not fully harness the potential of deep learning and natural language processing (NLP). There is still significant room for improvement in model architecture and training.

    Here we introduce a neural network algorithm called APTAnet (Atomic-level Peptide-TCR Attention network). This algorithm uses an atom-level cross-attention mechanism to simulate the interaction process between peptide and TCR sequences and predict interaction affinity. We utilize the ProtBERT model to encode the TCR amino acid sequences and the ChemBERTa2 (Ahmad et al. 2022) model to encode the peptide SMILES sequences. Additionally, we employ a cross-attention mechanism to simulate the interaction recognition process between the ligand and receptor sequences. We evaluate the performance of APTAnet on a dataset curated from the McPAS database and a real tumor dataset, demonstrating excellent performance that exceeds existing methods.

    MATERIALS AND METHODS

    Data collection

    In this study, a total of 45,120 human TCR β sequences were downloaded from the VDJdb database (https://vdjdb.cdr3.net/). These TCR sequences were assigned to 215 peptides. Additionally, all human TCR β sequences related to COVID-19 were downloaded from the ImmuneCODE database (https://www.adaptivebiotech.com/immunecode/), totaling 154,320 TCRs assigned to 289 peptides. The VDJdb data were merged with the ImmuneCODE data, retaining only the PTI data related to human MHC class I. Furthermore, to limit differences in sequence lengths, we restricted the amino acid length of TCR CDR3β sequences to between 10 and 20 and the amino acid length of peptide sequences to between 8 and 14, retaining the majority of valid data.

    Next, the data were complemented using the gene reference sequences and VDJ gene information provided by the IMGT database (https://www.imgt.org). Having full-length TCR β sequences provides richer protein context information, which is beneficial for subsequent sequence embedding. For the peptides, we used the RDKit Python toolkit for cheminformatics (https://rdkit.org/) to convert peptide sequences into canonical SMILES strings. To ensure the uniformity of pairing relationships, we adopted a sampling approach inspired by Moris et al. (Moris et al. 2021). Peptides with fewer than 15 TCR pairings were filtered out, and downsampling was applied to limit each peptide to a maximum of 400 TCR pairings. In total, 25,675 pairs of PTI data were obtained, including 23,143 unique TCR sequences and 244 unique peptide sequences.
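    As a minimal sketch of this RDKit conversion step (the example sequence is illustrative, not an entry from the curated dataset):

    ```python
    from rdkit import Chem

    def peptide_to_smiles(peptide: str) -> str:
        """Convert a peptide amino acid sequence into a canonical SMILES string."""
        mol = Chem.MolFromSequence(peptide)   # build an RDKit molecule from the one-letter sequence
        if mol is None:
            raise ValueError(f"Could not parse peptide sequence: {peptide}")
        return Chem.MolToSmiles(mol)          # canonical SMILES string of the peptide

    # Example: the 9-mer peptide analyzed later in the interpretability section
    print(peptide_to_smiles("FLRGRAYGL"))
    ```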

    Negative sample generation

    In the database, the output labels are continuous affinity values. To reduce the complexity of the problem and account for potential affinity biases in experimental measurements, we disregarded the continuous affinity values and transformed the regression problem into a binary classification problem. All data stored in the database are labeled as positive data, while negative samples are generated manually.

    We selected random mismatch (Fischer et al. 2020) as the method for generating negative data. Compared to adding TCR libraries from other sources, the random mismatch approach not only limits the overestimation of model performance but also balances the number of negative and positive samples, thus avoiding class imbalance in the dataset. To reduce interference from false negative data, we further refined the generation method to minimize the similarity between the sampled TCR and the real TCR sequences. The steps for generating negative samples are as follows:

    (1) With a given random seed, uniformly sample a sequence p_i from the peptide set P. In the positive samples, the set of all TCRs paired with p_i is denoted {t_j}.

    (2) Sample a sequence t_i from the TCR set T such that t_i is not an element of {t_j}.

    (3) Calculate the average edit distance s between t_i and the CDR3 sequences in {t_j} using the following formula. If the distance s is greater than 5, consider (p_i, t_i) a negative sample:

    s = \frac{1}{|T|} \sum_{t_j \in T} D(t_i, t_j)

    where T = {t_j}, |T| represents the number of sequences in the set, and D(t_i, t_j) represents the edit distance between sequences t_i and t_j, which is the minimum number of editing operations required to transform one sequence into the other.

    (4) Repeat Steps 1 through 3 until the numbers of negative and positive samples are equal.
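    A minimal sketch of Steps 1 to 4 is given below (helper names are illustrative; the edit-distance threshold of 5 follows the description above):

    ```python
    import random

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein (edit) distance computed with a single-row dynamic program."""
        dp = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
        return dp[-1]

    def sample_negatives(positives, tcr_pool, n_needed, threshold=5.0, seed=0):
        """Random-mismatch negatives: positives maps each peptide to its set of paired TCRs."""
        rng = random.Random(seed)
        peptides = list(positives)
        negatives = []
        while len(negatives) < n_needed:
            p = rng.choice(peptides)                 # Step 1: sample a peptide p_i
            t = rng.choice(tcr_pool)                 # Step 2: sample a candidate TCR t_i
            paired = positives[p]
            if t in paired:
                continue                             # t_i must not already pair with p_i
            s = sum(edit_distance(t, tj) for tj in paired) / len(paired)  # Step 3: average edit distance
            if s > threshold:
                negatives.append((p, t))             # accept (p_i, t_i) as a negative pair
        return negatives                             # Step 4: loop until enough negatives collected
    ```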

    Through this process, after combining positive and negative samples, the training dataset contained a total of 51,350 pairs of data, with 50% being positive samples and 50% being negative samples.

    Data augmentation

    For deep learning models, achieving good generalization on complex problems requires both sufficient quantity and quality of data. However, due to the limitations of the PTI dataset, the amount of high-quality pre-processed data falls far short of what is required for PTI prediction. Research indicates that data augmentation strategies can mitigate this issue (Wu et al. 2021). Data augmentation enriches the diversity of input data, thereby improving the model's performance.

    In this study, SMILES (Weininger et al. 1989) sequence data augmentation was implemented using the RDKit library. First, the original SMILES sequences were converted into corresponding molecular objects. Then, based on the molecular structure of each object, the atom identifiers were obtained. Subsequently, the order of atom identifiers was randomly shuffled, and the molecules were renumbered to generate new SMILES sequences. This process was repeated until the desired augmentation level was reached.
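    A rough sketch of this shuffling procedure with RDKit (the augmentation factor and random seed are illustrative):

    ```python
    import random
    from rdkit import Chem

    def augment_smiles(smiles: str, n_variants: int = 5, seed: int = 0) -> list:
        """Generate non-canonical SMILES variants by randomly renumbering the atoms."""
        rng = random.Random(seed)
        mol = Chem.MolFromSmiles(smiles)               # SMILES string -> molecular object
        variants, attempts = set(), 0
        while len(variants) < n_variants and attempts < 100 * n_variants:
            attempts += 1
            order = list(range(mol.GetNumAtoms()))     # atom identifiers of the molecule
            rng.shuffle(order)                         # randomly shuffle the atom order
            shuffled = Chem.RenumberAtoms(mol, order)  # renumber atoms according to the new order
            variants.add(Chem.MolToSmiles(shuffled, canonical=False))  # write a new SMILES string
        return list(variants)
    ```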

    SMILES sequence data augmentation helps alleviate the problem of limited antigen diversity among peptides. By greatly expanding the number of peptide sequences, augmentation significantly increased the diversity of peptides in the dataset, enriched the input space, and enhanced the model's generalization ability.

    Data set split

    Due to the limited diversity of peptide data in the current dataset, the model is highly prone to overfitting when recognizing entirely new peptide sequences. Therefore, our research focused on predicting binding between known antigen peptides and any TCR within a specified collection. During data splitting, a “soft split” approach was used, allowing peptides to be distributed randomly while ensuring that each TCR can only appear in either the training set or the test set.

    In this study, ten-fold cross-validation was employed to evaluate the model. The “soft split” ensured that each TCR was restricted to a single fold, while peptides could be distributed randomly across folds. During each training iteration, nine of the folds were used as the training set and one fold was used as the test set.
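    In practice, this grouping can be expressed with scikit-learn's GroupKFold; the sketch below assumes a table with one PTI pair per row (the file and column names are illustrative):

    ```python
    import pandas as pd
    from sklearn.model_selection import GroupKFold

    # Hypothetical table of PTI pairs with columns "peptide", "tcr", and "label"
    df = pd.read_csv("pti_pairs.csv")

    gkf = GroupKFold(n_splits=10)
    # Grouping by the TCR sequence guarantees that a given TCR never appears in both
    # the training fold and the test fold, while peptides may occur in either ("soft split").
    for fold, (train_idx, test_idx) in enumerate(gkf.split(df, groups=df["tcr"])):
        train_set, test_set = df.iloc[train_idx], df.iloc[test_idx]
        print(f"fold {fold}: {len(train_set)} training pairs, {len(test_set)} test pairs")
    ```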

    Pre-training data set

    The pre-training dataset used in this study was obtained from the BindingDB pre-processed dataset provided by Born et al. (https://ibm.biz/active_site_data). To ensure similarity in length distribution between the DPI data and the PTI data and avoid large differences in sequence lengths, ligands with SMILES sequences longer than 300 and proteins with amino acid sequences longer than 512 were excluded from the DPI data. The subsequent processing was similar to that of the PTI dataset: all data in the dataset were treated as positive samples, and negative samples were generated through random mismatches. In the end, the DPI dataset contained 177,225 ligand molecules and 1679 protein receptors, totaling 409,859 pairs of data, with 90% used for pre-training and 10% used to validate model performance.

    Independent data set

    We tested the model using an independent test set generated from the McPAS database (http://friedmanlab.weizmann.ac.il/McPAS-TCR/). To create this test set, we first excluded TCR sequences that were already present in VDJdb, ensuring the test set's heterogeneity. We then categorized the independent test set into two groups based on peptide presence in the training set: SP (Seen Peptide) and UP (Unseen Peptide). In the SP category, peptide sequences had been part of the training sets of all models, while the UP category included peptide sequences not present in any model's training set. Finally, after applying data filtering and generating negative samples, the SP test set contained 5232 data pairs and the UP test set contained 928 data pairs.

    K-NN baseline

    To better assess the general performance on the PTI problem, we employed a K-Nearest Neighbor (K-NN) model based on sequence similarity as the baseline classifier. We used the sum of normalized Levenshtein distances between the amino acid sequences of the peptides and of the TCRs as the distance metric between samples:

    d\big((p_1, t_1), (p_2, t_2)\big) = \frac{Lev(p_1, p_2)}{\max(|p_1|, |p_2|)} + \frac{Lev(t_1, t_2)}{\max(|t_1|, |t_2|)}

    where |·| is the sequence length and Lev(·,·) is the Levenshtein distance.

    Model performance was evaluated for all odd values of k (1 < k < 20). When k is greater than or equal to 11, the ROC-AUC and PR-AUC values of the model tend to stabilize. We chose the K-NN model with k = 19 as the baseline to assess the benchmark classification results for the PTI problem.
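    A hedged sketch of this baseline using a precomputed distance matrix (the normalization by the longer sequence is our reading of the formula above; lev is any Levenshtein implementation, e.g. the edit_distance function sketched earlier):

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def normalized_lev(a: str, b: str, lev) -> float:
        """Levenshtein distance normalized by the longer sequence length."""
        return lev(a, b) / max(len(a), len(b))

    def pair_distance(x, y, lev) -> float:
        """Sum of normalized Levenshtein distances over the peptide and TCR parts of two samples."""
        (p1, t1), (p2, t2) = x, y
        return normalized_lev(p1, p2, lev) + normalized_lev(t1, t2, lev)

    def knn_baseline(train_pairs, train_labels, test_pairs, lev, k=19):
        """Score test pairs with a k-NN classifier fitted on a precomputed distance matrix."""
        d_train = np.array([[pair_distance(a, b, lev) for b in train_pairs] for a in train_pairs])
        d_test = np.array([[pair_distance(a, b, lev) for b in train_pairs] for a in test_pairs])
        clf = KNeighborsClassifier(n_neighbors=k, metric="precomputed")
        clf.fit(d_train, train_labels)
        return clf.predict_proba(d_test)[:, 1]   # probability of the positive (binding) class
    ```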

    APTAnet model

    Figure 1 displays the overall structure of APTAnet. The model simulates the interaction process between peptide and TCR sequences and predicts the affinity of their interaction. APTAnet consists of three main components: the encoding module, the cross-attention module, and the fully connected module.

    The encoding module

    In the encoding module, we use pre-trained Transformer-based models to embed the SMILES sequences and amino acid sequences. The encoding module parameters are shown in Table 1. For encoding amino acid sequences, we utilize the ProtBERT pre-trained model (https://huggingface.co/Rostlab/prot_BERT_bfd). ProtBERT is based on BERT, with 30 layers of Transformer blocks, each having 16 attention heads; each amino acid is embedded into a 1024-dimensional vector. For encoding SMILES sequences, we use the ChemBERTa2 pre-trained model (https://huggingface.co/DeepChem/ChemBERTa-77MMTR). ChemBERTa2 is based on RoBERTa, an optimized variant of BERT; it deploys 12 layers of Transformer blocks, each with six attention heads, and each SMILES symbol is embedded into a 384-dimensional vector. In the encoding process, a tokenizer first segments the input sequences at the character level. The pre-trained model adds a special [CLS] (classification) token at the beginning of each sequence, and the vector obtained from embedding this token represents the semantic information of the entire sequence. In addition, multiple [PAD] (padding) tokens are appended to the end of each sequence to pad it to a fixed length L, ensuring consistency across sequences of different lengths. Finally, the token vectors and positional encodings of the sequence are input into the model. The model embeds each character into a D-dimensional vector and concatenates them to form the encoding of the entire sequence.

    Table 1 Encoding module parameters
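    A rough sketch of this embedding step with the HuggingFace transformers library (the checkpoint identifiers follow the URLs above and should be treated as assumptions; ProtBERT expects residues separated by spaces):

    ```python
    import torch
    from transformers import AutoTokenizer, AutoModel

    prot_tokenizer = AutoTokenizer.from_pretrained("Rostlab/prot_bert_bfd")
    prot_encoder = AutoModel.from_pretrained("Rostlab/prot_bert_bfd")          # 30 layers, 1024-dim vectors
    chem_tokenizer = AutoTokenizer.from_pretrained("DeepChem/ChemBERTa-77M-MTR")
    chem_encoder = AutoModel.from_pretrained("DeepChem/ChemBERTa-77M-MTR")     # 12 layers, 384-dim vectors

    def embed_tcr(sequence: str, max_len: int = 512) -> torch.Tensor:
        """Embed a full-length TCR amino acid sequence with ProtBERT."""
        tokens = prot_tokenizer(" ".join(sequence), return_tensors="pt",
                                padding="max_length", truncation=True, max_length=max_len)
        with torch.no_grad():
            out = prot_encoder(**tokens)
        return out.last_hidden_state      # shape (1, L, 1024); the [CLS] vector sits at position 0

    def embed_peptide_smiles(smiles: str, max_len: int = 300) -> torch.Tensor:
        """Embed a peptide SMILES string with ChemBERTa2."""
        tokens = chem_tokenizer(smiles, return_tensors="pt",
                                padding="max_length", truncation=True, max_length=max_len)
        with torch.no_grad():
            out = chem_encoder(**tokens)
        return out.last_hidden_state      # shape (1, L, 384)
    ```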

    Cross-attention module

    Taking inspiration from the self-attention mechanism (Vaswani et al. 2017), we apply paired cross-attention mechanisms to the PTI problem. Using two paired cross-attention layers, attention is computed for the receptor (T) based on the ligand (P) to obtain attention (P→T), and, conversely, the ligand (P) attends to the receptor (T) to obtain attention (T→P). This enables the receptor to utilize information from the ligand to learn the importance of each symbol in the input sequence (Fig. 2). The cross-attention weights are calculated with the scaled dot-product formulation:

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V

    where Q is projected from one input sequence and K and V from the other using the projection matrices W^Q, W^K, and W^V (Fig. 2), and d_k is the dimension of the attention space.

    Considering the multi-head attention mechanism, we use eight parallel cross-attention heads to construct an attention layer:

    \mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_8)\,W^{O}, \quad \mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V})
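    A minimal PyTorch sketch of one such paired cross-attention block (the projection to a shared attention space and the dimensions are illustrative assumptions, not the published hyperparameters):

    ```python
    import torch
    import torch.nn as nn

    class PairedCrossAttention(nn.Module):
        """Peptide attends to TCR (T->P) and TCR attends to peptide (P->T), with 8 heads each."""
        def __init__(self, dim_pep=384, dim_tcr=1024, dim_attn=256, n_heads=8):
            super().__init__()
            self.pep_proj = nn.Linear(dim_pep, dim_attn)   # project both channels to a shared space
            self.tcr_proj = nn.Linear(dim_tcr, dim_attn)
            self.attn_p_to_t = nn.MultiheadAttention(dim_attn, n_heads, batch_first=True)
            self.attn_t_to_p = nn.MultiheadAttention(dim_attn, n_heads, batch_first=True)

        def forward(self, pep_emb, tcr_emb):
            p = self.pep_proj(pep_emb)   # (batch, Lp, dim_attn)
            t = self.tcr_proj(tcr_emb)   # (batch, Lt, dim_attn)
            # Queries come from one channel, keys/values from the other
            t_ctx, _ = self.attn_p_to_t(query=t, key=p, value=p)   # TCR enriched with peptide information
            p_ctx, _ = self.attn_t_to_p(query=p, key=t, value=t)   # peptide enriched with TCR information
            return p_ctx, t_ctx

    # Example shapes: a batch of 2, 300 SMILES tokens, 512 TCR residues
    pep = torch.randn(2, 300, 384)
    tcr = torch.randn(2, 512, 1024)
    p_ctx, t_ctx = PairedCrossAttention()(pep, tcr)
    ```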

    Fully connected module

    The fully connected module first concatenates the interaction vectors obtained from the cross-attention module and normalizes the data distribution through batch normalization. Subsequently, it applies multiple Dense modules for nonlinear transformations of the data to extract relationships between features. Finally, it maps the input to a value between 0 and 1 using the Sigmoid function, yielding the output affinity score.

    Each Dense module consists of four layers. The first layer is a Linear layer, which performs a linear transformation from a high-dimensional vector to a lower-dimensional one. The second layer is a Batch Normalization layer, which standardizes the input distribution of each layer by normalizing each mini-batch of samples, making it closer to a standard normal distribution. Batch Normalization can effectively alleviate the vanishing gradient problem, accelerate model training, and suppress overfitting.

    The third layer is an Activation layer, which introduces non-linearity through an activation function, enhancing the neural network's expressive power; we use the ReLU activation function here. The fourth layer is a Dropout layer, which, during forward propagation, deactivates each neuron with a certain probability p, preventing the model from relying too heavily on specific local features and, to some extent, providing regularization.
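    A hedged sketch of one such Dense module and the output head (layer widths and the dropout probability are illustrative, not the published hyperparameters):

    ```python
    import torch.nn as nn

    def dense_block(in_dim: int, out_dim: int, p_drop: float = 0.3) -> nn.Sequential:
        """Linear -> BatchNorm -> ReLU -> Dropout, the four layers described above."""
        return nn.Sequential(
            nn.Linear(in_dim, out_dim),   # linear projection to a lower dimension
            nn.BatchNorm1d(out_dim),      # normalize the mini-batch activation distribution
            nn.ReLU(),                    # non-linear activation
            nn.Dropout(p_drop),           # randomly deactivate neurons for regularization
        )

    class FullyConnectedHead(nn.Module):
        """Concatenated interaction vectors -> stacked Dense modules -> Sigmoid affinity score."""
        def __init__(self, in_dim: int, hidden=(512, 128, 32)):
            super().__init__()
            layers, prev = [nn.BatchNorm1d(in_dim)], in_dim
            for h in hidden:
                layers.append(dense_block(prev, h))
                prev = h
            layers += [nn.Linear(prev, 1), nn.Sigmoid()]   # map to an affinity score in (0, 1)
            self.net = nn.Sequential(*layers)

        def forward(self, x):
            return self.net(x).squeeze(-1)
    ```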

    Model training

    Since PTI is a binary classification problem, the loss function used is binary cross-entropy (BCE), calculated as follows:

    \mathrm{BCE} = -\big[\, y \log p + (1 - y)\log(1 - p) \,\big]

    where y is the true label and p is the predicted affinity score. Cross-entropy is a good measure of the difference between the true probability distribution and the predicted probability distribution.

    We use the AdamW optimizer (Loshchilov and Hutter 2017) to update the model parameters during training, adjusting them based on the gradients of the loss function. AdamW is known for its excellent performance in deep learning, particularly in tasks such as natural language processing and computer vision.

    For the learning rate schedule, we employ the OneCycleLR strategy (Smith and Topin 2017) based on the cosine function. Throughout training, the learning rate first increases and then decreases, forming an approximately cosine-shaped curve.
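    In PyTorch terms this optimizer and schedule pair looks roughly as follows (learning rates, epoch counts, and the toy model are illustrative):

    ```python
    import torch

    model = torch.nn.Sequential(torch.nn.Linear(512, 1), torch.nn.Sigmoid())   # stand-in for APTAnet
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)

    # OneCycleLR: the learning rate first ramps up, then decays along a cosine curve
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=1e-3, epochs=50, steps_per_epoch=200, anneal_strategy="cos")

    criterion = torch.nn.BCELoss()      # binary cross-entropy on the predicted affinity scores

    for features, labels in []:         # placeholder for the real DataLoader over PTI pairs
        optimizer.zero_grad()
        loss = criterion(model(features).squeeze(-1), labels.float())
        loss.backward()
        optimizer.step()
        scheduler.step()                # OneCycleLR is stepped once per batch
    ```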

    The training process of the APTAnet model is divided into two stages: pre-training and fine-tuning. In the pre-training stage, affinity data between drugs and proteins from the DPI dataset are used to learn a broad range of ligand-receptor interactions. Learning from a large amount of ligand data significantly enriches the input space of the SMILES channel, enhancing the model's generalization to different data. In the fine-tuning stage, the model is fine-tuned using PTI data. After converting peptides into SMILES sequences, the PTI task can be considered a specific subtask of DPI, allowing for transfer learning. However, because the SMILES data differ between DPI and PTI, a “Semi-Frozen” approach is employed when importing the pre-trained model for fine-tuning: the model weights of the Protein channel are frozen, and only the SMILES channel and the fully connected modules are trained. This allows the model to adapt to the specific requirements of the PTI task while leveraging the knowledge gained during pre-training.
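    The “Semi-Frozen” step amounts to disabling gradients for the protein channel; a hedged sketch follows (the attribute name protein_encoder is illustrative, not the actual APTAnet attribute):

    ```python
    def apply_semi_frozen(model, pretrained_state_dict):
        """Load DPI pre-trained weights, then freeze the protein channel for PTI fine-tuning."""
        model.load_state_dict(pretrained_state_dict)        # import the DPI pre-trained weights
        for param in model.protein_encoder.parameters():    # hypothetical protein-channel submodule
            param.requires_grad = False                     # frozen: not updated during fine-tuning
        # Only the SMILES channel and the fully connected module remain trainable;
        # pass this list to the optimizer, e.g. torch.optim.AdamW(trainable, lr=1e-4)
        trainable = [p for p in model.parameters() if p.requires_grad]
        return trainable
    ```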

    RESULTS

    Overview of approach

    The APTAnet model was pre-trained on data from the DPI dataset, obtained from BindingDB and containing 177,255 ligand molecules and 1679 protein receptors (see the MATERIALS AND METHODS section). After learning the general binding modes of protein receptors and drug ligands, the model is fine-tuned using PTI data. We downloaded TCR data from VDJdb, performed data transformation, and applied filtering, obtaining 23,143 unique TCRs and 244 unique peptides (see the MATERIALS AND METHODS section). We adopted a “Semi-Frozen” approach, which involves freezing the model weights of the protein channel and training only the model weights of the SMILES channel and the fully connected module (Fig. 3).

    Performance analysis of different encoding methods

    To determine which sequence encoding method works best, we first compared the performance of different encoding methods (Table 2). The BERT-based pre-trained models achieved the highest ROC-AUC (0.859), ahead of VHSE8 (ROC-AUC=0.755), BLOSUM62 (ROC-AUC=0.757), Word2Vec (ROC-AUC=0.789), and the K-NN baseline (ROC-AUC=0.774) (Fig. 4). ChemBERTa2 and ProtBERT have been trained on vast amounts of sequence data, allowing them to comprehensively learn representations of the amino acid and SMILES languages. They can capture the semantics of sequences from context and extract key sequence features. Therefore, in downstream classification predictions, encoding methods based on BERT pre-trained models tend to outperform other encoding methods.

    Table 2 Encoding method table

    We next compared the performance of different training strategies (Table 3). We incrementally added three training strategies on top of training directly on the PTI dataset: DPI pre-training, weight freezing, and data augmentation. Without pre-training, “APTAnet” achieved average ROC-AUC and PR-AUC values of 0.856 and 0.831. After adding the “Pretrain” step, the model's average ROC-AUC and PR-AUC improved to 0.872 and 0.854. With the protein channel weights additionally frozen during the transfer learning stage, the model's average ROC-AUC and PR-AUC increased to 0.877 and 0.860. Finally, with the introduction of “Augmentation”, the model's performance saw a substantial boost, with the average ROC-AUC reaching 0.893 and the mean PR-AUC reaching 0.877 (Fig. 5). These results indicate that representing ligand peptides using SMILES notation is advantageous and that pre-training and data augmentation lead to significant improvements in model performance.

    Table 3 Training strategy table

    Finally, we replaced the cross-attention module with a self-attention module or simple concatenation and compared their performance (Table 4). APTAnet with cross-attention performed best (ROC-AUC=0.893), ahead of self-attention (0.857) and concatenation (0.791). Cross-attention can effectively capture the complex relationship between the TCR and the peptide: the attention mechanism allows the model to focus on the parts of the sequences with significant mutual influence when processing TCR and peptide sequences, thereby better capturing the associations between them. Additionally, cross-attention helps the model better understand and leverage the heterogeneity of information between the TCR and the peptide.

    Table 4 Ablation study

    Outstanding performance of APTAnet compared to existing tools

    We next compared APTAnet with current leading models on an independent, publicly available dataset. The test set was generated from the McPAS database and divided into two groups: SP (Seen Peptide) and UP (Unseen Peptide) (see the MATERIALS AND METHODS section). The SP dataset contained 5232 data pairs, and the UP dataset contained 928 data pairs. On the SP dataset, APTAnet achieved the highest ROC-AUC (0.888) and PR-AUC (0.876), ahead of TITAN (ROC-AUC=0.846 and PR-AUC=0.829), pMTnet (ROC-AUC=0.835 and PR-AUC=0.817), ImRex (ROC-AUC=0.612 and PR-AUC=0.602), and the K-NN baseline (ROC-AUC=0.731, PR-AUC=0.705) (Fig. 6). On the UP dataset, APTAnet still achieved the highest ROC-AUC (0.732) and PR-AUC (0.701), ahead of TITAN (ROC-AUC=0.624 and PR-AUC=0.587), pMTnet (ROC-AUC=0.521 and PR-AUC=0.512), ImRex (ROC-AUC=0.494 and PR-AUC=0.506), and the K-NN baseline (ROC-AUC=0.424 and PR-AUC=0.468) (Fig. 7). In summary, APTAnet outperformed all other models in our analysis, indicating a competitive advantage in predicting known peptide data and strong generalization to entirely new peptides.

    Effective tumor peptide identification with APTAnet in real tumor data

    We further assessed APTAnet's performance on real tumor data. The tumor dataset was collected from 11 tumor patients with different cancer types from cancer immunotherapy clinical studies (Hanada et al. 2022; Lowery et al. 2022; Peri et al. 2021; Tran et al. 2015), comprising a total of 86 data pairs. The APTAnet model (ROC-AUC=0.785, PR-AUC=0.709) exhibited significantly better predictive performance than the TITAN model (ROC-AUC=0.579, PR-AUC=0.538) (Fig. 8). These results indicate that APTAnet possesses excellent generalization capabilities for unknown peptides and for the identification of important PTI binding sites.

    TCR sequence generalization analysis

    We next examined the model's generalization ability. We calculated the edit distance between each TCR in the test set and all TCRs in the training set (see the MATERIALS AND METHODS section) and classified the test set into five groups, “low”, “mid-low”, “mid”, “mid-high”, and “high”, based on the magnitude of the inter-group distance. APTAnet indeed performs better in predicting TCRs with higher similarity (Fig. 9). The “low”, “mid-low”, and “mid” groups perform above the overall mean (ROC-AUC=0.893, PR-AUC=0.877), with the highest-similarity “low” group achieving ROC-AUC and PR-AUC values of 0.907 ± 0.003 and 0.895 ± 0.005, respectively. The “mid-high” and “high” groups perform below the overall mean, but even the lowest-similarity “high” group still exhibits relatively high performance (ROC-AUC=0.866 ± 0.008, PR-AUC=0.851 ± 0.010). This indicates that APTAnet has essentially learned the pairing features of PTIs and has a high degree of generalization for different TCRs, allowing it to recognize sequence patterns of different TCRs effectively.

    Attention mechanism interpretability

    Finally, we sought to explain the behavior of APTAnet through its attention mechanisms. Statistical analysis of attention results across numerous sequences showed that the average variance of attention scores at the same position is 9.3 × 10^-5 for different TCRs and 1.2 × 10^-4 for different peptides. In particular, the TCR CDR3 region exhibits a significantly higher average variance of 3.5 × 10^-4. This indicates that APTAnet has attention preferences for specific positions in the sequences and can adaptively adjust its focus for different sequences.

    To gain a more intuitive understanding of the biological significance of APTAnet's attention mechanism, we used an experimentally validated protein complex (PDB ID: 1MI5) to visualize the atom-level attention for the peptide and the amino acid-level attention for the TCR (Fig. 10). In this complex, the amino acid sequence of the peptide is “FLRGRAYGL”, and the CDR3 β sequence of the TCR is “CASSLGQAYEQYF”. According to the APTAnet model's prediction, the affinity score between them is 0.857, indicating a strong interaction.

    Fig. 1 APTAnet architecture. APTAnet consists of three main components: the encoding module, the cross-attention module, and the fully connected module. It begins by embedding the peptide and TCR sequences separately using the encoding module. It then allows interaction between the encoded vectors using cross-attention, and finally the fully connected module extracts relationships between the features to predict the affinity score

    Fig. 2 Cross-attention involves two input matrices, S1 and S2, which have different dimensions. Q, K, and V are projected to different attention spaces using distinct projection matrices W^Q, W^K, and W^V. Attention weights are then calculated through dot-product operations and softmax activation functions. The final output matrix has the same dimensions as the receptor matrix S2

    Fig. 3 APTAnet training process. The training of the APTAnet model is divided into two stages: pre-training and fine-tuning. In the pre-training stage, affinity data between drugs and proteins from the DPI dataset are used to learn a broad range of ligand-receptor interactions. In the fine-tuning stage, the model is fine-tuned using PTI data

    Fig. 4 ROC curves for different encoding methods

    Fig. 5 Box plots of ten-fold cross-validation ROC-AUC (left) and PR-AUC (right) for different training modes. Group 1 is APTAnet without pre-training, Group 2 is APTAnet+Pretrain, Group 3 is APTAnet+Pretrain+Semifrozen, and Group 4 is APTAnet+Pretrain+Semifrozen+Augmentation

    Fig. 6 Performance comparison of multiple tools on the SP test set. The bars display the values of ROC-AUC (red) and PR-AUC (green) for the different models

    Fig. 7 Performance comparison of multiple tools on the UP test set. The bars display the values of ROC-AUC (red) and PR-AUC (green) for the different models

    Fig. 8 Performance comparison between APTAnet and TITAN on real cancer data. The green line represents APTAnet, and the red line represents TITAN

    Fig. 9 Box plots of ten-fold cross-validation ROC-AUC (left) and PR-AUC (right) for different similarity distances. The red dashed line represents the mean of the overall results

    Fig. 10 Structure of the LC13 TCR in complex with the HLA-B8-EBV peptide complex (1MI5). The left side in cyan represents TCR β, while the right side in green represents the peptide. The red-highlighted regions indicate the key interaction sites confirmed based on the three-dimensional structure of the PTI. The black-boxed area on the TCR marks the position of CDR3 β, a region known for its high variability and widely recognized for its interaction with peptides

    We next analyzed the attention scores on the amino acids of the TCR sequence. The high-attention points identified by the model in the CDR3 region exhibit a high degree of consistency with the contact points confirmed through three-dimensional structural analysis. Moreover, 10 out of 133 amino acids have high attention scores, and 4 of these 10 (C91, Q98, A99, Q101) are located in the PTI binding region of CDR3. This indicates that the model effectively recognizes the PTI recognition region (P-value=0.0018) (Fig. 11). Although other amino acids not in proximity to the binding sites may also have high attention scores, the model's predictions are still significant, suggesting that these amino acids might play a potential role in PTI interactions.

    Fig. 11 Amino acid-level attention mechanism analysis. The amino acid sequence is arranged horizontally in the matrix, and the symbols in the middle of the matrix represent the corresponding amino acid abbreviations (blank areas indicate the trailing PAD region of the sequence). The black-boxed region on the TCR corresponds to the CDR3 area. The colors in the heatmap represent the strength of attention, with darker colors indicating stronger attention to specific amino acids

    We further analyzed the attention scores at the atomic level of the peptide. There is a high degree of consistency between the key atoms confirmed through structural analysis and the high-attention atoms predicted by the model. On the peptide's backbone, the model exhibits higher attention to oxygen-containing groups, such as carboxyl groups and peptide bonds. On the peptide's side chains, the model shows higher attention to polar amino acids, like tryptophan, and charged amino acids, such as arginine, while exhibiting lower attention to non-polar amino acids, such as glycine (Fig. 12). This indicates that the model is capable of recognizing and focusing on key chemical features related to the peptide-TCR interaction.

    Fig. 12 Atom-level attention mechanism analysis. The peptide sequence “FLRGRAYGL” is represented as a molecular structure diagram, where each node in the molecular graph represents an atom, the connecting lines represent the chemical bonds between atoms, and the shading of each atom indicates the level of attention that the model places on it in relation to the TCR. The blue boxes represent oxygen-containing functional groups, the red box represents polar amino acids, the purple box represents charged amino acids, and the green box represents non-polar amino acids

    Overall, these results indicate that APTAnet has the capability to recognize essential atomic groups or amino acids involved in PTI processes.

    DISCUSSION

    The immune response to tumors forms the foundation of TIL immunotherapy. Currently, traditional immunological experimental techniques cannot meet the demands of large-scale application of TIL immunotherapy. Therefore, the development of a PTI affinity prediction algorithm to enable the rapid and accurate screening of tumor-specific T cells is crucial for the further advancement of TIL immunotherapy. However, the scarcity of data and the vast sequence matching space make this task challenging, and previous models, limited by data and methods, have achieved only modest performance.

    This study draws inspiration from the DPI problem and NLP methods, proposing APTAnet, a model for atom-level PTI affinity prediction. APTAnet provides an effective solution for accurate PTI affinity prediction and for the application of TIL immunotherapy. Due to variations in experimental techniques, conditions, and sample sources, different experimental results may exhibit numerical biases in affinity values. Therefore, to reduce batch effects between experiments and simplify the problem, we binarize the continuous regression values. The results obtained in this manner are more interpretable and practical. A drawback of the binary task is the inability to compare binding strengths between different TCR-peptide pairs.

    The superior performance of APTAnet may be attributed to a training method designed to enrich the training data and mitigate the limitations of existing PTI data. This method treats the PTI problem as a subtask of the DPI problem, representing the peptide's amino acid sequence as an atom-level, fine-grained SMILES sequence. This approach not only enables data augmentation, enriching the input data space, but also leverages the large-scale dataset from BindingDB (Gilson et al. 2016) for pre-training. The pre-training extensively learns the binding patterns of proteins and ligands and is followed by fine-tuning with PTI data, significantly improving the model's generalization capability. The predicted results can be validated using MHC multimer technology: this method utilizes multimers of MHC molecules that allow the binding of multiple T cells specific to the same antigen, aiding the detection and analysis of T cell responses to specific antigens.

    Compared to existing tools, APTAnet has achieved precise predictive performance with limited data. This not only provides valuable support for the application of TIL immunotherapy research but also strongly demonstrates the feasibility of applying NLP and deep learning methods to biological sequences.

    The future of PTI research depends on constructing high-quality, large-scale, and standardized PTI affinity databases, proposing more general PTI modeling methods, and implementing end-to-end tumor affinity prediction methods. The immune response to tumors mainly involves the interaction of MHC, peptide, and TCR complexes. Designing models that take all three as inputs end-to-end, though more complex, has the potential to further enhance model performance by providing more input information.

    In summary, we have designed APTAnet, which can predict the affinity of peptide-TCR binding from the amino acid sequences of peptides and TCRs. This allows for the screening of potential tumor-specific T cells to assist TIL cancer immunotherapy.

    Acknowledgements  This project was supported by the National Natural Science Foundation of China (61571202).

    Code availability  APTAnet is available as a Python package at https://github.com/AY-LIANG/APTAnet.

    Compliance with Ethical Standards

    Conflict of interest  Peng Xiong, Anyi Liang, Xunhui Cai, and Tian Xia declare that they have no conflict of interest.

    Human and animal rights and informed consent  This article does not contain any studies with human or animal subjects performed by any of the authors.

    Open Access  This article is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
