
    Automated Multi-Document Biomedical Text Summarization Using Deep Learning Model

    Computers, Materials & Continua, 2022, Issue 6

    Ahmed S. Almasoud, Siwar Ben Haj Hassine, Fahd N. Al-Wesabi, Mohamed K. Nour, Anwer Mustafa Hilal, Mesfer Al Duhayyim, Manar Ahmed Hamza, and Abdelwahed Motwakel

    1 Department of Information Systems, College of Computer and Information Sciences, Prince Sultan University, Saudi Arabia

    2 Department of Computer Science, College of Science and Arts at Mahayil, King Khalid University, Saudi Arabia

    3 Faculty of Computer and IT, Sana’a University, Sana’a, Yemen

    4 Department of Computer Science, College of Computing and Information System, Umm Al-Qura University, Saudi Arabia

    5 Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, AlKharj, Saudi Arabia

    6 Department of Natural and Applied Sciences, College of Community-Aflaj, Prince Sattam bin Abdulaziz University, Saudi Arabia

    Abstract: Owing to the rapid development of the Internet and information technologies, the quantity of electronic data in the biomedical sector has increased exponentially. To handle this huge amount of biomedical data, automated multi-document biomedical text summarization has become an effective and robust approach for accessing the growing body of technical and medical literature: multiple source documents are condensed while the most informative content is retained. Multi-document biomedical text summarization therefore plays a vital role in alleviating the problem of accessing precise and up-to-date information. This paper presents a Deep Learning based Attention Long Short Term Memory (DL-ALSTM) model for multi-document biomedical text summarization. The proposed DL-ALSTM model initially performs data preprocessing to convert the available medical data into a format compatible with further processing. The DL-ALSTM model is then executed to summarize the contents of the multiple biomedical documents. To tune the summarization performance of the DL-ALSTM model, the chaotic glowworm swarm optimization (CGSO) algorithm is employed. An extensive experimental analysis is performed to demonstrate the effectiveness of the DL-ALSTM model, and the results are investigated using the PubMed dataset. A comprehensive comparative analysis showcases the efficiency of the proposed DL-ALSTM model against recently presented models.

    Keywords: Biomedical; text summarization; healthcare; deep learning; LSTM; parameter tuning

    1 Introduction

    Automatic text processing tools play a vital role in efficient knowledge acquisition from the massive sources of textual data in health care and the life sciences, namely clinical guidelines, electronic health records, and scientific publications [1]. Automatic text summarization is a subfield of text mining and Natural Language Processing (NLP) that aims to produce a condensed form of one or more input documents by extracting their most important content [2,3]. Text summarization tools can save clinicians' and researchers' time and resources by automatically identifying and presenting the key concepts within a long document, with no need to read the entire text [4]. Early text summarization relied on frequency features to recognize the most relevant contents of textual documents. Later, several summarization tools integrated a broad range of heuristics and features into the content-selection procedure. The most commonly utilized features include sentence length, sentence position, keywords extracted from the text, the presence of cue phrases, title words, centroid-based cohesion, co-occurrence features, and the presence of numerical content [5].

    To resolve these limitations, other strands of research have investigated technology that uses domain knowledge sources to map text into concept-based representations [6]. This allows the informative content of the text to be measured with respect to the semantics and context behind each sentence, instead of shallow features. However, there are several difficulties in using biomedical knowledge sources in text analysis, especially in summarization [7]. Building, maintaining, and using knowledge bases can be challenging. A massive amount of automatic annotation is required to comprehensively identify entities and concepts and to capture the relationships among them. The selection of relevant domain knowledge sources is also challenging and can seriously affect the performance of biomedical summarization [8]. Another problem is how to measure the informative content of sentences based on qualitative relationships among concepts. Deep neural network (DNN) based language models [9] can be used to tackle most of the knowledge-domain problems in context-aware biomedical summarization. A deep learning (DL) language model is pre-trained on massive quantities of text and learns to represent units of text, mostly words, in a vector space [10]. The pre-trained embeddings can be fine-tuned on downstream tasks or used directly as numerical features.

    The authors of [11] proposed a deep-reinforced abstractive summarization method that reads a biomedical publication abstract and produces a summary in the form of a title or one-sentence headline. They present new reinforcement learning (RL) reward metrics based on biomedical expert systems, namely MeSH and the UMLS Metathesaurus, and show that the method can produce abstractive, domain-aware summaries. In [12], a text summarization method for documents was presented using a Deep Learning Modifier Neural Network (DLMNN) classifier, which produces an informative summary based on document entropy values. The introduced DLMNN architecture contains six stages. Initially, the input document is preprocessed. Next, features are extracted from the preprocessed information. Then, the most relevant features are selected by the improved fruit fly optimization algorithm (IFFOA), and the entropy value of each selected feature is calculated. The study in [13] focuses on the application of ML methods in two distinct subareas associated with the medical industry: the first application is sentiment analysis (SA) of user-narrated drug reviews, and the second is engineering in food technology. Since ML and AI methods confront the limitations of scientific drug detection, ML methods are chosen as an alternative technique for two main reasons: first, ML includes distinct learning approaches and is feasible for numerous NLP operations; second, it has an inherent capacity to model various features that capture sentiment in text.

    The work in [14] attempts to address this limitation by suggesting a novel method that combines topic modelling, unsupervised neural networks, and document clustering to build effective document representations. Initially, a novel document clustering method based on the Extreme Learning Machine (ELM) is applied to a massive text collection. Next, topic modelling is employed on the document collection to identify the topics present in each cluster. Then, all documents are characterized in a concept space using a matrix in which the columns represent the cluster topics and the rows represent the document sentences. The created matrix is trained with numerous ensemble learning algorithms and unsupervised neural networks to build abstract representations of the documents in the topic space. The authors of [15] designed a biomedical text summarization method that integrates two commonly used data mining techniques: frequent itemset mining and clustering. A biomedical paper is represented as a group of biomedical topics using the UMLS Metathesaurus. The K-means method is utilized to cluster analogous sentences. Subsequently, the Apriori method is employed to discover the frequent itemsets among the clustered sentences. Lastly, the most relevant sentence from each cluster is selected to build the summary using the detected frequent itemsets.

    The work in [16] integrated frequent itemset mining and sentence clustering to build an individual biomedical text summarization model. A biomedical document is denoted as a set of UMLS topics, and generic concepts are rejected. The vector space model is applied to represent the sentences, and K-means clustering is employed to group semantically similar sentences. Frequent itemsets are extracted across the clusters, and the detected frequent itemsets are used to calculate the sentence scores; the top N highest-scoring sentences are selected to form the final summary. The authors of [17] address this problem in the context of biomedical text summarization by measuring the efficiency of a graph-based summarizer with distinct kinds of contextualized and context-free embeddings. The word representations are generated by pretraining neural language models on a massive amount of biomedical text. The summarizer models the input text as a graph, in which the strength of the relationships among the sentences is evaluated by the domain-specific vector representations.

    1.1 Paper Objective

    The objective of this study is to design a novel deep learning based multi-document biomedical text summarization model with a hyperparameter tuning process.

    1.2 Paper Contributions

    This paper presents a Deep Learning based Attention Long Short Term Memory (DL-ALSTM) model for multi-document biomedical text summarization. The proposed DL-ALSTM model initially performs data preprocessing to convert the available medical data into a format compatible with further processing. The DL-ALSTM model is then executed to summarize the contents of the multiple biomedical documents. To tune the summarization performance of the DL-ALSTM model, the chaotic glowworm swarm optimization (CGSO) algorithm is employed. An extensive experimental analysis is performed to demonstrate the effectiveness of the DL-ALSTM model, and the results are investigated using the PubMed dataset.

    1.3 Paper Organization

    The remaining sections of the paper are arranged as follows. Section 2 presents the proposed DL-ALSTM model, and Section 3 discusses the performance validation. Finally, Section 4 draws the conclusion of the study.

    2 The Proposed Biomedical Text Summarization Technique

    In this study, an effective DL-ALSTM model is presented for multi-document biomedical text summarization. The proposed DL-ALSTM model performs pre-processing, summarization, and hyperparameter optimization. The detailed working of these processes is described in the following sections.

    2.1 Pre-processing

    The summarization procedure begins with a pre-processing step. First, the parts of the input document that should not be included in the summary are removed, and the essential text is retained. The redundant parts include the title, abstract, keywords, author information, section and subsection headers, figures and tables, and the bibliography [18]. These parts are assumed to be unnecessary because they do not appear in the model summaries used to evaluate the generated output. The removal phase can be customized depending on the structure of the input text and the user's preferences. When the section and subsection titles are chosen for inclusion in the summary, additional data is saved together with each sentence to specify which section of the text the sentence belongs to.

    Since the feature extraction scripts of BERT take text files in which every sentence appears on a separate line and is tokenized, the pre-processing step continues by splitting the input text into separate sentences and tokenizing them. Using the Natural Language Toolkit (NLTK), the summarizer splits the essential text into groups of sentences and represents each sentence as a group of tokens. After these pre-processing functions, an input sentence is ready to be mapped to a contextualized vector representation.
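    A minimal sketch of this pre-processing step is given below, assuming NLTK is installed and its 'punkt' sentence tokenizer models have been downloaded; the function name preprocess and the sample text are illustrative only.

    import nltk
    from nltk.tokenize import sent_tokenize, word_tokenize

    def preprocess(document_text):
        """Split the essential text into sentences and tokenize each sentence."""
        sentences = sent_tokenize(document_text)        # one sentence per line, as BERT's scripts expect
        return [word_tokenize(sentence) for sentence in sentences]

    if __name__ == "__main__":
        nltk.download("punkt", quiet=True)              # one-time download of the tokenizer models
        sample = "Deep learning aids biomedical summarization. It condenses long articles."
        for tokens in preprocess(sample):
            print(" ".join(tokens))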

    2.2 Biomedical Text Summarization

    The preprocessed data is fed into the DL-ALSTM model to summarize the multi-document biomedical text. The LSTM cell comprises five essential components: the input gate $i$, the forget gate $f$, the output gate $o$, the recurrent cell state $c$, and the hidden state output $h$. In this variant of the LSTM cell, Apaszke used a different manner of computing $h_t$. Concretely, at every time step $t$, an internal memory cell $c_t \in \mathbb{R}^n$ is added to compute $h_t$. At each time step $t$, the LSTM cell uses the previous hidden state $h_{t-1}$ and the input state $x_t$ to produce a temporary (candidate) internal memory state $\tilde{c}_t$, and then uses the previous internal memory state $c_{t-1}$ together with $\tilde{c}_t$ to produce the internal memory state $c_t$. Gating the flow of the gradient inside the LSTM cell in this way minimizes gradient explosion (GE). The presented technique uses the LSTM cell as the fundamental unit of both the encoding and decoding components [19].

    The fundamental calculations in the LSTM cell are as follows:

    1. Gates

    2. Input transform

    3. State update

    The training stage aims to learn the parameters $W_{x*}$ and $W_{h*}$ for $x$ and $h$, respectively; $\sigma(\cdot)$ denotes the sigmoid function; $\tanh(\cdot)$ denotes the hyperbolic tangent function; $\odot$ denotes element-wise multiplication; and $b$ denotes the bias.
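    The three groups of equations above are not reproduced in the text; the standard LSTM formulation consistent with the symbols just defined (assumed here) is:

    $i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i), \quad f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f), \quad o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o)$

    $\tilde{c}_t = \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)$

    $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \qquad h_t = o_t \odot \tanh(c_t)$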

    In this case, stacked LSTM layers are used in the vertical direction, in which the input of the current LSTM layer is the output of the preceding layer. Fig. 1 illustrates the structure of the LSTM model.

    Figure 1:LSTM structure

    Noticeably, this technique replaces the classical RNN with LSTM units. Beyond the initial layer, the hidden state of the preceding layer is passed as the input of the current layer, where $l$ denotes the layer index. Thus, the activation of layer $l$ is computed as:
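    The original expression is not shown; a conventional form of this stacked update, written under the assumption that each layer is an LSTM cell as defined above, is:

    $h_t^{(l)} = \mathrm{LSTM}\big(h_t^{(l-1)},\, h_{t-1}^{(l)},\, c_{t-1}^{(l)}\big), \qquad h_t^{(0)} = x_t$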

    This architecture follows the Google neural machine translation method and has three modules: an encoder, a decoder, and an attention network. In the PCA-LSTM technique, the encoder uses stacked LSTM layers consisting of one bi-directional LSTM layer and three uni-directional LSTM layers. In the bi-directional LSTM encoder, the information needed to paraphrase a particular word on the output side may lie anywhere on the source side. Often the source-side information is roughly left-to-right, as on the target side; however, depending on the language pair, the information for a particular output word may be distributed across, and located in, different regions of the input side. The final hidden state $h_i$ for each input unit $x_i$ is therefore the concatenation of the forward and backward hidden states. The decoder in the presented technique uses only a typical LSTM, which generates the paraphrase sequence $y = y_0, \ldots, y_L$ by computing a sequence of hidden states in which the context of the currently generated paraphrase unit is encoded from $s_{L-1}$. A typical attention process computes a relevance score $\alpha_{ti}$ for each hidden state $h_i$, which is used to compute the context vector $c_t$ as:
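    The equation itself is not reproduced; under the standard attention formulation (assumed here), the context vector is a weighted sum of the encoder hidden states:

    $c_t = \sum_{i} \alpha_{ti}\, h_i$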

    The hidden-state significance score $\alpha_{ti}$ indicates which source units are the most significant to focus on and is calculated as:
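    The standard softmax normalization of the alignment scores (assumed here) is:

    $\alpha_{ti} = \dfrac{\exp(e_{ti})}{\sum_{k} \exp(e_{tk})}$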

    where $e_{ti}$ is the alignment score, computed using a neural network $f$ as follows:
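    The original formula is not reproduced; a reconstruction consistent with the description below, in which the penalty coefficient $\beta$ scales the alignment network, is:

    $e_{ti} = \beta \cdot f\big(s_{t-1},\, h_i\big)$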

    where $f$ generally uses a $\tanh$ function with the two input parameters $s_{t-1}$ and $h_i$; by default, the value of $\beta$ is 1. If the words corresponding to the minimal value of the $\tanh$ function are assumed to play no role in creating the paraphrase, then under the typical attention mechanism nearly all words still contribute. In reality, some words play no role in the paraphrasing of the paraphrase generation (PG) problem (this can be different in neural machine translation). To address this problem, a new parameter is added to the technique. The role of $\beta$ is clearly on the $\tanh$ function, and $\beta$ is called a penalty coefficient (PC) on the alignment score. The objective of $\beta$ is to suppress the contribution of source-sequence words (those whose $\tanh$ value equals -1) to the current word in the paraphrase sequence. Since this directly modifies the attention weights, the technique is called Penalty Coefficient Attention (PCA). Thus, the previous hidden state $s_{t-1}$, the most relevant source context $c_t$, and the previously generated textual unit $y_{t-1}$ are used to compute the decoder hidden state $s_t$:
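    Written in the usual recurrent form (an assumption, since the equation is not shown), this update is:

    $s_t = g\big(s_{t-1},\, y_{t-1},\, c_t\big)$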

    where $g$ represents the GRU unit. In this encoder-decoder structure based on the PCA-LSTM approach, the generated paraphrase sequence $y = y_0, \ldots, y_L$ is computed according to the conditional distribution $P$ as:
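    The standard autoregressive factorization (assumed here, with $x$ the source sequence) is:

    $P(y_0, \ldots, y_L \mid x) = \prod_{t=0}^{L} P\big(y_t \mid y_0, \ldots, y_{t-1},\, x\big)$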

    2.3 Design of CGSO Based Hyperparameter Optimization

    To effectively adjust the hyperparameters involved in the DL-ALSTM model, the CGSO algorithm is utilized [20]. In the fundamental GSO, $n$ firefly (FF) individuals are arbitrarily distributed in a $D$-dimensional search space, and every firefly (FF) carries luciferase $l_u$. Each FF individual produces a certain amount of fluorescence, interacts with the nearby individual FFs, and has an individual decision-making range $0 < r_d^u \le r_s$. The luciferase level of an individual FF is tied to the objective function value of its position: the higher the fluorescein, the brighter the FF at its location and the better the target; conversely, a dimmer FF indicates a worse target. The radius of the decision-making range is driven by the number of individuals in the neighborhood: the lower the density of neighboring FFs, the more the FF's decision range radius is increased so that it can find further neighbors; otherwise, the decision range radius of the FF is shrunk. Finally, most of the FFs gather at a few locations. At initialization, every FF carries the same luciferase concentration $l_0$ and perception radius $r_0$.

    Fluorescein update:
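    The update rule itself is not reproduced in the text; the standard GSO luciferin update consistent with the symbols defined below (assumed here, including the decay constant $\rho$, which the text does not name) is:

    $l_u(z) = (1 - \rho)\, l_u(z-1) + \gamma\, J\big(x_u(z)\big)$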

    where $J(x_u(z))$ denotes the objective function value of FF $u$ at position $x_u(z)$ in the $z$th iteration; $l_u(z)$ denotes the current luciferase value of the FF; and $\gamma$ denotes the fluorescein update rate.

    Probability selection: the probability $p_{uv}(z)$ with which individual $u$ selects a neighbor $v$ within its neighborhood set $N_u(z)$ is then computed.
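    The selection probability is not reproduced in the text; the standard GSO form (assumed here) is:

    $p_{uv}(z) = \dfrac{l_v(z) - l_u(z)}{\sum_{w \in N_u(z)} \big(l_w(z) - l_u(z)\big)}$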

    In particular, the neighborhood set is $N_u(z) = \{v : d_{uv}(z) < r_d^u(z)\}$, where $r_d^u(z)$ denotes the radius of the FF's individual perception (decision) range. Fig. 2 demonstrates the flowchart of the GSO technique.

    Figure 2:Flowchart of GSO

    Location update:
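    The movement rule is not shown; the standard GSO location update toward the selected neighbor $v$ (assumed here) is:

    $x_u(z+1) = x_u(z) + s\, \dfrac{x_v(z) - x_u(z)}{\lVert x_v(z) - x_u(z) \rVert}$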

    where $s$ represents the moving step.

    Dynamic decision-range radius update:
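    The corresponding equation is not reproduced; the standard GSO range update (assumed here, with $\beta_r$ a constant, $n_t$ the desired neighborhood size, and $r_s$ the sensor range, none of which are named in the text) is:

    $r_d^u(z+1) = \min\Big\{ r_s,\ \max\big\{ 0,\ r_d^u(z) + \beta_r \big(n_t - |N_u(z)|\big) \big\} \Big\}$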

    The GSO technique thus comprises an initial distribution of FFs, the luciferase update, FF movement, and the decision-range update. To improve the performance of the GSO algorithm, the CGSO algorithm is derived by integrating concepts from chaos theory.

    Chaos theory is a branch of mathematics that deals with nonlinear dynamic processes. Nonlinear means that it is practically impossible to predict the system's response to an input, and dynamic means that the system changes from one state to another over time. Chaotic functions describe a dynamical system using a deterministic formula; however, depending on the initial condition, a chaotic function exhibits divergent behaviour and generates wildly unpredictable values. Thus, chaotic functions improve the diversification and intensification of optimization methods, i.e., they help avoid local optima and move the search toward the global optimum. Such a function follows simple principles and has few interacting parts, but in every iteration the generated value depends on the initial condition and the earlier values.

    In this work, three different chaotic maps are employed, namely the iterative map, the tent map, and the logistic map, in the computation of the power exponent ($p$) and sensory modality ($c$) from the BOA. These chaotic functions were determined to exhibit high efficacy compared with other chaotic functions.

    Logistic map:

    Here, $x_t$ denotes the value at iteration $t$, and $r$ represents the growth rate, which takes a value in [3.0, 4.0].

    Iterative map:

    In the iterative map, the value of $P$ is chosen between zero and one, and the result $x_t$ is a chaotic parameter that takes values in [0, 1].

    Tent map:

    The tent map is a one-dimensional map similar to the logistic map. Here, the result $x_t$ is a chaotic variable that takes values in [0, 1].
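    The map equations are not reproduced above; the sketch below implements their commonly used forms (an assumption, since the exact parameterizations are not given in the text), with r, P, and the starting value x0 chosen freely by the user.

    import math

    def logistic_map(x, r=3.9):
        """Logistic map with growth rate r in [3.0, 4.0]."""
        return r * x * (1.0 - x)

    def iterative_map(x, p=0.7):
        """Iterative chaotic map with parameter P in (0, 1)."""
        return abs(math.sin(p * math.pi / x))   # abs keeps the value in [0, 1], matching the text

    def tent_map(x):
        """Canonical tent map on [0, 1]."""
        return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

    if __name__ == "__main__":
        x = 0.37                                 # arbitrary non-fixed-point starting value
        for _ in range(5):
            x = logistic_map(x)
            print(round(x, 4))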

    3 Experimental Validation

    The proposed DL-ALSTM model has been validated using the PubMed dataset [21], which comprises instances in JSON format. The abstract, sections, and body are all sentence-tokenized. Each JSON object includes several fields: article_id, abstract_text, article_text, section_names, and sections.
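    A hedged sketch of reading such instances is shown below; it assumes one JSON object per line and a local file named pubmed_train.jsonl, both of which are assumptions about the local copy of the dataset rather than facts stated in the paper.

    import json

    def load_instances(path):
        """Yield (article_id, article_text, abstract_text) from a JSON-lines file."""
        with open(path, encoding="utf-8") as handle:
            for line in handle:
                record = json.loads(line)
                # Fields listed in the paper: article_id, abstract_text,
                # article_text, section_names, sections.
                yield record["article_id"], record["article_text"], record["abstract_text"]

    if __name__ == "__main__":
        for article_id, body_sentences, abstract_sentences in load_instances("pubmed_train.jsonl"):
            print(article_id, len(body_sentences), "body sentences")
            break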

    The accuracy graph of the LSTM model is depicted in Fig. 3. The figure shows that the LSTM model attains increased training and validation accuracies as the epoch count increases. At the same time, it is noticed that the LSTM model yields a higher validation accuracy than training accuracy.

    Fig. 4 reports the loss graph analysis of the LSTM model on the applied test dataset. The figure shows that the LSTM model accomplishes a reduced loss as the epoch count rises. It is observed that the LSTM model results in a lower validation loss than training loss.

    Figure 3:Accuracy analysis of LSTM model

    The accuracy graph of the DL-ALSTM model is demonstrated in Fig. 5. The figure shows that the DL-ALSTM technique attains improved training and validation accuracies with a higher epoch count. Besides, it is clear that the DL-ALSTM approach results in a higher validation accuracy than training accuracy.

    Fig. 6 illustrates the loss graph analysis of the DL-ALSTM model on the applied test dataset. The figure shows that the DL-ALSTM technique accomplishes a lower loss as the epoch count increases. It is also seen that the DL-ALSTM approach results in a lower validation loss than training loss.

    Fig. 7 showcases the Rouge-1 analysis of the LSTM model on the applied dataset. The figure shows that the LSTM model attains moderately considerable ROUGE-1 values. For instance, the LSTM model attains a Rouge-1 of 0.7471 under execution run-1. In addition, the LSTM model results in a Rouge-1 of 0.7354 under execution run-2. Similarly, the LSTM technique results in a Rouge-1 of 0.7470 under execution run-4. Along with that, the LSTM model results in a Rouge-1 of 0.7528 under execution run-8. Finally, the LSTM model results in a Rouge-1 of 0.7304 under execution run-10.

    Figure 4:Loss analysis of LSTM model

    Figure 5:Accuracy analysis of DL-ALSTM model

    Figure 6:Loss analysis of DL-ALSTM model

    Figure 7:Result analysis of LSTM model in terms of Rouge-1

    Fig. 8 depicts the Rouge-2 analysis of the LSTM model on the applied dataset. The figure shows that the LSTM model reaches moderately considerable ROUGE-2 values. For instance, the LSTM technique gains a Rouge-2 of 0.3449 under execution run-1. Similarly, the LSTM model results in a Rouge-2 of 0.3223 under execution run-2. Likewise, the LSTM approach results in a Rouge-2 of 0.3216 under execution run-4. In addition, the LSTM model leads to a Rouge-2 of 0.3307 under execution run-8. At last, the LSTM model results in a Rouge-2 of 0.3562 under execution run-10.

    Figure 8:Result analysis of LSTM model in terms of Rouge-2

    Tab.1 illustrates the result analysis of LSTM and DL-ALSTM models with different runs.

    Table 1: Result analysis of proposed model with varying runs

    Fig. 9 portrays the Rouge-1 analysis of the DL-ALSTM model on the applied dataset. The figure shows that the DL-ALSTM approach reaches moderately considerable ROUGE-1 values. For instance, the DL-ALSTM model reaches a Rouge-1 of 0.7692 under execution run-1. Next, the DL-ALSTM approach results in a Rouge-1 of 0.7369 under execution run-2. At the same time, the DL-ALSTM technique leads to a Rouge-1 of 0.7624 under execution run-4. Likewise, the DL-ALSTM model results in a Rouge-1 of 0.7886 under execution run-8. Eventually, the DL-ALSTM model results in a Rouge-1 of 0.7640 under execution run-10.

    Figure 9:Result analysis of DL-ALSTM model in terms of Rouge-1

    Fig. 10 displays the Rouge-2 analysis of the DL-ALSTM model on the applied dataset. The figure shows that the DL-ALSTM model attains moderately considerable ROUGE-2 values. For instance, the DL-ALSTM model attains a Rouge-2 of 0.3561 under execution run-1. Besides, the DL-ALSTM model results in a Rouge-2 of 0.3231 under execution run-2. In the meantime, the DL-ALSTM model results in a Rouge-2 of 0.3533 under execution run-4. Simultaneously, the DL-ALSTM technique results in a Rouge-2 of 0.3357 under execution run-8. Finally, the DL-ALSTM approach results in a Rouge-2 of 0.3712 under execution run-10.

    Figure 10:Result analysis of DL-ALSTM model in terms of Rouge-2

    A brief comparative results analysis of the DL-ALSTM model with recent approaches is given in Tab. 2. Fig. 11 investigates the Rouge-1 analysis of the DL-ALSTM model against existing techniques. The figure shows that the Bayesian BS, BERT-Base, and SUMMA approaches obtain reduced Rouge-1 values of 0.7288, 0.7257, and 0.7098 respectively. At the same time, the BioBERT-pubmed, CIBS, and BioBERT-pmc techniques obtain slightly increased Rouge-1 values of 0.7376, 0.7345, and 0.7309 respectively. Moreover, the LSTM and BERT-Large techniques result in reasonable Rouge-1 values of 0.7528 and 0.7504 respectively. However, the proposed DL-ALSTM technique accomplishes superior performance with the maximum Rouge-1 of 0.7886.

    Table 2: Comparative analysis of DL-ALSTM model with existing approaches

    Figure 11: Comparative analysis of DL-ALSTM model in terms of Rouge-1

    Fig. 12 examines the Rouge-2 analysis of the DL-ALSTM model against existing algorithms. The figure shows that the Bayesian BS, BERT-Base, and SUMMA methods reach minimal Rouge-2 values of 0.3143, 0.3110, and 0.3022 respectively. Likewise, the BioBERT-pubmed, CIBS, and BioBERT-pmc algorithms gain slightly enhanced Rouge-2 values of 0.3203, 0.3187, and 0.3164 correspondingly. Moreover, the LSTM and BERT-Large techniques result in reasonable Rouge-2 values of 0.3562 and 0.3312 correspondingly. However, the presented DL-ALSTM technique accomplishes higher efficiency with the maximal Rouge-2 of 0.3712.

    Figure 12: Comparative analysis of DL-ALSTM model in terms of Rouge-2

    From the tables and figures discussed above, it is apparent that the DL-ALSTM technique is an effective tool for the biomedical text summarization process.

    4 Conclusion

    In this study, an effective DL-ALSTM model has been presented for multi-document biomedical text summarization. The proposed DL-ALSTM model initially performs data preprocessing to convert the available medical data into a format compatible with further processing. The DL-ALSTM model is then executed to summarize the contents of the multiple biomedical documents. To tune the summarization performance of the DL-ALSTM model, the CGSO algorithm is employed. An extensive experimental analysis is performed to demonstrate the effectiveness of the DL-ALSTM model, and the results are investigated using the PubMed dataset. A comprehensive comparative result analysis showcases the efficiency of the proposed DL-ALSTM model against recently presented models. In the future, the performance of the DL-ALSTM model can be improved by the use of advanced hybrid metaheuristic optimization techniques.

    Acknowledgement: The authors would like to acknowledge the support of Prince Sultan University for paying the Article Processing Charges (APC) of this publication.

    Funding Statement: This work is funded by the Deanship of Scientific Research at King Khalid University under Grant Number (RGP 1/279/42), www.kku.edu.sa.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
