
MEIM: A Multi-Source Software Knowledge Entity Extraction Integration Model

Computers, Materials & Continua, 2021, Issue 1

Wuqian Lv, Zhifang Liao*, Shengzong Liu and Yan Zhang

1 School of Computer Science and Engineering, Central South University, Changsha, 410075, China

2 School of Information Technology and Management, Hunan University of Finance and Economics, Changsha, 410205, China

3 School of Computing, Engineering and Built Environment, Glasgow Caledonian University, Glasgow, G4 0BA, UK

Abstract: Entity recognition and extraction are the foundations of knowledge graph construction. Entity data in the field of software engineering come from different platforms and communities, and have different formats. This paper divides multi-source software knowledge entities into unstructured data, semi-structured data and code data. For these different types of data, Bi-directional Long Short-Term Memory (Bi-LSTM) with Conditional Random Field (CRF), template matching, and abstract syntax trees are used and integrated into a multi-source software knowledge entity extraction integration model (MEIM) to extract software entities. The model can be updated continuously based on users' feedback to improve its accuracy. To deal with the shortage of entity annotation datasets, keyword extraction methods based on Term Frequency-Inverse Document Frequency (TF-IDF), TextRank, and K-Means are applied to the annotation task. The proposed MEIM model is applied to the Spring Boot framework, which demonstrates good adaptability. The extracted entities are used to construct a knowledge graph, which is applied to association retrieval and association visualization.

Keywords: Entity extraction; software knowledge graph; software data

    1 Introduction

In the construction of knowledge graphs, knowledge entity extraction is a fundamental step. The quantity and accuracy of knowledge entities have an important impact on subsequent steps such as relationship establishment, knowledge fusion and knowledge graph application. For example, if the number of entities is too small, the entity relationships will be limited and the results obtained from knowledge graph retrieval could be ineffective. Designing a method to extract knowledge entities reasonably and effectively is therefore important for knowledge graph construction.

Traditional knowledge entity extraction mainly targets unstructured data and uses deep learning models to extract knowledge entities. This type of method has been widely used in the medical and journalism fields. In the medical field, researchers identify and extract medical entities in electronic medical records and use them to construct medical knowledge graphs. In the journalism field, researchers usually focus on extracting entities in three categories: Person, Location, and Organization.

In the field of software engineering, software is composed of code and documents, which exist as many different types of data, so the data sources for extraction are diverse, including source code, eXtensible Markup Language (XML) files, JavaScript Object Notation (JSON) files, Question and Answer (Q&A) records, Version Control System records, etc. They come from different software communities, such as GitHub, StackOverflow, SourceForge, etc. These data sources contain not only unstructured data, but also a lot of semi-structured data and code data.

At present, the majority of the research on software knowledge entity extraction is aimed at a single type of data. To build a complete and meaningful software knowledge graph, software knowledge entities should be extracted from multiple data sources. This paper proposes MEIM, a multi-source software knowledge entity extraction integration model, which implements entity extraction for data input in different formats.

The main contributions presented in this paper include: 1) The definition of entity categories in the field of software engineering. 2) A method combining TF-IDF, TextRank and K-Means is proposed for software entity data annotation, which can efficiently produce a large annotated dataset. 3) Rules of template matching are defined for semi-structured data, and analysis tools are used to extract entities. 4) The extraction methods for the different types of data are integrated into a software entity extraction integration model, which is able to improve its accuracy incrementally.

    2 Related Work

Current research in the fields of knowledge graphs and entity extraction mainly focuses on the following aspects:

Zhao et al. [1] proposed the Harvesting Domain Specific Knowledge Graph (HDSKG) framework to discover domain-specific concepts and their relation triples from the content of webpages. They combined a dependency parser with a rule-based method to chunk the relation triple candidates. Then advanced features of these candidate relation triples were extracted to estimate domain relevance with a machine learning algorithm.

Guo [2] optimized the HDSKG framework and proposed a strategy for the extraction of Wiki pages in the field of software engineering. In this work, web page titles were used to construct domain dictionaries, then rules based on entity conceptual features were designed for the field of software engineering. Finally, the constructed domain dictionaries were used to improve the accuracy of subsequent entity recognition. Researchers [3-5] use different methods to extract entities on the web. In the field of open source software, Liao et al. [6,7] expanded the scope of software knowledge to the open source software domain and proposed recommendation and prediction methods for social networks and ecosystems.

Ye et al. [8] analyzed the challenges of entity recognition in software engineering, and proposed a machine-learning-based method for the social content of software engineering. They combined labeled data, unlabeled data and other social content data of a Q&A website to train the model, which can be applied to various software entities in different popular programming languages and platforms.

Hang et al. [9] proposed the DeepLink framework to realize link recovery between Issues and Commits in GitHub. A code knowledge graph was constructed and the textual semantic information of Issues and Commits was combined to complete the link recovery.

Xiao et al. [10] applied knowledge graphs to the field of software security. They integrated heterogeneous software security concepts and examples from different databases into a knowledge graph, and then developed a knowledge graph embedding method which embeds symbolic relational and descriptive information of software security entities into a continuous vector space. The generated results can be used to predict software security entity relationships. Knowledge graphs have also been used to find defects in software [11]. Chen et al. [12] proposed a method combining recurrent neural networks with a dependency parser to extract error entities and their relationships from error reports.

Lin et al. [13] built an intelligent development environment based on a software knowledge graph and realized software text semantic search. Based on the work of Lin, Wang et al. [14] proposed a method to convert natural language questions into structured Cypher queries. These queries can be run in the graph database Neo4j to return the corresponding answers. Ding et al. [15] identify the primary studies on knowledge-based approaches in software documentation.

    3 Methodology

In order to solve the problem of multi-source software knowledge entity extraction, an integrated extraction model is designed in this paper. The model integrates data classification, source code data extraction, semi-structured data extraction and unstructured data extraction. In this section, the framework of the proposed multi-source software knowledge entity extraction integration model and the specific implementation of each functional module are introduced.

    3.1 Framework for Integrated Models

The framework of the multi-source software knowledge entity extraction integration model (MEIM) is presented in Fig. 1.

Figure 1: Framework of the multi-source software knowledge entity extraction integration model

The first part is the input of multi-source software knowledge. Open source software usually consists of code and documentation, which exist in a large number of software platforms and communities, such as GitHub, StackOverflow, SourceForge, etc.

The second part is the entity extraction module for the various types of data, which includes three extraction sub-modules for unstructured data, semi-structured data and source code data. For unstructured data, characteristics of the words are used to extract keywords from the data, then these keywords are manually labelled. After that, the Bi-LSTM + CRF method is used to train a model. Finally, the model is used to extract entities from the unstructured data. For semi-structured data, template matching and parsing tools are applied to mine data patterns and extract entities. For source code data, the code is parsed into an abstract syntax tree and the tree is traversed to obtain the code entities.

    3.2 Unstructured Data Extraction

    3.2.1 Dataset Processing

At present, there is no good open source entity dataset for sequence labeling in the field of software engineering, so an entity dataset is constructed. We crawled 20,000 posts from the well-known IT technology question and answer site StackOverflow, and selected 500 posts for dataset construction. Based on the TF-IDF, TextRank, and K-Means methods, more than 3,000 keywords are extracted from them, and these keywords are manually labelled using the BIO labeling method. Finally, the keyword set is used to annotate the original dataset.

Text preprocessing: Firstly, all text is converted to lowercase and the Natural Language Toolkit (NLTK) is used to label part-of-speech (POS) tags. The tokenized text (mainly the nouns and adjectives) is normalized by a lemmatization tool. For example, "classes" is replaced by "class." We used the English stopwords list provided by Ranks NL to remove the stop words in the text. Then, based on the positions of the stop words, the text was segmented to generate words and phrases. The phrases are also candidates for keywords to be extracted.

TF-IDF: TF-IDF is a classic algorithm based on word frequency statistics. TF (Term Frequency) refers to the frequency with which a given word appears in the current text. IDF (Inverse Document Frequency) is related to the number of texts containing a given word: the smaller this number, the greater the IDF value. The basic idea of TF-IDF is that the importance of a word increases proportionally with the number of times it appears in the text, but decreases with the frequency of its appearance in the text library. For example, in a piece of text, the word "the" may appear frequently, but it appears frequently in all texts, so its IDF value is low and the final TF-IDF score is also low. All posts are used to build a text library to improve the accuracy of the IDF value, and the score of each word is calculated according to Eqs. (1)-(3).

S(Wi) = TF(Wi) × IDF(Wi)  (1)

TF(Wi) = C(Wi) / n  (2)

IDF(Wi) = log(m / H(Wi))  (3)

where S(Wi) represents the TF-IDF score of the i-th word, TF(Wi) represents the TF score of the i-th word, IDF(Wi) represents the IDF score of the i-th word, C(Wi) represents the number of times the i-th word appears in a text, n represents the total number of words in this text, H(Wi) represents the number of texts in the text library where the i-th word appears, and m represents the total number of texts in the text library.

    Then words are sorted by TF-IDF score in descending order to obtain the keyword set for each post.
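The scoring and ranking above can be sketched in a few lines of Python. This is a minimal illustration, not MEIM's implementation; the function name and the toy corpus are ours:

```python
import math
from collections import Counter

def tfidf_scores(doc_tokens, corpus):
    """Rank the words of one tokenized post against a corpus of posts."""
    n = len(doc_tokens)        # total words in this text
    m = len(corpus)            # total texts in the library
    counts = Counter(doc_tokens)
    scores = {}
    for word, c in counts.items():
        tf = c / n                                    # TF(Wi) = C(Wi) / n
        h = sum(1 for d in corpus if word in d)       # texts containing Wi
        idf = math.log(m / h)                         # rarer word -> larger IDF
        scores[word] = tf * idf                       # S(Wi) = TF * IDF
    # keywords sorted by TF-IDF score in descending order
    return sorted(scores, key=scores.get, reverse=True)
```

A word that appears in only one post of the library outranks equally frequent words that appear everywhere, which is exactly the behavior described for "the" above.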

TextRank: TextRank is a graph-based model built on the idea of the PageRank algorithm. The basic idea is: if a word is linked by a large number of words or by a highly ranked word, the word is more important. Therefore, we build a graph in which each word serves as a node, and edges are constructed between related words.

Firstly, a dictionary is built for the text, and all words in the dictionary become nodes in the graph. Then edges are generated in the graph using a window which slides from the beginning of the original text to the end. Weighted edges are generated between words in a window, with the weight determined by the distance between them. Since the relationship between words is mutual, we construct undirected edges. A two-dimensional array is used to store the weights of the edges between all vocabulary items. For each occurrence of two related words, the weight of the corresponding edge increases according to Eq. (4).

W = W + 1 / |I1 - I2|  (4)

where W represents the weight, I1 represents the absolute position of the first word, and I2 represents the absolute position of the second word.

After each calculation, the absolute positions of the two words are stored in a set to avoid repeated calculations in the same window. After traversing the original text, a word graph is constructed, and the score of each word node is calculated based on the word graph.

Initializing the scores of all word nodes to 1, the score of each word node is iteratively updated according to Eq. (5).

S(Vi) = (1 - α) + α · Σ_{Vj ∈ E(Vi)} (Wij / W(Vj)) · S(Vj)  (5)

where S(Vi) represents the score of the i-th node, α represents the damping factor, E(Vi) represents the set of nodes connected to the i-th word node, Wij represents the weight of the edge between nodes i and j, and W(Vj) represents the sum of the weights of all edges of the j-th word node.

The scores of the word nodes are calculated iteratively until they converge to the given threshold or the preset number of iterations is reached. Word nodes are sorted by score in descending order to obtain the keyword set.
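The graph construction and iterative update described above can be sketched as follows. This is our own illustrative sketch, assuming an inverse-distance edge weight as in Eq. (4); all names and defaults are ours:

```python
def textrank(tokens, window=3, alpha=0.85, iters=50, tol=1e-6):
    """Rank words via a weighted undirected co-occurrence graph."""
    # edge weight grows inversely with word distance inside the window
    weights = {}
    for i, a in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            b = tokens[j]
            if a != b:
                key = tuple(sorted((a, b)))
                weights[key] = weights.get(key, 0.0) + 1.0 / (j - i)
    neighbors = {}
    for (a, b), w in weights.items():
        neighbors.setdefault(a, {})[b] = w
        neighbors.setdefault(b, {})[a] = w
    # iterate S(Vi) = (1 - alpha) + alpha * sum_j (w_ij / W(Vj)) * S(Vj)
    score = {v: 1.0 for v in neighbors}
    for _ in range(iters):
        new = {v: (1 - alpha) + alpha * sum(
                   w / sum(neighbors[u].values()) * score[u]
                   for u, w in neighbors[v].items())
               for v in neighbors}
        if max(abs(new[v] - score[v]) for v in score) < tol:
            return sorted(new, key=new.get, reverse=True)
        score = new
    return sorted(score, key=score.get, reverse=True)
```

A word that co-occurs with many other words, or sits next to highly scored words, accumulates score and rises to the top of the ranking.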

K-Means: K-Means is a clustering algorithm. Given K starting cluster center points, the algorithm calculates the Euclidean distance from each point to every center point, assigning each point to the cluster with the nearest center. Then the algorithm recalculates the center point of each cluster, repeating the above steps until convergence.

The Word2vec tool is used to load Google's open source English pre-trained word vectors to convert all words into word vectors. To use the K-Means clustering algorithm, the K value needs to be set, and the determination of the K value depends on the Calinski-Harabasz score (Eqs. (6)-(8)).

S(K) = (tr(BK) / tr(WK)) × ((N - K) / (K - 1))  (6)

BK = Σq nq (cq - cE)(cq - cE)^T  (7)

WK = Σq Σ_{x ∈ Pq} (x - cq)(x - cq)^T  (8)

where S(K) represents the score for K clusters, tr(x) takes the sum of the diagonal elements of a matrix, N is the number of all sample points, BK is the between-class dispersion matrix, WK is the within-class dispersion matrix, nq is the total number of sample points of class q, cq is the center point of class q, cE is the center point of all sample points, and Pq is the set of all sample points of class q.

Some of the words closest to each center point are used to form a keyword set.
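The Calinski-Harabasz score only needs the traces of the two dispersion matrices, so it can be computed directly from the symbol definitions above. A minimal sketch (our own, pure Python):

```python
def calinski_harabasz(points, labels):
    """S(K): ratio of between-class to within-class dispersion (Eqs. (6)-(8))."""
    classes = sorted(set(labels))
    K, N = len(classes), len(points)
    dims = len(points[0])
    mean = lambda ps: [sum(p[d] for p in ps) / len(ps) for d in range(dims)]
    c_all = mean(points)                 # cE: center of all sample points
    tr_b = tr_w = 0.0
    for q in classes:
        P_q = [p for p, lab in zip(points, labels) if lab == q]
        c_q = mean(P_q)                  # cq: center of class q
        # tr(BK): weighted squared distance of each class center to cE
        tr_b += len(P_q) * sum((a - b) ** 2 for a, b in zip(c_q, c_all))
        # tr(WK): squared distance of each sample to its class center
        tr_w += sum((a - b) ** 2 for p in P_q for a, b in zip(p, c_q))
    return (tr_b / tr_w) * (N - K) / (K - 1)
```

Tight, well-separated clusters give a large score, which is why the K with the highest score is chosen in Section 4.1.3.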

Generation of the keyword set: Based on the experimental results, the top-ranked keywords are extracted from the three keyword sets at a ratio of 3:2:1. After deduplication, the final keyword set to be annotated is obtained.
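The 3:2:1 merge with deduplication might look like the following sketch; the helper name and the `total` parameter are ours, and the exact cut-off logic in MEIM may differ:

```python
def merge_keywords(tfidf_kw, textrank_kw, kmeans_kw, total=600):
    """Take top-ranked keywords at a 3:2:1 ratio, then deduplicate in rank order."""
    unit = total // 6
    # 3 parts TF-IDF, 2 parts TextRank, 1 part K-Means
    pool = tfidf_kw[:3 * unit] + textrank_kw[:2 * unit] + kmeans_kw[:unit]
    seen, merged = set(), []
    for w in pool:
        if w not in seen:       # keep the first (highest-ranked) occurrence
            seen.add(w)
            merged.append(w)
    return merged
```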

Entity category definition: We collected developers' suggestions for the classification of software entities, and finally determined several categories of software entities, as shown in Tab. 1. They are File, Programming Language, Application Programming Interface, External Tools and Dependencies, and Standard.

Table 1: Software entity category

Annotation of the keyword set: Five developers with rich development experience annotated the keyword set according to the entity category table in BIO form. The BIO annotation method labels each element as "B-X," "I-X," or "O." "B-X" means that the word belongs to type X and is located at the beginning of a phrase. "I-X" means that the word belongs to type X but does not appear at the beginning of a phrase. "O" means it does not belong to any type. Fig. 2 shows an example of BIO annotation. After that, the keyword set is used to annotate the original dataset.
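Annotating a token sequence from a typed keyword set can be illustrated with a small sketch. This is our own illustration, not MEIM's annotator; the short category codes ("ET", "PL") are stand-ins for the full category names in Tab. 1:

```python
def bio_annotate(tokens, keyword_types):
    """Tag tokens with B-X / I-X / O using a dict of typed keyword phrases."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        # try the longest phrase first so multi-word keywords win
        for phrase, etype in sorted(keyword_types.items(),
                                    key=lambda kv: -len(kv[0].split())):
            words = phrase.split()
            if tokens[i:i + len(words)] == words:
                tags[i] = "B-" + etype            # phrase start
                for j in range(1, len(words)):
                    tags[i + j] = "I-" + etype    # phrase continuation
                i += len(words)
                matched = True
                break
        if not matched:
            i += 1                                # not a keyword: stays "O"
    return tags
```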

    3.2.2 Model Training

After the sequence labeling, we preprocess the data, obtain word vectors for each word and POS tag, and use these two kinds of vectors as input to the model. The core of the model is Bi-LSTM + CRF, and the output is a sequence annotation for each word.

Figure 2: An example of BIO annotation

Data preprocessing: Firstly, NLTK is applied to the dataset for POS tagging and to filter out sentences without nouns. Then we divide the dataset into a training set and a test set, where the test set is accurately annotated manually and accounts for 15% of the data. The sentences in the training set are used to create the word and POS tag vectors.

Bi-LSTM-CRF: Bi-LSTM-CRF is currently one of the most widely used sequence labeling models. For sequence labeling, it is effective to consider the contextual content of each word and the legal order of the tag sequence. Bi-LSTM can add context features to the training process, and CRF can output globally optimal sequences. Their combination can complete the task of sequence labeling effectively. In this paper, the structure of the Bi-LSTM-CRF model includes three layers, as shown in Fig. 3.

Figure 3: Bi-LSTM + CRF model structure

The first layer is the Embedding Layer, which takes the word and POS tag vectors as input and applies Dropout to the two categories of word vectors to prevent overfitting.

The second layer is composed of a Forward LSTM Layer and a Backward LSTM Layer, and Dropout is also added. The Forward LSTM adds information before the word, and the Backward LSTM adds information after the word. In this way, contextual information can be used with word order and meaning combined. The LSTM calculates the current output value ht by combining the cell state Ct-1, the previous output value ht-1 and the current input xt. The sequence calculated by the Forward LSTM and the sequence calculated by the Backward LSTM are concatenated and output to the LSTM Output Layer. The output of this layer is a score for each word corresponding to the various sequence labels.

The third layer is the CRF Layer, which adds constraints to the final predicted labels to ensure that the predicted label sequence is legal. These constraints are learned from the training set through the CRF layer. For example, each sentence must start with "B-label" or "O," and "O" cannot be followed by "I-label," etc. Each label is used as a node to construct a linear-chain CRF, and a two-dimensional matrix is used to store the transition score from one label to another, determining the output sequence.
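The CRF layer's decoding step, which picks the globally optimal label sequence from per-word scores plus the label-to-label transition matrix, can be illustrated with a plain Viterbi sketch. The scores below are toy values, not learned parameters; forbidden moves such as "O" followed by "I" simply get a large negative transition score:

```python
def viterbi(emissions, transitions, labels):
    """Return the highest-scoring label path given per-position label scores
    (emissions) and pairwise transition scores (the CRF layer's job)."""
    # best[i][y]: best total score of any path ending at position i with label y
    best = [dict(emissions[0])]
    back = []
    for em in emissions[1:]:
        prev, cur, ptr = best[-1], {}, {}
        for y in labels:
            score, arg = max(
                (prev[x] + transitions[(x, y)] + em[y], x) for x in labels)
            cur[y], ptr[y] = score, arg
        best.append(cur)
        back.append(ptr)
    # trace back the best path from the best final label
    y = max(best[-1], key=best[-1].get)
    path = [y]
    for ptr in reversed(back):
        y = ptr[y]
        path.append(y)
    return path[::-1]
```

Even when a word's own score favors an illegal label locally, the transition penalty steers the decoder to a legal sequence, which is exactly the constraint behavior described above.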

    3.3 Semi-Structured Data Extraction

Semi-structured data has a certain structure. A large amount of semi-structured data exists in open source software, such as operation manuals in HyperText Markup Language (HTML) format, configuration files in XML format, data storage files in JSON format, etc. We take HTML, XML and JSON data as examples to introduce the extraction methods for semi-structured data in the integrated model.

    3.3.1 HTML

HTML files in open source software include operation manuals, user guides and static front pages. In our experience, operation manuals and user guides contain more valuable entities than static front pages. Therefore, our extraction from HTML focuses on these two types of documents.

An HTML parser and the template matching method are used to extract entities from HTML. The templates in Tab. 2 are used to extract some software entities in HTML.

Table 2: Extraction templates

BeautifulSoup is used to parse HTML files. 1) All <a> tags are found through BeautifulSoup, using the text of the <a> tag as an entity and the link address as an attribute of the entity. 2) The <code> tags with the data-lang attribute and their contents are found. Then all HTML tags are cleared and the code is spliced together. After that, the data-lang attribute value and the spliced code text are input to the source code data entity extraction module. 3) All the code already used in the HTML file and all HTML tags are cleared. The remaining text is input to the unstructured data entity extraction module.
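The first two steps can be sketched with the standard library's html.parser standing in for BeautifulSoup. The class name, the flat-HTML assumption (no tags nested inside the extracted elements) and the attribute handling are ours:

```python
from html.parser import HTMLParser

class EntityExtractor(HTMLParser):
    """Collect <a> text/link pairs and data-lang code blocks from flat HTML."""
    def __init__(self):
        super().__init__()
        self.links, self.code_blocks = [], []
        self._href = self._lang = None
        self._buf = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self._href, self._buf = attrs.get("href"), []
        elif "data-lang" in attrs:
            self._lang, self._buf = attrs["data-lang"], []

    def handle_data(self, data):
        self._buf.append(data)

    def handle_endtag(self, tag):
        text = "".join(self._buf).strip()
        if tag == "a" and self._href is not None:
            self.links.append((text, self._href))        # entity + link attribute
            self._href = None
        elif self._lang is not None:
            self.code_blocks.append((self._lang, text))  # to the code module
            self._lang = None
```

The collected `code_blocks` pairs (language, spliced code) are what step 2 hands to the source code extraction module.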

The user guide of the famous development framework Spring Boot is taken as an example to explain our entity extraction method. As shown in Fig. 4, this is a part of the user guide and its corresponding HTML code.

The path "src/main/java/com/example/springboot/HelloController.java" is recognized by regular expressions, and we extract "HelloController.java" as a File entity. Regular expressions can also identify entities such as "@RestController" and "@RequestMapping" in web pages.

Figure 4: User guide and its HTML code

For the code block, its HTML code is "……". We locate it through the tag and the data-lang attribute in the HTML file, then input it to the code data extraction module. The remaining text is input to the unstructured data extraction module.

    3.3.2 XML

In software, most configurations are implemented by static XML configuration files. Therefore, a software project usually contains many XML files, which contain software entities.

XML files are mainly parsed by the Python DOM. For each node, the entity library is searched based on the node name (the entity library consists of labeled data and manually confirmed data). If the category of the corresponding node can be found, the node is extracted as an entity of this category, and the content of the node is extracted as its attribute. If the category of the corresponding node cannot be found, the node is extracted as an "Other" entity, and the content of the node is extracted as its attribute. For example, the node "<modelVersion>4.0.0</modelVersion>" and its content are obtained through the parser. "modelVersion" is used as a keyword to search the entity library. If the corresponding category can be found, "modelVersion" is extracted as an entity of this category, and "4.0.0" is used as an attribute of the entity. If there is no corresponding result, "modelVersion" is extracted as an "Other" entity.
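The node-name lookup can be sketched with Python's built-in DOM parser. The function name and the tiny entity library below are illustrative, not MEIM's actual library:

```python
import xml.dom.minidom as minidom

def extract_xml_entities(xml_text, entity_library):
    """Classify each element by node name; unknown names become 'Other'."""
    entities = []
    def visit(node):
        if node.nodeType == node.ELEMENT_NODE:
            # the node's direct text content becomes the entity attribute
            content = "".join(c.data for c in node.childNodes
                              if c.nodeType == c.TEXT_NODE).strip()
            category = entity_library.get(node.nodeName, "Other")
            entities.append((node.nodeName, category, content))
            for child in node.childNodes:
                visit(child)
    visit(minidom.parseString(xml_text).documentElement)
    return entities
```

The "Other" entities returned here are exactly the ones the user is later asked to classify manually, which feeds the entity library expansion described next.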

After parsing an XML file, the user can choose whether to classify the extracted "Other" entities. Through this step, the user can manually assign these entities to the corresponding categories. These entities will also be used to expand the entity library and further improve the accuracy of the extraction model.

    3.3.3 JSON

In software, developers commonly use JSON files to store content and deliver messages. JSON can be seen as a lightweight counterpart of XML, and the two can be mutually converted to some extent. Therefore, the same method as for XML is used for JSON entity extraction.

    3.4 Source Code Data Extraction

Source code is a data source unique to software. It has a fixed format and a large number of software entities, and it is an important data source in the field of software engineering. Taking Java code as an example, we implemented a module for automatically extracting source code entities. The QDox open source plug-in is used to parse the code. For the input source code file or folder, QDox iterates automatically and stores the obtained content in a JavaProjectBuilder object to form a tree structure. Using the JavaProjectBuilder object, we can get source files, packages, classes, methods, parameters, etc.
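Since QDox targets Java, a compact analogy using Python's own ast module shows the same traverse-and-collect pattern over a syntax tree; the entity labels are ours and do not reproduce QDox's API:

```python
import ast

def extract_code_entities(source):
    """Walk a parsed syntax tree and collect class/method/parameter/import
    entities (Python's ast stands in here for what QDox does on Java)."""
    entities = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            entities.append(("Class", node.name))
        elif isinstance(node, ast.FunctionDef):
            entities.append(("Method", node.name))
            for arg in node.args.args:
                entities.append(("Parameter", arg.arg))
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            for alias in node.names:
                entities.append(("Import", alias.name))
    return entities
```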

The extraction results for entities and their attributes are shown in Tab. 3.

Table 3: Source code entity and attribute

    3.5 Module Integration

All input files are classified according to their suffixes and parsed by the corresponding modules to extract all software entities in the files. Users can input the software-related files that they want to extract through the user interface, following the prompts. During the extraction process, users can optionally participate in editing and manually classify some "Other" type entities. The entity library is continuously updated with these manual confirmations, and the annotation of the source dataset can be updated by calling the new entity library at intervals to improve the accuracy of the unstructured entity extraction model.
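The suffix-based routing can be sketched as a small lookup table; the module names and the suffix mapping are illustrative, not MEIM's exact configuration:

```python
def dispatch(filename):
    """Route a file to an extraction sub-module by its suffix."""
    suffix = filename.rsplit(".", 1)[-1].lower()
    modules = {
        "html": "semi_structured", "xml": "semi_structured",
        "json": "semi_structured", "java": "source_code",
    }
    # anything unrecognized falls through to the unstructured-text module
    return modules.get(suffix, "unstructured")
```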

    3.6 Construction and Application of Knowledge Graph

In order to show the application value of MEIM, a knowledge graph is constructed using the extracted entities. "from" relations are established between all extracted entities and their source file entities, and "related to" relations are established between entities of the same name from different source files. Among the source code entities, detailed relations are established according to the relationships in the code, which are {extend, implement, have_field, declare_exception, have_parameter, have_method, import}.
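A minimal in-memory sketch of the "from" and "related to" relation construction; representing each entity as a (name, source_file) pair is our choice, not the Neo4j schema used in the paper:

```python
def build_graph(extracted):
    """Build (entity, relation, target) edges from (name, source_file) pairs."""
    edges = []
    for name, source_file in extracted:
        # every entity gets a "from" relation to its source file
        edges.append(((name, source_file), "from", source_file))
    for i, (a, fa) in enumerate(extracted):
        for b, fb in extracted[i + 1:]:
            if a == b and fa != fb:
                # same-named entities from different sources are related
                edges.append(((a, fa), "related_to", (b, fb)))
    return edges
```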

The established knowledge graph enables fast retrieval and association visualization.

    4 Experiment

    4.1 Keyword Extraction

    4.1.1 Comparing the Keyword Extraction Results from the Three Methods

In Fig. 5, the first paragraph includes keywords extracted using the TF-IDF algorithm. This method is good at extracting important vocabulary from text. Overall, it performs best among the three methods.

Figure 5: Keyword extraction results

The second paragraph presents a part of the keywords extracted using the TextRank algorithm. The extracted keywords are slightly worse than those from TF-IDF, and their importance is not as high.

The third part shows a part of the keywords extracted using the K-Means algorithm. Compared with the other two methods, the extracted keywords perform poorly in terms of vocabulary importance: it extracts some less distinctive words like "software" and "project." However, the K-Means algorithm covers more types of words and includes some words ignored by the other two methods, which focus only on importance. This makes the keyword dictionary more complete.

In the end, the TF-IDF algorithm is used to extract 6,000 keywords, the TextRank algorithm more than 4,000 keywords, and the K-Means clustering algorithm 2,000 keywords. After deduplication, more than 6,000 keywords remain. We manually filter and annotate these keywords to create the annotated keyword set.

    4.1.2 Comparison of Keyword Annotation Data and Manual Annotation Data

Fig. 6 shows the results of the keyword annotation (in red) and the manual annotation (in blue). The keyword annotation is quite accurate: it differs from the manual annotation only in "WebSocket handshake" and "wss endpoint," and both methods assign the same categories. We examined a part of the dataset and observed that the performance of keyword annotation is reliable and can meet the data requirements. In this way, we reduce repetitive labor and obtain a sequence labeling dataset for the field of software engineering.

Figure 6: Annotation result

    4.1.3 K Value Selection

In Fig. 7, the distribution of the clustering results of the word vectors (when K = 2 and K = 3) is displayed, with the vocabulary reduced to two dimensions. Fig. 8 illustrates how the Calinski-Harabasz score changes with the K value; the highest value is achieved when K = 2, so the final K value is 2.

Figure 7: Distribution of word vectors and clustering results when K = 2, 3

Figure 8: Calinski-Harabasz score curve

    4.2 Application of Integrated Model in Spring Boot

    4.2.1 Entity Extraction Results

We collected code and documents related to the Spring Boot framework from GitHub, StackOverflow and the Spring Boot official website and used the proposed model to extract entities. We crawled Posts from StackOverflow, Issues from GitHub, and documents from the Spring Boot official website: 15 documents in HTML format from the official website, files in XML or JSON format from the GitHub repository, and the framework source code from GitHub. The quantities and types of the extracted entities are shown in Tab. 4.

Table 4: Entity extraction result

    4.2.2 Manually Confirm the Entity to Expand the Entity Library

The entities extracted from the semi-structured data are classified according to the entity library, but some entities cannot be matched using the library and are labeled as the "Other" type. A manual classification method is provided for these entities, as shown in Fig. 9. Users can selectively annotate some entities, and these entities will be added to the entity library. The expanded library can be used to improve the accuracy of the unstructured data entity extraction model and of semi-structured data entity classification.

Figure 9: Manual classification UI

    4.2.3 Construction and Application of Knowledge Graph

    Fig.10 shows the construction result of the knowledge graph using the extracted entity set.

This knowledge graph is applied to related knowledge retrieval and association visualization. As shown in Fig. 11, the Neo4j graph database query language Cypher is used to retrieve the keyword "additionalProperties." After expanding the associations, we can find various entities and relations related to "additionalProperties."

Figure 10: The result of knowledge graph construction

Figure 11: The result of knowledge retrieval and association visualization

The results demonstrate that "additionalProperties" appears in unstructured data, semi-structured data, and source code data. Through the association relations, users can find that there are a variable, a field and a method named "additionalProperties" in the source code. This method belongs to the class "BuildInfoDslIntegrationTests," which also contains methods such as "warWithCustomName," "buildforProperties," etc. "additionalProperties" is also related to a Post on StackOverflow. Users can get the relevant information of this Post through association visualization, and learn about the questions and answers related to "additionalProperties."

Through the application of the knowledge graph, developers and users can easily obtain the associated information of "additionalProperties" in different data sources and visualize knowledge relations. This provides great help for learning software development and for the iterative updating of software.

    5 Conclusion and Future Work

This paper proposes MEIM, a multi-source software knowledge entity extraction integration model. In view of the shortage of entity annotation datasets in the field of software engineering, we define the categories of entities for software, and propose a keyword extraction method based on the TF-IDF, TextRank and K-Means methods for the annotation task, which quickly expands the scale of the labeled training set. Multi-source software knowledge entities are divided into unstructured data, semi-structured data, and code data. For these different types of data, software entities are extracted using Bi-LSTM + CRF, template matching and abstract syntax trees. These methods are integrated into a single model, MEIM. The accuracy of the model can be improved iteratively based on users' feedback. Our integrated model has been tested on the Spring Boot framework. A knowledge graph is constructed from the extracted entities, which is used for related knowledge retrieval and association visualization.

In the future, we expect to further optimize the results of entity extraction, add entity alignment algorithms and merge instances of the same entity. At the same time, we will try to pre-extract the relations between entities to improve the accuracy of the later steps of knowledge graph construction.

Acknowledgement: The works described in this paper are supported by the Ministry of Science and Technology: Key Research and Development Project (2018YFB003800), Hunan Provincial Key Laboratory of Finance & Economics Big Data Science and Technology (Hunan University of Finance and Economics) 2017TP1025 and HNNSF 2018JJ2535. We are also grateful to corresponding author Shengzong Liu and his project NSF 61802120.

Funding Statement: Zhifang Liao: Ministry of Science and Technology: Key Research and Development Project (2018YFB003800), Hunan Provincial Key Laboratory of Finance & Economics Big Data Science and Technology (Hunan University of Finance and Economics) 2017TP1025, HNNSF 2018JJ2535. Shengzong Liu: NSF 61802120.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
