
    Knowledge Graph and Knowledge Reasoning: A Systematic Review


    Ling Tian | Xue Zhou | Yan-Ping Wu | Wang-Tao Zhou | Jin-Hao Zhang | Tian-Shu Zhang

    Abstract—The knowledge graph (KG) that represents structural relations among entities has become an increasingly important research field for knowledge-driven artificial intelligence. In this survey, a comprehensive review of KG and KG reasoning is provided. It introduces an overview of KGs, including representation, storage, and essential technologies. Specifically, it summarizes several types of knowledge reasoning approaches, including logic rules-based, representation-based, and neural network-based methods. Moreover, this paper analyzes the representation methods of knowledge hypergraphs. To effectively model hyper-relational data and improve the performance of knowledge reasoning, a three-layer knowledge hypergraph model is proposed. Finally, it analyzes the advantages of three-layer knowledge hypergraphs through reasoning and update algorithms, which could facilitate future research.

    1. Introduction

    The knowledge graph (KG) describes the objective world’s concepts, entities, and their relationships in the form of graphs. It can organize, manage, and understand massive information in a way close to human cognitive thinking. KG therefore plays an important role in a variety of downstream applications, such as semantic search, intelligent recommendation, and question answering.

    KG reasoning, which is essential for KG applications, improves the completeness of KG by inferring new knowledge. However, little research systematically summarizes and analyzes KG reasoning, even though it has attracted wide attention and a series of advanced approaches have emerged in recent years. Hence, a comprehensive review of KG reasoning is necessary and could promote the development of related research.

    The main contributions of this paper include: First, it summarizes the representation, storage, and essential technologies of KG. Second, it gives a fine-grained classification of KG reasoning, and analyzes the typical methods, solutions, and advantages and shortcomings of each category in detail. Third, it summarizes the research of knowledge hypergraphs. Finally, it proposes a three-layer framework of knowledge hypergraphs, which can greatly improve the ability and efficiency of reasoning as well as updating.

    2. KG Overview

    Since Feigenbaum of Stanford University proposed expert systems in 1965[1], artificial intelligence (AI) research has shifted from traditional reasoning algorithms to knowledge-driven algorithms. In 1968, Quillian proposed the knowledge expression model of semantic networks[2]. The establishment of the knowledge base (KB) and knowledge representation (KR) methods fueled a wave of hot research topics. Then in 1977, Feigenbaum proposed the concept of knowledge engineering[3], demonstrating how to represent knowledge using the principles, methods, and technology of AI. Later, in 1989, Berners-Lee invented the World Wide Web (WWW)[4]. The Semantic Web concept, which combined traditional AI with the WWW, was proposed in 1998[5].

    In 2012, Google proposed KG, which was defined as a KB[6]. KG uses semantic retrieval methods to collect information from multiple sources to improve the quality of searches. In essence, KG is a semantic network composed of various entities, concepts, and relationships[7]. This section mainly introduces the KG architecture, KR, storage, and essential technology. The technology application framework of KG is shown in Fig. 1. It has five main components: KR learning (KRL), knowledge storage (KS), KG construction (KGC), knowledge updating (KU), and knowledge reasoning. Besides, KGC includes knowledge extraction (KE), knowledge fusion (KF), and knowledge processing (KP). The KG technology is the basis of knowledge reasoning and KG applications.

    Fig. 1. Technology application framework of KG.

    2.1. Architecture of KG

    KG, which can be divided into a pattern layer and a data layer, needs certain constraints and norms to form a logical architecture. The pattern layer represents the data structure of knowledge, the hierarchical structure, and definitions of knowledge classes, such as the entity, relation, and attribute. It restricts the specific knowledge form of the data layer. The knowledge triples in the data layer are viewed as units to store specific data information. Thus, KG can be generally expressed in the form of triples G = {E, R, F}. Among them, E represents the set of entities {e1, e2, …, ei}. Entity e is the basic element in KG. It refers to things that exist objectively and can be distinguished from each other, including people, things, or abstract concepts. R represents the set of relationships {r1, r2, …, rj}, and the relationship r represents the edge of a certain connection between two different entities in KG. F represents the set of facts {f1, f2, …, fk}, and each fact is defined as a triple (h, r, t) ∈ F, where h, r, and t represent the head entity, relationship, and tail entity, respectively. For example, basic types of facts can be expressed as triples (Entity, Relation, Entity), (Entity, Attribute, Value), etc.
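    The G = {E, R, F} formulation above can be sketched directly in code. The following is a minimal illustration (the entity and relation names come from the paper's running example, not from any real KG):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    h: str  # head entity
    r: str  # relationship
    t: str  # tail entity

E = {"Roland Emmerich", "The Day after Tomorrow"}  # entity set
R = {"Direct"}                                     # relationship set
F = {Fact("Roland Emmerich", "Direct", "The Day after Tomorrow")}  # fact set

# Every fact (h, r, t) must draw its head/tail from E and its relation from R.
assert all(f.h in E and f.t in E and f.r in R for f in F)
```

    A fact of the form (Entity, Attribute, Value) fits the same shape, with the value stored in the tail position.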

    As shown in Fig. 2, the knowledge triple can be expressed through directed graph structures. The one-way arrow indicates the asymmetric relationship “Starring”, while the two-way arrow indicates the symmetric relationship “Co-star”.

    Fig. 2. Triple example.

    2.2. KR and KS

    KR and KS are the bases of the construction, management, and application of KG. Modern KGs are based on a massive amount of Internet data. Their growing scale poses new challenges for the effective representation and storage of knowledge.

    2.2.1. KR

    KR refers to a method of knowledge description. It transforms massive amounts of information in the real world into structured data by using information technology. Early KR methods include first-order logic, Horn logic, semantic networks, production systems, and frame systems. With the development of the Internet and the Semantic Web, the WWW Consortium (W3C) proposed the resource description framework (RDF), RDF schema (RDFS), and web ontology language (OWL) description languages. However, these traditional KR methods are based on symbolic logic and are unable to represent a large amount of real-world data. It is also difficult to mine and analyze the semantic relationships between knowledge entities. In recent years, KRL based on deep learning has attracted extensive attention in the fields of speech recognition, image analysis, and natural language processing (NLP). KRL can calculate complex semantic relationships among entities efficiently by projecting semantic information (such as triples) into a dense low-dimensional vector space. It can also be easily integrated with deep learning models to realize knowledge reasoning. We will introduce the details of knowledge reasoning based on KRL in Section 3.

    2.2.2. KS

    The purpose of KS is to find a reasonable and efficient approach for storing KGs. In existing research, most KGs are based on graph data structures. There are three main storage methods of KS: RDF databases, relational databases (RDBs), and graph databases (GDBs). Nowadays, the main open-source RDF databases in academia include Jena[8], RDF4J[9], and gStore[10]. RDBs include PostgreSQL[11] and MySQL[12]. Typical GDBs include Neo4j[13], JanusGraph[14], and HugeGraph[15].

    2.3. Essential Technology

    Building large-scale, high-quality general KGs or domain KGs requires a variety of efficient essential technologies. Through these technologies, including KE, KF, KP, and KU, accurate extraction and rapid aggregation of knowledge can be achieved.

    2.3.1. KE

    KE is the basic and primary task of KG. Different entities, relationships, and attributes are extracted from raw data through automatic or semi-automatic KE to support the construction of KGs. Early KE is mainly based on expert rules: through pre-defined KE rules, triple information can be extracted from text. However, this traditional method relies on experts with domain knowledge to manually define the rules. When the amount of data increases, the rule-based method takes a long time and has poor portability, making it unable to support the construction of large-scale KGs.

    Compared with rule-based KE, neural network-based KE can discover the entity, relationship, and attribute features automatically.And it is suitable for processing large-scale knowledge.The three main tasks of KE include named entity recognition (NER), relation extraction (RE), and attribute extraction (AE).

    NER accurately extracts named entity information, such as people, locations, and organizations, from massive amounts of raw data (such as text). The main neural network-based methods for NER include the convolutional neural network (CNN)[16],[17] and recurrent neural network (RNN)[18],[19].

    RE is a hot research topic in the field of KGs and also the core of KE. By obtaining semantic relationships or relationship categories among entities, RE can automatically recognize the triples formed by entity pairs and the relationship between two entities. Neural network-based RE methods include CNN[20]-[22], RNN[23]-[25], attention mechanisms[26]-[29], the graph convolutional network (GCN)[30]-[32], adversarial training (AT)[33]-[35], and reinforcement learning (RL)[36]-[38]. In recent years, some researchers have proposed methods for joint entity and relation extraction[39]-[42], which allow the models to incorporate the semantic relevance between entities and relationships. They can solve the problem of overlapping relationships, effectively improving KE.

    AE extracts the attribute names and values of the entities from different information sources, constructs the attribute lists of entities, and achieves comprehensive descriptions of the entities.AE is generally divided into traditional supervised models (hidden Markov model, conditional random fields, etc.), unsupervised models[43],[44], neural network-based models[45], and other types of AE models (such as meta-patterns[46]and multimodal[47]).

    2.3.2. KF

    KF involves the fusion of the same entity in different KBs, different KGs, multi-source heterogeneous external knowledge, etc. It determines equivalent instances, classes, and attributes in KGs, to facilitate the update of existing KG. The main tasks of KF consist of entity alignment (EA) and entity disambiguation (ED).

    EA constitutes the majority of the work in the KF stage, aiming to discover entities that represent the same semantics in different KGs. EA methods can be classified into traditional probability models (conditional random fields, Markov logic networks, latent Dirichlet allocation, etc.), machine learning models (decision tree, support vector machine, etc.), and neural networks (embedding-based CNN, RNN, etc.).

    ED eliminates the ambiguity of entities in different texts according to their context, and maps them to the actual entities they refer to. Based on the target KB, ED includes named entity clustering disambiguation and named entity linking disambiguation.

    2.3.3. KP

    KP processes basic facts and forms a structured knowledge system with high-quality knowledge based on KE and KF, to realize the unification of knowledge. KP includes ontology construction (OC) and quality evaluation (QE).

    OC refers to the construction of conceptual entities of knowledge in the pattern layer based on the architecture of KG. It standardizes the description of concepts and the relationships between concepts in a specified field, including concept extraction and inter-concept relationship extraction. According to the extent of automation in the construction process, widely used OC methods include manual construction, semi-automatic construction, and automatic construction.

    QE is usually used in the KE or fusion stage to improve the quality of knowledge extracted from raw data and enhance the effectiveness of KE. High-quality and high-confidence knowledge can be obtained through QE.

    2.3.4. KU

    KU refers to the update of KG and the addition of new knowledge, to ensure the validity of knowledge. As shown in Table 1, KU includes the pattern layer update, data layer update, comprehensive update, and incremental update.

    Table 1: Content of KU

    2.4. Categories of KG

    As shown in Table 2, categories of KG include the early KB, open KG, common sense KG, and domain-specific KG.

    Table 2: Categories of KG

    Early KBs are based on expert systems, which are applied to information retrieval and question answering systems. Open KG allows anyone to access, use, and share it under open-source licenses. Common sense KG is significant to many NLP tasks.

    Domain-specific KG has a wide range of application scenarios. For example, medical KG can provide correct and efficient retrieval and scientific interpretation, benefiting applications such as medical question answering, intelligent diagnosis and treatment, medical quality control, and disease risk assessment. Transportation KG can support traffic volume modeling, air traffic management, and public transport data mining. KGs in finance can store massive amounts of financial data, including entities, relationships, and attributes, allowing for the prediction of hidden risks in complex information and promoting financial upgrading and transformation.

    3. Knowledge Reasoning Methods

    Due to the limitations of KG construction and the rapid change in real-world data, KGs usually suffer from missing knowledge triples. Moreover, the performance of downstream tasks can also be severely degraded. Therefore, knowledge reasoning, which predicts the missing relationships in KGs, is of great research and application value. This section introduces the details of knowledge reasoning methods based on logic rules, representation learning, and neural networks.

    3.1. KG Reasoning Based on Logic Rules

    Knowledge reasoning based on logic rules refers to using simple rules and features in KGs to discover new facts. These methods can make good use of symbolic representations of knowledge. As a result, they can perform with high accuracy and provide explicit explanations for reasoning results. This subsection introduces three types of KG reasoning approaches based on logic rules: logic-based reasoning, statistics-based reasoning, and graph structure-based reasoning.

    3.1.1. Reasoning Based on Logic

    Logic-based knowledge reasoning refers to directly using first-order logic (FOL) and description logic to express the rules formulated by experts. According to the representation methods of rules, logic-based reasoning methods can be classified into reasoning based on FOL and reasoning based on description logic.

    Knowledge reasoning based on FOL means that it adopts FOL to represent rules defined by experts, and then performs reasoning tasks by using propositions as basic units. Since it is conducted in a way that is close to natural human language, FOL-based reasoning achieves good interpretability and high accuracy for small-scale KGs.

    Propositions are comprised of two parts: individuals and predicates. Individuals and predicates correspond to entities and relations in KG, respectively. As shown in Fig. 3, the user “Carl” likes the director “Roland Emmerich”, and the movie “The Day after Tomorrow” is directed by the director “Roland Emmerich”. It is possible that there is a relation “Like” between “Carl” and “The Day after Tomorrow”. Therefore, a FOL rule can be obtained:

    Like(Carl, Roland Emmerich) Λ Direct(Roland Emmerich, The Day after Tomorrow) → Like(Carl, The Day after Tomorrow)

    where Λ means the conjunction operator, forming a Boolean-valued function that returns true only if all propositions are true.
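    As a minimal sketch, a rule of this kind can be applied mechanically over a small fact set. The predicate and entity names follow the Fig. 3 example; the generalized rule shape Like(u, d) Λ Direct(d, m) → Like(u, m) is an assumption abstracted from that example:

```python
# Facts from the Fig. 3 example, stored as (predicate, subject, object) tuples.
facts = {
    ("Like", "Carl", "Roland Emmerich"),
    ("Direct", "Roland Emmerich", "The Day after Tomorrow"),
}

def apply_rule(facts):
    """Infer Like(u, m) whenever Like(u, d) and Direct(d, m) both hold."""
    inferred = set()
    for (p1, u, d) in facts:
        for (p2, d2, m) in facts:
            if p1 == "Like" and p2 == "Direct" and d == d2:
                inferred.add(("Like", u, m))
    return inferred

new_facts = apply_rule(facts)
# -> {("Like", "Carl", "The Day after Tomorrow")}
```

    Real FOL reasoners generalize this pattern-matching step with unification over arbitrary rules rather than one hard-coded rule.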

    Fig. 3. Example of knowledge reasoning methods based on logic rules.

    Reasoning based on description logic aims to transform complex entity and relation reasoning into a consistency detection problem. This method effectively reduces the reasoning complexity over KGs and achieves a tradeoff between expressive ability and reasoning complexity. Specifically, a KG represented by description logic is composed of terminological axioms (TBoxes) and assertional sets (ABoxes)[55]. TBoxes contain a series of axioms describing concepts and relations, and ABoxes contain instances of the concepts in TBoxes. This method conducts reasoning by judging whether a description satisfies logical consistency.

    3.1.2. Reasoning Based on Statistics

    Knowledge reasoning based on statistics applies machine learning approaches to automatically extract hidden logic rules from KGs, and conducts reasoning using these rules. These methods do not depend on expert-defined rules and can interpret the results of the reasoning with automatically extracted logic rules. Statistics-based reasoning methods can be further divided into two subcategories, based on inductive logic programming (ILP) and association rule mining, respectively.

    Reasoning based on ILP refers to using the machine learning and logic programming technology to automatically summarize abstract rule sets. This method abandons the usage of manually defined rules and achieves good reasoning ability over small-scale KGs.

    The key idea of reasoning based on association rule mining is to automatically extract high-confidence rules, and then employ these rules in reasoning. Compared with traditional ILP methods, reasoning based on association rule mining is faster and can handle more complex and larger scale KGs.

    3.1.3. Reasoning Based on Graph Structure

    Graph structure-based reasoning refers to using the structure of the graph as the feature for reasoning. The most typical structures in KG are the paths between entities, which play an important role in KG reasoning. Reasoning based on the graph structure is efficient and interpretable. For example, in Fig. 3, starting from the node “Roland Emmerich” and following the relation path “Direct→Leading actor”, it can be inferred that the entity “Roland Emmerich” and the entity “Dennis Quaid” may have the relation “Collaborate”. According to the granularity of features, graph structure-based reasoning methods can be divided into global structure-based and local structure-based models.

    The essence of global structure-based reasoning is to extract the paths of the entire KG and use these paths as features to determine the existence of a target relation.

    Reasoning with the local structure performs KG reasoning by using, as features, the local graph structure that is highly related to the reasoning target. Compared with reasoning based on the global structure, this method focuses on a finer granularity of features and has a lower computational cost.
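    The path-following idea from the Fig. 3 example can be sketched as a simple graph walk. The edge data below is illustrative, mirroring the example's entities; a real system would mine many such paths as features rather than follow one by hand:

```python
# (node, relation) -> next node; a tiny slice of the Fig. 3 subgraph.
edges = {
    ("Roland Emmerich", "Direct"): "The Day after Tomorrow",
    ("The Day after Tomorrow", "Leading actor"): "Dennis Quaid",
}

def follow_path(start, path):
    """Walk a relation path from a start node; return the end node, or None."""
    node = start
    for rel in path:
        node = edges.get((node, rel))
        if node is None:
            return None
    return node

end = follow_path("Roland Emmerich", ["Direct", "Leading actor"])
# end == "Dennis Quaid", suggesting a candidate relation
# (Roland Emmerich, Collaborate, Dennis Quaid)
```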

    Table 3 shows the current research of knowledge reasoning methods based on logic rules.

    Table 3: Comparison of knowledge reasoning methods based on logic rules


    3.2. KG Reasoning Based on Representation Learning

    In the field of machine learning, representation learning is a particularly important technique, which transforms complicated data structures into vectors. The traditional one-hot embedding of relational information faces the challenge of feature sparsity under the circumstance of large-scale data. Moreover, semantic information is not incorporated in such representations of entities and relation types, which limits the accuracy of knowledge reasoning. Thus, representations with more semantic information are needed for better reasoning quality. Through KRL, the semantic relationships between entities and relation types are well captured and contained in the learned distributed embeddings, which guarantees a fixed semantic space. These distributed embeddings with abundant semantic information could facilitate expressive modeling of relations and boost the performance of knowledge reasoning tasks. By mapping hidden relational information in graphs into the low-dimensional space, KRL makes previously obscure correlations in graphs apparent.

    Representation learning shows a great advantage over many other approaches. Thus, KG reasoning based on representation learning has been extensively studied in recent years. This subsection introduces and compares three types of KG reasoning approaches based on representation learning, including the tensor decomposition approach, distance model, and semantic matching model.

    3.2.1. Tensor Decomposition Approach

    Tensor decomposition-based KG reasoning refers to decomposing the relation tensor of KG into multiple matrices, which can be used to construct a low-dimensional embedding of KG. Existing basic tensor decomposition algorithms are modified and applied to train a distributed representation of KG in a fast and efficient way.

    RESCAL[77],[78] is currently the most widely used KG reasoning approach based on tensor decomposition. This model represents KG as a three-way tensor, which models the three-way interactions of entity-relation-entity. RESCAL obtains representations of the entities and relation types by solving a simple tensor decomposition problem. The low-dimensional embedding calculated by RESCAL reflects the similarity of neighborhood structures of the entities and relation types in the original KG.

    A relational graph is shown in Fig. 4, which is a small part of a large KG. The movies “2012” and “The Day after Tomorrow” are both directed by “Roland Emmerich” and written by “Harald Kloser”, but have different leading actors, namely “John Cusack” and “Dennis Quaid”, respectively. The neighborhood structure of “2012” is similar to that of “The Day after Tomorrow”, so the two movies obtain similar embeddings. As a result, given that the movie “The Day after Tomorrow” belongs to the type “Science fiction”, RESCAL can determine that “2012” also belongs to the type “Science fiction”.

    Fig. 4. Example of knowledge reasoning problem.

    RESCAL is a classic tensor decomposition model for representation learning. However, because RESCAL is too simple and lacks interpretability, it cannot be directly applied in complex scenarios. Hence, numerous tensor decomposition models have been developed to enhance the performance of representation learning. Nickel et al.[79], Rendle et al.[80], and Jenatton et al.[81] proposed three other KG representation learning approaches aimed at more complicated application situations. Table 4 shows the details.

    Table 4 (continued): Comparison of knowledge reasoning methods based on representation learning

    3.2.2. Distance Model

    The distance model considers any relation in KG as a translational transformation from the subject embedding to the object embedding. By minimizing the transformation error, this type of model learns low-dimensional embeddings of all the entities and relation types in KG.

    The representative distance model is TransE[82] and its derivatives. TransE, a well-known translational model, learns distributed representations of all entities and relation types by imposing h + r = t for all relation triples in KG, where h, t, and r denote the embeddings of the subject entity, the object entity, and the corresponding relation type, respectively.

    Fig. 5 illustrates the mechanism of TransE with the previously given example. In order to find out whether “2012” and “Science fiction” share the relationship “Type”, as indicated by the dashed line, TransE projects all entities onto a low-dimensional space. “2012” and “Science fiction” are two existing entities, which are projected to Point A and Point B, respectively. Two other entities named “Mr. Bean” and “Comedy” are then projected to Point C and Point D, respectively. Since they already have the relation “Type”, the relation type “Type” can then be projected onto the embedding space as α, that is, the translation vector between the embedding point of “Mr. Bean” (Point C) and the embedding point of “Comedy” (Point D). Once all these embeddings are obtained, the existence of the relation (2012, Type, Science fiction) can be determined by checking whether the embedding of “2012” (Point A) can be translated to the embedding of “Science fiction” (Point B) via the embedding of “Type” (α). If and only if this condition is satisfied, the relation (2012, Type, Science fiction) holds.
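    The translation check above reduces to measuring ||h + r − t||. The following sketch replays the example with hand-picked 2D vectors (the coordinates are illustrative assumptions, chosen so that the check succeeds exactly):

```python
import numpy as np

# Toy 2D embeddings for the four entities of the Fig. 5 example.
emb = {
    "Mr. Bean":        np.array([1.0, 0.0]),   # Point C
    "Comedy":          np.array([1.5, 1.0]),   # Point D
    "2012":            np.array([0.0, 0.0]),   # Point A
    "Science fiction": np.array([0.5, 1.0]),   # Point B
}
# The relation "Type" is the translation from "Mr. Bean" to "Comedy".
type_vec = emb["Comedy"] - emb["Mr. Bean"]     # alpha = D - C

def transe_distance(h, r_vec, t):
    """||h + r - t||: a small distance means the triple is likely to hold."""
    return float(np.linalg.norm(emb[h] + r_vec - emb[t]))

d = transe_distance("2012", type_vec, "Science fiction")
# d == 0.0 here, so (2012, Type, Science fiction) is accepted
```

    In practice the distance is rarely exactly zero; a triple is accepted when the distance falls below a learned margin.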

    Fig. 5. Example of entity and relation mapping of TransE.

    TransE achieves good interpretability by formulating the problem in a straightforward way. Nevertheless, TransE has two limitations. One limitation is that the translation rule is too strict, which affects its flexibility and robustness. Thus, a number of relaxed translational models[83]-[87] have been proposed to tackle noise in practical data. The other limitation has to do with the fact that TransE is not suitable for processing 1-to-N, N-to-1, or N-to-N relations, which largely limits its usage in practice. Some works[88]-[91] aim to solve this problem by separating the entity space and relation space and modeling their interactions in a projection space. Apart from these two types of derivatives, there are works focusing on other problems, including stochastic distance models[92],[93], rotation models[94], and other distance models[95]. See Table 4 for details of distance models.

    3.2.3. Semantic Matching Model

    The semantic matching model measures the validity of relation triples by matching hidden semantics among different entities and relation types in the low-dimensional space, using a scoring function. These models consider relations presented in KG similar, and those not in KG dissimilar.

    Currently, the most widely used semantic matching models, e.g., two- and three-way embedding combination (TATEC)[96], mainly focus on matching two-way and three-way semantics in KG. Specifically, the validity of relations is evaluated using a linear scoring function.

    Given the relational graph shown in Fig. 4, TATEC first defines a scoring function for three-way relations, such as (Roland Emmerich, Direct, The Day after Tomorrow). For two-way relations, such as (Roland Emmerich, Direct), (Direct, The Day after Tomorrow), and (Roland Emmerich, The Day after Tomorrow), TATEC defines scoring functions in a similar way to evaluate their validity. For example, the three-way score of (Roland Emmerich, Direct, The Day after Tomorrow) is 0.35, and the two-way scores of (Roland Emmerich, Direct), (Direct, The Day after Tomorrow), and (Roland Emmerich, The Day after Tomorrow) are 0.25, 0.12, and 0.18, respectively. Then the total score for the triple (Roland Emmerich, Direct, The Day after Tomorrow) should be 0.90. The training process maximizes the scores of the three-way and two-way relations presented in KG. When determining whether “2012” and “Science fiction” have the relation “Type”, TATEC simply calculates the score of the triple (2012, Type, Science fiction), and establishes the relation if the score is greater than an empirical threshold, such as 0.75.
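    The total-score arithmetic in the worked example can be checked directly; the scores and threshold are the illustrative numbers from the text, not outputs of a trained model:

```python
# Three-way score of the full triple plus the three pairwise two-way scores.
three_way = 0.35            # score of (h, r, t)
two_way = [0.25, 0.12, 0.18]  # scores of (h, r), (r, t), and (h, t)

total = three_way + sum(two_way)
# total is approximately 0.90

threshold = 0.75
accepted = total > threshold  # the triple is established
```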

    TATEC suffers from high computational complexity due to a large number of parameters. Hence, a series of linear and bilinear models[97]-[101] try to balance its performance and complexity.

    In order to better capture non-linear patterns of relations, numerous neural network-based representation learning models[102]-[105] based on semantic matching have been proposed. These models calculate scores of relations through deep neural networks.

    The representation learning-based methods of knowledge reasoning are summarized and compared in Table 4.

    3.3. KG Reasoning Based on Neural Networks

    Though more and more KG reasoning methods have been proposed, complex and multi-hop relation reasoning remains an open issue. To analyze such relations, the neural network could be a more powerful tool than other reasoning methods based on logic rules or representation learning. After representation learning based on neural networks like CNN or RNN, the accurate understanding of knowledge semantics could benefit the subsequent reasoning process based on the fully connected layer and softmax layer. Moreover, such methods could realize automatic reasoning on KG without logical inference or theoretical modeling. Due to the generality of their structures and the convenience of reasoning, KG reasoning methods based on neural networks continue to emerge. Therefore, this subsection introduces the knowledge reasoning methods based on CNN, RNN, the graph neural network (GNN), and deep reinforcement learning (DRL) in detail.

    Considering the knowledge subgraph shown in Fig. 6, RNN chooses the red, blue, and green paths as the input to reason about the target relation between “Spider-Man” and “Life Story”. For the red path, RNN pays attention to words like “hope” and “background” and keeps them in memory cells to increase the probability of the “main character” relation. For the blue path, given the participation of “Captain America” in the “Vietnam War”, RNN notices the role of “Captain America” in “Life Story” and infers the friendship between “Spider-Man” and “Captain America”. For the green path, RNN forgets the “create” relation to some extent because all the relations are the same. At last, RNN synthesizes all the information in the three paths and obtains a high probability of the “main character” relation between the “Life Story” entity and the “Spider-Man” entity. Based on the final reasoning result of the friendship of “Spider-Man” and “Captain America” in “Life Story”, RNN decides to add a new triple (Life Story, main character, Spider-Man) to the knowledge subgraph.

    Fig. 6. Example of KG reasoning methods based on RNN.

    CNN, which is capable of capturing the local characteristics of KG, is one of the earliest neural networks applied to knowledge reasoning. First, the knowledge triples could be an ideal learning object. With a two-dimensional convolution on a two-dimensional entity-relation embedding matrix, ConvE achieves the effective capture of interactive features between entities and relations[106]. Besides, entity text descriptions provide more information for the entities in the knowledge triples. The most typical example is description-embodied knowledge representation learning (DKRL)[107],[108], which utilizes the continuous bag-of-words (CBOW) model and CNN to learn the unordered features and word-order features of text descriptions separately. After that, the fused feature achieves effective discovery of new entities.

    Additionally, focusing only on single knowledge triples would limit the reasoning scope. Many studies seek to explore a wider reasoning scope, focusing on using RNN to analyze the knowledge path that is alternately composed of entities and relations[109]-[112]. This research could effectively combine different representations of knowledge paths. Moreover, RNNs could also be used to analyze entity text descriptions. As a typical method, learning knowledge graph embedding with entity descriptions based on long short-term memory (LSTM) networks (KGDL)[113] utilizes LSTM to encode related text descriptions, then uses triples to encode entity descriptions, and realizes the prediction of missing knowledge.

    Since KG is based on the graph structure, GNN[114] has been widely explored for knowledge reasoning in recent years. It enlarges the learning scope from a single triple in CNN or a knowledge path in RNN to knowledge subgraphs. For instance, the structure-aware convolutional network (SACN)[115] utilizes a weighted GCN as the encoder and the convolution network Conv-TransE as the decoder. As a result, SACN realizes the adaptive learning of semantic information in the node’s neighborhood structure. Although GCN has a powerful subgraph learning ability, it simply models a one-way relation as a two-way relation and suffers from the modeling error of relation direction. In contrast, the graph attention network (GAT) explicitly distinguishes the bidirectionality of edges. As an improvement, ReInceptionE[116] combines ConvE and KBGAT to achieve a deep understanding of KG structural information.

    In terms of interactive modeling, DRL [117] provides a new perspective for knowledge reasoning. As the most typical model, DeepPath [118] treats the knowledge entities as the state space and wanders between entities by choosing relations. If it reaches the correct answer entity, DeepPath receives a reward. In essence, this kind of method establishes a reasoning plan based on relation-path exploration. As a result, DRL methods can significantly improve the effectiveness and diversity of knowledge reasoning.
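In the same spirit (though DeepPath itself uses a policy network), the entity-as-state, relation-as-action formulation can be sketched with tabular Q-learning on a toy KG. The graph, the reward of 1 for reaching the answer entity, and all hyperparameters are illustrative assumptions.

```python
import random

# Toy KG: entity -> {relation: next entity} (illustrative).
KG = {
    "A": {"r1": "B", "r2": "C"},
    "B": {"r3": "D"},
    "C": {"r3": "E"},
    "D": {}, "E": {},
}
START, ANSWER = "A", "D"

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: states are entities, actions are outgoing
    relations, and the agent earns reward 1 on reaching the answer."""
    random.seed(seed)
    Q = {s: {a: 0.0 for a in acts} for s, acts in KG.items() if acts}
    for _ in range(episodes):
        s = START
        for _ in range(3):                      # bounded path length
            if s not in Q:                      # dead end: no outgoing edges
                break
            acts = list(Q[s])
            a = random.choice(acts) if random.random() < eps \
                else max(acts, key=Q[s].get)    # epsilon-greedy choice
            s2 = KG[s][a]
            r = 1.0 if s2 == ANSWER else 0.0
            nxt = max(Q[s2].values()) if s2 in Q else 0.0
            Q[s][a] += alpha * (r + gamma * nxt - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy from "A" prefers the relation path r1, r3 that reaches the answer entity "D", i.e., the learned Q-values encode a reasoning plan over relation paths.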

    Table 5 lists the details and comparison of reasoning methods based on neural network models.

    Table 5: Comparison of knowledge reasoning methods based on neural network models

    Table 5 (continued): Comparison of knowledge reasoning methods based on neural network models

    4.Knowledge Hypergraph Theory

    Despite KG being universally adopted, representation methods based on triples often oversimplify the complexity of the data stored in KG, especially hyper-relational data, in which each fact contains multiple relations and entities. To clarify the characteristics of knowledge hypergraphs, Table 6 shows the definitions of knowledge hypergraphs and related graphs.

    Table 7 gives a comparison of knowledge hypergraph representation methods, which mainly include soft rules-based, translation-based, tensor decomposition-based, and neural network-based methods. To realize reasoning, the soft rules-based method treats relations and nodes as predicates and variables, respectively, and then sets logic rules and constraints for relational reasoning. The translation-based method models a relation as a transformation operation between entities. The tensor decomposition-based method represents a hyper-relational fact as an n-order tensor (n is the number of entities in the hyper-relational fact) and then learns the embeddings of entities through tensor decomposition. The neural network-based method can learn the interaction information between entities, the topological structure of the graph, etc.
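As one simple instance of the tensor-decomposition idea, a hyper-relational fact r(e1, …, en) can be scored by a multilinear (CP-style) product of the relation embedding with all n entity embeddings, a generalized DistMult form. Published models differ in their exact formulations; this sketch only illustrates the shared principle.

```python
import numpy as np

def nary_score(relation_emb, entity_embs):
    """Score a hyper-relational fact r(e1, ..., en) as the multilinear
    product of the relation embedding with all n entity embeddings."""
    prod = relation_emb.copy()
    for e in entity_embs:          # elementwise product over all n entities
        prod = prod * e
    return float(prod.sum())
```

Training then adjusts the embeddings so that true hyper-relational facts score higher than corrupted ones.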

    Table 6: Definitions of knowledge hypergraph and related graphs

    Table 7: Comparison of knowledge hypergraph representation methods

    Because the knowledge hypergraph adopts a flat structure to organize knowledge and lacks the ability to explicitly leverage temporal and spatial knowledge, it may suffer from vague spatio-temporal relations and slow update and reasoning speeds. As shown in Fig. 7, a three-layer knowledge hypergraph (TLKH) is proposed to combine the event layer, concept layer, and instance layer. Compared with the existing knowledge hypergraph, TLKH has three advantages: Explicit spatio-temporal relations, fast knowledge update, and fast knowledge reasoning.

    Fig. 7. Structure of the three-layer knowledge hypergraph.

    TLKH is represented as G = {L1, L2, L3, R}, where R = {R(L1, L2), R(L2, L3)} is the collection of cross-layer relations. The first layer is the event layer, defined as L1 = {EL1, RL1}, where EL1 and RL1 are the sets of event entities and of logical relations between entities, respectively. The second layer is the concept layer, defined as L2 = {EL2, RL2}, where EL2 and RL2 are the sets of concept entities and hyperedges, respectively. The third layer is the instance layer, defined as L3 = {EL3, RL3}, where EL3 and RL3 are the sets of instance entities and hyperedges, respectively.
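The container G = {L1, L2, L3, R} defined above can be sketched as a minimal data structure. The field names and the way cross-layer relations are keyed are illustrative assumptions, mirroring the definitions in the text rather than any published implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One TLKH layer Li = {ELi, RLi}."""
    entities: set = field(default_factory=set)   # ELi: entity set
    relations: set = field(default_factory=set)  # RLi: logical relations or hyperedges

@dataclass
class TLKH:
    """G = {L1, L2, L3, R} with cross-layer relations in `cross`."""
    event: Layer = field(default_factory=Layer)      # L1
    concept: Layer = field(default_factory=Layer)    # L2
    instance: Layer = field(default_factory=Layer)   # L3
    cross: dict = field(default_factory=dict)        # R = {R(L1,L2), R(L2,L3)}

# Populate a fragment of the Fig. 7 scenario.
g = TLKH()
g.event.entities.add("Hig_grossing")
g.concept.relations.add(("Fil_grossing",
                         ("Company", "Actor", "Movie",
                          "Timing", "Duration", "Box_office")))
g.cross[("Hig_grossing", "Fil_grossing")] = "L1->L2"
```

Hyperedges are stored as (name, member-tuple) pairs so one edge can connect any number of entities, which is what distinguishes a hypergraph from an ordinary graph.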

    Fig. 7 shows the structure of TLKH, which organizes instance, concept, and event knowledge hierarchically. It includes the related entities and relations in the scenario of the highest-grossing movies. Entities in the event layer are abstracted events. For example, “Hig_grossing” can represent multiple specific high-grossing events, such as the highest-grossing Hollywood film and the highest-grossing Chinese film. The edges between event entities are logical relations, whose types vary across application scenarios. This paper adopts four main relations: “Condition”, “Compose”, “Succeed”, and “Cause”. For example, “Political”, “Nature”, “Pub_opinion”, “Economic”, and “Social” cause “Hig_grossing”.

    An entity in the concept layer represents a group of different instances sharing common properties, such as “Country”. The edges between concept entities are hyperedges, such as the hyperedge “Fil_grossing”, which connects (Company, Actor, Movie, Timing, Duration, Box_office) and clearly expresses the relevance among these concept entities.

    Entities in the instance layer are concrete entities that describe the real world, such as “American”. The edges between instance entities are hyperedges, such as the hyperedge “Fil_grossing” connecting (Fox, Quaid, DAT, 2004/5/28, 40_days, $0.6B).

    The cross-layer relation between the event layer and the concept layer links an event entity to a concept hyperedge. For example, the event entity “Hig_grossing” corresponds to the concept hyperedge “Fil_grossing”. The relation between the concept layer and the instance layer is the mapping between a concept entity and an instance entity. For example, the concept entity “Country” corresponds to the instance entity “American”.

    Timing and duration are proposed to represent the spatio-temporal characteristics of knowledge hypergraphs. Timing is an attribute entity indicating that an entity or hyperedge was generated or occurred at a specific time; for example, “Timing” is in the concept layer and “2004/5/20” is in the instance layer. Duration is another attribute entity, meaning that an entity or hyperedge is valid within a period, such as “Duration” in the concept layer and “140_days” in the instance layer.

    According to the TLKH in Fig. 7, the reasoning process is shown in Algorithm 1 (Table 8). From the entities s1 (2004/5/20) and s2 (2004/5/28), a “Cause” relation can be inferred between “Social” and “Hig_grossing” in the event layer. Algorithm 2 (Table 9) shows the update process of TLKH. Given a new event entity e1 and its related concept and instance entities, these entities can be quickly added to the right places.

    Table 8: Algorithm 1

    Table 9: Algorithm 2

    To reason the implicit relations between event entities, a normal knowledge hypergraph needs to check every entity pair, i.e., (ne + nc + ns)^2 reasoning operations, where ne, nc, and ns are the numbers of entities in the event, concept, and instance layers, respectively. According to Algorithm 1, TLKH only needs to check instance entity pairs, i.e., ns^2 reasoning operations; implicit relations in the concept and event layers can then be found through cross-layer relations. Through the knowledge complementarity between layers and the spatio-temporal characteristics, the query space of reasoning is reduced and knowledge reasoning is dramatically accelerated.

    To update an event {e1, {c1, c2, …, ci}, {s1, s2, …, sj}}, a normal knowledge hypergraph needs to calculate the similarity between the new knowledge and all existing entities, i.e., ne + i·nc + j·ns similarity computations, where i and j are the numbers of concept and instance entities related to event e1. According to Algorithm 2, TLKH only calculates the similarity between e1 and the other event entities, i.e., ne computations; the related entities {{c1, c2, …, ci}, {s1, s2, …, sj}} can then be located through cross-layer relations. TLKH is thus highly efficient and can greatly accelerate the update process.
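The two cost comparisons above reduce to simple counting. The functions below encode them directly, reading the all-pairs reasoning cost as (ne + nc + ns)^2 versus instance-pairs-only ns^2 for Algorithm 1, and the update cost as ne + i·nc + j·ns versus ne for Algorithm 2; the concrete layer sizes in the example are illustrative.

```python
def flat_reasoning_ops(ne, nc, ns):
    """Flat hypergraph: check every entity pair across all layers."""
    return (ne + nc + ns) ** 2

def tlkh_reasoning_ops(ne, nc, ns):
    """TLKH (Algorithm 1): check instance pairs only; higher layers
    are recovered via cross-layer relations."""
    return ns ** 2

def flat_update_ops(ne, nc, ns, i, j):
    """Flat hypergraph: compare the new event, its i concepts, and its
    j instances against every entity of the matching kind."""
    return ne + i * nc + j * ns

def tlkh_update_ops(ne, nc, ns, i, j):
    """TLKH (Algorithm 2): compare the new event against event entities
    only; related entities are placed via cross-layer relations."""
    return ne

# Illustrative sizes: with ne = nc = ns = 1000, reasoning drops from
# 9,000,000 to 1,000,000 candidate pairs.
```

As the instance layer is usually the largest, the reasoning speedup approaches a constant factor of (1 + (ne + nc)/ns)^2, while the update speedup grows with i and j.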

    Based on the above analysis, the advantages of TLKH are obvious. The hierarchical knowledge hypergraph is powerful for organizing domain-specific knowledge: it can capture implicit spatio-temporal relations and greatly accelerate knowledge update and reasoning. Moreover, TLKH can facilitate future research on large-scale KG by utilizing its hierarchical organization and spatio-temporal characteristics.

    5.Conclusion

    This paper introduces KG and summarizes knowledge reasoning and the knowledge hypergraph. In particular, given the important role of knowledge reasoning in practical KG applications, this paper focuses on reasoning methods based on logic rules, representation learning, and neural networks, which benefit downstream tasks such as link prediction. For effective and efficient reasoning over and updating of KG, this paper proposes TLKH, which can express spatio-temporal relations and accelerate knowledge reasoning and updating.

    KG changes the traditional KS and usage approaches, providing a solid knowledge foundation for the development of knowledge-driven artificial intelligence.In the future, KG could influence cognitive intelligence,and knowledge reasoning could be applied and extended to other aspects.

    Disclosures

    The authors declare no conflicts of interest.
