
    Knowledge graph construction with structure and parameter learning for indoor scene design

Computational Visual Media, 2018, Issue 2

Yuan Liang, Fei Xu, Song-Hai Zhang, Yu-Kun Lai, and Taijiang Mu

Abstract We consider the problem of learning a representation of both spatial relations and dependencies between objects for indoor scene design. We propose a novel knowledge graph framework based on the entity-relation model for representation of facts in indoor scene design, and further develop a weakly-supervised algorithm for extracting the knowledge graph representation from a small dataset using both structure and parameter learning. The proposed framework is flexible, transferable, and readable. We present a variety of computer-aided indoor scene design applications using this representation, to show the usefulness and robustness of the proposed framework.

Keywords knowledge graph; scene design; structure learning; parameter learning

    1 Introduction

Indoor scene design is required for applications built on virtual environments (e.g., large-scale video games or virtual reality), and realistic interior design. Modern commercial CAD tools [1–3], as well as independently developed toolchains, have been utilized to help professional designers or architects to assemble indoor scenes with different levels of detail. However, generating a useful and aesthetically pleasing scene requires a relatively high level of expertise, in terms of both insight into interior design, and proficiency in use of design tools; this forms a high barrier for novice users. Even for expert users, a considerable amount of laborious and time-consuming interaction is needed.

To reduce the amount of interaction and expertise needed when using these tools, context-based models have been proposed, supporting semi-automatic [4] and fully-automatic [5] systems. Fundamental features for such systems include helping (i) to retrieve from a library the objects to be placed in the scene and (ii) to decide where in the scene to place various objects. These models usually embed a group of quantitative criteria learned from a training dataset for both features, e.g., co-occurrence frequencies as a prior for object retrieval, and Gaussian mixture models [4] for placement. Such methods have achieved a certain level of success; however, they still have several shortcomings, including being inflexible given a limited training dataset, having limited transferability given a limited model library, and having poor readability of the underlying model for designers without a background in statistics or computer science.

On the other hand, knowledge graphs, a representation for facts in specific domains, have had great success in information retrieval and question–answer systems. Introducing knowledge graphs to such systems allows issues caused by long-tail queries or entities with few samples to be well addressed [6–8]. Moreover, knowledge graphs have good readability, which helps domain experts further explain and improve the model.

Two major challenges exist for utilizing knowledge graphs in indoor scene design: (i) designing a proper schema for modeling the functional and geometric relations between objects in the room and (ii) extracting the structure of the knowledge graph from a few training samples of indoor layouts. To overcome these challenges, we design an entity-relation schema for modeling relations and properties of different objects. Based on this representation, we map the knowledge graph to a probabilistic graph model, which allows us to learn the structure of the knowledge graph from indoor scenes designed by professional designers, using structure and parameter learning. We show the usefulness of our knowledge graph framework by integrating it into indoor scene design algorithms and applying them to several typical application scenarios with good performance.

    Our work has the following technical contributions:

• introduction of the knowledge graph representation to indoor scene design, with significant benefits of flexibility, transferability, and readability, compared to existing methods;

• an entity-relation knowledge graph schema that models functional, geometric, and hierarchical relations among objects in a library;

• a process that maps a knowledge graph to a probabilistic graph model, and an algorithm that learns both the structure and parameters of the knowledge graph.

The remainder of this paper is organized as follows. In Section 2 we discuss related work. We then introduce a general schema for knowledge graph representation in Section 3. Based on it, we present a corresponding underlying probabilistic graph model and an algorithm for parameter and structure inference in Section 4. We show several typical applications of our representation in Section 5, demonstrating the effectiveness of our approach. Finally, we conclude and discuss our work in Section 6.

    2 Related work

    2.1 Contextual modeling in indoor scene design

To reduce the number of interactions in indoor scene design, contextual modeling, which tries to evaluate if a model and its placement fit its context, is widely used [4] in practical applications. Different forms of interaction for indoor scene design have been proposed based on contextual modeling, including: proper placement for a user-selected model [4], retrieving a model and finding a proper orientation with a user-selected placement [9], suggesting objects for adding greater levels of detail to scenes [10], co-retrieval and co-placement with user-selected samples [11], and scene synthesis from freehand sketch drawings [12] or natural language descriptions [13]. Fully-automatic methods have also been proposed to synthesize scenes in open world applications [5].

Frequently used methods for contextual modeling include rule-based criteria [5] and data-driven models [4]. Rule-based criteria apply a set of handcrafted rules, e.g., “always place a plate on a table” or “the total area occupied by plates on a table should not exceed 70% of the area of the table”. As the number of rules required to generate reasonable scenes increases rapidly with an increasing number of object categories, such an approach is generally restricted to synthesizing scenes of low complexity. Data-driven models usually model binary placement and co-occurrence relations between pairs of objects using quantitative models such as Gaussian mixtures [4], graphlet extraction [12], kernel density estimation [13], etc. The idea of contextual modeling has also been used for indoor scene modeling from scanned data [14].
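As a concrete illustration of the kind of handcrafted rule quoted above, the following minimal Python sketch checks the "plates cover at most 70% of the table" constraint. The Rect type, function name, and object footprints are our own illustrative assumptions, not part of any cited system.

from dataclasses import dataclass

@dataclass
class Rect:
    width: float   # metres
    depth: float   # metres

    @property
    def area(self) -> float:
        return self.width * self.depth

def table_coverage_ok(table: Rect, plates: list[Rect], max_ratio: float = 0.7) -> bool:
    """Return True if the plates occupy at most `max_ratio` of the table's area."""
    occupied = sum(p.area for p in plates)
    return occupied <= max_ratio * table.area

# Four 0.3 m x 0.3 m plates on a 1.2 m x 0.8 m table easily satisfy the rule.
print(table_coverage_ok(Rect(1.2, 0.8), [Rect(0.3, 0.3)] * 4))  # True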

Although such data-driven models have achieved success for certain categories of scenes, the usefulness of these methods is limited by their lack of flexibility given a limited set of training data, which usually causes the generated layout to overfit the training data. Also, these learned quantitative models are difficult to transfer to new scenarios. For example, they can perform poorly when placing a round table in a scene using a model trained with square tables. Furthermore, these data-driven models are hard to interpret for designers lacking a background in statistics or computer science, making it difficult for them to improve the model when failures occur.

    2.2 Knowledge graph in information retrieval

Outside the domain of geometric contextual modeling, knowledge graphs have been successfully used to support a wide variety of artificial intelligence applications: conventionally in information retrieval [15], and more generally in other applications such as question–answer systems [16], reasoning systems [17], and image classification [18]. They model a sophisticated network of real-world entities, representing their relations and supporting operations such as entity identification, disambiguation, and completion.

Despite such wide practical success, building a structured knowledge graph from unstructured training data is conventionally considered a challenging problem for two reasons: (i) natural languages are not fully structured, and (ii) it is hard to fuse knowledge from different (or even conflicting) data sources. As a result, widely-used knowledge bases, such as DBpedia [19], use manually maintained ontologies as well as filtered training data as input.

In this paper, we build a knowledge graph for indoor scene design. As our training data is well-structured, it is much easier to generate structured knowledge. To solve the second challenge, we map our knowledge graph to a probabilistic model, which allows us to infer graph parameters as well as graph structure from our training data. The training process implicitly fuses different training data.

The spatial knowledge learning approach in Ref. [20] is most similar to our work, and uses prior probabilities directly as knowledge to support a text-to-scene application which allows missing information to be inferred. Our work differs in the following ways. (i) We give a detailed definition and discussion of relative position properties. (ii) Our representation is more general. In addition to spatial relations, our knowledge graph also addresses functional relations. (iii) Instead of using prior probabilities, we use a probabilistic graph model to learn a better joint distribution of different types of rooms for indoor design.

    3 Knowledge graph schema

    3.1 Preliminaries

In this section, we explain how we formulate facts in indoor scene design, including facts about different objects and guidelines for their typical placement.

In our knowledge graph, facts are formulated using an entity-relation (ER) model, which is conventionally applied in existing knowledge graphs such as DBpedia [19], Freebase [21], and YAGO [22]. An ER model consists of entities with different attributes and relations between pairs of entities. It can easily be represented as a graph with attributes associated with nodes and edges. Conventionally, the graph is stored as a node (entity) list with attributes of the entities, and an edge (relation) list, with triples describing the type of the relation and the related entities, such as 〈toad, is a kind of, amphibian〉.
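The following small Python sketch illustrates this conventional storage scheme: a node list with attributes plus an edge list of triples. The dictionary layout and helper function are our own illustration, not the paper's data format.

entities = {
    "toad":      {"type": "object type", "attributes": {}},
    "amphibian": {"type": "object type", "attributes": {}},
}

# Each relation is a triple (head, relation name, tail) plus optional attributes.
relations = [
    ("toad", "is a kind of", "amphibian", {}),
]

def facts_about(entity: str):
    """Return all triples in which the given entity participates."""
    return [(h, r, t) for (h, r, t, _attrs) in relations if entity in (h, t)]

print(facts_about("toad"))  # [('toad', 'is a kind of', 'amphibian')]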

The schema, a concept originating in relational databases, is fundamental to an ER model. The schema of our knowledge graph has the following parts:

• The types of entities and the attributes that describe each type of entity.

• The types of relations between entities, the attributes that describe each type of relation, and the types of entities that each type of relation can connect.

In the remainder of this section, we discuss how we designed our schema with the help of professional BIM (building information management) designers.

    3.2 Entities

We interviewed several designers to ascertain how they select objects to be placed in a given room. Typical pipelines used by these designers included the following steps:

1. Consider the target functionality of the room, and select types of objects needed to support that functionality (e.g., a TV in a living room).

2. Add more types of objects which are used together with the selected objects and place them according to their usages.

3. Pick proper objects of the selected types to fit the general design style of the room.

4. Fine-tune the layout, including adding necessary lights, filling in empty space with decorative objects, etc.

Such a pipeline appeals to common sense. Using this generic pipeline, we designed a knowledge graph with the following entities:

• Room type. This encodes the general functionality of the room (e.g., living room, bedroom, gym, etc.) to be synthesized. This type of entity does not have attributes.

• Object type. As in ShapeNet [23], we use the WordNet [24] ontology as our basic hierarchical taxonomy. Initially we map each synset in ShapeNet to an object type in our entities, which allows us to organize facts at different levels of a hierarchy. E.g., we often put “cabinets” in a “living room”, and a “TV” can only be put on a “base cabinet”, which is a sub-synset of the synset “cabinet”. The most important field of an object type's attributes is its metadata, which indicates what fields are required for object instances of this object type. We extracted metadata from Autodesk Revit projects provided by these designers. We also add an attribute field “generative probability” to model the probability of adding an instance of this object type to the room even if it is unrelated to any room type or other instances in the room. This field is useful in handling decorative objects, e.g., paintings on a wall.

To make the taxonomy more specific to indoor design, e.g., to distinguish between a “tall cabinet” and a “base cabinet” in the synset “cabinet” in ShapeNet, we retrieved BIM families from several BIM websites and extracted their keywords. A designer then helped us filter these keywords and categorize them into existing synsets in ShapeNet.

• Object instance. Each object instance entity encodes an object in our library. Its attribute fields include its corresponding 3D mesh, and fields dependent on its object type (e.g., the material of the object, the width of a bed, the price of the object, etc.). All fields except the mesh are nullable, as some fields might be absent for models in the library.

These types of entities define the general skeleton of the knowledge graph, as they define the nodes in the graph. Each type of entity is encoded as a type of node in the knowledge graph.
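A possible encoding of these three node types is sketched below, following the fields described above (metadata and generative probability for object types; a mesh plus nullable type-dependent fields for instances). The class layout and example values are assumptions for illustration.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RoomType:
    name: str                      # e.g., "living room"; room types carry no attributes

@dataclass
class ObjectType:
    synset: str                                           # WordNet synset, e.g., "cabinet"
    metadata: list[str] = field(default_factory=list)     # fields required of its instances
    generative_probability: float = 0.0                   # chance of appearing without any relation

@dataclass
class ObjectInstance:
    mesh_path: str                 # the only non-nullable field
    object_type: str               # synset of its object type (the target of its R4 relation)
    fields: dict[str, Optional[str]] = field(default_factory=dict)  # material, width, price, ...

tv = ObjectInstance(mesh_path="models/tv_042.obj", object_type="television",
                    fields={"material": None, "price": "499"})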

    3.3 Relations

By considering the design pipeline in Section 3.2, we identified the following explicit relations as being used:

• Functional requirements, e.g., a TV is required in a living room.

• Functional dependencies, e.g., a seat is required for watching TV.

• Placement dependencies, e.g., the seat should be placed facing the TV.

We can also identify some implicit relations in the pipeline. For example, when a seat is needed, either a sofa or a chair can satisfy the requirement, because both sofa and chair are a kind of seat. Also, when we select a certain instance of a sofa, this implicitly encodes the relation “this object is an instance of a sofa”.

A major challenge in modeling these relations is transferability. Due to lack of training scenes, we can only find a limited number of object instances of a certain object type in the training dataset. As a result, we often need to transfer our model trained with our training dataset to new objects in practical applications, e.g., predicting the placement of an object instance that is absent from the training scenes.

A typical relation for which transferability is a problem is the relative placement relation of object types. Other work, such as example-based scene synthesis [11], uses a single Gaussian mixture model (GMM) to model relative placements. However, relations modeled in this way are hard to transfer. For example, a model that characterizes the placement of chairs around a given table is hard to transfer to a table of different shape or size (see Fig. 1).

The analysis above leads to the relations we define, including:

• Is needed in (R1). This encodes that some object type is needed in some room type. We add an attribute “necessity” to measure the necessity (as a probability) of having this object type in the given room type. For example, a dining room needs a dining table, regardless of how small the room is. Given a larger room, we may place a TV in it. In this example, the object type “dining table” has a higher necessity than the type “TV”. This can be written as 〈Dining table, is needed in, dining room〉.necessity > 〈TV, R1, dining room〉.necessity. Relation names (e.g., is needed in) and relation IDs (e.g., R1) are used interchangeably in this paper. We quantify the necessity with a probabilistic model, which we introduce in Section 4.

Fig. 1 Transferring a model for relative placement and orientation between object types “table” and “chair”. (a) Training data with a square table. (b) For a table of a different shape, the model needs to be transferred.

• Works with (R2). This encodes the dependency between two object types. Similarly to R1, we add an attribute field “dependency” to measure the level of dependency.

• Relative placement (R3). Existing work [20] also uses this type of relation, which maps the relative placements to keywords in a natural language. However, due to the ambiguity of natural languages, such a representation is inadequate for many applications. Although that paper disambiguates “on” (as in, e.g., on the wall versus on the desk), we can easily list several further cases (see Fig. 2) that also need disambiguation. We propose a multi-field representation, to more precisely represent the relative placements between objects, as follows (a small computational sketch of these fields follows the list of relations below):

– Primary direction. Instead of using keywords [20], we use a quantitative representation to represent the most important placement constraints. For example, for plates placed on a table, the primary direction is (0, 0, 1), where the z coordinate is assumed to be vertical. It is defined using the nearest pair of points x_plate and x_table belonging to the meshes of the plate and the table; the constraint here is (x_plate − x_table) · (0, 0, 1) ≥ 0. Specifically, when we calculate the primary direction between two objects touching each other, we use a primary direction that is perpendicular to their contact faces. Such a definition can help to overcome certain failures in specific cases, e.g., a desk with an integrated tray. With the help of this field, we can disambiguate concepts such as “in front of the desk” (with a primary direction of (0, −1, 0)) and “on the front of the desk” (with a primary direction of (0, 0, 1)).

– Primary distance. With the defined primary direction, we can also define (x_plate − x_table) · (0, 0, 1) as the primary distance between objects “plate” and “table”; the primary distance is 0 for a pair of objects in contact. With the help of this field (along with the primary direction), we can easily disambiguate concepts such as “hanging on” (with a primary distance of 0) and “in front of” (with a primary distance larger than 0), which share the same primary direction.

– Projected placement. With the defined primary direction, we can model more complex spatial relations. For example, to model the detailed placement of a plate on a table, we can further project the placement of the plate onto the table, (x_plate − x_table) − (v · (x_plate − x_table)) v, where v is the primary direction. We normalize this projected vector with the size of the object it relates to. With this field, we can disambiguate cases, e.g., as in Figs. 2(a) and 2(b), where the projections from the monitor and the laptop to the desk have different values, indicating where they are placed on the desk.

– Relative orientation. Relative orientation is defined by the placement orientation of an object, relative to the primary direction.

Note that for some pairs of objects, several alternative relative placements may exist between them. For example, a chair can be placed beside any of the four edges of a square table. Thus the relative placement relation is defined as a mixture of the above.

• Is an instance of (R4). This encodes the object type to which a certain object instance belongs.

• Is a hypernym of (R5). This type of relation is generated directly from the WordNet ontology. It encodes the conceptual relations between hypernyms and hyponyms, and also helps organize these concepts of object types in a hierarchical structure. It has an attribute “drilling probability”, indicating the probability of picking an instance of that type, e.g., picking a “base cabinet” when a “cabinet” is needed.
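The sketch promised above computes the primary distance and projected placement fields of the R3 relation, given two reference points on the objects (the paper uses the nearest pair of mesh points) and the primary direction v. The function, its inputs, and the numeric example are our simplification for illustration; mesh handling is omitted.

import numpy as np

def relative_placement(x_a: np.ndarray, x_b: np.ndarray, v: np.ndarray, size_b: float):
    """Return (primary distance, projected placement) of object a relative to object b."""
    v = v / np.linalg.norm(v)
    d = x_a - x_b
    primary_distance = float(d @ v)            # 0 for objects in contact
    projected = (d - (d @ v) * v) / size_b     # normalized by the size of object b
    return primary_distance, projected

# "Plate on table": v = (0, 0, 1); the plate point sits slightly above the table point,
# so the primary distance is near 0 and the projection says where on the table top it lies.
t, p = relative_placement(np.array([0.3, 0.1, 0.76]), np.array([0.0, 0.0, 0.74]),
                          np.array([0.0, 0.0, 1.0]), size_b=1.2)
print(t, p)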

    3.4 Indoor scene design analysis based on knowledge graphs

With the entities (nodes) and relations (edges) defined above, we can organize facts in indoor scene design in a knowledge graph.

Given an indoor scene, we can organize the hidden facts with a graph-based representation which is a subgraph of our knowledge graph. An example of such a subgraph is shown in Fig. 3, which gives a typical pattern for a living room. This graph gives facts about how this scene is constructed. A non-wall-mounted TV needs a base cabinet to support it (in contrast, a wall-mounted TV does not). To watch TV, we need a seat facing it, and a sofa is a kind of seat. We also add a tea table and two side tables to make better use of this seat. Although the representation in Fig. 3(c) is more complex than that in Fig. 3(b), it contains much more information and can easily be transferred. For a simple example, we can easily conclude from the representation in Fig. 3(c) that if we change the non-wall-mounted TV to a wall-mounted one, the base cabinet is no longer a necessary piece of furniture in the room. However, changing the three-seater sofa to a chair does not change any dependencies in the scene.

Such a representation can support many indoor scene design applications, with the help of subgraph searching or matching algorithms. Analysts and designers without a background in statistics or computer science can also easily interpret this representation, to analyze the results of these algorithms, or manually correct mistakes in the graph.

Fig. 2 Ambiguity and imperfection of natural language in describing relative placement. (a, b) Difference between “a monitor on a desk” and “a laptop on a desk”. Although the preposition “on” means “supported by” in both cases, monitors are usually placed far from the user, while laptops are usually placed near the user. (c, d) Difference between “a cabinet beside a bed” and “a window beside a bed”. The preposition “beside” represents different spatial relations in these two scenarios.

Fig. 3 A typical indoor scene and its corresponding graph-based representation, which is a subgraph of our knowledge graph. (a) A corner of a room provided by an indoor designer. (b) A scene graph from Ref. [25]. (c) Our graph-based representation. To simplify this graph for drawing, we have removed nodes representing instances and corresponding is an instance of relations.

    4 Probabilistic mapping of the knowledge graph

To learn the knowledge graph from indoor training scenes, we propose use of a factorization model as an intermediate step. Based on our proposed factorization model, we can evaluate how the learned knowledge graph fits our training data by mapping entities and relations in our knowledge graph to the factorization model. Similar methodology has shown success in semantic processing, e.g., in topic models [26] in natural language processing. By learning the parameters and structure of the factorization model, we get an intermediate representation of hidden knowledge in indoor scene design. We may then map the factorization model back to a knowledge graph, and hence a knowledge graph representation is built.

Specifically, a factor graph [27] is utilized as our factorization model, which is a bipartite graph representing the factorization of a function. Handcrafted factor graphs have shown success in fully-automatic synthesis of indoor scenes with limited diversity of object types [5]. A factor graph has two types of nodes: variable nodes that encode random variables involved in the model, and factor nodes that encode the factors of the function, each depending on a subset of the variables. Given n variables X_1, ..., X_n and m factors with dependent variable sets S_1, ..., S_m, we can build a factor graph with n + m nodes, including n variable nodes and m factor nodes. If X_i ∈ S_j, we connect the i-th variable node and the j-th factor node, which represents that the j-th factor takes the i-th variable as a dependent variable.
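A minimal Python sketch of this bipartite structure is given below: n variable nodes, m factor nodes, and an edge (i, j) whenever X_i ∈ S_j. The function and its output format are our own illustration.

def build_factor_graph(n_variables: int, factor_scopes: list[set[int]]):
    """factor_scopes[j] is the set S_j of variable indices that factor j depends on."""
    edges = [(i, j) for j, scope in enumerate(factor_scopes) for i in scope if i < n_variables]
    return {"n_variables": n_variables, "n_factors": len(factor_scopes), "edges": edges}

# Three variables, two factors: factor 0 couples X_0 and X_1, factor 1 couples X_1 and X_2.
print(build_factor_graph(3, [{0, 1}, {1, 2}]))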

    4.1 Mapping the knowledge base to a factor graph

To build a proper objective function for indoor scene design and derive a good mapping to our knowledge graph representation, the factor graph should satisfy the following design principles:

• Factor nodes. We should define factor nodes which correspond to the relations defined in Section 3.3, to map our defined factor graph to relations in the knowledge graph.

• Variable nodes. We should define variables that are closely related to objects placed in indoor scene designs, to model the indoor design results. The variables should also correspond to the entities defined in Section 3.2, to support the aforementioned factors.

• Objective function. The objective function to be factorized should be easily optimizable, to support structure and parameter learning.

    We thus define the following variables in the factor graph:

• Room type variables T_1, T_2, .... Each variable of this type encodes a room type entity. Given a room of a single type, we represent it with a one-hot representation, setting the variable corresponding to the room type to 1, and those for other room types to 0. Given a room of mixed type, e.g., a dining room with an integrated kitchen, we set [T_1, T_2, ...] as a weighted vector of length 1.

• Object type count variables C_1, C_2, .... Each variable of this type encodes the number of objects of this type we plan to place in the room (which is not the same as the actual number of objects placed). We split this variable into the sum of three components, C = C^I + C^D + C^T, representing the numbers of objects planned for supporting inheritance relations (R5), dependency relations (R2), and functionality relations (R1) respectively, as explained later in the factor definitions.

• Instance placement variables I_1, I_2, .... Each variable of this type, I = {instance, x, o}, encodes the selection, placement x, and orientation o of an object instance in the room.

In this model, only the room type and the placements of the instances (T, I) are observed. We define our objective function in a probabilistic form p(T, I | relations), which allows us to utilize probabilistic optimization methods in the estimation step.

    The following factors are defined to represent the objective function in a probabilistic form:

• Functionality factors. A functionality factor corresponds to the relation R1 and measures how the number C^T of objects planned based on the room type fits the desired room type and the corresponding “necessity” property in the R1 relation. We define a generative process for the C^T variables using the following pipeline:

1. For each room type i in the room type vector, sample the total number of all types of objects planned in the process, n_i = Poisson(S T_i), featuring the room's functionality, where S is proportional to the area of the room. Such sampling with a Poisson distribution is conventional in topic models [28].

2. Plan the number of objects to be placed in the room corresponding to each function:

Here pt_{i,j} is the value of the field “necessity” of the relation 〈jth object type, R1, ith room type〉, or 0 if the relation does not exist in the knowledge graph. g_j is the field “generative probability” defined in the attributes of each object type. Hence we get a factor p(C^T | T, room) = ∏_i p(C^T_i | n) p(n | T, room) = f(C^T, n) g(n, T, room), as a factorized probability in the factor graph (a sketch of this generative process appears after this list of factors).

• Selection factors. A selection factor corresponds to relation R5, and models how we choose types of hyponym objects, e.g., whether to choose a chair or a sofa when a seat is needed. It measures how the number C^I of objects planned from room types fits the knowledge graph and the property “drilling probability” in R5. We define the generative process of variables C^I as follows:

Here pi_{i,j} is defined as the value of the field “drilling probability” of the relation 〈jth object type, R5, ith object type〉, or 0 if the relation does not exist in the knowledge graph. We thus get a factor p(C^I | C) = ∏_i p(C^I_i | C) = f(C^I, C) = f(C) as a factor node in the factor graph.

• Dependency factors. A dependency factor corresponds to the relation R2 and models how we choose some types of objects to work with other objects, e.g., to choose a seat to work with a TV. It measures how the number C^D of objects planned from the relation in the knowledge graph fits the property “dependency” in R2. We define the generative process of variables C^D as follows:

Here dep_{i,j} is defined as the value of the field “dependency” of the relation 〈jth object type, R2, ith object type〉, or 0 if the relation does not exist in the knowledge graph. By modeling dependencies among object types in our knowledge base in this way, we get a factor p(C^D | C) = ∏_i p(C^D_i | C) = f(C^D, C) = f(C) as a factor node in the factor graph.

• Placement factors. A placement factor corresponds to the relation R3 and models how we place object instances, including determining the location x and orientation o, e.g., placing a seat in front of a TV, so that it faces the screen of the TV. It measures how the layout in the room fits the facts and the parameters in the knowledge graph. From the defined properties of R3 in Section 3.3, we can get a group of extracted properties:

Here u, v, w, and t denote the primary direction, projected placement, relative orientation, and primary distance of object instance i relative to object instance j.

Given an object instance pair (i, j), we only add a factor node when the hypernyms of their object types have relations of type R3. We denote this hypernym pair as (i′, j′). We define the generative process for u, v, w, t as follows:

Here z_{i,j} is the identifier variable of this mixture model. All parameters are generated with a normal distribution, except for the primary distance, for which a beta distribution can better model the asymmetric impact of distance than a normal distribution. E.g., watching TV from 1.5 m away is the best distance; 2.5 m is also a good distance, but 0.5 m is too close.

The values of k, b, μ, σ are learnable parameters in the attributes of the R3 relation. We write our factorized probability as p(I_i | I_j) = f(I_i, I_j) = p(t_{i,j} | I_j) p(u_{i,j} | I_j) p(v_{i,j} | I_j) p(w_{i,j} | I_j) / Z_{i,j}, where Z_{i,j} is a normalizing constant. We denote by θ_{i,j} the parameters in this factor: k_{i′,j′}, β_{i′,j′}, b_{i′,j′}, μ_{u,i′,j′}, μ_{v,i′,j′}, μ_{w,i′,j′}, σ_{u,i′,j′}, σ_{v,i′,j′}, and σ_{w,i′,j′} (a small sketch of evaluating this factor appears at the end of this subsection).

• Instance selection factors. An instance selection factor corresponds to relation R4. We define it as 1 when, for each leaf object type i in the hierarchy, the number of instances of that object type placed in the room is exactly C_i, and 0 otherwise. This gives the factorized probability p(I | C) = f(I, C).

• Eligibility factor. This corresponds to constraints in indoor scene layouts. We set it to 0 when there are collisions among the objects, or an object is floating in the air or placed outside the room, and 1 otherwise. The factorized probability is p(T, C, I) = f(T, C, I).
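Returning to the functionality factors above, the following Python sketch illustrates their generative process: for each room type i, draw n_i ~ Poisson(S T_i) and then distribute the planned objects over object types. Distributing with a multinomial over the (normalized) necessities pt_{i,j} plus generative probabilities g_j is our own assumption, since the paper's exact planning equation is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def sample_functionality_counts(T, S, pt, g):
    """T: room-type weights; S: area scale; pt[i, j]: necessity of object type j in room
    type i; g[j]: generative probability of type j. Returns planned counts C^T per type."""
    n_types = pt.shape[1]
    counts = np.zeros(n_types, dtype=int)
    for i, t_i in enumerate(T):
        if t_i == 0:
            continue
        n_i = rng.poisson(S * t_i)                 # total objects planned for this function
        w = pt[i] + g                              # assumed mixing of necessity and decoration
        w = w / w.sum() if w.sum() > 0 else np.full(n_types, 1.0 / n_types)
        counts += rng.multinomial(n_i, w)
    return counts

# One-hot "dining room" with two object types (dining table, TV):
print(sample_functionality_counts(T=[1.0, 0.0], S=8.0,
                                  pt=np.array([[0.9, 0.3], [0.1, 0.8]]), g=np.array([0.0, 0.0])))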

    We thus get the final factorized objective function as follows:

This gives a factor graph representation of our knowledge in indoor scene design, as shown in Fig. 4.

    Fig.4 Mapped factor graph of knowledge in indoor scene design.
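As promised above, the following sketch evaluates the unnormalized placement factor f(I_i, I_j) ∝ p(t | I_j) p(u | I_j) p(v | I_j) p(w | I_j): normal densities for the primary direction u, projected placement v, and relative orientation w, and a beta density (with a learnable range b) for the primary distance t. For brevity u, v, w are treated as scalar summaries, and the parameter values are invented; this is an assumed parameterization, not the trained model.

from scipy.stats import norm, beta

def placement_factor(t, u, v, w, theta):
    """theta holds the learnable parameters of one R3 relation (one mixture component)."""
    p_t = beta.pdf(t, theta["k"], theta["beta"], scale=theta["b"])   # distance in (0, b)
    p_u = norm.pdf(u, theta["mu_u"], theta["sigma_u"])
    p_v = norm.pdf(v, theta["mu_v"], theta["sigma_v"])
    p_w = norm.pdf(w, theta["mu_w"], theta["sigma_w"])
    return p_t * p_u * p_v * p_w      # unnormalized; divide by Z_{i,j} to get p(I_i | I_j)

theta = {"k": 2.0, "beta": 5.0, "b": 4.0,
         "mu_u": 0.0, "sigma_u": 0.2, "mu_v": 0.0, "sigma_v": 0.3, "mu_w": 0.0, "sigma_w": 0.3}
print(placement_factor(t=1.5, u=0.05, v=-0.1, w=0.1, theta=theta))  # e.g., a seat 1.5 m from a TV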

    4.2 Learning parameters and structure from training data

We learn the knowledge graph from a given training dataset with indoor scenes designed by professional designers. Such a learning problem includes three coupled tasks: learning the structure of the knowledge graph, estimating the parameters of relations, and inferring the hidden variables in the factor graph.

Existing work [29] on structure learning of factor graphs has shown that both structure learning and parameter inference can be completed in polynomial time. However, the algorithm given there does not fit our work for the following reasons. (i) It assumes that all the variables in the factor graph are observable. However, in our work, only the variables I and T, which represent the objects in the room and the type of the room, are observed. (ii) It assumes discrete probability distributions over finite sets for each variable. In our case, although n and C are discrete variables, each of their components ranges from 0 to +∞, which is not a finite set. To make matters worse, the variables I are not discrete variables. Thus we propose our own solver for this problem.

As all parameters in the factors are independent of each other, optimization of the factorized objective function can also be performed separately:

As shown in the equation above and Fig. 4, we split the multi-task optimization problem into the following parts:

• Instance selection. This corresponds to the factor nodes in red (factors without parameters) or blue (factors with parameters) in Fig. 4, and focuses on the first term in the equation above. The optimization problem includes discrete variables and is a parameter inference problem. The number of factors in this part is fixed. As the structure of the knowledge graph is mapped implicitly to nonzero elements in pt, pi, and dep, this optimization part only needs to carry out (i) the inference of hidden variables C and n, and (ii) the estimation of parameters pt, pi, and dep. By solving for them jointly, we can recover both structure and parameters in our knowledge graph from the learned parameters.

• Instance placement. This corresponds to the green factor nodes in Fig. 4 and focuses on the second term in the equation above. The optimization problem involves continuous variables but no hidden variables. These factors are associated with the R3 relation in the knowledge graph explicitly, and the number of such factors needs to be learned. As a result, this part needs to (i) learn the structure related to this part and (ii) estimate the parameters θ_{i,j}.

Although the original learning problem has complex coupled tasks, the sub-problems we obtain above are much simpler. We can adapt existing solvers for both sub-problems independently:

• Instance selection. This includes coupled tasks of hidden variable inference and parameter estimation, which is a common problem in semantic modeling algorithms used with, e.g., topic models [28]. Given a group of parameters and hidden variables in this part, we can quickly evaluate the resulting objective function value for this part. However, taking the derivative of the objective function with respect to each separate variable or parameter is difficult. Following work in semantic modeling, the Gibbs sampling method [30] is utilized to solve the coupled problem of hidden variable inference and parameter estimation. However, the total number of relations learned by the sampling algorithm, or the number of non-zero elements in the parameters, might be huge. This could cause a severe overfitting issue given a limited number of training samples. Thus the Bayesian information criterion (BIC) [31] is employed in our objective function, to extract only the most salient [32] relations from our training samples. The BIC is defined as k log(n_sample) − 2 log L̂, where n_sample, k, and L̂ denote the number of training samples, the number of free parameters in the model, and the maximized value of the likelihood function, respectively. Accordingly, the objective function for the Gibbs sampling solver is set to −BIC:

where k is the number of non-zero elements in all parameters. With Gibbs sampling, we can get sparse parameters, and map each non-zero element to a relation in our knowledge graph (a small numerical sketch of the BIC criterion appears after this list).

• Instance placement. This includes coupled tasks of structure and parameter estimation, which is a traditional problem in probabilistic graph models. Using our factorized objective function, each parameter θ_{i,j} can be estimated independently from other parameters and structure, as I is observable in the model; this is a simple parameter estimation problem with maximum a posteriori (MAP) estimation. The only difficulty in this problem is estimating the normalizing factor Z_{i,j}, as it requires integration over a joint distribution of all parameters in θ_{i,j}. In our implementation, this is done with the VEGAS [33] Monte Carlo method.

As each factor p(I_i | I_j, θ_{i,j}) corresponds to a relation 〈i, R3, j〉 in our knowledge graph, we can also apply BIC to estimate the set of R3 relations:

    which can be solved with greedy search.
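The BIC-based score used in both sub-problems can be illustrated with a small numerical sketch: the solver maximizes −BIC = 2 log L̂ − k log(n_sample), so a sparser parameter set (smaller k) wins unless the denser one improves the likelihood enough. The numbers below are invented for illustration.

import math

def neg_bic(log_likelihood: float, n_nonzero_params: int, n_samples: int) -> float:
    bic = n_nonzero_params * math.log(n_samples) - 2.0 * log_likelihood
    return -bic

# The sparse model scores higher here: each extra non-zero parameter must raise the
# log-likelihood by at least 0.5 * log(n_samples) to pay for itself.
print(neg_bic(log_likelihood=-120.0, n_nonzero_params=10, n_samples=50))
print(neg_bic(log_likelihood=-118.0, n_nonzero_params=30, n_samples=50))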

    5 Applications

    5.1 Data sources

We have trained different versions of our knowledge base with a mixture of data sources, to support different indoor design tasks:

• Dataset A, built with 50 Autodesk Revit [1] projects collected from professional designers, including 460 rooms.

• Dataset B, built with 1194 rooms designed by players of a video game [34], with both usability and aesthetics considered.

• Dataset C, built with 233 rooms from the Stanford Scene Dataset [11] by non-professional users.

The library of objects was constructed from man-made objects from ShapeNet [23], and we also added objects from Datasets A, B, and C to it.

For object types and objects in the library, the R4 relation was obtained by classification, learned with MVCNN [35]. The resulting knowledge graph learned from each dataset consisted of more than 200,000 entities and 200,000 relations, most of which were object instance entities and R4 relations.

    We now show the effectiveness of our knowledge graph representation with several applications.

    5.2 Alternative indoor scene design

One typical requirement for indoor scene design is to generate alternative designs. Given a design of a room, a user may have certain preferences for the room, and thus may like to generate alternative designs by changing the type of an object, changing a selected instance of a certain type, or moving some instances in the room. Thus it is desirable to generate alternative designs efficiently with these preferences addressed automatically. This application is similar to scene evolution [36], which aims to evolve layouts given a sequence of human activities. The major difference between alternative indoor scene design and scene evolution is their desired outputs: alternative indoor scene design tries to generate a tidy layout to assist scene designers, whereas scene evolution aims to generate realistic and messy layouts that correspond to human activities.

One major challenge in generating alternative indoor scenes is to find the lead–lag relations in the scene. For example, when a user moves a table, it is highly likely that their intention is to move the chairs around it together with it. However, when a user moves a chair, it is likely that they just want to move the chair elsewhere. Such challenges can be addressed with a simple graph-based algorithm by analyzing the lead–lag relations.

    We address the problem with the following algorithm:

When the placement of an instance in the scene is changed, or when we replace an instance by another instance of the same object type, we simply check for a connected component from the changed instance in our knowledge-graph-based representation with only R3, R4, and R5 edges considered, and the placements of those instances in the connected component are to be influenced. Thus when the placement of a chair is changed, it does not influence the placement of the table, as there is no path representing dependency in the graph representation. In contrast, when a table is moved, we will move any chairs around it.

When an instance is replaced by another instance of a different object type, we check in our knowledge graph and compare the contexts of these object types in the knowledge graph. We then modify the scene accordingly. For example, a non-wall-mounted TV needs a base cabinet to support it, and a wall-mounted TV does not. When we replace a non-wall-mounted TV with a wall-mounted one, we should try to remove the base cabinet if the base cabinet is not a dependency of any other instance in the scene.
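The first case above can be illustrated with a short Python sketch: collect every instance reachable from the moved one through R3, R4, and R5 edges, following the lead–lag direction. Reading a triple 〈a, R3, b〉 as "a is placed relative to b", so that moving b drags a but not vice versa, is our interpretation of the table/chair example; the graph encoding is also our own.

from collections import deque

def influenced_instances(changed: str, edges: list[tuple[str, str, str]]) -> set[str]:
    """edges: (head, relation, tail) triples of the scene's subgraph."""
    allowed = {"R3", "R4", "R5"}
    dependents: dict[str, set[str]] = {}
    for head, rel, tail in edges:
        if rel in allowed:
            dependents.setdefault(tail, set()).add(head)   # head depends on tail
    seen, queue = {changed}, deque([changed])
    while queue:
        for nxt in dependents.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {changed}

scene = [("chair_1", "R3", "table"), ("chair_2", "R3", "table"), ("tv", "R2", "table")]
print(influenced_instances("table", scene))    # {'chair_1', 'chair_2'}: chairs follow the table
print(influenced_instances("chair_1", scene))  # set(): moving a chair leaves the table alone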

We show a typical example in Fig. 5. Figure 5(a) shows the original design of a living room, which can also be used for dining. As the room is nearly empty, the user first moves the table to the left half of the room (see Fig. 5(b)) and the chairs are also moved by our algorithm. The TV is not moved, according to our lead–lag analysis based on the knowledge graph, and our algorithm adds an armchair in front of the TV after re-analyzing the dependency relations of the TV. When the TV is changed to a wall-mounted TV (see Fig. 5(c)), our algorithm automatically removes the TV cabinet that originally supported the non-wall-mounted TV. In Fig. 5(d), the square table is replaced by a long table not included in the training dataset. Our algorithm automatically transfers the R3 relation learned from other tables, and places some chairs around it. The right half of the room is still quite empty, so the user decides to add a coffee table (see Fig. 5(e)), and our algorithm changes the seat originally placed there to a sofa, places it next to the wall, and automatically adds another seat. It also adds two side tables beside the sofa. In Fig. 5(f), we move a chair around the dining table, to simulate a scenario of a family sitting around the coffee table, watching TV. In this case, the dining table in the chair's context is not moved, by the lead–lag analysis.

For the example in Fig. 5(e), we further compare our knowledge graph representation with the pairwise co-occurrence modeling used by Refs. [10, 25], as shown in Figs. 5(g) and 5(h), respectively. The major differences between these two representations are: (i) ours uses directed relations between entities, and (ii) ours introduces several levels of hypernyms for each object. In the alternative scene design application, such differences have significant benefits. By using directed relations, we can find lead–lag relations for those objects, and analyze dependencies between the objects. Figures 5(b), 5(d), and 5(e) show some successful modifications according to the dependency analysis, while Fig. 5(f) demonstrates the necessity of such analysis. By introducing different levels of hypernyms, the relations among the entities are easier to transfer, as shown in Fig. 5(d). A graphical representation of such transfer is presented in Figs. 5(g) and 5(h), where we add a single-seater sofa not included in the training dataset. As shown in Fig. 5(g), the relations of its hypernyms “sofa” and “seat” can be implicitly transferred to the hyponym “single-seater sofa” with a single R5 relation, whereas in co-occurrence models lacking a lexical hierarchy, the user has to go through all similar concepts and manually decide which co-occurrence relations to transfer. Such transferability can be very helpful in reducing the required scale and coverage of training data.

Fig. 5 An example of alternative room design showing several steps. (a) Original room design. (b) The user moves the table to the left half of the room. (c) The user changes the TV to a wall-mounted one. (d) The user changes the square table to a long table. (e) The user adds a coffee table in front of the TV. (f) We move a chair around the dining table. (g) Underlying knowledge graph for (e), with various entities and relations removed for clearer illustration. (h) Co-occurrence-based representation for (e) as commonly used in previous work.

    5.3 Inference tasks

Some interactions supported by existing work can be formulated as a simple MAP hidden variable inference problem based on our factor graph representation. The most typical interactions include finding a proper location for a user-selected instance [4], retrieving an instance for a user-selected placement [9], etc.

We present a typical example of inference tasks in Fig. 6. Figure 6(a) shows the result of finding a proper location for a chair in the room. The red chairs show six local maxima of posterior probability for placing the chair, indicating their R2 and R3 relations to the tea table and dining table, where the R2 relation encodes the table that the chair depends on, and its position and rotation are further determined by the parameters of the R3 relation. Figures 6(b) and 6(c) show the result of retrieving instances with user-selected locations, with the relations of small objects learned from Dataset B. By retrieving R2, R4, R5 relations in the knowledge graph for the hyponym concept and corresponding instances to be placed in the room, filtered by R3, our algorithm identifies a plant, a tablet, a cabinet, and a plate for those locations. However, due to the limitation of our knowledge graph representation, without function labels for objects, our algorithm may select instances with duplicated function (see Fig. 6(d)).

    5.4 Fully-automatic indoor design generation

Another challenging problem is the fully-automatic indoor design generation problem. Existing work [5] in open world synthesis can only synthesize a room with a certain room type with manually designed factor-based criteria for specific room types. Here we replace their objective function with our factorized formulation based on our knowledge graph, to adapt the model to different room types. Instead of designing criteria for each specific type, we include some general aesthetic measures [37] to avoid strange results. Figure 7 shows some results.

Fig. 6 Typical inference tasks. (a) Finding a proper placement for a user-selected instance. (b, c) Retrieving instances for selected placements. (d) A typical failure in retrieving instances for selected placements: both knife & fork and chopsticks are added.

Fig. 7 Some rooms generated by automatic indoor design. Each case takes a different room type vector T as input, to generate (a) a bedroom, (b) a living room, (c) a dining room integrated with a kitchen, and (d) a toilet.

    6 Discussion and conclusions

This work has proposed a knowledge-graph-based framework which addresses many practical problems in computer-aided indoor scene design. The relations can be learned with small-scale training datasets of indoor scenes, and can be easily transferred and adapted to suit practical needs. Various practical examples show that our framework is effective and can be easily embedded into existing applications. However, we can still point out several limitations and possible improvements of our work.

Firstly, to overcome sparsity and lack of training data, we employ an ontology based on ShapeNet [23] to assist our knowledge graph construction, which partially solves the issue. However, given a limited number of training scenes, we cannot claim that our knowledge graph trained with this dataset contains every fact about indoor scene design, since for some rare object types, we can find few if any instances in the training scenes, which makes it difficult to learn their relations. Possible improvements for such problems may include employing knowledge graph embedding techniques [38], which have shown success in relation prediction for long-tail entities.

Secondly, some placements of objects are not based on the contextual relations between them, but on their context in human interaction. For example, we may learn to “place a knife to the right side of a plate” and “place a fork to the left side of a plate”. When we do not place a plate in the scene, algorithms based on our framework do not know how to place the knife and fork, even if a napkin has been placed. However, with the underlying fact “it is conventional for a person to use a knife with their right hand and a fork with their left hand”, a person can easily know how to place them. Modeling the human context makes it possible to handle cases like this. Therefore, a possible direction for further improvement is to include human activity information [36, 39] in our knowledge graph, an approach which has shown success both in realistic scene synthesis and in using evolutionary instructions.

    Acknowledgements

This work was supported by the National Key R&D Program of China (No. 2017YFB1002604), the National Natural Science Foundation of China (No. 61772298), a Research Grant of Beijing Higher Institution Engineering Research Center, and the Tsinghua–Tencent Joint Laboratory for Internet Innovation Technology.
