
    A visual awareness pathway in cognitive model ABGP①

    High Technology Letters, 2016, Issue 4 (2016-12-29)

    Ma Gang (馬 剛), Yang Xi, Lu Chengxiang, Zhang Bo, Shi Zhongzhi

    (*The Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P.R.China)
    (**University of Chinese Academy of Sciences, Beijing 100190, P.R.China)


    The cognitive model ABGP is a special model for agents, consisting of awareness, beliefs, goals and plans. Like most agent architectures, ABGP agents obtain knowledge from natural scenes only through a few single pre-established rules. Inspired by the biological visual cortex (V1) and the higher brain areas that perceive visual features, deep convolutional neural networks (CNN) are introduced as a visual pathway into ABGP to build a novel visual awareness module. A rat-robot maze search simulation platform is then constructed to validate that CNN can serve as the awareness module of ABGP. According to the simulation results, the rat-robot implemented by ABGP with the CNN awareness module achieves excellent performance in recognizing guideposts, which directly enhances the communication between the agent and natural scenes and improves its ability to recognize the real world, demonstrating that an agent can independently plan its path according to natural scenes.

    ABGP, visual cortex (V1), convolution neural networks (CNN), awareness, visual pathway

    0 Introduction

    Rational agents have an explicit representation of their environment (sometimes called a world model) and of the objectives they are trying to achieve. Rationality means that the agent will always perform the most promising actions (based on knowledge about itself and sensed from the world) to achieve its objectives. For a rational agent facing a complex natural scene, how can it get knowledge from the scene to drive its actions? Most agent designers share a common practice: they either create a virtual scene or set some single inflexible rules for the agent to recognize its surroundings.

    There exist numerous agent architectures as mentioned above, such as BDI[1,2], AOP[3], SOAR[4] and 3APL[5], in which the communication of information between the agents and the world is likewise based on single fixed rules. With respect to its theoretical foundation and the number of implemented and successfully applied systems, the belief-desire-intention (BDI) architecture designed by Bratman[1] as a philosophical model for describing rational agents is probably the most interesting and widespread agent architecture. There is also a wide range of agents characterized by the BDI architecture; one of them is the ABGP model (a 4-tuple BDI agent cognitive model shown in Fig.1) proposed by Shi et al.[6]

    The ABGP model consists of the concepts of awareness, beliefs, goals, and plans. Awareness is an information pathway connecting to the world (including the natural scenes and the other agents in a multi-agent system). Beliefs can be viewed as the agent’s knowledge about its setting and itself. Goals make up the agent’s wishes and drive the course of its actions. Plans represent the agent’s means to achieve its goals. However, the ABGP agent still has a disadvantage: it only transfers messages in a special fixed format from the world. Obviously, if an agent constructed with the ABGP model (ABGP-agent) is expected to sense natural scenes directly, a new awareness pathway must be proposed.

    Motivated by the above analysis, the primary work of this paper is to construct a novel, flexible, natural communication channel between the ABGP-agent and natural scenes by introducing artificial neural networks as the environment visual awareness pathway. An ABGP-agent using artificial neural networks as its visual awareness pathway will be a more natural (human-like) model and a higher-level simulation of artificial organisms. But which artificial neural network is better suited for the visual awareness pathway?

    Convolutional neural networks (CNN), rooted in Hubel and Wiesel’s early work on the cat’s visual cortex[7], function somewhat like the human visual pathway. A deep CNN with a trainable multiple-stage architecture can learn invariant features[6,8]. Each stage in a deep CNN is composed of a filter bank, some non-linearities, and feature pooling layers. With multiple stages, a deep CNN can learn multi-level hierarchies of features. That is the reason why deep CNNs have been successfully deployed in many commercial applications from OCR to video surveillance[9].

    Taking all of the above into account, a deep CNN based on the biological visual cortex theory is well suited to serve as an environment visual awareness pathway in the cognitive model ABGP. Therefore, the main work of this study is to embed the deep CNN into the awareness module of ABGP as a novel environment visual awareness pathway, and to apply it to the rat-robot maze search.

    1 Cognitive model ABGP

    In computer science and artificial intelligence, an agent can be viewed as perceiving its environment through sensors and acting upon that environment using effectors[10]. A cognitive model for a rational agent should especially consider the external perception and the internal mental state. External perception, as knowledge, is created through the interaction between an agent and its world. For the internal mental state, the BDI model conceived by Bratman[1] is consulted as a theory of human practical reasoning. Its especially attractive characteristic is that it reduces the explanation framework for complex human behavior to the motivational stance[11].

    The cognitive model ABGP (Fig.1) proposed by Shi et al.[6] is one of the most typical agent models characterized by the BDI architecture. It is represented as a 4-tuple framework ⟨Awareness, Belief, Goal, Plan⟩, where awareness is an information pathway connecting to the world and to the relationships between agents. Belief can be viewed as the agent’s knowledge about its environment and itself. Goal represents the concrete motivations that influence an agent’s behaviors. Plan is used to achieve the agent’s goals. Moreover, an important module, policy-driven reasoning, is used in the ABGP model to handle a series of events for plan selection.

    Fig.1 Cognitive ABGP model[6]

    1.1 Awareness

    Awareness in the cognitive ABGP agent model should be considered as the basic elements and relationships related to its setting, and can be defined as a 2-tuple ⟨Elements, Relationships⟩. The elements of awareness mainly concern actions (what the agents are doing) and abilities (what they can do). The basic relationships in Ref.[6] comprise task relationships, role relationships, operation relationships, activity relationships and cooperation relationships. Task relationships define task decomposition and composition. Role relationships describe the roles of agents in multi-agent activities. Operation relationships represent the operation set of agents. Activity relationships give the activity a role should adopt at a given time. Cooperation relationships construct the interactions between agents[6].

    1.2 Belief

    Belief for an agent can commonly be seen as the knowledge base built from its internal state and external world, which contains abundant content, including basic axioms, objective facts, data, etc. A 3-tuple ⟨T, S, B⟩ can be used to depict the structure of the belief knowledge base, where T describes the basic concepts in the specific field, their definitions, and the axioms over the domain concepts; S is the set of constraints between domain facts and formulas, among which the causal relationships (causality constraint axioms) ensure the consistency and integrity of the knowledge base; B is the set of beliefs in the current state, containing facts and data. The contents of B change dynamically along with the agent’s external perception and internal mental state.

    1.3 Goal

    Goal represents the agent’s motivational stance and is the driving force for its actions. Four typical goal structures are the perform, achieve, query, and maintain goals[12,13]. A perform goal specifies some activities to be done, so the outcome of the goal depends only on whether the activities were performed[14]. An achieve goal represents a goal in the classical sense by specifying what kind of world state the agent wants to bring about in the future[13]. A query goal is used to enquire about a specified issue. A maintain goal has the purpose of observing some desired world state; the agent actively tries to re-establish this state when it is violated[13]. The goal deliberation process has the task of selecting a consistent subset of desires. Moreover, all the goals represented above inherit the same generic lifecycle[13].

    1.4 Plan

    Plan represents the agent’s means to act in its environment. The plans predefined by the developer compose the action library that the agent can perform. Depending on the current situation, plans are selected in response to occurring events or goals. The selection of plans is done automatically by the system and represents one main aspect of a BDI infrastructure. When a certain goal is selected, the agent must look for an effective way to achieve it; this reasoning process is called planning. To accomplish plan reasoning, the agent can adopt two approaches: one is using an already prepared plan library to achieve specific or pre-established goals, also called static planning[6]; the other is planning on the fly to achieve the goals based on the beliefs about the current status, namely dynamic planning[6].

    1.5 Policy-driven reasoning

    Policy-driven reasoning mainly performs plan (policy) selection by handling a series of events. A policy will directly or indirectly cause an action a_i to be taken, as a result of which the system or component makes a deterministic or probabilistic transition to a new state S_i. Kephart, et al.[15] outlined a unified framework for autonomic computing policies based upon the notions of states and actions.

    The agent policy model can be defined as a 4-tuple P = {S_t, A, S_g, U}, where S_t is the trigger state set at a given moment in time; A is the action set; S_g is the set of goal states; U is the set of goal state utility functions used to assess the merits of the goal states.
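
    The policy tuple can be sketched in a few lines of Python. This is a minimal illustration of the definition only, not the paper's implementation: the state names, actions and utility values are invented for the example, and the greedy "pick the action leading to the highest-utility state" rule is one plausible reading of how U assesses goal states.

```python
# Minimal sketch of the policy tuple P = {St, A, Sg, U}; all names illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Policy:
    trigger_states: List[str]                  # St: states that may trigger an action
    actions: Dict[str, Callable[[str], str]]   # A: action name -> state transition
    goal_states: List[str]                     # Sg: desired states
    utility: Dict[str, float]                  # U: merit of each goal state

    def select_action(self, state: str) -> str:
        """Pick the action whose resulting state has the highest utility."""
        if state not in self.trigger_states:
            return "noop"
        return max(self.actions,
                   key=lambda a: self.utility.get(self.actions[a](state), 0.0))

policy = Policy(
    trigger_states=["at_junction"],
    actions={"turn_left": lambda s: "corridor_left",
             "turn_right": lambda s: "corridor_right"},
    goal_states=["corridor_right"],
    utility={"corridor_right": 1.0, "corridor_left": 0.2},
)
print(policy.select_action("at_junction"))  # turn_right
```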

    2 Deep convolutional neural networks

    The theory of CNN is primarily rooted in Hubel and Wiesel’s early work (1962) on cat visual cortex neurons with locally sensitive, orientation-selective receptive fields[7,16]. The first implementation of a CNN in computer science, however, was the so-called Neocognitron proposed by Fukushima[17], originally applied to the problem of hand-written digit recognition. After 2006, deep learning described new ways to train CNNs more efficiently, allowing networks with more layers to be trained[18].

    A deep CNN consisting of multiple layers of small neuron collections has been adopted to recognize natural images[19]. Furthermore, some local or global pooling layers may be included in a deep CNN, which combine the outputs of neuron clusters[20]. A typical deep CNN consists of various combinations of convolutional layers and fully connected layers, with a point-wise non-linearity applied at the end of or after each layer[21]. Generally, the combination of a filter bank layer, a non-linearity transformation layer, and a feature pooling layer is called a stage. Fig.2 shows a typical deep CNN framework composed of two stages.

    Fig.2 A typical deep convolution neural networks framework with two feature stages

    2.1 Filter bank layer

    The input is a 3D array with n_1 2D feature maps of size n_2×n_3. Each component is denoted x_{i,j,k}, and each feature map is denoted x_i. The output is also a 3D array y, consisting of m_1 feature maps of size m_2×m_3. A trainable filter (so-called kernel) k_{i,j} in the filter bank has size l_1×l_2 and connects input feature map x_i to output feature map y_j. The module computes y_j = b_j + Σ_i k_{i,j} * x_i, where * is the 2D discrete convolution operator and b_j is a trainable bias parameter[22].
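
    The filter bank computation y_j = b_j + Σ_i k_{i,j} * x_i can be sketched in pure Python as follows. This is an illustrative, unoptimized sketch (a real implementation would use a vectorized library); it uses "valid" discrete convolution with an explicit kernel flip, matching the * operator in the formula.

```python
# Pure-Python sketch of one filter bank layer: y_j = b_j + sum_i (k_{i,j} * x_i).

def conv2d_valid(x, k):
    """2D discrete 'valid' convolution of feature map x with kernel k (flipped)."""
    n2, n3 = len(x), len(x[0])
    l1, l2 = len(k), len(k[0])
    out = [[0.0] * (n3 - l2 + 1) for _ in range(n2 - l1 + 1)]
    for r in range(n2 - l1 + 1):
        for c in range(n3 - l2 + 1):
            s = 0.0
            for u in range(l1):
                for v in range(l2):
                    # kernel is flipped: true convolution, not cross-correlation
                    s += x[r + u][c + v] * k[l1 - 1 - u][l2 - 1 - v]
            out[r][c] = s
    return out

def filter_bank(xs, kernels, biases):
    """xs: list of n1 input maps; kernels[i][j]: kernel from input i to output j."""
    outs = []
    for j, b in enumerate(biases):
        acc = None
        for i, x in enumerate(xs):
            y = conv2d_valid(x, kernels[i][j])
            acc = y if acc is None else [
                [a + v for a, v in zip(ra, ry)] for ra, ry in zip(acc, y)]
        outs.append([[v + b for v in row] for row in acc])
    return outs

x = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
k = [[1.0, 0.0],
     [0.0, 0.0]]  # after flipping, this kernel selects x[r+1][c+1]
print(filter_bank([x], [[k]], [0.5])[0])  # [[5.5, 6.5], [8.5, 9.5]]
```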

    2.2 Non-linearity layer

    2.3 Feature pooling layer

    The purpose of this layer is to build robustness to small distortions, playing the same role as the complex cells in models of visual perception. P_A (average pooling and subsampling) is the simplest way, computing average values over a neighborhood in each feature map. The average operation is sometimes replaced by P_M (max pooling). Traditional deep CNNs use a point-wise tanh(·) after the pooling layer, but more recent models do not. Some deep CNNs dispense with the separate pooling layer entirely and instead use strides larger than one in the filter bank layer to reduce the resolution[24]. In some recent deep versions of CNN, the pooling also pools similar features at the same location, in addition to the same feature at nearby locations[25].
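
    The two pooling operators above, together with a point-wise tanh non-linearity, can be sketched on plain nested lists. This is a generic illustration of non-overlapping pooling, not the paper's code.

```python
# Sketch of a point-wise tanh non-linearity and the PA / PM pooling operators.
import math

def pointwise_tanh(fmap):
    """Apply tanh to every element of a feature map."""
    return [[math.tanh(v) for v in row] for row in fmap]

def pool(fmap, size, op):
    """Non-overlapping size x size pooling with reducer `op` (max or average)."""
    h, w = len(fmap), len(fmap[0])
    out = []
    for r in range(0, h - size + 1, size):
        row = []
        for c in range(0, w - size + 1, size):
            window = [fmap[r + u][c + v] for u in range(size) for v in range(size)]
            row.append(op(window))
        out.append(row)
    return out

avg = lambda w: sum(w) / len(w)   # PA: average pooling
fmap = [[1.0, 2.0, 5.0, 6.0],
        [3.0, 4.0, 7.0, 8.0]]
print(pool(fmap, 2, max))  # PM -> [[4.0, 8.0]]
print(pool(fmap, 2, avg))  # PA -> [[2.5, 6.5]]
```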

    3 Deep CNN adopted to build visual awareness pathway of ABGP

    An agent perceives its environment through sensors and acts upon the environment using effectors. The ways of perceiving are numerous and varied: vision, audition, touch, smell, etc. Among these, vision plays a particularly important role. However, the visual information in most agent models is not perceived directly from natural scenes.

    Therefore, our main work is to introduce deep convolutional neural networks into the awareness module of the ABGP agent model as an environment visual awareness pathway, in line with the mechanisms of biological vision. This not only enhances the capability of communication between the agent and natural scenes, but also improves the agent’s ability to cognize the real world as human beings do.

    3.1 Motivation

    The awareness module in the ABGP model is used to obtain, transform and transmit information from the environment. It first needs to convert the external, original natural scenes into an identifiable internal signal format. From the vision theory of assembling scenes, it can be learned that a good internal representation of natural scenes in a recognition framework should be hierarchical. So the method adopted to implement these functions should be able to represent hierarchical features.

    It is generally known that deep convolutional neural networks (CNN), consisting of multiple trainable stages stacked on top of each other, are inspired by the biological visual neural pathways. Since each level of a deep CNN can represent one level of an image’s feature hierarchy, it is well suited to the hierarchical feature representation of a natural image (Fig.2). Furthermore, recent deep learning research has proposed a series of unsupervised methods that significantly reduce the need for labeled examples, greatly expanding the application domains of deep CNNs. In addition, all the parameters of a deep CNN acting as an environment visual awareness pathway can be viewed as a special internal knowledge the agent owns to represent its cognition of the environment. All this makes the deep CNN an appropriate way to implement the visual awareness pathway in the ABGP model at present.

    3.2 Model architecture

    The cognitive model ABGP with deep CNN (ABGP-CNN) (Fig.3) proposed in this work is also a typical agent model characterized by the BDI architecture. ABGP-CNN still consists of the 4 modules awareness, belief, goal and plan, and the same 4-tuple ⟨Awareness, Belief, Goal, Plan⟩ is used to express the ABGP-CNN model as well. However, compared with the original ABGP model with its single pre-established rules, the awareness module has been changed into a deep CNN, and the parameters of the deep CNN become part of the knowledge in the belief base.

    Fig.3 Cognitive ABGP-CNN model

    Every internal goal action in the ABGP-CNN model must be converted into an event to drive the policy-driven reasoning module. The events consist of internal events (occurring inside an agent) and external events (incoming from the external world, including the natural scenes and the other agents). The goal, embodying the motivational stance, is the driving force behind an agent’s behaviors. Unlike most traditional BDI systems, ABGP-CNN neither simply views goals as a special sort of events nor assumes that all adopted goals need to be consistent with each other, because the goals in ABGP-CNN have their own lifecycles.

    A major attraction of the ABGP-CNN model is the intrinsic properties of the deep CNN, such as non-linearity, robustness and hierarchical feature representation. These properties directly constitute part of the abilities and knowledge of a cognitive agent, enabling the agent, like a human, to recognize the real world. Because of the introduction of the deep CNN in the awareness module, an agent based on the ABGP-CNN model must also undergo a thorough learning process before it can cognize natural scenes.

    3.3 Model implementation

    For an agent based on the ABGP-CNN model, the learning process for recognizing natural scenes should mainly focus on how to train the deep CNN as its awareness module and how to build the appropriate belief base, goals, and plan library. Training the deep CNN includes deciding which multi-stage architecture is appropriate for natural object recognition and which learning strategy is more effective. The aim of building beliefs, goals and plans is to produce a series of behavioral responses by the agent according to the perceived environment information.

    The implementation of the ABGP-CNN model adopts both a declarative and a procedural approach to define its core components awareness, belief, goal and plan. The awareness and plan modules are implemented as ordinary Java classes that extend a certain framework class, thus providing generic access to the BDI-specific facilities. The belief and goal modules are specified in a so-called Agent Definition File (ADF) using an XML language (Fig.4). Within the XML agent definition files, a developer can use valid expressions to specify any designated properties. Some other information is also stored in the ADF, such as default arguments for launching the agent or service descriptions for registering the agent at a directory facilitator. Moreover, awareness and plans need to be declared in the ADF before they work.

    Fig.4 Composition of an ABGP-CNN Agent

    Awareness is commonly viewed as an information path connecting to the environment. The ADF of ABGP-CNN provides a description of the attributes of the deep CNN, such as the number of CNN stages, the number of hidden layers, the filter shape, the pooling size, etc., which can be any kind of ordinary Java objects contained in the awareness set as an XML tuple. These Java objects are stored as named facts.

    Beliefs are facts known by the agent about its environment and itself, usually defined in the ADF and accessed and modified from plans. Generally, each fact can be described as an XML tuple with a name and a class, holding any kind of ordinary Java object.

    Goals in real society are generally viewed as the concrete instantiations of a person’s desires, and the ABGP-CNN model follows this general idea. In particular, the ABGP-CNN model does not assume that all adopted goals need to be consistent with each other, because each goal in the ABGP-CNN model has its own lifecycle consisting of the goal states option, active, and suspended. For ease of use, each goal in an ADF is still depicted as an XML tuple with flags for the 4 goal styles.

    Plans in the ABGP-CNN model can be considered as series of concrete actions expected to achieve some goals or tasks. The ABGP-CNN model adopts the plan-library approach to represent the plans of an agent. Each plan contains a plan head defining the circumstances under which the plan may be selected and a plan body specifying the actions to be executed. In general, to reuse plans in different agents, concrete agent functionality needs to be decomposed into separate plan bodies, which are predefined courses of action implemented as Java classes.
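
    An ADF of the kind described above might look roughly like the following fragment. Every element and attribute name here is an illustrative guess at the layout sketched in Fig.4, not the actual ABGP-CNN schema, and the values echo the configuration used later in the maze experiment.

```xml
<!-- Hypothetical ADF sketch; element and attribute names are illustrative only -->
<agent name="rat-robot">
  <awareness class="DeepCNNAwareness">
    <property name="stages">2</property>
    <property name="filterShape">5x5</property>
    <property name="poolingSize">2x2</property>
  </awareness>
  <beliefs>
    <belief name="environment" class="Environment"/>
  </beliefs>
  <goals>
    <performgoal name="check_guidepost"/>
    <achievegoal name="reach_exit"/>
    <querygoal name="current_position"/>
    <maintaingoal name="keep_moving"/>
  </goals>
  <plans>
    <plan name="checking" body="CheckingPlan"/>  <!-- plan body: a Java class -->
    <plan name="going" body="GoingPlan"/>
  </plans>
</agent>
```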

    3.4 Model execution

    For a complete ABGP-CNN model, awareness, belief, goal and plan are all necessary. The core of ABGP-CNN, like most BDI architectures, is the mechanism for plan selection. Plans in ABGP-CNN have to be selected not only for the awareness information, but also for the goals, the internal events and the incoming messages. Policy-driven reasoning, a specialized module in ABGP-CNN, collects all kinds of internal events and forwards them to the plan selection mechanism. Other mechanisms, such as executing selected plans and keeping track of plan steps to monitor failures, are also required for a complete planning process to execute successfully. Algorithm 1 shows how these modules work together during the interpret reasoning process.

    Algorithm 1 Interpret reasoning process of ABGP-CNN
        Initialize agent's states;
        while not achieve goals do
            Deliberate goals;
            if world information INF or incoming messages IME was perceived then
                Create internal event E according to INF or IME;
                Fill internal event E into event queue EQ;
            end
            Options ← OptionGenerator(EQ);
            SelectedOptions ← Deliberate(Options);
            Plans ← UpdatePlans(SelectedOptions);
            Execute(Plans);
            Plans drive agent's behaviors;
            Drop successful or impossible plans;
        end

    In Algorithm 1, the goal deliberation constantly triggers the awareness module to purposefully perceive visual information from the world in which the agent is located (extracting the visual feature y: y_j = g_j · tanh(Σ_i k_{i,j} * x_i), y ∈ D) and convert that visual feature into unified internal message events placed in the event queue (signal mapping T: D → E). According to the goal events, the event dispatcher continuously consumes events from the event queue (the OptionGenerator(EQ) function) and deliberates on the events satisfying the goals (the Deliberate(Options) function). The policy-driven reasoning module builds the applicable plan library for each selected event (the UpdatePlans(SelectedOptions) operation). In the Execute(Plans) step, the Plan module in Fig.3 selects plans from the plan library and executes them, possibly utilizing the meta-level reasoning facilities. Considering the competition among multiple plans, the user-defined meta-level reasoner ranks the plan candidates according to their priority. The execution of plans is done stepwise and directly drives the agent’s external and internal behavior. Each cycle of goal deliberation is followed by a so-called site-clearing step in which successful or impossible plans are dropped.
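
    The interpret loop of Algorithm 1 can be sketched compactly in Python. The function names mirror the algorithm, but their bodies here are trivial stand-ins of our own invention (the real deliberation, plan library and meta-level reasoning are far richer).

```python
# Sketch of Algorithm 1's interpret loop; helper bodies are stand-ins.
from collections import deque

def option_generator(eq):
    """Consume all pending events from the event queue EQ as candidate options."""
    options = list(eq)
    eq.clear()
    return options

def deliberate(options):
    """Keep only options consistent with the goals (stand-in: keep all)."""
    return options

def update_plans(selected):
    """Map each selected option to an applicable plan (stand-in names)."""
    return [f"plan_for_{o}" for o in selected]

def execute(agent, plans):
    """Stepwise execution: plans drive the agent's behavior."""
    agent["trace"].extend(plans)
    # site-clearing: executed plans are simply not carried to the next cycle

def interpret(agent, perceive, goals_achieved):
    eq = deque()                        # event queue EQ
    while not goals_achieved(agent):    # goal deliberation
        info = perceive(agent)          # world information INF / messages IME
        if info is not None:
            eq.append(info)             # internal event E into EQ
        plans = update_plans(deliberate(option_generator(eq)))
        execute(agent, plans)

agent = {"trace": [], "steps": 0}
def perceive(a):
    a["steps"] += 1
    return f"guidepost_{a['steps']}"

interpret(agent, perceive, lambda a: a["steps"] >= 3)
print(agent["trace"][0])  # plan_for_guidepost_1
```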

    4 ABGP-CNN model applied in rat-robot maze search

    An actual simulation application, the rat-robot maze search (Fig.5), is provided here to demonstrate the feasibility and validity of ABGP-CNN. The following mainly presents the design of the rat-robot agent and the environment agent based on the ABGP-CNN model in the simulation experiment, and shows why the CNN is better suited for the awareness module.

    4.1 Design of agents in rat-robot maze search

    In the maze search there exist two types of agents, i.e., the rat-robot agent and the environment agent. The task of the rat-robot agent is to start moving at the maze entrance and, depending on the guideposts, finally reach the maze exit denoted by a red flag in Fig.5. In order to fulfill the maze search task, the rat-robot agent should implement all four basic modules, <awareness>, <beliefs>, <goals> and <plans>, in the ADF shown in Fig.4. The following gives the detailed configuration of each module for the rat-robot agent.

    The number of stages is set to 2. The filter shape of each stage is configured as 5×5, followed by a pooling size of 2×2. The fully connected part, using logistic regression, is a 2-layer fully-connected neural network with 100 hidden units and 4 outputs.
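
    With 2 stages, 5×5 filters and 2×2 pooling, the per-stage feature map sizes can be checked with a few lines of arithmetic. The 28×28 input size is our assumption (MNIST-sized guidepost images, as used in Section 4.2), not stated explicitly in the text.

```python
# Feature map size after one stage: valid 5x5 convolution then 2x2 pooling.
def stage_out(size, filt=5, pool=2):
    conv = size - filt + 1            # valid convolution shrinks the map
    assert conv % pool == 0, "pooling must tile the conv output evenly"
    return conv // pool               # non-overlapping pooling subsamples it

size = 28                             # assumed MNIST-sized guidepost image
for stage in (1, 2):
    size = stage_out(size)
    print(f"stage {stage}: {size}x{size}")
# stage 1: 12x12
# stage 2: 4x4
```

    The flattened stage-2 maps then feed the fully-connected network with 100 hidden units and 4 outputs.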

    The belief base of the rat-robot agent contains an environment instance of the class Environment. The current position and moving direction of the rat-robot agent are essential parts of this environment knowledge. The weights W and biases b of the deep CNN are likewise a kind of knowledge about the environment.

    Four typical goals are used by the rat-robot agent to successfully complete the maze search. The perform goal checks the guideposts along the path. The achieve goal judges whether the rat-robot agent has reached the maze exit. The query goal obtains the position coordinates of the rat-robot agent. The maintain goal keeps the rat-robot agent moving. The execution of the maintain goal depends on the query goal, because its precondition is that the current position of the rat-robot agent is not equal to the coordinates of the maze exit.

    The plans of the rat-robot agent primarily comprise the checking plan and the going plan. The checking plan is responsible for discovering and recognizing the guideposts while the rat-robot agent is moving. The going plan implements which activity (moving on, moving back, turning left or turning right) should be adopted by the rat-robot agent according to the environment conditions.

    Compared with the rat-robot agent, the configuration of the environment agent is simpler. Its belief base just contains a maze map and a specification of the guideposts as shown in Fig.5. It not only requires an awareness module, but also needs a maintain goal to keep the lifecycle of displaying the maze map. Moreover, the environment agent has a creating plan to create the maze map and the specification.

    Fig.5 Rat-robot maze search

    4.2 Performance evaluation of ABGP-CNN model in rat-robot maze search

    In the rat-robot maze search experiment, the rat-robot agent is designed to have 4 basic behaviors in the maze: moving on, moving back, turning left and turning right. Therefore, a sub-MNIST dataset called mnist0_3 is constructed by extracting 4 types of handwritten digits from the original MNIST dataset, with ‘0’ denoting moving on, ‘1’ moving back, ‘2’ turning left and ‘3’ turning right. The dataset mnist0_3 consists of 20679 training samples, 4075 validation samples and 4257 test samples.

    A successful maze search in Fig.5 is defined as the rat-robot agent starting from the maze entrance and reaching the maze exit marked with a flag. While moving, the rat-robot agent is guided by the guideposts along the path; its search is considered a failure if it incorrectly recognizes any one of the guideposts on the moving path. There are 32 guideposts in the maze path shown in Fig.5, which means that to complete one maze search successfully the rat-robot agent must correctly recognize all 32 guideposts. The performance validation of the deep CNN thus becomes the recognition rate of a sequence of length 32. Therefore, a small change is made to the test set of mnist0_3: a series of sequences, shown in Fig.6, is constructed from it. Each row of length 32 in Fig.6 is a sequence with the structure [2,0,3,3,0,2,2,0,3,2,3,2,1,2,3,0,3,2,2,3,2,3,3,1,3,2,2,1,2,2,0,3], corresponding in order to all the guideposts in the maze path. The guideposts with the same value in each column of Fig.6 are randomly selected from the test set of mnist0_3. A new test set of 10000 sequences is constructed according to this principle.

    Fig.6 10 Guideposts sequence test set

    The original ABGP model has a complete mechanism for goal pursuit, event responding and action planning in its internal world. These mechanisms execute accurately given the belief knowledge base and the information from the environment. If the belief knowledge is represented precisely, the performance of the agent’s action planning is primarily decided by the information from the environment. The belief knowledge in the maze search experiment is indeed precisely represented, with ‘0’ denoting moving on, ‘1’ moving back, ‘2’ turning left and ‘3’ turning right.

    To assess the performance of the ABGP-CNN model, two other methods, the multi-layer perceptron (MLP) and the support vector machine (SVM), are used for the awareness module of ABGP to construct two comparison models: ABGP with MLP awareness (ABGP-MLP) and ABGP with SVM awareness (ABGP-SVM). For ABGP-MLP, the MLP is a common neural network with 3 hidden layers using the activation function tanh(·) and a logistic regression output layer. For the SVM structure in ABGP-SVM, 4 types of kernel functions, linear u^T·v, polynomial (γ·u^T·v + c)^d, radial basis function exp(-γ·|u-v|²) and sigmoid tanh(γ·u^T·v + c), are adopted to construct 4 different SVMs. Table 1 shows the success rates with which agents designed on the ABGP model with the 3 different awareness modules recognize all the guideposts in the maze path.

    Table 1 Recognition rate of MLP, SVM, CNN as the awareness module in ABGP for 10000 guidepost sequences

    In Table 1, Input-C-C-H-Log means the deep CNN has two convolutional layers C, a hidden layer H and a logistic regression output layer Log. It is not hard to see that the MLP and CNN awareness modules share a similar hierarchy. Table 1 shows that ABGP-MLP, ABGP-SVM and ABGP-CNN all keep a high recognition rate for single guideposts, with ABGP-CNN the highest among them. For recognizing the full sequence, however, ABGP-CNN achieves an excellent 91.52%, well ahead of ABGP-MLP and ABGP-SVM. In particular, although ABGP-MLP and ABGP-CNN have similar awareness module structures, the performance of ABGP-CNN on the sequences is significantly better than that of ABGP-MLP.
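
    A rough consistency check of our own (not from the paper): if guidepost recognition errors were independent, a length-32 sequence would be recognized with probability p^32, so the reported 91.52% sequence rate implies a per-guidepost accuracy near 99.7%, which illustrates how sharply sequence recognition amplifies small single-guidepost error differences.

```python
# Back-of-the-envelope link between single-guidepost and sequence accuracy,
# assuming independent errors across the 32 guideposts.
p_seq = 0.9152                  # reported ABGP-CNN sequence recognition rate
p_single = p_seq ** (1 / 32)    # implied per-guidepost accuracy
print(f"{p_single:.4f}")        # 0.9972
print(f"{p_single ** 32:.4f}")  # 0.9152
```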

    Therefore the rat-robot agent based on the ABGP-CNN model passes most easily through the maze from entrance to exit. This, besides its architecture being inspired by biological vision, is why the deep CNN, with its excellent performance in recognizing the guidepost sequences, is better suited for the awareness module of ABGP.

    5 Conclusion and future work

    Inspired by biological visual theory, a novel cognitive model with a BDI architecture, ABGP-CNN, is proposed by introducing the deep CNN as a visual pathway into the awareness module of the cognitive model ABGP and applying it to the rat-robot maze search. The awareness module based on the deep CNN can directly recognize the guideposts in natural scenes and accurately guide the rat-robot agent’s behaviors, which enhances the communication between the agent and natural scenes and gives the agent a human-like visual cognitive ability toward the real world. In future work, the ABGP-CNN model will be applied to the real world with an actual robot, and an appropriate guidepost database will be constructed to train the ABGP-CNN agent. In addition, the agent’s learning algorithms and better awareness structures for different natural scenes will also be research foci.


    Ma Gang, born in 1986. He is studying for a Ph.D. degree in Computer Software and Theory at the Institute of Computing Technology, Chinese Academy of Sciences. He received his B.S. degree in Computer Science and Technology and M.S. degree in Computer Application Technology from China University of Mining and Technology in 2010 and 2013, respectively. His research interests include artificial intelligence, machine learning, deep learning and complex networks.

    10.3772/j.issn.1006-6748.2016.04.008

    ① Supported by the National Basic Research Program of China (No. 2013CB329502), the National Natural Science Foundation of China (No. 61035003, 61202212) and the National Science and Technology Support Program (No. 2012BA107B02).

    ② To whom correspondence should be addressed. E-mail: mag@ics.ict.ac.cn Received on Aug. 10, 2015
