
Interactions Between Agents: the Key of Multi-Task Reinforcement Learning Improvement for Dynamic Environments


    Sadrolah Abbasi, Hamid Parvin*, Mohamad Mohamadi, and Eshagh Faraji

In many multi-agent systems, learner agents explore their environment to find their goal and can learn their policy. In multi-task learning, one agent learns a set of related problems together at the same time, using a shared model. Reinforcement learning is a useful approach for an agent to learn its policy in a nondeterministic environment; however, it is considered a time-consuming algorithm in dynamic environments. In multi-task reinforcement learning, it is helpful to exploit teammate agents' experience through simple interactions between agents. To improve the performance of multi-task learning in a nondeterministic and dynamic environment, especially for the dynamic maze problem, we use the past experiences of agents. Interactions are simulated by the operators of evolutionary algorithms, and random exploration is replaced by chaotic exploration. Applying Dynamic Chaotic Evolutionary Q-Learning to an exemplary maze, we reach significantly promising results.

interactions between agents; multi-task reinforcement learning; evolutionary learning; chaotic exploration; dynamic environments

    1 Introduction

A Multi-Agent System (MAS) usually explores or monitors its space to achieve its goal. In an MAS, each agent has only a local view of its own neighborhood. In these cases, each agent tries to learn a local map of its world. It is useful for agents to share their local maps in order to aggregate a global view of the environment and to decide cooperatively about the next action selection. Agents therefore attempt to learn, because they initially know nothing about their environment. This learning can happen in a cooperative setting, where the aim is to make agents share their learned knowledge [1-8].

Multi-Task Learning (MTL) [10-15] is an approach in machine learning in which a learner tries to learn a problem together with other related problems at the same time, using a shared model. This often leads to a better model for the main task, because it allows the learner to exploit the commonality among the related tasks [13-19].

Reinforcement Learning (RL) is a type of agent learning that specifies what to do in any situation, i.e., how to map situations to actions, in order to maximize the agent's reward. It does not tell the learner which actions to take; the agent must discover which actions yield the most reward. RL can solve sequential decision tasks through trial-and-error interactions with the environment. A reinforcement learning agent senses and acts in its environment in order to learn the optimal situation-to-action mapping and thereby achieve its goal more often. For each action, the agent receives feedback (reward or penalty) that distinguishes what is good from what is bad. The agent's task is to learn a policy, or control strategy, for choosing the set of actions that achieves its goal in the long run. For this purpose, the agent stores a cumulative reward for each state or state-action pair. The ultimate objective of a learning agent is to maximize the cumulative reward it receives in the long run, from the current state and all subsequent states up to the goal state [20].

RL has two strategies for solving problems: (1) using statistical techniques and dynamic programming methods to estimate the utility of taking actions in states of the world; and (2) searching the space of behaviors to find one that performs well in the environment. The second approach can be realized with Genetic Algorithms (GA) and Evolutionary Computation (EC) [21-26].

If the problem space of such instances is huge, Evolutionary Computation can be more effective than the original reinforcement learning algorithms. Dynamic or uncertain environments are crucial issues for Evolutionary Computation, and EC methods are expected to be effective in such environments [27-31].

Q-learning belongs to a specific family of reinforcement learning techniques [32-37]. It is a powerful algorithm for agents that learn their knowledge in an RL manner and is considered one of the best RL algorithms. The only drawback of Q-learning is its slowness when learning in an unknown environment.

Most research in the RL field focuses on improving the speed of learning. It has been shown that Transfer Learning (TL) can be very effective in speeding up learning. The main idea of TL is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In TL, knowledge from some tasks is used to learn other tasks. The steps of a TL method are: deconstructing the task into a hierarchy of subtasks; learning with higher-level, temporally abstract actions rather than simple one-step actions; and efficiently abstracting over the state space, so that the agent may generalize its experience more efficiently [1-5].

TL and MTL are closely related. MTL algorithms may be used to transfer knowledge between learners, similar to TL algorithms, but MTL assumes that all tasks come from the same distribution, while TL may allow arbitrary source and target tasks. MTL generally does not need task mappings.

In MTL, data from multiple tasks can be considered simultaneously, so RL agents can learn multiple tasks at the same time in a multi-agent system. A single large task may be considered as a sequential series of subtasks. If the learner performs each subtask, it may be able to transfer knowledge between the different subtasks, which are related to each other because they are parts of the same overall task. Such a setting provides a well-grounded way of selecting a distribution of tasks to train over, either in the context of transfer or in MTL [12-19].

In this work, we found that using interactions between learner agents improves the performance of multi-task reinforcement learning. In order to implement simple and effective interactions among the agents in a nondeterministic and dynamic environment, we have used a genetic algorithm. Moreover, using GAs makes it possible to maintain the past experiences of agents and to exchange experiences between agents.

The remainder of this paper is organized as follows. The next section reviews the related literature: it first covers reinforcement learning, then evolutionary computation in RL, and finally chaotic methods in RL.

    Overall, our contributions in this work are:

• Employing interactions between learning agents through an evolutionary reinforcement learning algorithm, in order to increase both the speed and the accuracy of learning.

• Exploring the role of interactions in learning a dynamic maze.

• Implementing multi-task reinforcement learning in a nondeterministic and dynamic environment.

• Investigating the effect of a chaotic exploration phase on the quality of the learned policy in a dynamic environment.

    2 Literature Review

    2.1 Reinforcement learning

In reinforcement learning problems, an agent must learn behavior through trial-and-error interactions with an environment. There are four main elements in an RL system [16-22]: the policy, the reward function, the value function, and the model of the environment. A policy defines the behavior of the learning agent and consists of a mapping from states to actions. A reward function specifies how good the chosen actions are; it maps each perceived state-action pair to a single numerical reward. In a value function, the value of a given state is the total reward accumulated in the future, starting from that state. The model of the environment simulates the environment's behavior and may predict the next environment state from the current state-action pair; it is usually represented as a Markov Decision Process (MDP) [21-31]. In an MDP model, the agent senses the state of the world and then takes an action, which leads to a new state. The choice of the new state depends on the agent's current state and its action.

An MDP is defined as a 4-tuple 〈S, A, T, R〉, where S is the set of states of the environment, A is the set of available actions, T is the state transition function for state s and action a, and R is the reward function. Taking the best available action in each state is the optimal solution of an MDP; an action is best if it collects as much reward as possible over time.
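As a concrete illustration, the 4-tuple can be held in a small data structure. The sketch below is only illustrative: the (row, column) state type, the string actions, and the dictionary encodings of T and R are assumptions chosen for a maze-like setting, not notation from the paper.

```python
# A minimal sketch of the 4-tuple <S, A, T, R> as a plain data structure.
from dataclasses import dataclass
from typing import Dict, List, Tuple

State = Tuple[int, int]   # e.g., a maze cell (row, column) -- an assumption
Action = str              # e.g., "up", "down", "left", "right"

@dataclass
class MDP:
    states: List[State]                                            # S
    actions: List[Action]                                          # A
    transitions: Dict[Tuple[State, Action], Dict[State, float]]    # T: (s, a) -> distribution over s'
    rewards: Dict[Tuple[State, Action], float]                     # R: (s, a) -> reward
```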

There are two classes of methods for reinforcement learning: (a) methods that search the space of value functions and (b) methods that search the space of policies. The first class is exemplified by the temporal difference (TD) methods and the second by the evolutionary algorithm (EA) approaches [31-43].

TD methods, such as Q-learning, learn by backing up experienced rewards through time. The Q-learning method is considered one of the most important algorithms in RL (Sutton et al., 1998). It builds a Q-mapping from state-action pairs to the rewards obtained from interaction with the environment.

In this case, the learned action-value function, Q: S×A→R, directly approximates Q*, the optimal action-value function, independent of the policy being followed. The current best policy is generated from Q by simply selecting the action that has the highest value in the current state.

Q(s, a) ← Q(s, a) + α [r + γ max_{a′} Q(s′, a′) − Q(s, a)]                (1)

In equation (1), s, a, r, α, and γ denote the state, action, reward, learning rate, and discount-rate parameter, respectively. The learned action-value function, Q, directly approximates Q*, the optimal action-value function, independent of the policy being followed.

The pseudo-code of the Q-learning algorithm is shown in Algorithm 1. One policy for choosing a proper action is ε-greedy. In this policy, the agent selects a random action with probability ε and the current best action with probability 1 − ε (where ε is in [0, 1]).

Algorithm 1  Q-Learning

01 Initialize Q(s, a) arbitrarily
02 Repeat (for each episode):
03   Initialize s
04   Repeat (for each step of episode):
05     Choose a from s using policy derived from Q (e.g., ε-greedy)
06     Take action a, observe r, s′
07     Q(s, a) ← Q(s, a) + α [r + γ max_{a′} Q(s′, a′) − Q(s, a)]
08     s ← s′
09   Until s is terminal
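For concreteness, the sketch below restates Algorithm 1 as tabular Q-learning with ε-greedy exploration. The environment interface (a reset() method and a step() method returning the next state, the reward, and a termination flag), the action names, and the default parameter values are assumptions made for illustration, not part of the paper.

```python
# A minimal sketch of tabular Q-learning with epsilon-greedy exploration (Algorithm 1).
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    q = defaultdict(float)                       # Q(s, a), initialised arbitrarily (here: 0)
    actions = ["up", "down", "left", "right"]    # assumed action set of the maze
    for _ in range(episodes):
        s = env.reset()                          # initialise s
        done = False
        while not done:                          # repeat for each step of the episode
            if random.random() < epsilon:        # explore with probability epsilon
                a = random.choice(actions)
            else:                                # otherwise exploit the current best action
                a = max(actions, key=lambda x: q[(s, x)])
            s_next, r, done = env.step(a)        # take action a, observe r and s'
            best_next = max(q[(s_next, x)] for x in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])   # Eq. (1)
            s = s_next
    return q
```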

Because reinforcement learning is slow in nondeterministic and dynamic environments, secondary methods are used to improve learning performance in terms of both speed and accuracy. In this work, we apply evolutionary algorithms and employ chaotic functions in the Q-learning algorithm.

    2.2 Evolutionary computations in RL

A large number of studies concerning dynamic or uncertain environments have used Evolutionary Computation algorithms [4-6]. In these problems, the aim is to reach the goal as soon as possible. A significant point is that the agents can get assistance from their previous experiences.

As mentioned before, EC can be more efficient for problems with a very large space and also in dynamic or uncertain environments. In research in the MTL field, EC has been applied to the RL algorithm [6-7]. In that case, a population of solutions (individuals) and an archive of past optimal solutions, containing the optimal solution of each episode, are added to the original structure of the RL algorithm. Individuals are pairs of real-valued vectors of design variables and variances. If the variance of the optimal value distribution is not too large, MTL performs better, because it uses knowledge previously acquired on earlier problem instances. The fitness function in the EC algorithm is defined as the total acquired reward. The algorithm is presented in Algorithm 2.

Algorithm 2  Archive-based EC

01 Generate the initial population of x individuals
02 Evaluate the individuals
03 Each individual (parent) creates a single offspring
04 Evaluate the offspring
05 Conduct pair-wise comparison over parents and offspring
06 Select the x individuals with the most wins from parents and offspring
07 Stop if the halting criterion is satisfied; otherwise go to Step 3
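The sketch below gives one possible reading of this loop in the style of evolutionary programming: each parent produces one Gaussian-mutated offspring, parents and offspring are compared pair-wise against randomly drawn opponents, and the individuals with the most wins survive. The real-valued encoding, the mutation width, and the fitness callable are illustrative assumptions, not the exact algorithm of [6-7].

```python
# A minimal sketch of the archive-based evolutionary loop of Algorithm 2.
import random

def evolve(fitness, dim=10, pop_size=20, opponents=5, generations=100, sigma=0.1):
    pop = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # each parent creates a single offspring by Gaussian mutation
        offspring = [[g + random.gauss(0, sigma) for g in ind] for ind in pop]
        union = pop + offspring
        scores = [fitness(ind) for ind in union]
        # pair-wise comparison: count wins against randomly drawn opponents
        wins = []
        for i in range(len(union)):
            rivals = random.sample(range(len(union)), opponents)
            wins.append(sum(scores[i] >= scores[j] for j in rivals))
        # keep the pop_size individuals with the most wins
        ranked = sorted(range(len(union)), key=lambda i: wins[i], reverse=True)
        pop = [union[i] for i in ranked[:pop_size]]
    return max(pop, key=fitness)
```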

Similar to Handa's work (2007), another study proposes an approach that explores the optimal control policy for an RL problem by using GAs [42]. There, each chromosome also represents a policy, and these policies can be modified by genetic operations such as mutation and crossover. There are some differences from Handa's algorithm, such as the chromosome encoding and the GA operators.

An advantage of Genetic Algorithms is that they can directly learn decision policies without studying the model and the state space of the environment in advance. The only feedback for a GA is the fitness value of the different candidate policies. In many cases, the fitness function can be expressed as the sum of rewards, the same quantity used to update the Q-values in the value-function-based Q-learning algorithm.

In a recent work, Beigi et al. (2010) presented a modified Evolutionary Q-learning algorithm (EQL) and showed the superiority of EQL over QL for the learning of multi-task agents in a nondeterministic environment. They use a population of potential solutions in the learning process of the agents. The algorithm is exhibited in Algorithm 3.

Thus, Evolutionary Reinforcement Learning (ERL) is a method of probing for the best policy in an RL problem by applying genetic algorithms; in this work, it is used to examine the effects of simple interactions between agents and of experience exchanges between partial solutions.

    2.3 Chaotic methods in RL

One of the challenges in reinforcement learning is the tradeoff between exploration and exploitation. Exploration means trying actions that improve the model, whereas exploitation means behaving optimally given the current model. To obtain a lot of reward, a reinforcement learning agent must prefer actions that it has tried in the past and has found to be effective in producing reward; but to discover such actions, it has to try actions that it has not selected before. In other words, exploitation is the right choice to maximize the expected reward in a single play, but exploration may produce a greater total reward in the long run. The exploration-exploitation dilemma has been intensively studied by mathematicians for many decades [43].

Algorithm 3  EQL

01 Initialize Q(s, a) by zero
02 Repeat (for each generation):
03   Repeat (for each episode):
04     Initialize s
05     Repeat (for each step of episode):
06       Choose a from s using policy derived from Q (e.g., ε-greedy)
07       Take action a, observe r, s′
08       s ← s′
09     Until s is terminal
10     Add visited path as a chromosome to the population
11   Until the population is complete
12   Do Crossover() by CRate
13   Evaluate the created children
14   Do TournamentSelection()
15   Select the best individual for updating the Q-table as follows:
16     Q(s, a) ← Q(s, a) + α [r + γ max_{a′} Q(s′, a′) − Q(s, a)]
17   Copy the best individual into the next population
18 Until convergence is satisfied
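The sketch below mirrors the structure of Algorithm 3: trajectories collected during Q-learning episodes form a population of chromosomes, crossover mixes them, and the Q-table is then updated along the best individual. The episode runner, crossover operator, and fitness function are passed in as parameters; their concrete forms, the (state, action, reward) chromosome encoding, and the replacement of tournament selection by a simple "take the best" step are assumptions made for illustration, not the authors' implementation.

```python
# A minimal sketch of the generation loop of EQL (Algorithm 3).
import random
from collections import defaultdict

def eql(run_episode, crossover, fitness, actions,
        generations=100, pop_size=100, c_rate=0.8, alpha=0.1, gamma=0.95):
    q = defaultdict(float)                      # Q(s, a) initialised to zero
    for _ in range(generations):
        # run Q-learning episodes; each visited path becomes a chromosome,
        # assumed here to be a list of (state, action, reward) triples
        population = [run_episode(q) for _ in range(pop_size)]
        children = []
        for p1, p2 in zip(population[::2], population[1::2]):
            if random.random() < c_rate:        # crossover with rate CRate
                children.extend(crossover(p1, p2))
        pool = population + children
        best = max(pool, key=fitness)           # selection simplified to "take the best"
        # update the Q-table along the best individual using Eq. (1)
        for (s, a, r), (s_next, _, _) in zip(best, best[1:]):
            target = r + gamma * max(q[(s_next, b)] for b in actions)
            q[(s, a)] += alpha * (target - q[(s, a)])
    return q
```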

The scheme of exploration is called a policy. There are many kinds of policies, such as ε-greedy, softmax, weighted roulette, and so on. ε-greedy is a simple approach that tries to balance exploration and exploitation. In these policies, the decision to explore is made using stochastic numbers from a random generator.

Chaos theory studies the behavior of certain dynamical systems that are highly sensitive to initial conditions. Small differences in initial conditions (such as those due to rounding errors in numerical computation) result in widely diverging outcomes for chaotic systems, so long-term predictions are in general impossible to obtain. This happens even though these systems are deterministic, meaning that their future dynamics are fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable if the initial condition is unknown [1-5].

However, it is known that a chaotic source also provides a random-like sequence similar to a stochastic source. Employing a chaotic generator based on the logistic map in the exploration phase gives better performance than employing a stochastic random generator in a nondeterministic maze problem [6-9].

One study in this scope [5] has shown that using a chaotic pseudorandom generator instead of a stochastic random generator results in better exploration. In that environment, goals or solution paths change during exploration. Note that the aforesaid algorithm is severely sensitive to the value of ε in ε-greedy. It is also important to note that the chaotic random generator had not been used in nondeterministic and dynamic environments.

As mentioned, there are many kinds of exploration policies in reinforcement learning. It is common to use a uniform pseudorandom number generator as the stochastic exploration generator in each of the mentioned policies. Another option is to utilize a chaotic deterministic generator as the stochastic exploration generator [42].

In a previous work [43], a chaotic-based evolutionary Q-learning method (CEQL) was introduced to improve learning performance in both speed and accuracy in a nondeterministic environment. In that paper, the logistic map given by equation (2) is used as the chaotic deterministic generator for the stochastic exploration generator. It generates values in the closed interval [0, 1].

x_{t+1} = η x_t (1 − x_t)                (2)

In equation (2), x_0 is a uniform pseudorandom number generated in the [0, 1] interval and η is a constant chosen from [0, 4]. The sequence x_i remains in [0, 1] provided that the coefficient η is near to and below 4 [41-43]; the sequence may diverge for η greater than 4. The closer η is to 4, the more points of the interval the sequence visits. If η is set to 4, the widest range of points (possibly all points in the [0, 1] interval) is covered over different initializations of the sequence. The algorithm of CEQL is presented in Algorithm 4.
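A minimal sketch of such a generator, assuming the logistic map of equation (2) with η = 4 and a short burn-in so that different runs start from different iterates, is given below; the exploration decision then consumes these chaotic values instead of uniform pseudorandom numbers. The burn-in length and the usage snippet are illustrative assumptions.

```python
# A minimal sketch of the logistic-map generator of Eq. (2), used in place of
# a uniform pseudorandom source for the exploration decision.
import random

def logistic_sequence(eta=4.0, x0=None, burn_in=20):
    """Yield chaotic values in [0, 1] from x_{t+1} = eta * x_t * (1 - x_t)."""
    x = random.random() if x0 is None else x0
    for _ in range(burn_in):            # discard a few iterates so runs differ
        x = eta * x * (1.0 - x)
    while True:
        x = eta * x * (1.0 - x)
        yield x

# usage: draw the next chaotic number instead of random.random()
gen = logistic_sequence()
if next(gen) < 0.1:                     # epsilon = 0.1: exploration branch
    pass                                # choose a random action here
```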

Algorithm 4  CEQL

01 Initialize Q(s, a) by zero
02 Repeat (for each generation):
03   Repeat (for each episode):
04     Initialize s
05     Repeat (for each step of episode):
06       Initiate X_current by Rnd[0, 1]
07       Repeat
08         X_next = η · X_current · (1 − X_current)
09       Until (X_next − X_current < ε)
10       Choose a from s using X_next
11       Take action a, observe r, s′
12       s ← s′
13     Until s is terminal
14     Add visited path as a chromosome to the population
15   Until the population is complete
16   Do Crossover() by CRate
17   Evaluate the created children
18   Do TournamentSelection()
19   Select the best individual for updating the Q-table as follows:
       Q(s, a) ← Q(s, a) + α [r + γ max_{a′} Q(s′, a′) − Q(s, a)]
20   Copy the best individual into the next population
21 Until convergence is satisfied

    3 Interaction Between Agents

An interaction occurs when two or more agents are brought into a dynamic relationship through a set of reciprocal actions. During interactions, agents are in contact with each other directly, through another agent, or through the environment.

Multi-Agent Systems may be classified as having (1) NI: no direct interactions, (2) SI: simple interactions, and (3) CI: complex, conditional, or collective interactions between agents. SI is basically a one-way (possibly reciprocal) type of interaction between agents, whereas CI involves conditional or collective interactions. In NI, agents do not interact and simply perform their activities using their own knowledge. In such MAS models, inductive inference methods are used.

Some forms of learning can be modeled in an NI MAS; in this case the only rule is: "move randomly until finding the goal". Agents learn by evaluating their own performance and possibly that of other agents, e.g., by looking at other agents' behavior and how they choose correct actions. Therefore, agents learn either from their own knowledge or from other agents, without direct interaction with each other.

Basically, the RL framework is designed to solve a single learning task, and a concept such as the reuse of past learning experiences is not considered within it, yet there are many cases where multiple tasks are imposed on the agents.

In this paper, we apply interactions between learning agents and use the best experiences achieved by the agents during multi-task reinforcement learning.

    4 Implementation

    4.1 Maze problem

Suppose there are some robots working in a gold mine. The task of the robots is to explore the mine, starting from a start point, to find gold located in an unknown place. The mine has a set of corridors through which the robots can pass. In some corridors there are obstacles that do not let the robots continue. Now assume that, because of decayed corridors, pits may exist in some places. If a robot enters one of the pits, its moves may fail to take it out of the pit with a probability above zero, in which case it has to try again. This makes the maze problem nondeterministic. The goal is to find the gold state as soon as possible. In such problems, the shortest path may not be the best path, because it may contain pit cells that force the agent to make many movements before finding the goal state. Thus, the optimum path has fewer pit cells together with a shorter length.

The time allotted to learn a single task is called Work Time (WT). In this nondeterministic problem, the maze is assumed to be fixed during a WT. During a WT, the agent explores its environment to find the goal as many times as possible while learning.

In this situation, imagine that there are repairmen who fix the pits that have been found by the end of a WT. But since the corridors keep decaying, some pits may be created randomly at the same or other locations in the mine corridors before the beginning of the next WT. The main structure of the maze stays the same, but the locations of the pits may change. In other words, during a given WT the pit locations are fixed, and in the next WT they may change. Note that the number of pits is the same in all WTs.

Consequently, the robots start to learn their environment at the beginning of each WT, and they have to keep learning for a long time. Because of the similarity between tasks, this problem can be considered a multi-task learning problem.

The task of a robot in one WT is like its tasks in the following WTs: the structure of the maze is the same, but some environmental changes may have occurred. Thus, if the robots apply the knowledge they have already acquired during learning, they can do their task faster. Hence it is an appropriate approach for the robots to use their past experiences to find the defined goals.

In the dynamic version of the mine problem, pits may be repaired immediately, even during a WT, but other pits may be generated at other locations. That is, in the dynamic version of the problem, the number of pits is fixed but their locations may change.

Interactions between agents are necessary to exchange the promising experiences achieved through trial-and-error exploration of the maze. A simple interaction approach can be modeled by evolutionary computation.

    4.2 Simulation

Sutton's maze [2] is a well-known problem in the reinforcement learning field, and many researchers in this scope have validated their methods on it. The original Sutton's maze consists of 6×9 cells: 46 common states, 1 goal state, and 7 collision cells, as depicted in Fig. 1. The start and goal states are marked with the letters 'S' and 'G', and collision cells are indicated in grey.

    Fig.1 Original Sutton’s maze.

Each time, agents move from the start state and explore the maze to find the goal state. An agent can move to its neighboring cells with the defined actions: Up, Down, Left, and Right. It cannot pass through collision cells and cannot leave the maze board. The original Sutton's maze problem is deterministic.

To simulate the mine maze as a nondeterministic and dynamic problem, a modified version of Sutton's maze is presented. In the nondeterministic version, a number of Probabilistic Cells (PCs), depicted by hatching in Fig. 2, are added to the original maze. An agent cannot leave a probabilistic cell with certainty by taking an action: the probability of remaining in the same state is greater than zero. Hence, if an agent is located in such a PC, despite its chosen action it may stay in the same cell with a positive probability. Note that the probability of moving equals 1 minus the probability of staying.

    Fig.2 Modified Sutton’s maze.
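A minimal sketch of one transition in this modified maze is given below. The grid size, the stay probability of 0.2, and the reward of +1 at the goal follow the description in this section; the function signature and the sets used to encode walls and probabilistic cells are illustrative assumptions.

```python
# A minimal sketch of one step in the nondeterministic (modified Sutton) maze:
# in a probabilistic cell the agent may stay where it is despite its action.
import random

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, action, walls, prob_cells, goal, rows=6, cols=9, p_stay=0.2):
    """Return (next_state, reward, done) for one agent move."""
    # in a probabilistic cell the chosen action may fail and the agent stays put
    if state in prob_cells and random.random() < p_stay:
        return state, 0.0, False
    dr, dc = MOVES[action]
    nxt = (state[0] + dr, state[1] + dc)
    # the agent cannot leave the board or enter a collision (wall) cell
    if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or nxt in walls:
        nxt = state
    if nxt == goal:
        return nxt, 1.0, True           # reward +1 only at the goal state
    return nxt, 0.0, False
```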

An agent starts from the start state and explores the maze by taking actions. It gains a reward of +1 if it reaches the goal state; all other states give no reward. There is no punishment in this problem.

Each action in this space can be modeled by an MDP sample such as the one delineated in Fig. 3. The right part shows the MDP model of the deterministic case, in which the agent moves to the next state by choosing any possible action with probability 1. Conversely, the left part shows that the agent may fail to move to a neighboring state by choosing a possible action and may remain in its position. For example, according to Fig. 3, if an agent is in a PC and takes the "Up" action, the next state may be the same state with probability 0.2. Note that Fig. 3 shows the situation for states that are not border cells and have no obstacles in their neighborhood; otherwise, the MDPs have fewer next states.

In the nondeterministic case, the values of the probabilities corresponding to each action in the PCs are sampled from a normal distribution with mean μ = 0 and variance σ = 1 according to equation (3).

    (3)

These MDPs are presented to the learning algorithm sequentially. The presentation time of each problem instance is long enough to learn it. The aim is to maximize the total reward acquired over the lifespan. Agents are returned to the start state after arriving at the goal state or after the lifetime finishes.

    Fig.3 Actions MDP models.

In this problem there are multiple learning tasks, and it is essential for the agents to keep their best past experiences and utilize them in the current learning task to solve the tasks faster. RL generally requires a large number of learning trials, so exploiting past experiences is promising because it can reduce the number of learning trials significantly. Here, multiple tasks are defined through a distribution of MDPs, and a learning agent maintains and exploits value statistics to improve its RL performance. On this basis, applying evolutionary reinforcement learning is particularly useful for such problems.

In this work, we evaluated the proposed method on a problem defined as a multi-task reinforcement learning example. In this example, agents learn their nondeterministic and dynamic environment by the chaotic-based evolutionary Q-learning algorithm.

    5 Experiments and Results

Two sets of experiments have been carried out, in the nondeterministic and the dynamic cases. In both, the number of PCs is 25. A single experiment is composed of 100 tasks. Each task presents a valid path policy of an agent from the start point to the goal and contains a two-dimensional array of the states and actions that the agent has visited. A valid path is the track of agent states from the start to the goal state, provided the agent has reached the goal before a defined maximum number of steps (Max Steps). In other words, if an agent can find the goal before Max Steps, its trajectory is accepted as an individual and inserted into the initial population. Through these chromosomes, the past experiences of agents are maintained. In this case, Max Steps is set to 2 000. Each individual has a numeric value calculated by the value function.

The value function simulates an agent's behavior over a given sequence of states. To evaluate an individual, the mainstream of the agent's movement is first extracted by eliminating the duplications in the path: the path then contains only the effective state-action pairs that lead to state transitions in the agent's trajectory. Second, the acquired mainstream is given to an object that is forced to follow this direction, and the number of actions the object takes to reach the end is counted. This procedure is executed a certain number of times, and the average of these path lengths gives the value of the path. The existence of PCs in the mainstream can cause a longer path. Fitness is related to the inverse of the value. A sketch of this evaluation is given below.
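The sketch assumes that trajectories are stored as lists of (state, action) pairs and that a replay(stream) helper counts the steps an agent needs to reach the goal when following the mainstream in the stochastic maze; both are illustrative assumptions rather than the authors' implementation.

```python
# A minimal sketch of the path evaluation: keep only the effective state-action
# pairs of a trajectory (its "mainstream"), replay it several times in the
# stochastic maze, and take the average replay length as the value of the path
# (fitness being its inverse).
def mainstream(path):
    """Drop the pairs that did not change the state (e.g., failed moves in PCs)."""
    stream = []
    for (s, a), (s_next, _) in zip(path, path[1:]):
        if s_next != s:
            stream.append((s, a))
    stream.append(path[-1])              # keep the final (goal-reaching) pair
    return stream

def path_value(path, replay, runs=30):
    """Average number of steps needed when an agent follows the mainstream."""
    stream = mainstream(path)
    lengths = [replay(stream) for _ in range(runs)]
    return sum(lengths) / len(lengths)   # fitness is taken as the inverse of this value
```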

The population size is set to 100. In every experiment, 100 generation epochs are performed. In every epoch, a two-point crossover method is used: two individuals are selected and two states in the first individual are determined randomly; then the first state is sought from the start and the second state from the end of the second individual. After that, these sub-streams are swapped between the individuals. Finally, the children are evaluated and the best individual is kept. Through this crossover method, agents can interact with each other and pass their best experiences to others. A sketch of this operator follows.
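The sketch below illustrates this crossover on trajectories stored as lists of (state, action) pairs; how missing or out-of-order cut states are handled is an assumption made for illustration only.

```python
# A minimal sketch of the trajectory crossover: pick two cut states in the
# first parent, locate the first of them scanning parent two from the start
# and the second scanning from the end, then swap the segments between the cuts.
import random

def two_point_crossover(parent1, parent2):
    i1, j1 = sorted(random.sample(range(len(parent1)), 2))
    s_first, s_second = parent1[i1][0], parent1[j1][0]
    states2 = [s for s, _ in parent2]
    try:
        i2 = states2.index(s_first)                              # search from the start
        j2 = len(states2) - 1 - states2[::-1].index(s_second)    # search from the end
    except ValueError:
        return parent1[:], parent2[:]        # cut states not shared: no swap (assumption)
    if i2 > j2:
        return parent1[:], parent2[:]        # cuts out of order in parent two: no swap
    child1 = parent1[:i1] + parent2[i2:j2 + 1] + parent1[j1 + 1:]
    child2 = parent2[:i2] + parent1[i1:j1 + 1] + parent2[j2 + 1:]
    return child1, child2
```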

Truncation selection is applied in the algorithm. After the last generation, the best chromosome is selected to update the state-action table of the reinforcement learning algorithms (four algorithms are compared: OQL, CQL, EQL, and CEQL, as mentioned before). As discussed before, in this work the value of η in equation (2) is chosen as 4 to make the output of the sequence similar to a uniform pseudorandom number.

In this work, for both the nondeterministic and the dynamic cases and for each of the four RL algorithms, 100 experiments have been performed; thus 4 000 000 generations have been executed over the 100 experiments.

Table 1 summarizes the experimental results in the nondeterministic mode. As can be inferred from Table 1, using interactions between agents and then applying the best achieved experiences (instead of the no-interaction mode) results in a considerable improvement in all three measures: the best, worst, and total averages of path lengths. Also, using the chaotic generator as a replacement for the stochastic random generator improves the outcomes (results in the starred columns).

    Table 1 Effect of using interactions between agents in nondeterministic environment.

Using evolutionary Q-learning instead of original Q-learning results in a 5.09% improvement, and using the chaotic pseudorandom generator instead of the stochastic random generator results in a 6.85% improvement. It is not unexpected that interactions and experience exchanges, together with the chaotic generator, improve the results of the reinforcement learning algorithm: the superiority of the chaotic generator over the stochastic random generator in the exploration phase of reinforcement learning was already shown in [13], and it is reported in [14] that evolutionary reinforcement learning can significantly improve the average found path length compared with the original version.

It is also shown here that using interactions in the form of evolutionary Q-learning can outperform chaotic-based Q-learning. So it can be expected that employing the chaotic generator instead of the stochastic random generator in evolutionary reinforcement learning also yields better performance.

As reported in Table 1, the average path lengths found by the chaotic-based evolutionary reinforcement learning are, as expected, improved by 5.09% compared with the average paths found by the non-chaotic version of evolutionary reinforcement learning. So it can be concluded that leveraging the chaotic random generator in the exploration phase of reinforcement learning leads to better environment exploration in both the evolutionary-based version and the non-evolutionary one.

The experiments in dynamic mode, summarized in Table 2, are examined in all four modes, i.e., original, chaotic-based, evolutionary, and chaotic-based evolutionary Q-learning. These tests are conducted in two phases. The first phase is similar to the nondeterministic experiments: the agents learn their tasks in the same nondeterministic maze environment. In the second phase, the environment is changed 10 times, and each time the agents have to find the goal in the same way without further learning, relying on their past experiences. Table 2 summarizes these experiments. The proposed method has promising results in dynamic environments.

    Table 2 Experimental result for dynamic environment.

    6 Conclusion

Learning through interacting with the environment and with other agents is a common problem in the context of multi-agent systems. Multi-task learning allows some related tasks to be learned simultaneously and jointly using a joint model; in this scope, learners use the commonality among the tasks. Reinforcement learning, in turn, is a type of agent learning concerned with how an agent should take actions in an environment in order to maximize its reward. RL is a useful approach for an agent to learn its policy in a nondeterministic environment; however, it is so time-consuming that it cannot be employed in every environment.

We found that the existence of interaction between agents is a proper approach for improving the speed and accuracy of RL methods. To improve the performance of multi-task reinforcement learning in a nondeterministic and dynamic environment, especially for the dynamic maze problem, we store the past experiences of agents. By simulating agents' interactions with the structures and operators of evolutionary algorithms, we explore the best exchange of agents' experiences. We also switch to a chaotic exploration instead of a random exploration. Applying the Dynamic Chaotic Evolutionary Q-Learning algorithm to an exemplary maze, we reach significantly promising results.

Our experiments show about a 91.02% improvement, in terms of the average found path length, when using interactions between agents and applying their best past experiences, compared with the original RL algorithm without any interactions. There is also about a 5.09% average improvement when using chaotic exploration compared with non-chaotic exploration.

It can be inferred from the experimental results that employing chaos as the random generator in the exploration phase, as well as evolutionary-based computation, can improve reinforcement learning in both the rate of learning and its accuracy. It can also be concluded that using chaos in the exploration phase is efficient for both the evolutionary-based version and the non-evolutionary one.

References

[1] H. P. Kriegel, P. Kröger, and A. Zimek, Clustering high-dimensional data: a survey on subspace clustering, pattern-based clustering, and correlation clustering, ACM Transactions on Knowledge Discovery from Data, vol. 3, no. 1, pp. 1-58, 2009.

[2] N. H. Park and S. L. Won, Grid-based subspace clustering over data streams, in Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management, Lisbon, Portugal, 2007, pp. 801-810.

[3] N. H. Park and S. L. Won Suk, Cell trees: an adaptive synopsis structure for clustering multi-dimensional on-line data streams, Data & Knowledge Engineering, vol. 63, no. 2, pp. 528-549, 2007.

[4] W. Liu and J. OuYang, Clustering algorithm for high dimensional data stream over sliding windows, in Proceedings of the IEEE 10th International Conference on Trust, Security and Privacy in Computing and Communications, Paris, France, 2011, pp. 1537-1542.

[5] Z. Aoying, Tracking clusters in evolving data streams over sliding windows, Knowledge and Information Systems, vol. 15, no. 2, pp. 181-214, 2008.

[6] S. Lühr and M. Lazarescu, Connectivity based stream clustering using localised density exemplars, in Proceedings of the 12th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Osaka, Japan, 2008, pp. 662-672.

[7] B. Minaei-Bidgoli, H. Parvin, H. Alinejad-Rokny, H. Alizadeh, and W. Punch, Effects of resampling method and adaptation on clustering ensemble efficacy, Artificial Intelligence Review, vol. 41, no. 1, pp. 27-48, 2012.

[8] C. Böhm, Computing clusters of correlation connected objects, in Proceedings of the 2004 ACM SIGMOD International Conference on Management of Data, Paris, France, 2004, pp. 455-466.

[9] M. Mohammadpour, H. Parvin, and M. Sina, Chaotic genetic algorithm based on explicit memory with a new strategy for updating and retrieval of memory in dynamic environments, Journal of AI and Data Mining, 2017, doi: 10.22044/JADM.2017.957.

[10] D. Arthur and V. Sergei, k-means++: the advantages of careful seeding, in Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, USA, 2007, pp. 1027-1035.

[11] H. Parvin, B. Minaei-Bidgoli, H. Alinejad-Rokny, and S. Ghatei, An innovative combination of particle swarm optimization, learning automaton and great deluge algorithms for dynamic environments, International Journal of Physical Sciences, vol. 6, no. 22, pp. 5121-5127, 2011.

[12] M. H. Fouladgar, B. Minaei-Bidgoli, H. Parvin, and H. Alinejad-Rokny, Extension in the case of arrays in Daikon like tools, Advanced Engineering Technology and Application, vol. 2, no. 1, pp. 5-10, 2013.

[13] I. Jamnejad, H. Heidarzadegan, H. Parvin, and H. Alinejad-Rokny, Localizing program bugs based on program invariant, International Journal of Computing and Digital Systems, vol. 3, no. 2, pp. 141-150, 2014.

[14] B. Prasad, D. C. Dimri, and L. Bora, Effect of pre-harvest foliar spray of calcium and potassium on fruit quality of pear cv. Pathernakh, Scientific Research and Essays, vol. 10, no. 11, pp. 392-396, 2015.

[15] G. Singh, Optimization of spectrum management issues for cognitive radio, Journal of Emerging Technologies in Web Intelligence, vol. 3, no. 4, pp. 263-267, 2011.

[16] H. Parvin, H. Alinejad-Rokny, and S. Parvin, A classifier ensemble of binary classifier ensembles, International Journal of Learning Management Systems, vol. 1, no. 2, pp. 37-47, 2013.

[17] D. Madhuri, Linear fractional time minimizing transportation problem with impurities, Information Sciences Letters, vol. 1, no. 1, pp. 7-19, 2012.

[18] J. Mayevsky, J. Sonn, and E. Barbiro-Michaely, Physiological mapping of brain functions in vivo: surface monitoring of hemodynamic metabolic ionic and electrical activities in real-time, Journal of Neuroscience and Neuroengineering, vol. 2, no. 2, pp. 150-177, 2013.

[19] H. Parvin, H. A. Rokny, S. Parvin, and H. Shirgahi, A new conditional invariant detection framework (CIDF), Scientific Research and Essays, vol. 8, no. 6, pp. 265-273, 2013.

[20] S. P. Singh and B. K. Konwar, In silico proteomics and genomics studies on ThyX of Mycobacterium tuberculosis, Journal of Bioinformatics and Intelligent Control, vol. 2, no. 1, pp. 11-18, 2013.

[21] H. Parvin, H. Alinejad-Rokny, B. Minaei-Bidgoli, and S. Parvin, A new classifier ensemble methodology based on subspace learning, Journal of Experimental & Theoretical Artificial Intelligence, vol. 25, no. 2, pp. 227-250, 2013.

[22] M. Kaczmarek, A. Bujnowski, J. Wtorek, and A. Polinski, Journal of Medical Imaging and Health Informatics, vol. 2, no. 1, pp. 56-63, 2012.

[23] M. Zamoum and M. Kessal, Analysis of cavitating flow through a venturi, Scientific Research and Essays, vol. 10, no. 11, pp. 383-391, 2015.

[24] H. Parvin, H. Alinejad-Rokny, and S. Parvin, A new clustering ensemble framework, International Journal of Learning Management Systems, vol. 1, no. 1, pp. 19-25, 2013.

[25] D. Rawtani and Y. K. Agrawal, Study the interaction of DNA with halloysite nanotube-gold nanoparticle based composite, Journal of Bionanoscience, vol. 6, no. 2, pp. 95-98, 2012.

[26] H. Parvin, B. Minaei-Bidgoli, and H. Alinejad-Rokny, A new imbalanced learning and dictions tree method for breast cancer diagnosis, Journal of Bionanoscience, vol. 7, no. 6, pp. 673-678, 2013.

[27] Z. Chen, F. Wang, and L. Zhu, The effects of hypoxia on uptake of positively charged nanoparticles by tumor cells, Journal of Bionanoscience, vol. 7, no. 5, pp. 601-605, 2013.

[28] J. G. Bruno, Electrophoretic characterization of DNA oligonucleotide-PAMAM dendrimer covalent and noncovalent conjugates, Journal of Bionanoscience, vol. 9, no. 3, pp. 203-208, 2015.

[29] K. K. Tanaeva, Yu. V. Dobryakova, and V. A. Dubynin, Maternal behavior: a novel experimental approach and detailed statistical analysis, Journal of Neuroscience and Neuroengineering, vol. 3, no. 1, pp. 52-61, 2014.

[30] M. I. Jamnejad, H. Parvin, H. Alinejad-Rokny, and A. Heidarzadegan, Proposing a new method based on linear discriminant analysis to build a robust classifier, Journal of Bioinformatics and Intelligent Control, vol. 3, no. 3, pp. 186-193, 2014.

[31] R. Ahirwar, P. Devi, and R. Gupta, Seasonal incidence of major insect pests and their biocontrol agents of soybean crop (Glycine max L. Merrill), Scientific Research and Essays, vol. 10, no. 12, pp. 402-406, 2015.

[32] S. Sharma and B. Singh, Field measurements for cellular network planning, Journal of Bioinformatics and Intelligent Control, vol. 2, no. 2, pp. 112-118, 2013.

[33] H. Parvin, M. MirnabiBaboli, and H. Alinejad-Rokny, Proposing a classifier ensemble framework based on classifier selection and decision tree, Engineering Applications of Artificial Intelligence, vol. 37, pp. 34-42, 2015.

[34] M. Kaczmarek, A. Bujnowski, J. Wtorek, and A. Polinski, Journal of Medical Imaging and Health Informatics, vol. 2, no. 1, pp. 56-63, 2012.

[35] N. Zare, H. Shameli, and H. Parvin, An innovative natural-derived meta-heuristic optimization method, Applied Intelligence, doi: 10.1007/s10489-016-0805-z, 2016.

[36] D. Rawtani and Y. K. Agrawal, Study the interaction of DNA with halloysite nanotube-gold nanoparticle based composite, Journal of Bionanoscience, vol. 6, no. 2, pp. 95-98, 2012.

[37] A. Aminsharifi, D. Irani, S. Pooyesh, H. Parvin, S. Dehghani, K. Yousofi, E. Fazel, and F. Zibaie, Artificial neural network system to predict the postoperative outcome of percutaneous nephrolithotomy, Journal of Endourology, vol. 31, no. 5, pp. 461-467, 2017.

[38] H. Parvin, H. Alinejad-Rokny, N. Seyedaghaee, and S. Parvin, A heuristic scalable classifier ensemble of binary classifier ensembles, Journal of Bioinformatics and Intelligent Control, vol. 1, no. 2, pp. 163-170, 2013.

[39] Z. Chen, F. Wang, and L. Zhu, The effects of hypoxia on uptake of positively charged nanoparticles by tumor cells, Journal of Bionanoscience, vol. 7, no. 5, pp. 601-605, 2013.

[40] J. G. Bruno, Electrophoretic characterization of DNA oligonucleotide-PAMAM dendrimer covalent and noncovalent conjugates, Journal of Bionanoscience, vol. 9, no. 3, pp. 203-208, 2015.

[41] K. K. Tanaeva, Yu. V. Dobryakova, and V. A. Dubynin, Maternal behavior: a novel experimental approach and detailed statistical analysis, Journal of Neuroscience and Neuroengineering, vol. 3, no. 1, pp. 52-61, 2014.

[42] E. Zaitseva and M. Rusin, Healthcare system representation and estimation based on viewpoint of reliability analysis, Journal of Medical Imaging and Health Informatics, vol. 2, no. 1, pp. 80-86, 2012.

Sadrolah Abbasi is with the Department of Computer Engineering, Iran Health Insurance Organization, Yasouj, Iran.

Hamid Parvin, Mohamad Mohamadi, and Eshagh Faraji are with the Department of Computer Engineering, Nourabad Mamasani Branch, Islamic Azad University, Nourabad Mamasani, Iran.

Hamid Parvin and Eshagh Faraji are with the Young Researchers and Elite Club, Nourabad Mamasani Branch, Islamic Azad University, Nourabad Mamasani, Iran. E-mail: h.parvin.iust@gmail.com.

*To whom correspondence should be addressed. Manuscript received: 2017-05-26; accepted: 2017-06-30.
