
    A New Action-Based Reasoning Approach for Playing Chess

    Computers, Materials & Continua, 2021, Issue 10

    Norhan Hesham, Osama Abu-Elnasr and Samir Elmougy

    Faculty of Computers and Information, Department of Computer Science, Mansoura University, 35516, Egypt

    Abstract: Many previous research studies have demonstrated game strategies enabling virtual players to play and take actions mimicking humans. The Case-Based Reasoning (CBR) strategy tries to simulate human thinking regarding solving problems based on constructed knowledge. This paper suggests a new Action-Based Reasoning (ABR) strategy for a chess engine. This strategy mimics human experts' approaches when playing chess, with the help of the CBR phases. The proposed engine consists of the following processes. Firstly, an action library is built by parsing many grandmasters' cases with their actions from different games. Secondly, this library reduces the search space through two filtration steps based on the defined action-based and encoding-based similarity schemes. Thirdly, the minimax search tree is fed with the list extracted from the filtering stage, and the search is pruned using the alpha-beta algorithm. The proposed evaluation function estimates the retrieved reactive moves. Finally, the best move is selected, played on the board, and stored in the action library for future use. Many experiments were conducted to evaluate the performance of the proposed engine. The engine played 200 games against Rybka 2.3.2a at rating points of 2500, 2300, 2100, and 1900, and the Bayeselo tool was used to estimate the engine's rating. The results illustrate that the proposed approach achieved high rating points, reaching as high as 2483 points.

    Keywords: Action-based reasoning; case-based reasoning; chess engine; computer games; search algorithm

    1 Introduction

    Generally, various gamers play games on personal computers. There are many different game categories, such as simulations, adventures, Real-Time Strategy (RTS), puzzles, and board games. A player can play against others or against a computer-controlled virtual player, where artificial intelligence can be used to generate responsive, adaptive, intelligent behaviors similar to human intelligence. While games are mostly played for fun, many are also designed for learning purposes [1,2].

    A chess game is a board game between two players, while an RTS game is played among multiple ones. Chess is a perfect-information game where both players, at any position, know all the actions that can be taken at this position. On the other hand, in RTS games, information is hidden from the participants. In chess, each player has an army consisting of 16 pieces of six types (pawn, rook, knight, bishop, queen, and king). The players are asked to use a plan or a strategy to checkmate the opponent's king while defending their own pieces. On the contrary, in RTS games, each player is supposed to build up his resources not only to defend the castle but to wage attacks as well.

    CBR is mainly applied to RTS games, but its phases are also often used when designing a chess engine [3–6]. CBR simulates human thinking concerning solving problems: humans try to recall similar past situations from their experiences and adapt their solutions. CBR consists of four phases: retrieval, reuse, revision, and retention [7–9]. In CBR, the human experience corresponds to the cases stored in a library called the Case Library (CL) [10]. Throughout the retrieval phase, the engine searches for similar CL cases and returns the most similar ones. The reuse phase checks whether the retrieved and current states are identical, in which case the solution is reused; otherwise, it is adapted. Next, solutions can be tested in the revision phase, where the whole situation is evaluated to measure the credibility of this solution for the current case. Finally, in the retention phase, the current state and the reused solution are stored in the CL as a new experience that can be retrieved and reused for future situations.

    In chess, the goal of each player is to checkmate the opponent's king by attacking the opponent's pieces while defending his own. In addition, the player attempts to pick the best move from a set of legal ones. Shannon [11] estimated the number of possible chess games at about 10^120, with approximately 40 possible moves available in a typical position. As a result, these moves lead to thousands of positions that need an evaluation each time an action is initiated. The problem associated with developing a chess engine is the time and space complexity required to locate the best moves.

    In a chess engine, the evaluation process requires estimating many features such as pawn structure, king safety, and available material for each side. Many researchers have focused on tuning the weights of the evaluation function using different optimization algorithms, including the Genetic Algorithm (GA) [12,13], Genetic Programming (GP) [14,15], Evolutionary Programming (EP) [16–18], and Neural Networks (NN) [19]. In contrast, this paper strengthens the evaluation function by extending its basic control parameters, giving high priority to the safety of the pieces on the board.

    This paper is organized as follows: Section 2 presents some related works. Section 3 discusses the proposed work. Section 4 presents the experimental results and compares them against some existing methods. Section 5 provides the conclusion.

    2 Related Work

    In 1950, Shannon [11] was the first to describe how to program a computer to play chess. In 1951, Alan Turing wrote the first chess-playing program and tested it on paper rather than on an actual machine. However, the first computer program able to play a complete chess game was developed in 1958. In recent decades, researchers have aimed to build chess engines capable of simulating human beings, evaluating positions and choosing moves like humans.

    Sinclair [10] proposed a selective move generation mechanism based on example-based reasoning. Games played by chess grandmasters were analyzed and, accordingly, the positions were characterized. These characterizations were mapped using Principal Component Analysis (PCA) and compared to those in the example base. Besides, a list of candidate moves was returned and ranked according to the similarity of the mapping.

    Hauptman et al. [15] used GP to build the evaluation function for chess endgames. They developed evaluation strategies similar to human experts' analysis, based on IF conditional functions that return TRUE or FALSE values. During testing, the GP paradigm achieved a draw or won against the expert human-based strategy. It also drew against CRAFTY, a world-class chess engine.

    Flinter et al. [20] sought to improve the process of creating the case library. They demonstrated a method for automatically generating a case library from a large set of grandmaster games using the chunking technique.

    Kerner [21] proposed an intelligent system for educational purposes, which improved the performance of weak and intermediate players. This system developed a new case-based model for analyzing any given chess position by studying the most significant features in that position. The model provided explanatory comments better than those of other existing game-playing programs. However, it urgently needs additional search capabilities to be strengthened.

    Ganguly et al. [22] proposed an IR-based approach for retrieving chess positions that resemble those stored in a chess games database. They represented and interpreted each position in terms of mobility, reachability, and connectivity between pieces using a textual representation. The experiments proved that this approach can retrieve similar game positions to analyze the current piece's position and predict each piece's next best move.

    Kubat et al. [23] aimed to reduce the number of positions calculated for each move. They designed an agent that searches for learned patterns, which reduced the search tree. For the learning process, the agent collected positive and negative examples of sacrifices from middle-game patterns and used Quinlan's C5 for creating the decision tree. Instead of calculating billions of positions, the agent could recognize patterns while achieving acceptable classification performance.

    Kendall et al. [24] proposed a methodology that used a GA to select optimal weights for the evaluation function. The minimax tree with the alpha-beta pruning algorithm was used for locating the best moves, and Shannon's simple evaluation function, which includes material and mobility parameters, was adopted. Furthermore, a population of players was collected, from which two players were selected to play a chess game against each other. During testing, the player could beat the chess master after 69 moves as the white player and lost after 97 moves as the black player. However, this methodology needed many additional games to obtain acceptable results. It also needed a greater search-tree depth, which consequently increased the learning time.

    Boskovic et al. [25] proposed an approach that uses the Differential Evolution Algorithm (DEA) to find optimal weights for the evaluation function. The main idea was to run the same chess engine many times, each time with different parameters of the evaluation function. Preliminary results demonstrated that a small number of generations was sufficient to produce acceptable parameters and improve the evaluation function.

    Nasreddine et al. [26] proposed the dynamic boundary strategy, which uses an Evolutionary Algorithm (EA). It tries to find an optimal evaluation function by dynamically updating the weights associated with each chess piece except the king and the pawn. The engine's performance was tested against another one using Shannon's evaluation function. After 520 learning generations, it achieved a draw in the first game because of the 50-move limit, but won the second game after 21 moves. Furthermore, the chess engine was tested against Chessmaster 8000 in two games: it lost the first game after 81 moves and drew the second game because of the 50-move limit.

    Vázquez-Fernández et al. conducted a series of studies concerning the performance of the evaluation function. In [27], they built a search engine for a chess program based on an EA that automatically adjusted the weights of the evaluation function. Additionally, in [28], the research strengthened the evaluation process using a selection mechanism that decides which virtual player will pass to the next generation; a virtual player passes according to how well his moves match the grandmasters' moves. In [29], the study enhanced the engine's rating using an EA and the Hooke-Jeeves algorithm. In [30], the work refined the performance using a Neural Network (NN) based on unsupervised learning.

    3 The Proposed Approach

    Fig. 1 shows our proposed framework. It is composed of four consecutive phases, as follows:

    Figure 1: The proposed framework

    (1) Building the action library. This library was built by parsing a large set of chess grandmasters' games and storing them as a set of descriptive cases, as described in Subsection 3.1.

    (2) Minimizing the search space. When the opponent makes a move, the proposed engine retrieves a list of reactions from the action library, filtered twice as described in Subsection 3.3. Firstly, it returns only the reactions of the grandmasters to the same opponent's move. Secondly, a similarity function is applied to select the most similar board.

    (3) Feeding the minimax search tree with the list resulting from the filtering stage and pruning the search using the alpha-beta algorithm. The minimax search tree expands the moves, and the retrieved positions are estimated using the proposed evaluation function. Besides, the best action that increases the case's power is selected, as described in Subsection 3.4.

    (4) Retaining the updated representation of the case. After locating the best move, the board is updated and stored in the action library as a new case for future use, as described in Subsection 3.5.

    3.1 Building Action Library

    The proposed action library was formed by mining 6,000 games. It consists of 300,000 mappings between the white opponent's action moves and the black opponent's reactive actions. Moreover, the scheme of a case within the action library takes the form C = {Wm, Wf, Bm, Bf, S}. Wm specifies the action of the white opponent. Wf represents the Forsyth-Edwards Notation (FEN) after the white opponent's move. Additionally, Bm embeds a description of the black opponent's reaction, and Bf represents the FEN after the black opponent's reaction. Finally, the string S encodes the complete textual representation of the case after playing the move [22].
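    For illustration only, the case scheme above can be held in a small record such as the following Python sketch; the field layout mirrors C = {Wm, Wf, Bm, Bf, S}, while the class name, lower-case field names, and placeholder values are assumptions of this sketch rather than the paper's implementation.

    # A minimal sketch of one action-library case, assuming the scheme
    # C = {Wm, Wf, Bm, Bf, S} described above; the values are placeholders.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Case:
        wm: str                   # white opponent's move, e.g. "hxg6" in SAN
        wf: str                   # FEN string of the board after the white move
        bm: Optional[str] = None  # black opponent's reaction (filled in later)
        bf: Optional[str] = None  # FEN string after the black reaction
        s: Optional[str] = None   # complete textual encoding of the case

    # A partially specified case, as it looks right after the opponent's move:
    current = Case(wm="hxg6", wf="<FEN after the white move>")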

    3.2 Engine Specification

    To specify the chess engine, the researchers have to clarify the initial state, the terminal state, the set of operators, and the utility function.

    3.2.1 Initial State

    Initially, all pieces are located on the board in their legal starting positions. Once the game starts, the white opponent moves one piece to an allowed position. The board's physical representation of this case is referred to as the initial state of the search space.

    3.2.2 Terminal State

    A chess game can be terminated in one of the following cases: 1) Win/Loss: it happens when the king of one player is threatened by the opponent and cannot move to a safe square, while the player cannot capture the attacking piece. 2) Insufficient material: the game ends in a draw when both players lack sufficient material, so that delivering checkmate with the remaining pieces is impossible. 3) Threefold repetition: it happens when a specific position occurs three times in a game; in such a case, one of the players can claim a draw and end the game. The terminal state is referred to as T, in which each case takes the value 0 or 1. The starting position is referred to as T = {0,0,0}, and the terminal state in the case of Win/Loss is T = {1,0,0}.

    3.2.3 Set of Operators

    The set of operators refers to all the possible actions that a player can make on his turn. A possible action is a legal move in the game. For instance, a pawn can move one square forward, or two squares on its first move. A knight moves in an 'L' shape: two squares along a rank or file and then one square at a right angle. A bishop can move diagonally over any number of empty squares. A rook can move either horizontally or vertically over any number of empty squares, in addition to its role in castling. A queen can move over any number of empty squares in a diagonal, horizontal, or vertical direction. Finally, a king can move only one square diagonally, horizontally, or vertically, in addition to the special castling move. The current research refers to the set of operators O as follows:

    O = {P: {F1, F2}, Kn: {L}, B: {Dn}, R: {Hn, Vn}, Q: {Dn, Hn, Vn}, K: {D1, H1, V1}}
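    For illustration, the operator set O above can be read as the following Python mapping; the dictionary names and the use of None to mean "any number of empty squares" are assumptions of this sketch.

    # A minimal sketch of the operator set O, assuming F = forward, L = L-shaped,
    # D/H/V = diagonal/horizontal/vertical, and the number (or None) giving the
    # maximum move distance in empty squares.
    OPERATORS = {
        "P":  {"F": 2},                            # pawn: one square, two on its first move
        "Kn": {"L": 1},                            # knight: fixed L-shaped jump
        "B":  {"D": None},                         # bishop: any number of empty diagonal squares
        "R":  {"H": None, "V": None},              # rook: ranks and files (plus castling)
        "Q":  {"D": None, "H": None, "V": None},   # queen: all three directions
        "K":  {"D": 1, "H": 1, "V": 1},            # king: one square (plus castling)
    }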

    3.2.4 Utility Function

    The utility function measures performance over a set of game positions. The utility is calculated using an evaluation function, which evaluates the current chess position, and the best next move is selected according to this evaluation. The proposed evaluation function is computed as given in Eq. (1).

    3.3 Minimizing the Search Space

    After each opponent's move, the proposed engine performs two filtration steps to minimize the search space based on the similarity scheme. Two levels of similarity are considered: the action-based similarity and the encoding scheme-based similarity, respectively.

    3.3.1 Action Based Similarity

    It retrieves from the action library all the grandmaster cases containing reactions to the current opponent's move. Moreover, Fig. 2 shows the physical representation of the considered root of the search space, taken from the Adams vs. Pasman grandmaster game after the 20th move. Figs. 3–5 show the physical representation of the retrieved black reaction cases in response to the move hxg6 of the white opponent, as in Fig. 2.

    Figure 2: Board from the grandmaster game Adams vs. Pasman after move 21

    Figure 3: hxg6; black pawn at square h6 captures white pawn at square g6

    Figure 4: fxg6; black pawn at square f7 captures white pawn at square g6

    3.3.2 The Encoding Scheme Based Similarity

    It retrieves a list of the most similar cases based on two subsequent steps. The first step measures the similarity between the encoding strings of the current case and each retrieved case, and assigns them similarity degrees in the range 0 to 1. The second step discards the cases whose similarity degree is less than the proposed similarity-degree threshold. Algorithm 1 illustrates the similarity scheme along with the sequence of steps.

    Figure 5: Qd7; black queen at square d8 moves to square d7

    Algorithm 1: Minimizing the search space
    Function Minimize Search Space(board, library) returns similar cases
    Input: board, the current case of the game; library, an action-based library, a table indexed by case sequence
    Variables: Wm, the current move of the white opponent; S, the encoding string of the current case; σ, the similarity degree; L, the list of complete cases that contain a black move responding to Wm; Lsim, the list of grandmaster cases that are similar to the current case
    Begin:
    1. Wm = Case-Parsing(board)
    2. S = Case-Parsing(Wm, board)
    3. L = Look-Up(Wm, library)
    4. Lsim = Case-Match(S, L, σ)
    5. return Lsim
    End
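    A minimal Python sketch of the two filtration steps in Algorithm 1 is shown below. It assumes the Case record sketched in Subsection 3.1 and uses difflib's ratio() as a stand-in for the paper's encoding-string similarity measure; the function and parameter names are illustrative.

    # Sketch of Algorithm 1: action-based filter followed by encoding-based filter.
    from difflib import SequenceMatcher

    def minimize_search_space(current_wm, current_s, library, threshold=0.5):
        """Return grandmaster cases answering the same White move whose encoding
        string is sufficiently similar to the current case."""
        # Step 1: action-based filter -- keep cases that react to the same White move.
        candidates = [case for case in library if case.wm == current_wm]
        # Step 2: encoding-based filter -- keep cases whose encoding string is
        # close enough to the current board's encoding string.
        similar = []
        for case in candidates:
            degree = SequenceMatcher(None, current_s, case.s).ratio()  # value in [0, 1]
            if degree >= threshold:
                similar.append((degree, case))
        similar.sort(key=lambda pair: pair[0], reverse=True)  # most similar first
        return [case for _, case in similar]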

    3.4 Search

    This stage includes three subsequent steps: expanding, evaluating, and selecting the best case responding to the opponent's current move. The engine uses the minimax algorithm with alpha-beta pruning to control the proposed search strategy.

    3.4.1 Expanding the Search Tree

    The retrieved list now contains the best reactions with their cases, which could be played as responses to the current opponent's move. These reactions feed the search tree and are expanded to select the best among them.

    3.4.2 Evaluate Cases

    Once the engine constructs the search tree and specifies the depth limit, it starts recursively from the leaf nodes to measure the power of its expanded nodes within the search space. This guides the search procedure to locate the best reactive move. The proposed engine alters the form of the evaluation function by maximizing the roles of the surrounding defenders and attackers. Moreover, the proposed evaluation function takes the form given below in Eq. (1).

    • Pow(S) is the power of the side S under evaluation.

    • P is the pieces' count of the side S under evaluation, with n running from 1 to p.

    • Pow(P) is the power of the piece P.
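    The typeset equation is not reproduced in this copy of the paper; based on the symbol definitions above, Eq. (1) can be read (as an assumption) as the sum of the powers of all pieces of the side under evaluation:

    % Reconstruction of Eq. (1) from the surrounding symbol definitions (assumed form)
    \mathrm{Pow}(S) \;=\; \sum_{n=1}^{p} \mathrm{Pow}(P_{n})    \qquad (1)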

    Pow(P) is calculated using the formula given below in Eq. (2).

    • Pow(P) is the power of the piece P of the side under evaluation.

    • Pos(P) is the positional value of the piece P.

    • Mat(P) is the material value of the piece P.

    • D is the number of defenders of the piece P, with n running from 1 to d.

    • Pos(D) is the positional value of a defending piece D.

    • Mat(D) is the material value of a defending piece D.

    • A is the number of attackers of the piece P, with n running from 1 to a.

    • Pos(A) is the positional value of an attacking piece A.

    • Mat(A) is the material value of an attacking piece A.
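    Likewise, based on the symbol definitions above and on Algorithm 3, Eq. (2) can be read (again as an assumption about the missing typeset equation) as the piece's positional and material value, increased by its defenders and decreased by its attackers; Algorithm 3 additionally weights each defender and attacker by its appearance frequency in the net chain lists:

    % Reconstruction of Eq. (2) from the surrounding definitions and Algorithm 3 (assumed form)
    \mathrm{Pow}(P) \;=\; \mathrm{Pos}(P) + \mathrm{Mat}(P)
      \;+\; \sum_{n=1}^{d}\bigl(\mathrm{Pos}(D_{n}) + \mathrm{Mat}(D_{n})\bigr)
      \;-\; \sum_{n=1}^{a}\bigl(\mathrm{Pos}(A_{n}) + \mathrm{Mat}(A_{n})\bigr)    \qquad (2)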

    Algorithm 2 illustrates that the implementation of the evaluation process is decomposed into three main steps, as follows:

    (1) Creating both the defenders' net chain list and the attackers' net chain list of each piece.

    (2) Tracking the frequency of pushing each defender and attacker inside the lists.

    (3) Getting the power of each piece, taking into account its associated net chain lists.

    Algorithm 2: The evaluation process
    Function Evaluate(C) returns power
    Input: C, the current case of the minimax search tree
    Variables: power, the calculated power of the board, initially 0; P, an individual piece of the white or black opponent within the board; DL, the net chain list of the defenders of each piece; AL, the net chain list of the attackers of each piece
    Begin:
    1. For each P in C
    2.   Push the current piece to the main list
    3.   DL = Build Defender List(P, C)
    4.   AL = Build Attacker List(P, C)
    5.   power += Estimate Power(P, C, DL, AL)
    6. End for each
    7. return power
    End

    Moreover, to clarify the proposed evaluation function introduced in this research, we traced the mechanism of computing the power of the black pawn residing at g6. In this case, we refer to it as P/g6, as shown in Fig. 6.

    Figure 6: The physical representation of the P/g6 position and its surrounding pieces

    When the pawn P/g6 is presented for power calculation, the algorithm pushes the piece P onto the Main List (ML) and locates it at position g6. Furthermore, the textual representation held in memory takes the form {piece/position: frequency}. In this case, the defenders' net chain list (DL) for P/g6, with the appearance frequency of each defender, is represented by {P/h7: 1}. On the other hand, the attackers' net chain list (AL) for P/g6, with the appearance count of each attacker, is represented by {N/h4: 1}.

    Besides, the algorithm iteratively passes through each piece within the associated DL and AL. It checks whether the piece at the specified position has been visited or not, as follows:

    (1) If it has been visited before, its defenders and attackers already exist and there is no need to expand them again; here, the frequency of its repeated appearance is increased by 1.

    (2) If it has not been visited yet, the algorithm will push the piece as an item in the main list and expand its defenders and attackers.

    In general, the above condition prevents the search procedure from getting stuck in an infinite loop. The relation between Rc8 and Qd8 is a clear example of such a case: every time the engine investigates the defenders of the piece Rc8, the piece Qd8 appears as an item in its DL, and vice versa. Tab. 1 illustrates the expansion lists (ML, DL, and AL) for the piece P/g6, as shown in Fig. 6. Algorithm 3 presents the procedure of estimating the power of every single piece within the board.
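    The visited-check can be pictured with the following Python sketch; the recursive expansion and the helper callables build_defenders and build_attackers are illustrative assumptions, not the paper's code.

    # Sketch of the cycle-safe expansion of defender/attacker net chains.
    # main_list maps "piece/position" keys (e.g. "P/g6") to appearance frequencies.
    def expand_chains(piece, board, main_list, build_defenders, build_attackers):
        if piece in main_list:
            # Already expanded: only count the repeated appearance, do not recurse,
            # which prevents cycles such as Rc8 <-> Qd8 from looping forever.
            main_list[piece] += 1
            return
        main_list[piece] = 1
        for defender in build_defenders(piece, board):
            expand_chains(defender, board, main_list, build_defenders, build_attackers)
        for attacker in build_attackers(piece, board):
            expand_chains(attacker, board, main_list, build_defenders, build_attackers)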

    Table 1: Tracing the expansion lists of the piece P/g6

    Algorithm 3: Estimation of the power of a single piece
    Function Estimate Power(P, C, DL, AL) returns power
    Input: P, the current piece within the board; C, the current case of the minimax search tree; DL, the net chain list of the defenders of each piece; AL, the net chain list of the attackers of each piece
    Variables: power, the calculated power of the board, initially 0; def_power, the calculated power of the defenders in DL, initially 0; att_power, the calculated power of the attackers in AL, initially 0; def_pos, the position of the defender piece within the board; att_pos, the position of the attacker piece within the board
    Begin:
    1. For each def in DL
    2.   def_pos = GetPosition(def, C)
    3.   def_power += (GetPosVal(C, def, def_pos) + GetMatVal(def)) * GetFreq(def)
    4. End for each
    5. For each att in AL
    6.   att_pos = GetPosition(att, C)
    7.   att_power += (GetPosVal(C, att, att_pos) + GetMatVal(att)) * GetFreq(att)
    8. End for each
    9. power = (GetPosVal(C, P, piece_pos) + GetMatVal(P) + def_power) − att_power
    10. return power
    End
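    For readability, a Python rendering of Algorithm 3 might look as follows; the helpers get_pos_val and get_mat_val and the dictionary form of DL/AL (piece mapped to its appearance frequency) are assumptions of this sketch.

    # Sketch of Estimate Power: own value plus defenders, minus attackers (Eq. (2)).
    def estimate_power(piece, board, dl, al, get_pos_val, get_mat_val):
        def_power = sum((get_pos_val(board, d) + get_mat_val(d)) * freq
                        for d, freq in dl.items())
        att_power = sum((get_pos_val(board, a) + get_mat_val(a)) * freq
                        for a, freq in al.items())
        return get_pos_val(board, piece) + get_mat_val(piece) + def_power - att_power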

    3.4.3 Select the Best Case

    Once each expanded case has been evaluated, the engine strives to select the optimal move for one opponent by assuming that the other opponent also plays optimally. This motivates the max player to identify the maximum power value and the min player to select the minimum value [31].
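    A self-contained Python sketch of minimax with alpha-beta pruning, which drives this selection, is given below; the nested-list tree and the toy values are illustrative only.

    # Minimax with alpha-beta pruning. A node is either a numeric leaf value
    # (the power computed by the evaluation function) or a list of child nodes;
    # in the engine the children would be the reactions retrieved from the library.
    def alphabeta(node, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
        if depth == 0 or not isinstance(node, list):
            return node                       # leaf: already-evaluated power value
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:             # beta cut-off: min player avoids this line
                    break
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
                beta = min(beta, value)
                if beta <= alpha:             # alpha cut-off
                    break
            return value

    # Toy tree of evaluated positions: the max player can secure a value of 3.
    print(alphabeta([[3, 5], [2, 9]], depth=2))   # -> 3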

    3.5 Retaining

    Once the best action has been selected, the engine completes the given case's incomplete textual representation. It adds the best black move Bm, the FEN after the black move Bf, and the complete specification of the encoding string S. Additionally, the engine extends its experience by pushing the new case into the action library as newly gained experience, which enhances its performance for further actions. Algorithm 4 illustrates the steps of updating the action library.

    Algorithm 4: The retain process for empowering the action library
    Function Retain(C, best move)
    Input: C, a fully specified description of the case; best move, the best move selected using minimax with alpha-beta
    Begin:
    1. Wm = Case-Parsing(C)
    2. Wf = Case-Parsing(Wm, C)
    3. Bf = Case-Parsing(best move, C)
    4. S = Case-Parsing(best move, C)
    5. C = Update-Case(C, Wm, Wf, best move, Bf, S)
    6. Append C to the end of the library.
    End
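    A compact Python sketch of the retain step could look as follows, assuming the Case record sketched in Subsection 3.1 and illustrative helpers fen_after and encode_case for producing the new FEN string and the textual encoding.

    # Sketch of the retain step: complete the case and append it to the library.
    def retain(case, best_move, library, fen_after, encode_case):
        case.bm = best_move                       # best black reply from the search
        case.bf = fen_after(case.wf, best_move)   # FEN after playing the reply
        case.s = encode_case(case)                # complete textual representation
        library.append(case)                      # new experience for future games
        return case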

    4 Experiments

    4.1 Dataset

    The experiments used our action library, built by parsing the grandmaster games taken from: (https://ccrl.chessdom.com/ccrl/4040/games.html). Moreover, as mentioned in Section 3.5, its size grows while playing; therefore, the engine gains experience.

    4.2 The Evaluation Criteria

    The proposed engine played 200 games for each experiment against Rybka 2.3.2a, which is available at: (http://www.rybkachess.com/index.php?auswahl=Demo+version). We conducted the experiments at 2500, 2300, 2100, and 1900 rating points. The Bayeselo tool, responsible for estimating the engine's rating, is available at: (https://www.remi-coulom.fr/Bayesian-Elo/). We counted the number of games in which the engine drew, won, or lost.

    4.3 Experiments

    4.3.1 Experiment A

    This experiment estimated the effect of implementing the proposed evaluation function on improving the pieces' safety ratio. The proposed engine played 200 games twice: it used the standard evaluation function for the first round and the proposed evaluation function for the second.

    Throughout this experiment, the proposed engine fed its search trees with all the legal moves at any position. Tab. 2 illustrates the results of both trials. At all rating points, the implementation of the proposed evaluation function produced better results because it considers both the defenders' and the attackers' net chain lists of each piece.

    Table 2: Results of experiment A

    4.3.2 Experiment B

    Throughout this experiment, the engine implemented the proposed evaluation function. Also, it fed its search tree with the legal moves retrieved from the action library.

    Tab. 3 illustrates the results of extending the action library at different rating points. At the 1900 rating point, the number of games the engine won increased from 113 to 167 compared to the results reported in experiment A. The number of wins also increased from 98 to 117 and from 69 to 78 at the 2100 and 2300 rating points, respectively. This means that feeding the engine with the moves played by grandmasters in response to the current opponent's move enhanced the results.

    Table 3: Results of experiment B

    4.3.3 Experiment C

    Throughout this experiment, the engine implemented the proposed evaluation function. Also, it fed its search tree with the legal moves retrieved from the action library after applying the two levels of the similarity scheme. In addition, a series of similarity degrees was tried to find the best value for retrieving the most similar cases.

    Tab. 4 illustrates the results of conducting a series of similarity degrees, including 0.4, 0.5, 0.6, 0.7, and 0.8, respectively, with 200 games played for each. The results showed that the performance at the similarity degrees 0.4 and 0.5 is very close, and both are better than the results of playing the games at the other similarity degrees. Furthermore, the performance of the engine improved after adding the effects of these similarity schemes.

    Table 4: Results of experiment C using different similarity degrees

    4.4 The Proposed Engine vs. Another Engine

    The proposed engine's performance was compared with another engine developed by Vázquez-Fernández et al. [18]. The rating of the proposed engine in experiment A was 2167 points; it reached 2217 points and 2483 points in experiments B and C, respectively. Tab. 5 illustrates that the proposed engine achieved rating points higher than those presented in [18]: 166 rating points higher than the evolved virtual player and 982 rating points higher than the non-evolved virtual player.

    Table 5: The proposed engine results compared to Vázquez-Fernández et al. [18] results

    5 Conclusion

    In this research, we developed a new chess engine that thinks and plays as human experts do. It retrieves the best move by searching the constructed action library, which acts as a repository of the grandmasters' actions. This work introduced a proposed form of the evaluation function and implemented a minimax search tree with the alpha-beta pruning algorithm. Besides, it included two levels of similarity schemes.

    Throughout the experiments, 200 games were played for each test against Rybka 2.3.2a, as the opponent chess engine, at different rating points. The number of games in which the engine drew, won, or lost was counted. The results showed that the two similarity levels enhanced the list of moves retrieved from the action library. Overall, this work keeps the pieces safe as long as the associated defenders and attackers can be tracked throughout the game. Moreover, the experience of the engine increases with each game played. Additionally, the results demonstrated that the proposed engine achieved higher rating points than the other engine.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
