
SmartCrawler: A Three-Stage Ranking Based Web Crawler for Harvesting Hidden Web Sources

Computers, Materials & Continua, December 2021

Sawroop Kaur, Aman Singh,*, G. Geetha, Mehedi Masud and Mohammed A. Alzain

1 Computer Science and Engineering, Lovely Professional University, 144411, Punjab, India

2 CEO, Advanced Computing Research Society, Tamil Nadu, India

3 Department of Computer Science, Taif University, Taif, 21944, Saudi Arabia

4 Department of Information Technology, College of Computer and Information Technology, Taif University, Taif, 21944, Saudi Arabia

Abstract: Web crawlers have evolved from performing a single meagre task into tools for collecting statistics, security testing, web indexing and numerous other applications. The size and dynamism of the web make crawling an interesting and challenging task. Researchers have tackled various issues and challenges related to web crawling. One such issue is efficiently discovering hidden web data. The web crawler's inability to work with form-based data, together with the lack of benchmarks and standards for both performance measures and evaluation datasets, keeps hidden web crawling an immature research domain. Applications such as vertical portals and data integration require hidden web crawling. Most existing methods return only the top k matches, which makes exhaustive crawling difficult: highly ranked documents are returned multiple times, while low-ranked documents have slim chances of being retrieved. Discovering hidden web sources and ranking them by relevance is a core component of hidden web crawlers. Ranking bias, heuristic approaches and saturation of the ranking algorithm lead to low coverage. This research presents an enhanced ranking algorithm based on a triplet formula for prioritizing hidden websites to increase the coverage of the hidden web crawler.

Keywords: Hidden web; coverage; adaptive link ranking; query selection; depth crawling

    1 Introduction

Web crawling is defined as the automatic exploration of the web. One of the trending research topics in this area is hidden web crawling. The information available on the hidden web lies behind HTML forms. It can be accessed either by posing a query to general search engines or by submitting forms. Reaching a high rate of coverage with either method is full of challenges. Locating the hidden web sources, selecting relevant sources and then extracting the underlying content are the three main steps in hidden web crawling. The scale of the hidden web is large, so manual identification of hidden web sources is hard; it has to be automatic. In the case of dynamic web pages, data is required from the web server, but plain HTML does not deal with databases, so an application server is required. When a client asks the web server for a page, the server checks whether it already has that page; if the page has to be created on demand, it is called a dynamic page. The web server then interacts with the application server, which requests the page from the database. The database processes the data and a page is sent back to the client. The client never comes to know how its page was produced. Pages of dynamic websites are not coded and saved separately; they are populated dynamically every time.

Web dynamism, the non-availability of static links and the web crawler's inability to automatically fill query interfaces are the main reasons for the existence of the hidden web [1]. A web form is composed of one or more fields and control elements [2]. The purpose of a form attribute is conveyed through its label. For example, if a book is to be searched, the title attribute of the form indicates the name of the book to be searched. The purpose of such a query form would be to satisfy queries like "find books from XYZ publication". An HTML form is recognized by its <form> and </form> tags. When the form is submitted with suitable values, the web browser sends an HTTP request to the server. This request includes the inputs and their corresponding values. Two methods, GET and POST, exist for this process. In the GET method, the parameters are included as part of the URL in the request, so the URLs are unique. In the POST method, the parameters are sent in the body of the HTTP request; this method is intended for submissions that change the state of the form's underlying resource.
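As a hedged illustration of the two submission methods (not part of the paper's implementation), the snippet below submits the same hypothetical search form once with GET and once with POST using the Python requests library; the URL and field names are placeholders.

```python
import requests

SEARCH_URL = "https://example.com/books/search"   # hypothetical form endpoint
form_values = {"title": "web crawling", "publisher": "XYZ publication"}

# GET: parameters become part of the URL, so every distinct query has a unique URL.
get_response = requests.get(SEARCH_URL, params=form_values, timeout=10)
print(get_response.url)           # .../search?title=web+crawling&publisher=XYZ+publication
print(get_response.status_code)

# POST: parameters travel in the body of the HTTP request instead of the URL.
post_response = requests.post(SEARCH_URL, data=form_values, timeout=10)
print(post_response.status_code)
```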

Dimensions of form identification are domain vs. function, form structure, and pre- vs. post-query approaches [3]. It has been inferred that most existing hidden web crawlers assume that all matching documents are returned by a query. In practice, queries return only the top k matches, which makes exhaustive data harvesting difficult: the top k documents are retrieved more than once, while low-ranked documents have slim chances of being retrieved. Information extraction techniques have matured, and with the pervasiveness of programmable web applications a lot of research has been done on the first two steps of hidden web crawling. Query selection techniques play a vital role in the quality of crawled data. Our present work addresses the query selection problem in text-based ranked databases. To remedy the shortcomings of ranking algorithms, the major contributions of this research are:

• An effective triplet-function-based ranking reward is proposed to eliminate ranking bias towards popular websites. The algorithm combines out-of-site links, a weighting function and similarity into a collective value to rank a document. The drawback of ranking bias is mitigated by adding the number of out-of-site links to the reward function. The approach considerably increases the coverage of the crawler.

• We propose a function for retrieving quality data and design a stopping criterion that saves the crawler from falling into spider traps. These rules also mitigate the draining of the frontier per domain and status code. Both the ranking and the learning algorithms are adaptive and leverage information from previous runs.

• Since the data sources are form-based, the approach works with both the GET and POST methods. With a given return limit, the algorithm proves efficient at curating unique documents. Both the detection and the submission of forms are automatic, and with successive runs the dependency on seed sources is reduced.

Section 2 reviews existing research on hidden web crawling and ranking algorithms. Section 3 defines the proposed approach. Section 4 discusses the results of our experiments. Section 5 concludes the paper and outlines future lines of research.

    2 Literature Review

The goal of a hidden web crawler is to uncover data records so that the resulting pages can be indexed by search engines [3]. The attempt is to retrieve all the potential data by issuing queries to a web database. Finding values for the forms is a challenging task. For controls whose values are definite, i.e., chosen from an existing list, this is relatively easy compared to text-based queries. Query generation methods for the hidden web are either based on prior knowledge, i.e., constructing a knowledge base beforehand, or work without any knowledge [4]. A query selection technique that generates the next query from frequent keywords in previously retrieved records was first presented in [5]. To locate the entry points of the hidden web, four classifiers are used: a term-based classifier decides whether a page has information about a topic; a link page classifier finds links that will lead to a page with a search form; a search form classifier discards non-searchable forms; and a domain-specific classifier collects only relevant information from the search forms available on websites. The crawler is focused on finding relevant (searchable) forms by using multi-level queues initialized with a seed set [6]. Search forms have different types, each type fulfilling a different purpose.

a. Importance of Coverage in the Hidden Web Crawler

Authors in [7] suggested that, along with scalability and freshness, coverage is another important measure for hidden web crawlers, since scalability and freshness cannot measure the effectiveness of form-based crawlers. Coverage is defined as the ratio of the total number of relevant web pages that the crawler has extracted to the total number of relevant web pages in hidden web databases. For this, a crawler is dependent on database content. Another metric, submission efficiency, is defined as the ratio of response web pages with search results to the total number of forms submitted by the crawler during one crawl activity. Suppose a hidden web crawler has crawled N_c pages, let N_T denote the total number of domain-specific hidden web pages, and let N_sf be the total number of domain-specific searchable forms. Then the harvest ratio is defined as N_sf/N_c and coverage as N_sf/N_T. The harvest ratio measures the proportion of relevant forms crawled per web page, while coverage is the ability to crawl as many relevant pages as possible with a single query. Whether the crawled content is relevant to the query is measured by precision [8]. Authors in [9] introduced another measure called specificity for the hidden web. In another study, coverage is defined as the number of web pages that can be downloaded by updating query keywords [10]. The above literature shows that different studies have defined coverage in different ways.
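A minimal sketch of the two metrics just defined, with counts and function names of our own choosing:

```python
def harvest_ratio(n_searchable_forms: int, n_crawled_pages: int) -> float:
    """Harvest ratio: domain-specific searchable forms found per crawled page (N_sf / N_c)."""
    return n_searchable_forms / n_crawled_pages if n_crawled_pages else 0.0

def coverage(n_searchable_forms: int, n_total_domain_pages: int) -> float:
    """Coverage: fraction of all domain-specific hidden web pages reached (N_sf / N_T)."""
    return n_searchable_forms / n_total_domain_pages if n_total_domain_pages else 0.0

# Made-up counts: 120 searchable forms found across 2000 crawled pages,
# against an estimated 1500 domain-specific hidden web pages.
print(harvest_ratio(120, 2000))   # 0.06
print(coverage(120, 1500))        # 0.08
```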

b. The Crawling Problem

Every crawler has to set a limit on the number of documents to be returned. The crawlable relationship can be represented by a document-query matrix. Unlike a traditional document-term matrix, an entry is not guaranteed even if the document contains the query keyword: the document also has to score a rank within the top k matches. The problem is to select a subset of the Q terms that covers as many documents as possible. Let D_H be a hidden web database or data source that contains D_I documents, and let M be the ranking number.

Suppose that in a data source D_m is placed at a higher rank than D_n. A database can be queried with a set of queries, and several families of techniques exist for selecting them: (i) set covering approaches, (ii) machine learning based methods, (iii) heuristic methods and (iv) ranked crawling. The set covering method was first implemented by [11]; the document frequencies are estimated from fully or partially downloaded pages. Fig. 1 shows the basic steps involved in hidden web crawling. Unlike traditional crawling, when forms are encountered they are analysed and filled with suitable values, and the response generated after submitting the form data is used for further harvesting or indexing. In machine learning methods for ranking, each feature is assigned a weight and the learning algorithm learns the relation between features and weights.
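As an illustration of the set covering view of query selection (a generic greedy sketch, not the specific method of [11]), the routine below picks query terms that cover the most not-yet-covered documents from an estimated document-query relation:

```python
def greedy_query_selection(doc_ids, query_to_docs, budget):
    """Greedily pick up to `budget` query terms so that the chosen terms
    cover as many documents as possible (classic set-cover heuristic)."""
    uncovered = set(doc_ids)
    chosen = []
    for _ in range(budget):
        # Pick the query whose (estimated) result set covers the most uncovered documents.
        best_q = max(query_to_docs, key=lambda q: len(query_to_docs[q] & uncovered), default=None)
        if best_q is None or not (query_to_docs[best_q] & uncovered):
            break
        chosen.append(best_q)
        uncovered -= query_to_docs[best_q]
    return chosen, uncovered

# Toy example: three candidate query terms and five documents.
docs = {1, 2, 3, 4, 5}
matches = {"python": {1, 2, 3}, "crawler": {3, 4}, "ranking": {4, 5}}
selected, left = greedy_query_selection(docs, matches, budget=2)
print(selected, left)   # ['python', 'ranking'] set()
```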

Figure 1: Basic steps of hidden web crawling

It has been observed that if a document has a low rank, its probability of being retrieved is low, whereas a highly ranked document can be retrieved multiple times. Retrieving the same document multiple times is a waste of resources, so the system needs to keep track of the documents already retrieved. Across the seven domains under consideration, the data set consists of 25798 documents in total. The document IDs were taken as document ranks. The query words are the most popular words, with frequencies ranging from 1000 to 6000. If we set the return limit k = 100, only the top 1000 documents can be reached. To address this problem, we propose a ranking method based on term weighting, the number of out-of-site links and site similarity. Term weighting is estimated from the sample data, and the terms and documents whose frequencies are less than k are selected. In the performance evaluation we have tested our method over the book, product, auto, flight, hotel, music and premier domains.
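The reachability limit can be illustrated with a small simulation (illustrative only; the frequencies and counts below are arbitrary, not the paper's data set): with a return limit k, each query exposes at most its top k matches, so a handful of popular query words reaches only a small prefix of a ranked collection.

```python
def reachable_documents(query_match_ranks, k):
    """Union of documents reachable when every query returns only its top-k ranked matches."""
    reached = set()
    for ranks in query_match_ranks:          # ranks of the documents each query matches
        reached.update(sorted(ranks)[:k])    # ranked source: only the k best-ranked come back
    return reached

# 10 popular query words, each matching the 2000 best-ranked documents of a 25798-document source.
queries = [set(range(1, 2001)) for _ in range(10)]
print(len(reachable_documents(queries, k=100)))   # 100 -- far below 25798
```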

The critical literature review portrays inadequacies such as the form filling process executing only in the current run, with the outcomes of queries from previous runs not considered. The effect of ranking the web pages is not part of most of the techniques, and crawling becomes harder when queries depend on the ranking. Ordinary crawling methods face the query bias problem and do not perform well on ranked data sources. Ranking bias increases with the size of the dataset: if all queries match more than N web pages, the bottom-most web pages have the least chance of being retrieved, which makes the crawling process inefficient. Therefore, in this paper, a novel ranking technique for hidden web crawling is proposed. A hidden web database is said to be ranked if it returns the top k sources. The ranking in this approach is not dependent on queries; the ranking formula is a triplet factor of out-of-site links, term weighting and site similarity.

    The following observations motivated the idea:

(1) It is desirable to have a system that can automate data retrieval from hidden web resources. No single crawl can provide all the desired data resident in a database, therefore multiple crawls need to be carried out to exhaustively retrieve all the content in the web database.

(2) To address the ranking problem, we propose a ranking formula that helps to retrieve a document even if it is at the bottom of the list. The key idea is to combine term weighting, site frequency and cosine similarity into a single ranking formula. Since the actual weighting frequencies are unknown, they are estimated from sample documents.

(3) During form filling, for every domain, the crawler tries to match its attributes with the fields of the form using cosine similarity.

(4) It is also desirable that the crawler work with both types of submission methods. The proposed crawler not only implements both the GET and POST methods but also handles more types of forms and status codes.

    3 Proposed System

Let D be a database with documents d_1, d_2, ..., d_m, where 1 ≤ i ≤ m. The subscript i of d_i is the rank of the document: if i < j, then d_i is ranked higher than d_j. Queries posed to database D are denoted by q_j (1 ≤ j ≤ n), and each query covers the set of documents in D that it matches.

For the sake of simplicity, we ignore query terms that are not present in any document. If q_j covers a number of documents, then F(q_j) denotes the document frequency of q_j. If the return limit is L, the crawlable relationship is captured by the query matrix A = (m_ij), in which rows and columns represent the documents and queries respectively.

The goal is to select a subset of terms q′ (q′ ⊆ q) that covers the maximum number of documents in D at minimum cost. A query is said to have query bias if the probability of a document being matched by it is unequal across documents. Query bias and document size are directly proportional, so query bias is larger for large documents. Ranking bias is the probability that a document can be returned. One of the most prominent features of hidden web sources is that documents are ranked so that the top documents are returned for queries.

To find the weighted average, the following notation is assumed: the actual number of documents is N, and q queries are sent to D. M_d denotes the total number of documents that are matched, δ_j is the total number of new documents retrieved by a query, U_j is the total number of unique documents, D_d denotes the total number of duplicate documents, L is the return limit and R is the ratio of distinct documents to actual documents.

If all the documents matched with a query are returned, then let M_0 denote this model, in which the subscript 0 refers to the variation of the match probability.

Since our method is based on machine learning, the query selection function is defined in terms of:

s = state of the crawler

R(s, q) = reward function; for simplicity we denote R(s, q) = €

However, this function alone cannot return promising queries for the hidden web, where the documents are ranked, and it cannot measure the quality of the data. In web crawling, the quality of a URL is indicated by the frequency of out-of-site links, so the ranking function is enhanced by summing the weighting term and the out-of-site frequency. The remaining issue is to find similar data, so the third function is the cosine similarity between the vectors.

Let SF be the frequency of out-links. Section 3.1 explains the details of the derived formula. If S_q represents small queries, then S_q is a subset of S_Q. Suppose F(q) is the term frequency-inverse document frequency, with F(q) < L. S_q is selected randomly from S_Q.

Fig. 2 shows the way a query is chosen. According to our model, the system can choose queries with small frequencies to reach high coverage, and query words are not chosen beyond the scope of D. Trying all frequencies is beyond the scope of this research, so we assume a return limit of 100 and select queries only within this range. In case the sample size is small, a maximum likelihood estimator is used to enlarge the sample of frequencies. For each term f(q), the enlargement factor F′_q is defined as:

Figure 2: Working of the query system

DB is the size of the original database and D is the sample size. With the computation of F′_q, document frequencies are used to rank all the terms in D. In the case of small or single-word queries, the terms are used to enhance the local terms. The function r_j computes the complete rank, while sample size issues are handled by F′_q.
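The paper's equation for F′_q is not reproduced above; a plausible scale-up estimator consistent with the surrounding description (the sample frequency scaled by the ratio of database size to sample size) would look like the following sketch, which should be read as an assumption rather than the exact formula:

```python
def enlargement_factor(sample_frequency: int, db_size: int, sample_size: int) -> float:
    """Scale a term's document frequency observed in the sample up to the full database.

    Assumed form: f(q) * |DB| / |D|, a maximum-likelihood style scale-up of the sample
    frequency; the paper's exact equation for F'_q is not reproduced here."""
    if sample_size == 0:
        return 0.0
    return sample_frequency * db_size / sample_size

# A term seen in 40 of 1000 sampled documents, database of 25798 documents.
print(enlargement_factor(40, 25798, 1000))   # ~1031.9 estimated documents containing the term
```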

In the model discussed above, we randomly selected the documents directly from the seed set and hence assumed that all matched documents are returned. In the case of documents with unequal probability, a ranked data source retrieves only the top (L/M) documents. In practice it is not easy to select queries with document frequencies fixed to any number. Suppose that t′ queries are sent to the data source, each matching m_i documents, where 1 ≤ i ≤ t′. We define the overflow rate as:

In actual data source size estimation, documents will have varying probabilities. Let us denote this case by M_h. The relation between the probability and the estimation function is obtained as follows:

Every crawler has a list of queues called the frontier or crawl frontier. It holds all the unvisited links, which are fetched by requesting the HTTP server. Crawling starts from one seed URL. Each page is processed and parsed to extract its content, all available links are checked for forms, the links are collected and the frontier is rearranged. This module extracts the links or hyperlinks and follows two heuristics. First, for every extracted link the rejection criterion is applied [12]. After that, the following stopping criteria are applied:

(1) The crawler stops crawling once a depth of three is reached. It is shown in [13] that most hidden web pages are found within depth 3. We have decided that at each depth the maximum number of pages to be crawled is 100.

(2) At any depth the maximum number of forms to be found is 100 or fewer. If the crawler is at depth 1 and has crawled 50 pages but no searchable form has been found, it moves directly to the next depth. The same rule is followed at depth 2: if 50 pages are crawled and no searchable form is found, the crawler fetches a new link from the frontier. This dramatically decreases the number of web pages the crawler has to crawl, while the links that remain are relevant for a focused crawl. A sketch of these limits is given below.
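A minimal sketch of how the stopping criteria above could be encoded; the helper names and exact control flow are our own, while the thresholds follow the text:

```python
MAX_DEPTH = 3
MAX_PAGES_PER_DEPTH = 100
MAX_FORMS_PER_DEPTH = 100
NO_FORM_PAGE_LIMIT = 50    # pages crawled at one depth without finding any searchable form

def depth_exhausted(depth: int) -> bool:
    """Criterion (1): stop descending once depth 3 has been crawled."""
    return depth >= MAX_DEPTH

def advance_to_next_depth(pages_at_depth: int, forms_at_depth: int) -> bool:
    """Criterion (2): move on when the per-depth page/form budget is spent,
    or when 50 pages at this depth yielded no searchable form."""
    if pages_at_depth >= MAX_PAGES_PER_DEPTH or forms_at_depth >= MAX_FORMS_PER_DEPTH:
        return True
    return pages_at_depth >= NO_FORM_PAGE_LIMIT and forms_at_depth == 0
```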

By design, the crawler keeps crawling web pages, but exhaustive crawling is not beneficial, so the crawl has to be limited; the stopping criteria discussed above serve this purpose. The most generic steps involved in designing a hidden web crawler that employs ranking of URLs are defined in Algorithm 1 as follows:

Algorithm 1: Hidden web crawling algorithm
Step 1: The crawler gets a URL from the frontier and requests the web server to fetch the page.
Step 2: This step has two parts: (a) after the web page is fetched, it is parsed and analysed, and the links available on it are extracted; (b) the page is checked against the rejection rules.
Step 3: The filtered links are sent to the crawl frontier. The rank of each URL is computed and the frontier list is re-ordered.
Step 4: The forms are analysed to determine the submission method, POST or GET. Forms filled with suitable values are sent to the server, which replies to the crawler about each entry of the form. The crawler submits the filled form by placing the received query words in the HTTP request.
Step 5: The fetched pages are ranked and the URLs are maintained in the database.
Step 6: Links found on the web pages are sent to the crawling list.
Step 7: Steps 1-6 are repeated.
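A compact sketch of Algorithm 1's control flow; the fetching, link extraction, rejection, ranking and form-handling callables are stubs to be supplied, and the function names are ours:

```python
import heapq

def crawl(seed_url, fetch, extract_links, is_rejected, rank, handle_forms, max_iterations=1000):
    """Skeleton of Algorithm 1: a ranked frontier of URLs drives fetching,
    link filtering, form submission and re-ranking."""
    frontier = [(0.0, seed_url)]                  # min-heap of (-rank, url); seed gets neutral priority
    seen = {seed_url}
    for _ in range(max_iterations):
        if not frontier:
            break
        _, url = heapq.heappop(frontier)          # Step 1: take the best-ranked URL
        page = fetch(url)                         # request the web server for the page
        if page is None:
            continue
        for link in extract_links(page):          # Step 2a: parse and extract links
            if is_rejected(link) or link in seen: # Step 2b: rejection rules
                continue
            seen.add(link)
            heapq.heappush(frontier, (-rank(link), link))   # Step 3: ranked frontier
        handle_forms(page)                        # Steps 4-5: analyse, fill and submit forms
```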

Most of the time the ranking formula is based on ranking the top k terms, which leads to ranking bias. To remedy this shortcoming, the traditional ranking algorithm is enhanced with a triplet factor. The following section describes the resulting ranking algorithm. The proposed algorithm operates in the context of text-based ranked hidden web sources and aims to maximize the number of hidden websites discovered while minimizing repeated visits to any particular source.

a. Enhanced Ranking Algorithm

The aim of ranking in hidden web crawling is to extract the top n documents for the queries. Ranking helps the crawler prioritize potential hidden websites. A three-function formula is designed for ranking hidden websites. Initially, the crawler is checked on pre-designed queries. The ranking formula is adopted from [14], but our reward function is based on the number of out-links, site similarity and term weighting. The formula is explained in Eqs. (9) and (10).

w is the weight balancing € and c_j, as defined in Eq. (4). δ_j is the number of new documents. The computation of r_j reflects the similarity between € and the returned documents: if the value of € is closer to 0, the returned value is more similar to an already seen document. c_j is a function of network communication and bandwidth consumption. The value of the ranking reward will be closer to 0 if the new URL is similar to an already discovered URL. The similarity S is computed between the already discovered URL and the newly discovered URL; it is required in the ranking step and is computed as follows.

After pre-processing, the crawler has a list of keywords and the similarity is computed using cosine similarity: an exact match gives a cosine similarity of 1, and 0 otherwise. The system has to generate a repository of words for form generation. Our previous work does not include a return limit for the crawler; in this work we set the crawling return limit to k = 100. Algorithm 1 gives the general steps of hidden web crawling, while the steps of the enhanced ranking algorithm are given in Algorithm 2.

Algorithm 2: Proposed Enhanced Ranking Algorithm
Step 1: Extract the newly arrived URL for U, A and T.
Step 2: Identify the domain of the web page under consideration.
Step 3: Order the site frontier according to the similarity.
Step 4: Similarity is computed as the cosine similarity of vectors.
Step 5: Calculate the out-of-site links for the encountered URL.
Step 6: Calculate the term frequency-inverse document frequency.
Step 7: Calculate the ranking using the formula r_j = (1 - w)·δ_j + w·(€)/c_j, where c_j is the network factor.
Step 8: Repeat steps 1-7 for n web pages.
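The ranking reward of Step 7 can be written directly as a small function; here the € term and the network factor c_j are taken as precomputed inputs, and the example values are arbitrary:

```python
def ranking_reward(new_documents: int, reward: float, network_cost: float, w: float = 0.5) -> float:
    """Triplet ranking reward r_j = (1 - w) * delta_j + w * reward / c_j (Algorithm 2, Step 7).

    new_documents : delta_j, number of new documents contributed by the URL
    reward        : the € term (similarity/weighting based reward)
    network_cost  : c_j, network communication and bandwidth factor
    w             : balancing weight between the two components
    """
    return (1 - w) * new_documents + w * reward / network_cost

# A URL that contributes 12 new documents, with reward 0.8 and unit network cost.
print(ranking_reward(12, 0.8, 1.0, w=0.4))   # 7.52
```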

Figure 3: Steps involved in the learning process

In hidden web crawling, the ranking and learning algorithms depend on each other. The crawler has to learn the path to the searchable forms present on the URLs it ranks. The learning algorithm employs topic similarity, rejection rules and a relevance judgement of the seed URL. First, the topic similarity indicates how similar the web pages are to a topic of interest. Learning is triggered once the crawler has visited a pre-defined number of targets, e.g., 50 new hidden websites or 100 new searchable forms. Each time a run of the crawler completes, the feature space of hidden websites and the feature space of links are updated, so new patterns are reflected through periodic updating. The feature space is split into a feature space of hidden websites and a feature space of links to allow effective learning of features. We use a similarity score to calculate how relevant a newly discovered web page is to the previously discovered web pages. Fig. 3 shows the basic steps of learning in the proposed approach.

There are numerous ways to compute topic similarity, but we focus on the vector space approach, representing each web page as a vector whose direction reflects the content of the page. Two pages are said to be relevant if their vectors point in the same direction, i.e., mathematically the angle between them is zero and the cosine similarity is one. In the second phase, the crawler collects better pages based on previous experience. In our approach, the crawler starts directly from the home page and then moves to further links. The feature space also includes the path of the URL to help the crawler learn the path to searchable forms.
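A minimal term-vector cosine similarity, as used for both topic similarity and field matching; the whitespace tokenization here is simplistic and just for illustration:

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts (1.0 = same direction)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("book title author", "search book by title"))   # partial overlap, between 0 and 1
```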


When the crawler has retrieved searchable forms, the next step is to fill the forms with suitable values. Before that, the system is required to associate the form fields with the domain attributes.

b. Associating Form Fields and Domain Attributes

Let a form be denoted by F and suppose it is found on an HTML page W of domain d. Our goal is to determine whether F allows queries related to d to be executed over it. The first step is to find the text associated with each form field; the system obtains these values from the feature vector. For simplicity, we consider only the URL, the anchor and the text around the anchor. The second step is to relate the fields of the form to the attributes of the domain, which is done by computing the similarity between the form field text and the texts of the attributes in the domain; we apply the cosine similarity measure. The crawler then has to detect the fields of the form that correspond to the target domain's attributes. Form fields are of bounded and unbounded type: bounded fields offer a finite list of query values, for example select options, radio buttons and checkboxes, while unbounded fields have unlimited query values, for example a text box. Our approach is limited to bounded controls only. After a form is filled with possible values, its submission can result in 200 (OK, form correctly submitted), 400 (Bad Request), 401 (Unauthorized), 403 (Forbidden), 404 (Page Not Found), 405 (Method Not Allowed), 413 (Payload Too Large), 414 (URI Too Long), 500 (Internal Server Error), 503 (Service Unavailable) or 524 (A Timeout Occurred). The approach is designed to submit forms using both the GET and POST methods.
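As an illustrative sketch of the association step (assuming BeautifulSoup for HTML parsing and reusing the cosine_similarity helper from the earlier sketch; the domain attribute list and threshold are placeholders), the routine below extracts the bounded fields of a form and maps each one to its most similar domain attribute:

```python
from bs4 import BeautifulSoup

BOOK_ATTRIBUTES = ["title", "author", "publisher", "isbn"]   # hypothetical domain attributes

def associate_fields(form_html: str, domain_attributes, threshold: float = 0.5):
    """Map each bounded form field (select / radio / checkbox) to the most similar domain attribute."""
    soup = BeautifulSoup(form_html, "html.parser")
    bounded_inputs = [i for i in soup.find_all("input") if i.get("type") in ("radio", "checkbox")]
    fields = soup.find_all("select") + bounded_inputs
    mapping = {}
    for field in fields:
        # Text associated with the field: here just its name and id attributes.
        field_text = " ".join(filter(None, [field.get("name", ""), field.get("id", "")]))
        best = max(domain_attributes, key=lambda attr: cosine_similarity(field_text, attr))
        if cosine_similarity(field_text, best) >= threshold:
            mapping[field.get("name")] = best
    return mapping
```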

    4 Result Discussion

The crawler is implemented in Python and is evaluated over 11 types of response. In our implementation, site classification is based on a Naïve Bayes classifier trained on samples from the DMOZ directory. As explained earlier, different authors have defined coverage in different ways; in our approach, coverage is the number of correctly submitted forms with status code 200. We exclude pages that do not generate a response: for example, if the status code is 401, the form was submitted with correct values but no response page is generated because the request is unauthorized.

In Tab. 1, the number of forms submitted per domain is shown. The book domain has the highest coverage compared to the others.

Table 1: Number of forms retrieved per domain

Tabs. 2 and 3 show the number of forms submitted using the GET method and the POST method, respectively, out of the total number of forms per domain.

Table 2: Number of forms submitted using the GET method

Table 3: Number of forms submitted using the POST method

If a form is submitted with status code 200, it means that the form was submitted with correct values. Sometimes the form is submitted but no response is generated for reasons such as an internal server error or service unavailability. For coverage, we consider only those pages for which a response is generated.

Figs. 4 and 5 show the number of forms correctly submitted and the comparison of the GET and POST methods, respectively. Our approach worked better with the POST method, which indicates the efficiency of the ranking algorithm and the form submission method explained in [12]. Tab. 4 compares the GET and POST methods with respect to the documents per domain and the new documents captured.

Figure 4: Coverage of the crawler in terms of forms submitted

Figure 5: Comparison of coverage for the GET and POST methods

Table 4: Comparison of the GET and POST methods with respect to the number of documents per domain vs. new documents captured

In Tab. 4, keeping the number of queries the same, the submission methods are compared. Efficiency is compared with respect to the unique documents retrieved: from the total number of documents, the number of new unique documents is calculated. In the table, Q is the number of queries, N the number of documents, and U_j the number of new documents retrieved. Since the method did not perform well in the premier domain, we skip its comparison in terms of the number of documents. At present we have experimented with only three values of L, i.e., 100, 200 and 300. Another inference from the table is that our system worked best with a return limit of 100; beyond 100, the system retrieved fewer unique documents.

Our method is a static-limit-based ranking method. If we choose many high frequencies, the coverage rate decreases; this leads to skipping some high-ranking documents, which is the reason our system did not work well in the premier domain. In the future, with the use of multiple query words, this problem could be overcome. Figs. 6 and 7 show the comparison of the submission methods in terms of new documents captured.

Figure 6: Comparison of domains for the number of documents and new documents captured using the GET method

Figure 7: Comparison of domains for the number of documents and new documents captured using the POST method

    5 Conclusion

We have proposed an enhanced ranking algorithm for collecting hidden websites based on priority. This paper tackles the problem of documents being missed because of their low rank. The algorithm uses a triplet formula to calculate the rank of a website; by including site frequency, documents that previously had a low rank can obtain a high rank. By ranking the websites, the crawler minimizes the number of visits and maximizes the number of websites with embedded forms. The convergence of our algorithm is fast because we also implemented stopping rules and rejection rules. In searching for rules to improve efficiency, we imposed a limit on the number of documents to be returned; this is also a drawback of the system, since ideally the crawler should not impose any limit on the number of documents. Another limitation of the system is the premier domain, for which the number of forms submitted was very low; for this reason, this domain could not be included in the new-document comparison, and we will improve this in the future. We have also discussed the stopping criteria; the stopping rules save the crawler from exhaustive crawling traps, which not only saves memory and time but also helps retrieve more unique documents. Along the same lines, we crawl only up to a depth of three, after which a new URL is picked from the frontier. The efficiency of the crawler is shown by the correctly submitted web pages. The inclusion of more domains and status codes remains future work. We also plan to combine the distance rank algorithm, which we believe will yield better results, and to work on unbounded forms.

Acknowledgement: We are very thankful for the support from Taif University Researchers Supporting Project (TURSP-2020/98).

Funding Statement: Taif University Researchers Supporting Project number (TURSP-2020/98), Taif University, Taif, Saudi Arabia.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
