
    Automatic Eyewitness Identification During Disasters by Forming a Feature-Word Dictionary

Computers, Materials & Continua, 2022, Issue 9

Shahzad Nazir, Muhammad Asif*, Shahbaz Ahmad, Hanan Aljuaid, Shahbaz Ahmad, Yazeed Ghadi and Zubair Nawaz

1 Department of Computer Science, National Textile University, Faisalabad, Pakistan

2 Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), P.O. Box 84428, Riyadh 11671, Saudi Arabia

3 Department of Software Engineering and Computer Science, Al Ain University, Abu Dhabi, United Arab Emirates

4 Department of Data Science, University of the Punjab, Pakistan

Abstract: Social media provide digitally interactional technologies that facilitate information sharing and exchange among individuals. In particular, during disasters, a massive corpus is posted on platforms such as Twitter. Eyewitness accounts can benefit humanitarian organizations and agencies, but identifying the eyewitness Tweets related to a disaster among millions of Tweets is difficult. Different approaches have been developed to address this problem. The most recent state-of-the-art system was based on a manually created dictionary, and this approach was further refined by introducing linguistic rules. However, these approaches suffer from limitations: they are dataset-dependent and not scalable. In this paper, we propose a method to identify eyewitnesses on Twitter. For the experiment, we utilized 13 features discovered by the pioneer of this domain that can classify Tweets to determine eyewitnesses. For each feature, a dictionary of words was created with the Word Dictionary Maker algorithm, which is the crucial contribution of this research. The algorithm takes a few terms relevant to a specific feature as input for initialization and then creates the word dictionary. Next, keyword matching for each feature is performed on the Tweets: if a feature exists in a Tweet, its value is set to 1; otherwise, 0. Doing this for all 13 features yields a file that reflects the features present in each Tweet. To classify the Tweets based on these features, Naïve Bayes, Random Forest, and Neural Network classifiers were utilized. The approach was applied to different disasters such as earthquakes, floods, hurricanes, and forest fires. The results were compared with the state-of-the-art linguistic rule-based system, which achieved an F-measure of 0.81, whereas the proposed approach achieved an F-measure of 0.88. The results are favorable, and since the proposed approach is not dataset-dependent, it can be used for the identification of eyewitness accounts.

Keywords: Word dictionary; social media; eyewitness identification; disasters

    1 Introduction

Social media platforms host vast numbers of individuals [1]. For example, among social media applications, Twitter has 1.3 billion accounts and 330 million active users. Twitter accounts for 23% of the Internet population, and 83% of world leaders use this social media application. Twitter handles 6,000 Tweets per second, 500 million Tweets per day, and 200 billion Tweets per year [2].

Twitter provides a platform where users share their opinions, ideas, news [3], and sentiments with others. The extensive usage of Twitter has turned the focus of researchers toward extracting beneficial information from it [4]. Researchers have explored Twitter for recommendations [5], alerts [6], advertisements [7], journalism [8], etc. Specifically, at the time of any disaster [9], people share information that disaster management can utilize. Users can break the news of happenings around them before any television channel; for example, the news of an airplane crash in New York was broken by an eyewitness on Twitter. A news agency detected a Tweet by a passenger of a plane whose engine had failed, forcing an emergency landing on a remote island. An eyewitness of the attack on the Westgate shopping mall posted information about the incident on Twitter about thirty-three minutes earlier than the news channels [10]. Similarly, the news of the bombing attack in Boston [11] was initially broken on Twitter by an eyewitness.

Researchers have adopted different location-based, linguistic-based, and content-based approaches. Researchers in [12] proposed a hybrid approach based on linguistic and meta-features to identify eyewitnesses during disastrous events. The authors of [13] utilized grammatical rules and natural language techniques to find eyewitnesses on Twitter. The research in [14] identified eyewitnesses of natural disasters on Twitter: the author analyzed Tweets and identified thirteen different features, claiming that eyewitness Tweets contain the prescribed features. Similar features were utilized by [15], where the author proposed linguistic rules for eyewitness identification. The state-of-the-art techniques are still not fully automatic, and some are dataset-dependent. Therefore, the need of the hour is an automated solution for eyewitness identification.

This paper proposes an approach to identify eyewitnesses of any natural disaster on Twitter. The proposed method is based on forming word dictionaries related to features. To conduct the experiment, the dataset collected by [14], consisting of 8,000 Tweets, was utilized. Thirteen different features were considered for the classification of Tweets, as the state-of-the-art approaches stated that these 13 features can potentially identify the eyewitness accounts on Twitter. A word dictionary was created for each feature by an algorithm that takes as input a small list of words for each feature, reflecting the sense of the corresponding feature.

The algorithm takes the list of words and mines the synonyms of each listed word, forming a new list of the original words and their synonyms. In the next module, we utilize Wiktionary to extract the pre- and post-derived words and embed them into the list. In the last module, all the original words, synonyms, and derived words are searched on Google with “and” appended after each word. From each heading on the Google results page, the word following “and” is extracted, forming a dictionary of words for the feature. The developed algorithm provided the dictionary for each feature; we then performed keyword matching. If a word in a Tweet matched any word in the feature dictionary, the feature value was set to 1; otherwise, it was kept as 0. We utilized 11 features, and the classification of feature values produced an F-measure of 0.886, comparable with state-of-the-art techniques.

The rest of the article is organized as follows: Related Work, Research Aim and Objective, Methodology, Results and Discussions, and Conclusion.

    2 Related Work

Social media, specifically Twitter, has become an emerging platform where users express their emotions and opinions and share different happenings in the world. Twitter is a microblog, and the posts on Twitter are termed Tweets. It was founded in 2006 in the United States. According to Pear Analytics, Twitter content comprises news 3.6%, spam 3.8%, conversation 37.6%, self-promotion 5.9%, pointless babble 40.1%, and pass-along value 8.7%. The posts on Twitter are public and easily accessible by anyone. Users can also retweet posts and have the facility to follow other users. Twitter handles on average 1.6 billion queries per day.

Individuals usually search for different events that have occurred or will happen. When some natural disaster occurs, people use Twitter to inform others about the current situation near them or to share their views and concerns. [16] argued that Twitter is used not only as social media but also for news and headlines; the authors claimed that more than 85% of highlighted topics are news. A massive corpus exists on Twitter, and it has therefore become challenging to extract valuable information. Researchers in [17] explored the usage of Twitter during disasters and stated that users Tweet about disasters, including both original Tweets and Retweets. [18] researched the forest fire disaster, utilizing a Twitter dataset to detect it. Imran et al. [19] investigated, with the help of volunteers, whether Tweets belonged to the information category. Kumar et al. [20] developed a crawler to crawl Tweets.

The researchers in [21] proposed an approach to identify eyewitness accounts of events; they identified the accounts from which Tweets about the bushfire of 2013 were posted. The authors achieved a 77% score, but eyewitness status was not clear in the Tweets because of distance. Their model utilized the location of the Tweet and the network for eyewitness identification. A filter-based approach was proposed by [22] to identify eyewitnesses. The authors used five features based on linguistic factors: first, predefined keywords were considered to determine eyewitnesses from their posts. The proposed approach achieved an average accuracy of 62%, but it also suffers from limitations, as it requires the Tweet’s location and considers events, not the eyewitness.

Diakopoulos et al. [23] proposed an approach to identify eyewitnesses for journalism. For automatic identification of eyewitnesses, the authors defined linguistic features and labeled the events, utilizing OpenCalais for this purpose. The Linguistic Inquiry and Word Count (LIWC) dictionary was used to find keywords related to the events. The list of terms was created manually, and the model requires language information and location to identify eyewitnesses; it produced an average F-measure of 89.7%. To identify different events, [24] utilized natural language algorithms to detect events in news articles, producing a Precision of 42 and a Recall of 66. A study on understanding eyewitness reports was conducted by [25], which later used thirteen different features to identify eyewitnesses of disasters. The dataset was collected from Twitter, and domain experts classified the Tweets; an F-measure of 0.917 was achieved. However, this approach was not suitable for a large number of Tweets: it was implemented manually and proposed linguistic rules to identify eyewitnesses. Using the 13 characteristics proposed in the literature, the author developed linguistic rules. This approach was dataset-dependent, as the rules were created using a specific dataset and the approach was then tested on the same dataset.

    3 Research Aim and Objective

The key objective of this research is to identify eyewitnesses in disastrous situations, as the information of direct eyewitnesses can be helpful to disaster management departments and non-governmental organizations. For this purpose, the literature supports 13 different features that can highlight eyewitness Tweets. We used a Twitter dataset containing disaster information and developed an algorithm that creates a dictionary of words for each feature; the algorithm then matches feature dictionary words with Tweet tokens. If a feature exists in a Tweet, the feature value is marked as 1; otherwise, 0. After that, the obtained feature values for all Tweets are classified with potential classifiers and the eyewitness accounts on Twitter are identified.

    4 Methodology

This section presents the adopted approach for identifying eyewitnesses on the Twitter platform. The first task was feature identification from the benchmark dataset. We then formed the word list, identified each word’s synonyms, and added them to the original word list. Next, we extracted the derived words from Wiktionary for each word and its synonyms. Finally, related words from Google were parsed, considering the list formed from original words, synonyms, and derived words. The overall methodology of the approach is presented in Fig. 1.

    Figure 1:Methodology diagram

    4.1 Dataset

To implement the proposed approach, the dataset collected by Zahra in 2020 was used. This dataset was collected from Twitter using the Twitter streaming application programming interface. The author considered specific keywords reflecting disaster situations, such as earthquake, flood, heavy rain, hurricane, forest fire, wildfire, etc. Four categories were selected for further manipulation: 1) Earthquake, 2) Flood, 3) Wildfire, and 4) Hurricane. The span of dataset collection was from July 2016 to May 2018; this span was chosen because many natural disasters occurred during this period. The Tweets were classified into three categories: 1) eyewitness, 2) non-eyewitness, and 3) vulnerable, as presented in Tab. 1.

    Table 1: Statistics of dataset

An eyewitness is an individual who has first-hand knowledge and experience of the event. The information contributed by an eyewitness can be helpful to different local departments, while a non-eyewitness shares information received from an eyewitness. Several Tweets remained undecided; such Tweets were categorized as vulnerable. Crowdsourcing was utilized for annotation of the dataset.

    4.2 Feature Identification

Zahra specified thirteen features for the experiment, which were identified after performing manual text analysis on Tweets. These features include surrounding details, words reflecting the impact of a disaster, expletives, first-person pronouns, the length of the Tweet, words indicating location, etc. The classification of Tweets is based on these features, as their presence in a Tweet can indicate an eyewitness. The details of the features and their examples are given in Tab. 2.

    Table 2: Features in tweets

    4.3 Word Dictionary Formation

For each feature, a word dictionary is formed that consists of all the words belonging to that specific feature. An algorithm is developed to form the word dictionary for a feature; it takes a list of feature-related words as input. The algorithm needs only a few words to start, after which it extracts all related terms.

    Figure 2:Word dictionary formation for features

This algorithm is composed of three modules, as described in Fig. 2. Initially, for each word, the algorithm extracts all synonyms from WordNet and merges them with the input list. In the second module, the algorithm considers the list of original words and their synonyms; each word is searched on Wiktionary and the derived words are extracted. The derived words are again merged with the list of actual words and synonyms. Finally, the words of the updated list are searched on Google concatenated with the “and” keyword, and on the Google results page, the words that come after the “and” keyword are extracted. The steps of word dictionary formation are explained in the following.

Word Dictionary Maker (WDM)
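The algorithm listing itself does not survive text extraction. The following is a minimal Python sketch of the three WDM modules described above; the helper functions wordnet_synonyms, wiktionary_derived_words, and google_related_words are our own names, sketched in Sections 4.3.2-4.3.4 below.

```python
def word_dictionary_maker(seed_words):
    """Build a feature word dictionary from a few seed words."""
    # Module 1: expand each seed word with its WordNet synonyms (Section 4.3.2).
    words = set(seed_words)
    for w in seed_words:
        words |= set(wordnet_synonyms(w))
    # Module 2: add derived words from each word's Wiktionary entry (Section 4.3.3).
    for w in list(words):
        words |= set(wiktionary_derived_words(w))
    # Module 3: add related words mined from Google "<word> and" searches (Section 4.3.4).
    for w in list(words):
        words |= set(google_related_words(w))
    return words
```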

    4.3.1 Word List Formation

To initialize the WDM algorithm, a list of feature-related words is required. The list should contain 4 to 5 keywords, as this reduces the time consumed by the algorithm and fewer irrelevant words become part of the word dictionary. Therefore, for all features except Feature 7, Feature 8, and Feature 11, we considered a list of words that conveyed the sense of the specific feature and created a word dictionary. The word lists for the features are presented in Tab. 3.

To make the word lists, specific words were chosen to reflect the respective feature’s characteristics. These word lists were then fed to WDM, the word dictionary-making algorithm. For Feature 7, we used the question mark and the exclamation mark; for Feature 8, the expletives were added manually; and for Feature 11, we checked the length of the Tweets.

    4.3.2 Synonyms Extraction

For synonym extraction, we used the Natural Language Toolkit [26]. For each word in the list, the algorithm extracts all synonyms of the respective word. The synonyms and the original words are then combined in one list. After merging, we removed repeated words, since duplication among synonyms is possible.
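A minimal sketch of this step using NLTK’s WordNet interface (the function name wordnet_synonyms and the seed words are our own; the paper does not give its implementation):

```python
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')

def wordnet_synonyms(word):
    """Collect all WordNet synonyms of a word."""
    syns = set()
    for synset in wordnet.synsets(word):
        for lemma in synset.lemmas():
            syns.add(lemma.name().replace('_', ' '))
    return syns

# Merge synonyms with the original list; the set de-duplicates.
word_list = ["destroy", "damage"]        # hypothetical seed words
expanded = set(word_list)
for w in word_list:
    expanded |= wordnet_synonyms(w)
```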

    4.3.3 Wiktionary Words Extraction

Wiktionary is a web-based project that provides a dictionary of words, phrases, linguistic reconstructions, proverbs, etc. It can be accessed in 171 languages. To extract the derived words from Wiktionary [27], we searched each word automatically by appending it to the end of the Wiktionary link and then parsed the whole page. The class tag containing derived words was selected and all the derived words were extracted. The extraction rules are given in the following.

• wiktionary_page = parse_page(“https://en.wiktionary.org/wiki/” + word)

• derived_words = extract_words(‘div’, {“class”: “derivedterms term-list ul-column-count”})

The extracted derived words were merged with the original words and their synonyms. Before moving to the next phase, we applied a distinct function to the word list to remove repeated words.
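A hedged, runnable version of these rules using requests and BeautifulSoup (the CSS class is taken from the rules above; Wiktionary’s markup may have changed since):

```python
import requests
from bs4 import BeautifulSoup

def wiktionary_derived_words(word):
    """Scrape the 'Derived terms' list from a word's Wiktionary entry."""
    page = requests.get("https://en.wiktionary.org/wiki/" + word,
                        headers={"User-Agent": "WDM-sketch"})
    soup = BeautifulSoup(page.text, "html.parser")
    # The paper targets the derived-terms container by its class attribute.
    div = soup.find("div", {"class": "derivedterms term-list ul-column-count"})
    if div is None:  # no derived-terms section on this page
        return []
    return [li.get_text(strip=True) for li in div.find_all("li")]
```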

    4.3.4 Google Words Extraction

Google is a search engine that provides billions of web links in response to input queries [28]. We used Google search to mine related words: each word in the list was automatically searched on Google together with the keyword “and”, and the whole result page was parsed. All the headings were extracted and tokenized, and the word next to the keyword “and” was extracted. The idea is that when a word is searched on Google with the “and” keyword, the word appearing after “and” in the headings is likely to be relevant. For this module, we developed the following rules:

• google_page = parse_page(“https://www.google.com/search?q=” + word + “ and”)

• search_result = extract_headings(‘h3’)

• google_words = extract_words_after(“and”)

After extracting words from Google, a word dictionary was created consisting of the input list words, synonyms, Wiktionary words, and Google words. For each feature, a word list was given to the WDM algorithm, which returned the word dictionary for that specific input list.
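A minimal sketch of these rules (scraping Google result pages is brittle and may violate the service’s terms; the ‘h3’ selector follows the rules above):

```python
import requests
from bs4 import BeautifulSoup

def google_related_words(word):
    """Mine words that follow '<word> and' in Google result headings."""
    url = "https://www.google.com/search?q=" + word + "+and"
    page = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(page.text, "html.parser")
    related = []
    for h3 in soup.find_all("h3"):  # result headings
        tokens = h3.get_text().lower().split()
        for i, tok in enumerate(tokens[:-1]):
            if tok == "and":        # keep the word right after "and"
                related.append(tokens[i + 1])
    return related
```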

    4.4 Tokenization

Tokenization is the splitting of a text corpus into smaller units such as phrases or words [29]; these smaller units are termed tokens. In this step, we performed tokenization and converted the Tweets into tokens, generating a list of separated words for each Tweet.
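For example, with NLTK’s word tokenizer (any standard tokenizer would do; the sample Tweet is invented):

```python
from nltk.tokenize import word_tokenize  # requires: nltk.download('punkt')

tweet = "Strong shaking here, buildings are swaying!"
tokens = word_tokenize(tweet.lower())
# ['strong', 'shaking', 'here', ',', 'buildings', 'are', 'swaying', '!']
```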

    4.5 Keyword Matching

To inspect whether the Tweets contain feature keywords, we performed keyword matching. After building the word dictionary for each feature and tokenizing the Tweets, we matched the tokens against the feature keywords. If any keyword of a feature matches a token, the value of that feature is set to 1; if no match occurs, the value is set to 0.
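A compact sketch of this matching step (the dictionary contents and feature names are illustrative):

```python
def feature_vector(tokens, dictionaries):
    """Map a tokenized Tweet to binary feature values.

    dictionaries: {feature_name: set of dictionary words}
    """
    return {name: int(any(tok in words for tok in tokens))
            for name, words in dictionaries.items()}

dictionaries = {"F_2_impact": {"destroy", "damage", "collapse"},
                "F_7_punctuation": {"?", "!"}}
print(feature_vector(["buildings", "collapse", "!"], dictionaries))
# {'F_2_impact': 1, 'F_7_punctuation': 1}
```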

In the first column of Tab. 4, a Category value of 1 indicates that the Tweet is from an eyewitness, while 2 indicates that it is from a non-eyewitness. For the feature values, 1 indicates that a feature keyword was found in the Tweet and 0 indicates that no keyword was found.

    Table 4: Feature keywords matching

    4.6 Feature Reduction

Feature reduction eliminates features from the dataset without losing essential information [30]; it is also termed dimensionality reduction. Eradicating features from the dataset reduces the number of computations. We used CfsSubsetEval as the attribute evaluator and Best First as the search method. Feature reduction was performed for all the datasets: Earthquake, Flood, Hurricane, and Forest Fire.
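CfsSubsetEval scores a feature subset by correlation-based feature selection (CFS). The following is a minimal sketch of the CFS merit with a greedy forward search, offered as a simple stand-in for the evaluator and search method named above, not the exact implementation used in the paper:

```python
import numpy as np

def _corr(a, b):
    # Absolute Pearson correlation; constant columns give 0 instead of NaN.
    return abs(np.nan_to_num(np.corrcoef(a, b)[0, 1]))

def cfs_merit(X, y, subset):
    """CFS merit: k*r_cf / sqrt(k + k*(k-1)*r_ff)."""
    k = len(subset)
    r_cf = np.mean([_corr(X[:, j], y) for j in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([_corr(X[:, a], X[:, b])
                    for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def greedy_cfs(X, y):
    """Forward search: keep adding the feature that most improves merit."""
    remaining = list(range(X.shape[1]))
    chosen, best = [], 0.0
    while remaining:
        merit, j = max((cfs_merit(X, y, chosen + [j]), j) for j in remaining)
        if merit <= best:
            break
        chosen, best = chosen + [j], merit
        remaining.remove(j)
    return chosen
```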

    4.7 Evaluation

We used the performance metrics F-measure, Precision, and Recall [31] for evaluation. Precision measures the quality of classification: high Precision means the algorithm has returned more relevant results and fewer irrelevant ones.

Recall, on the other hand, expresses completeness: high Recall means the algorithm has returned most of the relevant items. Precision is calculated as the number of retrieved relevant items divided by the total number of retrieved items, while Recall is the number of retrieved relevant items divided by the total number of relevant items. Eqs. (1)-(3) represent Precision, Recall, and F-measure, and are based on (1) True Positives, (2) False Positives, (3) False Negatives, and (4) True Negatives.
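The equations themselves do not survive extraction; the standard definitions they refer to, in terms of true positives (TP), false positives (FP), and false negatives (FN), are:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP} \quad (1)

\mathrm{Recall} = \frac{TP}{TP + FN} \quad (2)

\text{F-measure} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad (3)
```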

    5 Results and Discussions

Manual analysis of a massive corpus to identify eyewitnesses is impractical for humans in disastrous situations. Therefore, we have proposed an approach based on a word dictionary. In this section, we present the results of the proposed approach and the discussion based on those results.

    5.1 Dictionary Formation

To form the word dictionary [32], we developed an algorithm that takes a list of words as input and extracts all related words. The algorithm first extracts all synonyms from WordNet; it then combines the original list words and synonyms to find associated words on Wiktionary. After combining the original words, synonyms, and Wiktionary words, the combined list is searched on Google for linked words. This algorithm was executed for all features except 7, 8, and 11. A feature-specific list of words was given to the algorithm, and the algorithm returned words reflecting that feature. For Feature 8, the expletives were collected from Wiktionary and the web, while Feature 7 consisted of only two symbols and Feature 11 was related to the length of the Tweets. All the feature dictionaries are presented in Fig. 3.

After collecting the related words, we performed text processing by removing 1) non-English words and 2) blank symbols. This was necessary because some non-English words and blank symbols were extracted during scraping. The total number of words is presented in Tab. 5.
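A small sketch of this cleaning step (treating “non-ASCII” as a simple proxy for “non-English”, which is our assumption, not the paper’s stated rule):

```python
def clean_words(words):
    """Drop non-English (non-ASCII) words and blank entries."""
    return [w for w in (w.strip() for w in words)
            if w and w.isascii()]
```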

The most words were extracted for Feature 3, and the fewest for Feature 7: for Feature 3, 783 words were extracted, while for Feature 7, only two symbols were considered.

    Figure 3:Feature words

    Table 5: Total words for each feature

The total number of words extracted for all features except Feature 11 was 3,267. For Feature 11, we consider only the length of the Tweets; therefore, no words were extracted for this feature.

    5.2 Feature Reduction Results

To identify eyewitnesses on Twitter, Zahra proposed a technique with 13 different features, and these thirteen features were later utilized by Sajjad. In this phase, we performed feature reduction to reduce the number of computations. For this purpose, Correlation Feature Selection was utilized with the Greedy Stepwise search method. The results showed that Feature 1 and Feature 13 provided little information for the eyewitness identification task: Feature 1 provides little surrounding detail, while Feature 13 describes the location. Considering the feature reduction results, we removed Feature 1 and Feature 13, and the experiment was performed on the remaining 11 features.

    5.3 Classification Results

To classify the Tweets, we used three classifiers: Neural Network, Random Forest, and Naïve Bayes [33]. The three algorithms were applied to the Earthquake, Hurricane, Flood, and Forest Fire datasets to identify eyewitnesses. For training and testing, 10-fold cross-validation was used, and for evaluation we utilized the performance measures Precision, Recall, and F-measure. The results for Earthquake are presented in Fig. 4.
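A minimal sketch of this experimental setup with scikit-learn (the paper does not name its implementation library or hyperparameters; the random X and y below are placeholders for the 11 binary features of Section 4.5 and the eyewitness labels):

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import BernoulliNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 11))  # placeholder 0/1 feature matrix
y = rng.integers(0, 2, size=200)        # placeholder labels: 1 = eyewitness

classifiers = {
    "Naive Bayes": BernoulliNB(),              # suits binary 0/1 features
    "Random Forest": RandomForestClassifier(),
    "Neural Network": MLPClassifier(max_iter=500),
}

for name, clf in classifiers.items():
    pred = cross_val_predict(clf, X, y, cv=10)  # 10-fold cross-validation
    p, r, f, _ = precision_recall_fscore_support(y, pred, average="binary")
    print(f"{name}: P={p:.3f} R={r:.3f} F={f:.3f}")
```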

The maximum Precision value for earthquake [34] eyewitness identification was produced by Random Forest, while the maximum Recall was produced by Neural Network; similarly, Neural Network gained the maximum F-measure value of 0.886 and showed better overall performance than the other algorithms. In Tweets related to flood disasters, the maximum Precision for eyewitnesses was obtained by Naïve Bayes (0.554), the maximum Recall by Neural Network (0.34), and the highest F-measure by Random Forest (0.405); thus, the best overall performance in eyewitness identification was shown by the Random Forest algorithm. The Flood results are given in Fig. 5.

    Figure 4: Results of earthquake

    Figure 5: Results of flood

As shown in Fig. 6, the best Precision value (0.598) was produced by Neural Network, while the highest values of Recall and F-measure were produced by Naïve Bayes; therefore, Naïve Bayes performed slightly better than Neural Network and Random Forest. For Forest Fire, Neural Network produced the maximum Precision of 0.391, while its Recall value was the lowest; Naïve Bayes produced the maximum F-measure. The results for Forest Fire are presented in Fig. 7.

The implementation of the complete algorithm, the feature dataset, and all results have been uploaded to GitHub (https://github.com/Shahzad-Nazir/EyewitnessIdentification) and are publicly available.

    Figure 6: Results of hurricane

    Figure 7: Forest fire results

    5.4 Comparison

The proposed approach was implemented for disasters such as earthquakes, hurricanes, floods, and forest fires. All the features were evaluated and compared with state-of-the-art approaches. Zahra proposed 13 features, and the author manually created a static dictionary; the results were reported in the form of Precision, Recall, and F-measure. This research work was enhanced by Sajjad, who introduced linguistic rules for each feature. These rules were created with a specific dataset in view and therefore may not be valid for varying datasets. The comparison of F-measure values is presented in Fig. 8.

For Flood, the proposed approach produced lower results, while for the other disasters it outperformed the state-of-the-art approach. We further investigated the results and counted the features matched in each dataset. Tab. 6 presents the statistics of the results.

    Figure 8:Comparison of results

    Table 6: Total features matched in dataset

It can be observed that for five features, F_4, F_5, F_6, F_9, and F_12, the maximum matches were found in the Flood dataset. Overlap among the feature words was observed, which caused the slightly lower results for the Flood dataset. On the other hand, the proposed approach is not dataset-dependent and builds the word dictionary automatically. The core of the approach is the WDM algorithm, which is scalable and needs only a few related words to initialize the extraction. Sajjad implemented their approach with 13 features, while the proposed approach considered only 11 features. The overall performance of the proposed approach is better than that of Sajjad’s approach, and the proposed methodology can be exploited for eyewitness identification in disastrous situations.

    6 Conclusion and Research Implications

In today’s era, social media such as Twitter, Facebook, Instagram, etc., are widely used to share opinions, information, and ideas. During any disaster, credible information shared by an eyewitness can be helpful to agencies and organizations. The research community has introduced approaches to identify eyewitnesses. Zahra performed feature engineering to identify 13 features pointing out eyewitness Tweets; the author manually built a static word dictionary and classified the Tweets. This approach was dataset-dependent and not scalable, since domain experts are needed again to update the static dictionary. Sajjad improved on this research work and introduced linguistic rules for all 13 features defined by Zahra; the rules were created based on the Tweets in the dataset, so this approach was also dataset-dependent, and the rules may not perform well on a different dataset. This research paper introduces an approach to identify eyewitness accounts. We utilized 11 features instead of 13, and the core of the proposed approach is WDM. This algorithm takes very few words as input for each feature and extracts all the related terms. We then performed preprocessing and tokenized the Tweets, and the Tweet tokens were matched with the keywords of each feature. If a feature word is found in a Tweet, it is marked as 1; otherwise, 0. The Tweets were classified using the feature values with Naïve Bayes, Random Forest, and Neural Network classifiers. The proposed approach produced an F-measure of 0.886, while the approach of [15] gained 0.81. The proposed approach can perform well on varying datasets, while the state-of-the-art approaches are dataset-dependent. This research can assist government disaster management departments and different NGOs in identifying direct eyewitnesses in order to gather and transmit authentic information about disasters and eliminate fake news reports. The information from eyewitnesses can also help in making alerts.

Funding Statement: This research is funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R54), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflict of Interest: The authors have no conflict of interest to report regarding the present study.
