
    Autonomous Eyewitness Identification by Employing Linguistic Rules for Disaster Events

    Computers, Materials & Continua, 2021, Issue 1

    Sajjad Haider and Muhammad Tanvir Afzal

    Capital University of Science and Technology, Islamabad, 44000, Pakistan

    Abstract: Social networking platforms provide a vital source for disseminating information across the globe, particularly in case of disaster. These platforms are a great means to find out the real account of the disaster. Twitter is an example of such a platform, which has been extensively utilized by the scientific community due to its unidirectional model. It is considered a challenging task to identify eyewitness tweets about an incident from the millions of tweets shared by Twitter users. The research community has proposed diverse sets of techniques to identify eyewitness accounts. A recent state-of-the-art approach has proposed a comprehensive set of features to identify eyewitness accounts. However, this approach suffers from some limitations. Firstly, automatically extracting the feature-words for each feature identified by the approach remains a perplexing task. Secondly, not all identified features were incorporated in the implementation. This paper utilizes language structure, linguistics, and word relations to achieve automatic extraction of feature-words by creating grammar rules. Additionally, all identified features were implemented, including those left out by the state-of-the-art model. A generic approach is taken to cover different types of disaster, such as earthquakes, floods, hurricanes, and wildfires, and the proposed approach was evaluated for all of these disaster types. Based on a static dictionary, the Zahra et al. approach was able to produce an F-score of 0.92 for eyewitness identification in the earthquake category. The proposed approach secured an F-score of 0.81 in the same category, which can be considered significant given that no static dictionary is used.

    Keywords: Grammar rules; social media; eyewitness identification; disaster response

    1 Introduction

    In today’s digital age, social media platforms like Facebook, Twitter, and Instagram are widely used for day-to-day activities. People across the globe harness these platforms to share information, ideas, opinions, and reviews regarding various items or topics [1,2]. Twitter is the most widely used among these social media platforms due to its unique unidirectional relationship model. Twitter handles 500+ million tweets every day, posted or re-tweeted by 134 million daily active users. This volume is believed to increase by 30% a year [3].

    Twitter’s worldwide popularity has attracted the research community to discover implicit and explicit information from it. Twitter has been studied for targeted news recommendations, disaster and emergency alerts, response systems, advertising, etc. [1]. Twitter is also deemed a potential medium of breaking news. It has proved itself as a news breaker: 85% of trending topics on Twitter are news headlines [4].

    The question arises: how does Twitter do that? The answer is through the eyewitness tweets posted by its users. Some of the events wherein Twitter has proved its capacity for breaking news are outlined below:

    ●A news agency found a tweet posted by a passenger of a Delta aircraft which made an emergency landing due to potential engine failure on a remote island of Alaska [5].

    ●An eyewitness on the Hudson River tweeted about the New York airplane crash, and the Daily Telegraph broke the news as a headline [6].

    ●There were half a dozen tweets available on Twitter a minute before USGS [7] recorded the California earthquake [8].

    ●Eyewitness tweets about the Boston bombing incident were available well before the coverage of the news channels [9].

    ●The attack on the Westgate Shopping Mall in Nairobi, Kenya was on Twitter thirty-three minutes before being reported on the TV news channels [10].

    The literature suggests that the identification of eyewitnesses is a vital task, as the information shared by an eyewitness can better explain the state and severity of an event [11]. Furthermore, the emergency services or agencies responding to any disastrous event tend to rely on the information provided by eyewitness accounts. Emergency services have to take quick and effective measures to control the situation; therefore, credible information is vital. The scientific community has been focusing on eyewitness tweet identification for several years [11,12]. The state-of-the-art approaches have adopted diversified techniques and feature sets to identify eyewitness tweets from a large pool of text.

    A recent study by Zahra et al. [11] identified a comprehensive list of features to identify eyewitness tweets. These features were applied to a set of 2000 tweets for training and 8000 tweets for testing. The authors asked domain experts to manually identify thirteen features for eyewitness identification, as described below:

    ●Feature-1: Reporting small details of surroundings: e.g., window shaking, water in the basement

    ●Feature-2: Words indicating perceptual senses: e.g., seeing, hearing, feeling

    ●Feature-3: Reporting impact of disaster: e.g., raining, school canceled, flight delayed

    ●Feature-4: Words indicating intensity of disaster: e.g., intense, strong, dangerous, big

    ●Feature-5: First-person pronouns and adjectives: e.g., i, we, me

    ●Feature-6: Personalized location markers: e.g., my office, our area

    ●Feature-7: Exclamation and question marks: e.g., !, ?

    ●Feature-8: Expletives: e.g., wtf, omg, s**t

    ●Feature-9: Mention of a routine activity: e.g., sleeping, watching a movie

    ●Feature-10: Time indicating words: e.g., now, at the moment, just

    ●Feature-11: Short tweet length: e.g., one or two words

    ●Feature-12: Caution and advice for others: e.g., watch out, be careful

    ●Feature-13: Mention of disaster locations: e.g., area and street name, directions

    A tweet posted by an eyewitness user is shown in Fig. 1. The words are tagged with the appropriate features identified by Zahra et al. [11]. The identified features are marked in different colors with their corresponding labels in Fig. 1.

    Figure 1: Example tweet explained with domain-expert features

    Zahra et al. [11] adopted a manually created dictionary to identify eyewitnesses. The authors dropped the implementation of some identified features, like “Mention of disaster locations”, “Reporting small details of surroundings”, and “Personalized location markers”, due to their implementation complexity.

    In this study, we define a set of grammar rules that recognizes the feature-words for all the features proposed by Zahra et al. [11], including those which were dropped due to implementation complexity. Additionally, our proposed approach is fully automated, wherein no human interaction is required for feature-word extraction.

    The proposed approach exploits the language structure, linguistic features, and the existing relationships among the words of a sentence in order to explain the context. Grammar rules are defined to automatically extract the feature-words from the tweet content. Furthermore, the proposed approach has been evaluated for disasters like earthquakes, floods, hurricanes, and wildfires.

    We have evaluated the proposed approach on a benchmark dataset. The results were compared with the Zahra et al. [11] approach in terms of precision, recall, and f-measure. The proposed approach achieved an f-score of 0.81 for earthquake events, which is comparable to the 0.92 f-score of the manual approach of Zahra et al. [11].

    In the next section, we shall explore some of the exciting and innovative methods for the identification of eyewitnesses. The structure of our proposed approach and a comprehensive discussion of all the features are illustrated in Section 3. Evaluation results and their comparison with the Zahra et al. [11] approach are explained in Section 4. The conclusion of our approach is presented in Section 5.

    2 Literature Review

    Twitter is one of the most commonly used social media platforms, wherein people share their opinions and experiences about different news and events happening around them. In the event of a natural disaster, people search for the latest news and real-time content on Twitter; see Kryvasheyeu et al. [13–16].

    The extraction of useful information from users’ tweets is a critical task, especially when it comes to disaster events. A disaster management system requires precise and accurate information to control the situation in a prompt manner. Work on Twitter and disaster management mainly focuses on user-provided information for disaster response and relief [17,18]. Meier [19] utilized the Haiti earthquake, and Ostermann et al. [20] exploited forest fire events.

    The accounts of Twitter users and their relational network are used by Truelove et al. [21] to tackle eyewitness accounts. A predefined list of query words is tested for its presence while pre-processing the content to identify the target tweets. However, this study has not presented any characteristics to identify eyewitness tweets [21]. In comparison to Truelove et al. [21], Doggett et al. [22] propose a list of five linguistic features to identify eyewitness tweets. Fang et al. [23] presented a hybrid approach adopting linguistic features together with meta-features, like the application used for reporting, for the identification of eyewitness reports. Fang et al. propose five stylistic features together with the set of five linguistic features for the identification of eyewitness tweets. Tanev et al. [24] have also proposed a hybrid approach which uses three stylistic features and three linguistic features with Twitter meta-data for eyewitness identification.

    Various applications have been developed to crawl real-time tweets to facilitate disaster relief organizations, like Tweet Tracker [25], Twitcident [26], AIDR [27], and Scatter Blogs for situational awareness [28]. The credibility of crowdsourced data suffers from the quality assurance regarding objectivity and truthfulness of information [29] that is required by real-life disaster management systems.

    The organizations which need to respond to a disaster always search for a credible news source or eyewitness tweet [30]. The person witnessing the event can truthfully provide details like intensity, effects, casualties, and other information regarding the incident. Diakopoulos et al. [31] presented a study of eyewitness report identification out of millions of tweets related to journalism. A similar study on natural disasters and criminal justice was presented by Olteanu et al. [32] in 2015. Location information of users is utilized to access remote and local users for a disaster event by Kumar et al. [33] in 2013. Morstatter et al. [34] proposed a technique to identify a set of features for the automatic classification of disaster events, based on language and the linguistic patterns found in the tweet content. The studies of Kumar et al. [33] and Morstatter et al. [34] have used location and language information to capture the source, whereas the eyewitness account is not taken into consideration.

    Truelove et al. [21] proposed a conceptual model to identify witness accounts with their related impact and relayed accounts, and evaluated it for various events. In 2016, Doggett et al. [22] proposed linguistic feature sets for the eyewitness and non-eyewitness categories to identify eyewitness tweets in the event of a disaster. The authors identified five features for the eyewitness and four for the non-eyewitness category. Fang et al. [23] developed an eyewitness identification task by defining a set of linguistic and meta-features. The authors used the word dictionary of LIWC [35] and the OpenCalais API [36] for topic classification. Tanev et al. [24] in 2017 defined eyewitness features using the lexical dimension, stylistic dimension, metadata, and semantics. None of the above techniques developed their classifiers through expert-driven feature engineering, although expert-driven features related to disaster events can be helpful in the identification of true eyewitness sources.

    A recent approach by Zahra et al. [11] in 2019 proposed a technique to categorize features of eyewitness types from eyewitness reports using expert-driven engineering for frequent natural disasters. The authors identified the feature set associated with eyewitnesses and called them the domain-expert features. The authors experimented with textual features (bag-of-words) combined with the domain-expert features for the classification of eyewitness tweets.

    We have critically analyzed all of the above studies, which led us to identify the important problem discussed in this paper. The summary of eyewitness feature identification techniques is illustrated in Tab. 1. The table demonstrates the methodology, results, and shortcomings of [11] in tweet identification.

    We have compiled a complete list of the features which have been used by different studies to tackle the intended tweets. Tab. 2 below shows the list of features against the approaches which have incorporated them.

    Table 1: Summary of eyewitness feature identification techniques

    Table 2: Summary of identified eyewitness features

    We have identified two main issues which have been overlooked by the Zahra et al. approach to tweet identification. Firstly, most of the study has used a static dictionary, which is not scalable to different domains and unseen tweets. Secondly, the implementation of the approach is not practical when millions of tweets need to be processed in real-time. The manual extraction of features with the help of a domain expert using a predefined list is time-consuming, and updating it for new data in real-time is cumbersome.

    In the light of the above limitations, we have proposed an automatic tweet identification system. It can intelligently process millions of tweets in real-time without requiring domain experts to build static dictionaries for different domains.

    3 Methodology

    A comprehensive discussion of the proposed methodology is presented in this section. The overall approach is described in Fig. 2. Rule identification for the feature extraction module is described in Fig. 3.

    Figure 2: Proposed methodology

    The data employed to implement the proposed methodology contains tweets containing words such as “earthquake”, “flood”, “hurricane”, and “wildfire”. This dataset has also been employed and publicly released by the Zahra et al. [11] approach. We have acquired the data from the CrisisNLP repository.

    Firstly, the tweets are pre-processed to remove noise. Thereafter, they are parsed and annotated by a Part-of-Speech (POS) tagger. The tagger also provides lemmatization and entity-relationship details. We have used the CoreNLP [37] tool, which labels each word with its respective POS and also identifies the relationships between the words of each sentence. The output of CoreNLP is then evaluated manually against the language structure to identify the grammar rules for each identified feature, as shown in Fig. 3.

    The grammar rules suggested for each feature are then added to the data processing, and the findings are compiled. The next step is to identify tweets as coming from an “eyewitness”, “non-eyewitness”, or “unknown” source. For this, we have searched for those tweets that contain at least two features of the specified set. If a tweet includes two features, it is categorized as “eyewitness”; if it does not have any explicit feature but includes non-eyewitness features from Doggett et al. [22], it is classified as “non-eyewitness”. Tweets containing no identified feature are categorized as “unknown”. The following subsections provide a brief explanation of each process involved in the proposed methodology, from dataset selection to results evaluation.

    Figure 3: Proposed methodology (rule identification for feature extraction)

    3.1 Dataset

    The dataset employed by Zahra et al. [38] contains tweets from July 2016 to May 2018, collected using the Twitter Streaming API. This data contains 2000 tweets, which were tweeted from 1st to 28th August 2017 with focused keywords including earthquake, foreshock, aftershock, hurricane, fire, forest fire, flood, etc. These tweets were used for manual analysis by the Zahra et al. [11] approach. Another dataset, which is also employed in our proposed study, comprises 8000 tweets picked from a random collection based on the same keywords and annotation. This data was collected using the Twitter Streaming API [11] and contains annotations given by the authors.

    3.2 Pre-Processing

    Tweets are generally written in natural language, where grammar and other writing rules are not followed. To remove redundant information from the tweets, we have pre-processed this data. Tweet pre-processing involves the removal of HTML tags, hashtags, extra white spaces, special symbols, etc. After the pre-processing procedure, the tweets are parsed.
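
    As a rough illustration, the cleaning steps above can be sketched with regular expressions. The exact patterns are not specified in this work, so the rules below (tag, hashtag, mention, URL, and symbol handling) are assumptions:

```python
import re

def preprocess_tweet(text: str) -> str:
    """Illustrative tweet cleaning: strip HTML tags, hashtag markers,
    mentions, URLs, special symbols, and extra whitespace (the exact
    rules used in the paper are assumptions here)."""
    text = re.sub(r"<[^>]+>", " ", text)       # HTML tags
    text = re.sub(r"#(\w+)", r"\1", text)      # keep hashtag word, drop '#'
    text = re.sub(r"@\w+", " ", text)          # user mentions
    text = re.sub(r"http\S+", " ", text)       # URLs
    text = re.sub(r"[^\w\s.,!?']", " ", text)  # remaining special symbols
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace
```

    For example, `preprocess_tweet("Flood @user http://t.co/x in my area")` yields `"Flood in my area"`.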

    3.3 Parsing of Tweets (Stanford CoreNLP)

    Part-of-Speech (POS) tagging, or grammatical tagging, is the process wherein each word in a text corpus is tagged with its corresponding part of speech, based on both its definition and its context. Stanford NER (Named Entity Recognition) has proven useful in attaining high F-measure scores in contemporary approaches [11,38]. Therefore, we have also used the Stanford CoreNLP tool. The annotations of the tweets are stored in the same database which contains all the tweets to be processed, in order to carry out the comparison.

    For instance, consider the tweet text “I felt something, and my door was shaking”; its output, obtained using Stanford CoreNLP, is shown in Tab. 3.

    Table 3: Stanford CoreNLP output

    In Tab. 3, the first column, “IDX”, is the index ID of a word in a sentence of the tweet. The second column is the actual Word field. The third column is titled “Lemma”, as it holds the base form of the focused term generated by the lemmatization process. The fourth column, “POS”, contains the corresponding part of speech of the focused term. The fifth column, titled “NER”, contains the Named-Entity tag. The sixth and seventh columns both contain related information of the focused term: the sixth column, HeadIDX, contains the index ID of the related (head) word, or zero, identifying the relationship between words; the last column contains the dependency relation to the head word. Stanford provided the complete typed-dependencies manual in 2008 and revised it in 2016 [39].
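
    The parser output described above can be held as one record per token. The sketch below is illustrative: the field names mirror the columns of Tab. 3, and the annotation values for the example sentence are approximations rather than verbatim CoreNLP output.

```python
from typing import NamedTuple

class Token(NamedTuple):
    """One row of the parser output (columns as in Tab. 3)."""
    idx: int       # IDX: 1-based position of the word in the sentence
    word: str      # Word: surface form
    lemma: str     # Lemma: base form from lemmatization
    pos: str       # POS: Part-of-Speech tag
    ner: str       # NER: Named-Entity tag ('O' = none)
    head_idx: int  # HeadIDX: index of the head word (0 = root)
    dep_rel: str   # DepRel: dependency relation to the head word

# Approximate annotation of "I felt something, and my door was shaking"
# (tags and indices are illustrative, not verbatim CoreNLP output).
tokens = [
    Token(1, "I", "I", "PRP", "O", 2, "nsubj"),
    Token(2, "felt", "feel", "VBD", "O", 0, "root"),
    Token(3, "something", "something", "NN", "O", 8, "nsubj"),
    Token(4, "and", "and", "CC", "O", 8, "cc"),
    Token(5, "my", "my", "PRP$", "O", 6, "nmod:poss"),
    Token(6, "door", "door", "NN", "O", 8, "nsubj"),
    Token(7, "was", "be", "VBD", "O", 8, "aux"),
    Token(8, "shaking", "shake", "VBG", "O", 2, "ccomp"),
]
```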

    3.4 Feature Extraction (Grammar Rule-Based)

    An in-depth analysis of the annotated dataset revealed that the language structure and the relationships between different words of a sentence could be exploited for the automatic extraction of features from tweet content. Unlike the manually created dictionaries or lists used by the existing Zahra et al. [11] approach, we have proposed grammar rules for almost every feature. Moreover, our approach also provides rules for the features that were dropped by the authors due to their implementation complexity.

    We proposed grammar-based rules for as many features as possible, but a few features, like “Words indicating perceptual senses”, “First-person pronouns and adjectives”, “Exclamation and question marks”, and “Expletives”, can only be implemented using a predefined list of words. In particular, for “Expletives” no rules can be defined due to the scope of this feature, and we have to limit our implementation to dictionaries rather than grammar rules. Out of thirteen features, we have to use dictionaries for four features and a word-count approach for one identified feature. The following subsections briefly discuss each feature and its proposed grammar rules, or the limitations requiring the use of lists.

    3.4.1 Feature-1: Reporting Small Details of Surroundings

    The first feature identified by the Zahra et al. [11] approach is the identification of the “small surrounding details” reported by tweet authors. The feature is not implemented by the Zahra et al. [11] approach, which states that it “proved too abstract to be implemented”. From a human perspective, this is an important feature for understanding the context. The importance of the feature is also exploited by Fang et al. [23]. We have created grammar rules for the extraction of this feature. The working of the grammar rule is explained in Tab. 4; the data is the same as described in Tab. 3.

    Table 4: Stanford CoreNLP output (working)

    The relationships among the tagged words are shown in Tab. 4. In this text, “something shaking” is the feature that explains the surrounding detail. Using a bag-of-words technique, it is not possible to extract this feature-word.

    We have achieved successful extraction of such features using grammar rules. As illustrated in Tab. 4, at token 3 (IDX = 3) we have a noun (NN) with a dependency relation (DepRel = “nsubj”) to token 8 (IDX = 8), which we can reach via the HeadIDX value (HeadIDX = 8). The target token is a verb (POS = “VBG”) and has the dependency relation DepRel = “ccomp”. By implementing this language rule, we can identify the feature “something shaking”, which explains the surrounding details. For implementation purposes, we have proposed a rule as follows:

    RULE-1; POS=‘NN’ and DepRel=‘nsubj’ and IDX<HeadIDX and NER<>‘CAUSE_OF_DEATH’ and OnTarget(POS=‘VBG’ and DepRel=‘ccomp’)
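
    A minimal sketch of how RULE-1 could be evaluated over such token rows, assuming each row is a dict with the Tab. 3 fields (the token annotations below are approximations of the example in Tab. 4):

```python
def rule1_matches(tokens):
    """Find 'small surrounding detail' pairs per RULE-1: a noun (NN) that is
    an nsubj of a later gerund (VBG with relation ccomp), where the noun is
    not itself tagged as a disaster entity (CAUSE_OF_DEATH)."""
    by_idx = {t["idx"]: t for t in tokens}
    hits = []
    for t in tokens:
        if (t["pos"] == "NN" and t["dep_rel"] == "nsubj"
                and t["idx"] < t["head_idx"]
                and t["ner"] != "CAUSE_OF_DEATH"):
            head = by_idx.get(t["head_idx"])
            if head and head["pos"] == "VBG" and head["dep_rel"] == "ccomp":
                hits.append((t["word"], head["word"]))
    return hits

# "I felt something, and my door was shaking" (annotation approximated)
tokens = [
    {"idx": 3, "word": "something", "pos": "NN", "ner": "O",
     "head_idx": 8, "dep_rel": "nsubj"},
    {"idx": 8, "word": "shaking", "pos": "VBG", "ner": "O",
     "head_idx": 2, "dep_rel": "ccomp"},
]
print(rule1_matches(tokens))  # → [('something', 'shaking')]
```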

    3.4.2 Feature-2: Words Indicating Perceptual Senses

    The words that come under this category belong to a limited list, like “hearing, seeing”. For its implementation, we have used “lemma matching” of each token against a list of lemmatized perceptual-sense words. A list of such words is available online [35], as used by the Zahra et al. [11] approach. The rule implemented for this category is not purely a grammar rule but a bag-of-words technique, as follows:

    RULE-2; LEMMA in PerceptualSensesWordList()

    3.4.3 Feature-3: Reporting Impact of Disaster

    Reporting the impact of the disaster in tweet content is a common practice during disaster events. It can be useful in evaluating the severity of the impact. Such tweets may contain phrases like school canceled, flight delayed, etc. For implementation purposes, we have proposed a rule that is not purely grammar-based but a combination of language structure and a list of impact-words like ‘cancel’, ‘delay’, ‘suspend’, ‘lost’, ‘postpone’, ‘defer’, ‘reschedule’, ‘rearrange’. We have not utilized a pre-generated list of events, but a rule that combines with the list of impact-words to find disaster-impact features. The implementation of this feature is as follows:

    RULE-3; POS like ‘NN%’ and OnTarget(DisasterImpacts(LEMMA))
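
    RULE-3 can be read as: a noun whose head word carries an impact lemma. A minimal sketch under that reading, with `DisasterImpacts` realized as a small lemma set taken from the list above (the token annotations are illustrative):

```python
# Impact lemmas from the list above (assumed to be lemmatized forms).
IMPACT_LEMMAS = {"cancel", "delay", "suspend", "lost", "postpone",
                 "defer", "reschedule", "rearrange"}

def rule3_matches(tokens):
    """RULE-3 sketch: a noun (any NN* tag) whose head word's lemma is an
    impact word, e.g. 'school canceled', 'flight delayed'."""
    by_idx = {t["idx"]: t for t in tokens}
    hits = []
    for t in tokens:
        if t["pos"].startswith("NN"):
            head = by_idx.get(t["head_idx"])
            if head and head["lemma"] in IMPACT_LEMMAS:
                hits.append((t["word"], head["word"]))
    return hits

# "School canceled due to the flood" (annotation approximated)
tokens = [
    {"idx": 1, "word": "School", "lemma": "school", "pos": "NN", "head_idx": 2},
    {"idx": 2, "word": "canceled", "lemma": "cancel", "pos": "VBN", "head_idx": 0},
]
print(rule3_matches(tokens))  # → [('School', 'canceled')]
```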

    3.4.4 Feature-4: Words Indicating the Intensity of the Disaster

    In this category, the feature-words show the severity of the disaster, like intense, strong, dangerous, big, etc. Such feature-words can help institutions in decision making and help the response teams react accordingly. For implementation, we have proposed a rule as follows:

    RULE-4; POS=‘JJ’ and DepRel=‘nsubj’ and IDX<HeadIDX and OnTarget(POS like ‘NN%’ and NER=‘CAUSE_OF_DEATH’)

    3.4.5 Feature-5: First-Person Pronouns and Adjectives

    This feature type follows a standard list, and its implementation is straightforward. For this purpose, we have employed the bag-of-words (BOW) technique, as employed by the Zahra et al. [11] approach. The rule for this feature is implemented as follows:

    RULE-5; WORD in FirstPersonProNouns()

    3.4.6 Feature-6: Personalized Location Markers

    This feature is also important for indicating the density of a disaster’s impact by extracting personalized location markers. It contains terms like my office, our area, our city, etc. It is difficult to distinguish location nouns from other nouns; for this purpose, we used a list of location nouns [40]. For implementation purposes, we have proposed the rule as follows:

    RULE-6; POS=‘PRP$’ and DepRel=‘nmod:poss’ and IDX<HeadIDX and LEMMA in (‘my’, ‘we’) and OnTarget(POS=‘NN’ and NER=‘O’ and LocationNoun(WORD))

    3.4.7 Feature-7: Exclamation and Question Marks

    This feature can easily be acquired using a defined list of symbols, namely “!” and “?”. The rule formed to extract this feature is as follows:

    RULE-7; WORD in (‘?’, ‘!’)

    3.4.8 Feature-8: Expletives

    Expletives are words or phrases which are not required for the basic meaning of the sentence, e.g., wtf, omg, s**t, etc. For the implementation of this feature, slang-word lists are used by the Zahra et al. [11] approach, and the same method is adopted in this work. It is not possible to identify them in sentences using grammar rules. The list is available online [41].

    RULE-8; WORD in ListOfSlangWords()

    3.4.9 Feature-9: Mention of a Routine Activity

    This feature type includes words that describe routine activities, e.g., sleeping, watching a movie. An implementation of this type could use the word list available online [42]. In our work, we have instead proposed the following grammar rule:

    RULE-9; POS=‘VBG’ and NER=‘O’ and WORD<>‘GON’

    3.4.10 Feature-10: Time Indicating Words

    Time-indicating words are considered an important feature for understanding and reacting to a disaster event, for example, now, at the moment, just, etc. For implementation purposes, we have proposed the rule as follows:

    RULE-10; POS=‘RB’ and DepRel=‘advmod’ and ((WORD=‘just’) or (WORD<>‘just’ and NER=‘DATE’)) and OnTarget(POS in (‘RB’, ‘VBP’, ‘JJ’))

    3.4.11 Feature-11: Short Tweet-Length

    This feature requires other features to explain the actual concept. For example, consider a tweet with the single word “earthquake”: in the current context the feature holds, yet the word alone does not convey the full meaning, and even “earthquake!” changes the strength of the tweet. We have not found a standard word count in the literature; therefore, the implementation of this feature is complex. We set the tweet-length threshold to be less than or equal to 8 words for implementation purposes. This remains an open area for future work in this field. The implementation rule is as follows:

    RULE-11; WordCount([Tweet Text])<=8

    3.4.12 Feature-12: Caution and Advice for Others

    This feature includes phrases like watch out, be careful, etc., used to warn others about an event. It has previously been implemented using dictionaries, but we have proposed grammar rules to identify such words in tweet content. For implementation purposes, we have proposed the rule as follows:

    RULE-12; POS=‘VB’ and DepRel=‘cop’ and IDX<HeadIDX

    3.4.13 Feature-13: Mention of Disaster Locations

    We used the NER capability of the CoreNLP tool to identify mentioned locations in tweet content. For implementation purposes, we have proposed the rule as follows:

    RULE-13; LOCATION_TYPE(NER) OR WORD in (‘north’, ‘south’, ‘east’, ‘west’)

    3.5 Feature Evaluation

    The existence of an individual feature like “Exclamation and question marks”, “Short tweet-length”, “First-person pronouns”, or “Words indicating perceptual senses” does not by itself establish that the tweet is posted by a true eyewitness. In order to incorporate such situations, we have generalized the idea and adopted a standardized approach: a tweet having at least two identified features is marked as an eyewitness tweet. A tweet with non-eyewitness features [22] is marked as non-eyewitness, whereas tweets with no identified features are categorized as unknown. An exceptional scenario is adopted for “Exclamation and question marks”, wherein a tweet is marked as “Eyewitness” even if only one such feature is identified.
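
    The decision procedure above can be sketched as follows. The handling of the punctuation-mark exception is omitted in this simplified version, so a single feature of any kind is treated the same way:

```python
def classify_tweet(eyewitness_feature_count: int,
                   has_non_eyewitness_feature: bool) -> str:
    """Simplified decision rule: at least two eyewitness features ->
    'eyewitness'; otherwise any non-eyewitness feature (Doggett et al.) ->
    'non-eyewitness'; no identified features -> 'unknown'."""
    if eyewitness_feature_count >= 2:
        return "eyewitness"
    if has_non_eyewitness_feature:
        return "non-eyewitness"
    return "unknown"
```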

    3.6 Evaluation Parameters

    The proposed approach is evaluated in terms of precision, recall, and f-measure to measure its effectiveness. These evaluation metrics are commonly used in this field; the Zahra et al. approach and other approaches in the literature also exploit these evaluation measures.

    3.7 Evaluation Strategy

    The proposed approach is evaluated against the results generated by the Zahra et al. approach on a specific dataset [11]. If the proposed approach generates the same result as the Zahra et al. approach, it is considered a true match. In this manner, precision and recall values are calculated for each identified feature. The f-score is then calculated as the harmonic mean of the precision and recall values.
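
    A minimal sketch of these metrics, treating the Zahra et al. labels as the reference set and the rule-based output as the predicted set (tweet IDs are illustrative):

```python
def precision_recall_f1(predicted: set, relevant: set):
    """Standard precision/recall/F1 over sets of tweet IDs: 'relevant' is
    the benchmark labelling, 'predicted' is the output of the approach."""
    tp = len(predicted & relevant)  # true positives: agreement on both sides
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)  # harmonic mean
    return precision, recall, f1

# Hypothetical example: tweets 2-4 agree, one miss on each side.
print(precision_recall_f1({1, 2, 3, 4}, {2, 3, 4, 5}))  # → (0.75, 0.75, 0.75)
```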

    4 Experiments and Results

    This section presents the overall evaluation results of the proposed approach. The following subsections briefly explain the results generated by each step of the proposed methodology.

    4.1 Dataset and Pre-Processing of Tweets

    The proposed approach is evaluated using a dataset containing 8000 tweets on earthquakes, floods, hurricanes, and wildfires. Every tweet in the dataset carries the annotations made by the Zahra et al. approach. Brief statistics for all the categories are illustrated in Tab. 5.

    Table 5: Dataset statistics (tweets)

    As the dataset had already been evaluated and updated by the Zahra et al. approach, it contained only a small number of tweets requiring pre-processing. Quotation marks and commas were found in 2942 tweets, and these were cleaned during pre-processing. The dataset contains the author-generated results against each tweet. The same dataset is used as the benchmark to compare the proposed approach’s results with the Zahra et al. results.

    4.2 Parsing and Feature Extraction

    For the parsing of all tweets, covering Part-of-Speech (POS) tagging, lemmatization, named entities, and entity relations, the Stanford CoreNLP tool is used. The content of all 8000 tweets was passed through the tool, which generated results comprising 22,932 rows with all required fields, as discussed in Section 3.3.

    After passing all tweets through the Stanford CoreNLP tool, the results were thoroughly studied and analyzed for language structure. The relationships among the different words of a sentence were studied, and based on these findings, rules were generated for feature extraction (as discussed in Section 3.4).

    The feature-word counts are not shared by the authors of the Zahra et al. approach; however, since the proposed approach uses grammar rules, the accuracy of feature-word extraction had to be validated. Each tweet was annotated manually for the identification of feature-words. The dataset, along with the feature list, was provided to a Natural Language Processing expert with excellent language skills, who has published several articles in the fields of semantic analysis and information extraction. The expert is a Ph.D. scholar in the field of semantic computing.

    The proposed grammar rules described in Section 3.4 were applied to the processed dataset of 8000 tweets, and the feature-words were extracted by each grammar rule. The feature-words extracted manually by the expert from the 8000 tweets were then compared with the list generated automatically by the proposed grammar rules. The comparison yielded an accuracy of 0.95. The resulting dataset was then used for further calculations, evaluations, and comparison with the Zahra et al. approach [11].

    4.3 Results of Proposed Approach

    The results generated by the proposed approach are shown in Tab. 6.

    Table 6: Proposed approach results

    The results generated by the proposed approach for each disaster type, for all evaluation parameters (precision, recall, and f-score), are illustrated in Tab. 6. To keep the table compact, the category names are denoted as “Ew” for eyewitness, “NEw” for non-eyewitness, and “Un” for unknown.

    It was observed that the “Earthquake” disaster category achieved high results for precision, recall, and f-score, while the remaining disaster categories achieved average results for all parameters. The same proportion of results was observed in the Zahra et al. approach. The best results were generated for earthquake events, as the maximum number of features was identified. During manual analysis, it was observed that different characteristics were found for different disaster types due to their differing nature.

    4.4 Comparison of Proposed Approach with Zahra et al.Approach

    Each feature type was evaluated independently, and the results were compared with the Zahra et al. approach. The comparison of the proposed approach and the Zahra et al. approach in terms of precision, recall, and F-score is illustrated in Fig. 4. The Zahra et al. approach uses a manually created static dictionary for evaluation, which is not scalable. For instance, consider the following scenarios:

    Figure 4:Comparison scores of proposed and state-of-the-art approaches

    a) When the domain changes, the list of words might change.

    b) Unseen tweets may contain different words or vocabulary, which will directly affect the performance of the state-of-the-art approach.

    In the event of a disaster, processing millions of tweets to identify eyewitnesses in real time would be a cumbersome effort, as it would require thousands of domain-experts to keep the dictionary updated. To cover this gap, the language structure, linguistic features, and word relations have been critically reviewed to identify eyewitnesses automatically. Subsequently, grammar rules were created for the automatic extraction of eyewitness feature-words. Grammar-based rules were created for feature extraction for all thirteen identified features, without static dictionaries, at reasonable accuracy.
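    To make the contrast with a static dictionary concrete, one such grammar rule can be sketched as a pattern over the language structure rather than a word list. The pattern and verb alternatives below are a simplified illustration, not the paper's actual rule set from Section 3.4:

```python
import re

# Simplified illustration of one grammar rule: a first-person pronoun
# followed by a perception/experience verb signals a possible eyewitness.
# The verb alternatives here are illustrative, not the paper's rules.
RULE = re.compile(r"\b(I|we)\s+(just\s+)?(felt|saw|heard|witnessed)\b",
                  re.IGNORECASE)

def extract_feature_words(tweet):
    """Return the feature-words matched by the rule, or [] if none."""
    match = RULE.search(tweet)
    return [match.group(3).lower()] if match else []

print(extract_feature_words("I just felt a strong earthquake here!"))  # ['felt']
print(extract_feature_words("Earthquake reported in the region."))     # []
```

Because the rule keys on the grammatical relation (pronoun + verb) rather than a fixed vocabulary, it generalizes to unseen tweets without a domain-expert maintaining a dictionary.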

    The proposed approach was then evaluated for all disaster types, including earthquakes, floods, hurricanes, and fire. Based on its static dictionary, the Zahra et al. approach produced an F-score of 0.92 for eyewitness identification in the earthquake category, while the proposed approach secured an F-score of 0.81 in the same category. This can be considered a significant score given that no static dictionary is used. For the non-eyewitness category, the Zahra et al. approach and the proposed approach both secured an F-score of 0.71, as both adopted the same approach of Doggett et al. [22]. For the unknown category, the proposed approach performed better, achieving an F-score of 0.16 against 0.15 for the Zahra et al. approach. The results for the other disaster types remained in the same proportion for both approaches.

    The proposed technique produced lower results for the “wildfire” disaster type than for the remaining disaster types. This is because the proposed approach automatically extracts feature-words and does not involve human interaction for the manual creation and maintenance of feature-word dictionaries. We claim that the proposed approach can be deemed a potential approach, especially in scenarios where millions of tweets must be processed in real time.

    5 Conclusion

    Twitter is considered a potential news-breaking platform. People across the globe share information with their followers regarding incidents they witness, especially disastrous events like floods, earthquakes, fires, etc. It is considered a challenging task to identify eyewitness tweets about an incident from the millions of tweets shared by Twitter users.

    On the other hand, the information shared by an eyewitness is valuable for understanding the true intensity of the event, and emergency services can use such information for disaster management. The recent approach proposed by Zahra et al. [11] identified various characteristics from the content of tweets to identify feature-words. However, they manually created a static dictionary for the identification of feature-words. A critical analysis of the literature reveals three major limitations of the Zahra et al. approach arising from the use of a static dictionary. Firstly, the dictionary needs to be updated whenever the domain changes. Secondly, domain-experts are required to update the static dictionaries. Thirdly, handling millions of tweets in real time is a daunting task.

    This research paper has proposed a comprehensive methodology for the automatic extraction of the feature set. Fine-tuned grammar rules were created after studying the language structure. The grammar rule for each feature was carefully designed through detailed analysis of the dataset, and features were extracted based on these rules. These features were further utilized to categorize tweets into the “eyewitness”, “non-eyewitness”, and “unknown” categories [11].
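    The final categorization step described above can be sketched as a simple priority rule over the extracted evidence. The field names and priority order here are illustrative; the paper's full pipeline combines all thirteen features:

```python
def categorize(features):
    """Assign a tweet category from the extracted evidence.
    The field names and priority order are illustrative; the
    paper's full rules combine all thirteen features."""
    if features.get("eyewitness_words"):
        return "eyewitness"
    if features.get("non_eyewitness_words"):  # e.g. news/report vocabulary
        return "non-eyewitness"
    return "unknown"

print(categorize({"eyewitness_words": ["felt"]}))               # eyewitness
print(categorize({"non_eyewitness_words": ["reported"]}))       # non-eyewitness
print(categorize({}))                                           # unknown
```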

    The proposed approach was evaluated in terms of precision, recall, and F-measure on a benchmark dataset of 8000 tweets, and the results were compared with the Zahra et al. approach. The proposed approach secured an F-score of 0.81 for eyewitness tweets of earthquake incidents, which can be considered a significant score given that no static dictionary is used. Hence, the proposed approach is a potential approach, considering that no human effort is required for dictionary management, millions of tweets can be processed in real time, and the approach generalizes to various disaster types. The proposed approach is a novel contribution to the automatic identification of eyewitness tweets on Twitter using grammar rules, without human interaction.

    Funding Statement:The author(s)received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
