
    BERT for Conversational Question Answering Systems Using Semantic Similarity Estimation

    2022-03-14 09:23:44
    Computers, Materials & Continua, 2022, Issue 3

    Abdulaziz Al-Besher, Kailash Kumar, M. Sangeetha and Tinashe Butsa

    1 College of Computing and Informatics, Saudi Electronic University, Riyadh, 11673, Kingdom of Saudi Arabia

    2 Department of Information Technology, SRM Institute of Science and Technology, Kattankulathur, India

    3 Department of Information Technology, Harare Institute of Technology, Belvedere, Harare, Zimbabwe

    Abstract: Most questions from users lack the context needed to thoroughly understand the problem at hand, making them impossible to answer directly. Semantic Similarity Estimation relates a user's question to the context of previous Conversational Search Systems (CSS) sessions to provide answers without requesting additional context from the user, which reduces the time needed to produce an answer. The proposed model enables the use of contextual data associated with previous Conversational Searches (CS). On receiving a question in a new conversational search, the model determines which past CS the question refers to most closely. The model then infers the past contextual data related to the given question and predicts an answer based on the inferred context, without engaging in multi-turn interactions or requesting additional data from the user. This model shows the ability to use the limited information in user queries for the best context inferences, based on Closed-Domain CS and Bidirectional Encoder Representations from Transformers (BERT) for textual representations.

    Keywords: Semantic similarity estimation; conversational search; multi-turn interactions; context inference; BERT; user intent

    1 Introduction

    Conversational search is one of the most critical areas in Natural Language Processing (NLP); hence, researchers aim to understand user intent in multi-turn conversations to simulate human-to-human interaction in Conversational Assistants (CA). Conversational search can be defined as an approach to finding information in a multi-turn conversation, and it has long been associated with information retrieval systems. The adoption of CA in Conversational Search Systems (CSS) is currently rising, which has attracted much attention from researchers. The most common framework for CA mainly focuses on Natural Language Understanding (NLU) [1] to design and develop systems that can better understand human language. The objective is to understand natural language and identify users' informational needs (user intent) by analysing textual information.

    In any CSS, the critical task is understanding user intent from the utterances of that conversation [2]. Recent NLP models such as Bidirectional Encoder Representations from Transformers (BERT) [3], Robustly Optimized BERT Pretraining Approach (RoBERTa), Generalized Autoregressive Pretraining for Language Understanding (XLNet), and Generative Pretrained Transformer 2 (GPT-2) have been outperforming humans on competition datasets such as the Stanford Question Answering Dataset (SQUAD) and the General Language Understanding Evaluation (GLUE) benchmark. These advancements created much interest in conversational search, in particular the ability to identify user intent from utterances and the ability to provide answers or solutions based on users' questions. Tasks involving questioning and answering in multi-turn environments using datasets like SQUAD are best solved using models like BERT. SSE and context inference allow the model to deal with partial, limited, or incomplete questions from users who do not know how to express their informational needs, as well as with questions that chatbot designers did not anticipate, by searching the whole knowledge base for an answer based on question similarity [4]. There is a strong belief that CSS and Conversational Question Answering (ConvQA) should provide helpful information from limited user input. Most CSS are not capable of understanding input with partial information or input spanning multiple turns. The ambiguous nature of user questions often requires additional information for clarification, which creates a challenge in ConvQA. The essential aspects of conversational search in question answering are determining user intent, NLU in multi-turn interactions, and CA.

    Determining user intent is one of the key features of question answering in multi-turn conversations. In multi-turn interactions, the intent represented by the user's initial question determines the flow of the conversation and how the CA will process later utterances. Modelling multi-turn interactions between CA and users requires accurately identifying user intent [5]. A single conversational search session can consist of several utterances, with each utterance representing a different user intent. Determining user intent in such scenarios becomes a challenge for providing the most suitable answer to the user. Depending on the NLP task, there are many classifications of user intent. Some recent research identifies user intent through the classification of statements in multi-turn contexts [6]; for example, the user's initial utterance (question) is classified as the Original Question (OQ), and utterances that provide additional data from the user are classified as Further Details. Other works define user intent as the intention referenced in an utterance; for example, the utterance "I would like to buy a laptop" denotes the intent to purchase [7]. For this research, user intent is the information the user intends to get as a response to a given question.

    NLU refers to extracting meaning from natural language [8]. Input from users is not always straightforward. Most user input is non-factoid and will often trigger multi-turn interactions between the agent and the user. For example, the query "How do we upgrade to Windows 10?" does not contain all the information needed to provide the most appropriate answer and requires the user to provide additional information to get the most suitable response. Most often, CAs have to keep track of changes in user intent throughout the conversation, which is a challenging task, especially for conversations with several turns. Previous research helps resolve NLU challenges for single utterances by extracting user intent and classifying core features called slots from a single utterance using slot filling [9]. To understand the informational needs of the user from such conversations, more complex NLU techniques are required. A CA should understand the context of each utterance in the conversation, elicit relevant knowledge while continuously evaluating the user's informational demand, and build on previous answers to improve the present answer. Modeling contextual representations from past conversations and inferring them for new questions based on a similarity algorithm will help determine user intent more accurately and quickly. It will, in turn, reduce the number of turns needed to understand the user's informational needs.

    The potential of CA to simulate human conversations in their natural forms, such as text or speech, enhances Semantic Similarity Estimation (SSE). Simulating human conversations should allow question answering chatbots to provide the most accurate answers to user questions [10]. This can be achieved by analyzing and identifying specific keywords and phrases from both text and speech. By focusing on conversation flow, a CA should analyze the contextual data of the conversation to learn the relationships between words and utterances for processing answers. Utterances within a conversation usually represent different intent types, and by analyzing these utterances, the CA will understand the user's intent. A CA must be trained on a large domain knowledge base for higher-quality language understanding in multi-turn conversations. Training a CA on a variety of conversations with different informational needs should improve its performance on question answering. CA design rests on several key aspects: the mode of interaction, CA usage, the modeling techniques used, and the knowledge base or domain. By considering these aspects, a CA can determine the contextual conversation data used for identifying the user's informational needs through NLU [11]. Emulating how people look for information about asked questions requires understanding the two types of domains related to CSS and ConvQA systems, namely Closed-Domain Systems (CDS) and Open-Domain Systems (ODS).

    In CDS conversational search, questions are limited to predefined domains and domain knowledge (e.g., tech support questions based only on Microsoft products), whereas ideally a CSS should answer a wide variety of user questions using contextual data from different domains. CDS Conversational Search Systems find information based on context from a predefined domain since they are trained to answer domain-specific questions. Since CDS Conversational Search Systems are limited to answering questions from specific domains, researchers focus on search systems that can answer a broader range of user questions. ODS Conversational Search Systems can generalize, answering questions from different domains. ODS can use domain-based contexts from different knowledge bases to meet the user's informational needs and provide the most accurate answers. ODS can be helpful, especially when users do not know the particular domain to which their question is related. The main challenge of such systems is narrowing down the candidate context for question answering; arriving at the answer may constrain the time efficiency of the model in providing the answer to a user.

    CSS features natural conversations with users. The generation of responses is mainly based on the level of confidence obtained from the context provided by users, and the sequence of dialogue contexts is considered for information finding. Interactions between users and CA can be divided into two classes, single-turn and multi-turn interactions, as illustrated in Fig. 1. Single-turn interactions provide answers based on the immediate user question (utterance) and do not require additional information to answer the question (i.e., a single utterance just before the answer). Multi-turn interactions, on the other hand, generate a response based on multiple interactions between the user and the system. Utilizing SSE for question-context mapping in CSS and ConvQA systems allows CAs to figure out the user's informational needs before recommending an answer. In a typical CSS, the user initiates the conversation with an intent-based question. The system asks for additional information through follow-up questions to understand the user's informational needs. When the system is confident enough, it suggests or retrieves the appropriate information for the user. The system retrieves the answer iteratively throughout the interaction process, where it takes more than two turns for the agent to understand the user's informational needs and generate the appropriate answer. This form of multi-turn interaction opens up new possibilities for CSS.

    Figure 1: User-agent interactions in CSS

    SSE in CSS allows a system to understand user intent without engaging in multiple rounds of message exchanges. Instead of asking for additional clarification, the system infers conversational context for the user's question based on similarity computations. The model leverages the user's most recent question and intent, looking into past conversational searches to provide an answer as a recommendation despite minimal user input and incomplete details. Each question from the user relies on inferring context from a single session to connect intent with the question. This work aims to understand user intent by utilizing BERT contextual representations for SSE to infer past conversation context onto the current question using limited information from the user. Given a recent question from the user, the system seeks to understand the user intent by computing the semantic similarity between the representation of the current question and the contextual representations of previous conversations, and then inferring detailed contextual information for the question at hand. User intent can be defined in many ways in the field of CS. This framework describes the user's intention to obtain information for a particular question. Predicting user intent stems from the need to understand the user's informational needs so as to provide the most accurate answer without additional user input. Like humans, CSS must learn to identify closely related or highly similar questions in order to refer to historical context based on question similarity. By referring to the historical conversational context, the system will understand the user's informational needs without repeating the process of requesting additional information from the user for clarification, especially for similar questions. This approach helps to provide fast solutions with minimal user input. We utilize BERT for language representations and understanding because of its ability to capture long-term dependencies in large texts. BERT is a state-of-the-art NLP model for language representation from Google AI Language [12].

    BERT achieves bidirectionality by pre-training on masked language modeling and next sentence prediction, making it suitable for achieving the best contextual representations of each conversation for language understanding. BERT for ConvQA works and performs well on a relatively large number of words, making it suitable for understanding multi-turn interactions in CSS. However, BERT for ConvQA is trained on SQUAD data, i.e., summary paragraphs and related questions, not on multi-turn dialogues. This model aims to construct BERT for intent prediction and question answering to understand language in a multi-turn environment, which typically involves several turns. We conduct our experiments on predicting user intent in CSS using the MSDialog data [13]. The data contain interactive dialogues between users seeking technical support and answer providers. Most user questions are non-factoid and require further conversational interactions to build a solid understanding of the user's needs. The answer providers are, in this case, Microsoft staff and some experienced Microsoft product users (human agents). The answer reflects the user's intent: the information the user seeks about Microsoft products.

    2 Related Works

    Several CSS and NLU advancements have created new research interests in CSS and ConvQA systems over the years. Despite these advancements, understanding the nature of conversational search is still a difficult task. There remains the challenge of understanding the user's informational needs (user intent) in an interactive environment. The focus of ConvQA is to model changes in user intent in multi-turn interactions. The intuition is based on handling conversation history between turns in a multi-turn environment. This is achieved by selecting a subset of past turns (previous answers) based on their level of importance using a rule-based method. The model then embeds past answers for ConvQA. Given a question $q_t$ from the user, history modeling expects the agent to refer to the previous answer $a_{t-1}$ for $q_{t-1}$ to understand the informational needs of the user's recent question. The critical aspect of ConvQA is using history turns to understand the informational needs of the user. ConvQA performs history answer embedding on the recent question for a given conversational session to understand the user's informational needs. History answer embedding allows the model to understand the user's intent through conversation history modeling for a particular conversational session. Combining earlier answers with the user's recent question enables the agent to determine user intent.

    The ConvQA method is suitable for understanding intent based on previous utterances within a particular conversation session. However, this approach is tied to multi-turn interaction, which constrains the time complexity of the model for generating an answer. Furthermore, in a multi-turn setting, using the sequential order of question-answer sets to understand user intent may have a detrimental impact on CSS and ConvQA systems because user intent tends to change from one turn to the next. In such scenarios, understanding user intent for answer generation becomes difficult. For the same question from a different user, the ConvQA system may again go through several multi-turn interactions to understand user intent, making the process redundant, seeing that an answer for that same question was already generated in a previous session. The BERT system approach focuses on inferring past conversational context onto the current user question based on the degree of similarity between the current question and the context of past CS. The contextual conversation data is modeled using BERT's next sentence prediction task. By inferring context from previous similar conversations onto the current question, this model understands the user's informational needs and provides answers without requiring additional information. The approach performs conversation contextual data modeling, which indirectly deals with unexpected changes in user intent. Context modeling is performed based on the intent represented by the original question of conversation $c_i$. By focusing on utterances representing the same intent as that of the original question, this model infers the most accurate past conversational context for the question.

    Existing approaches use a System Ask-User Respond (SAUR) paradigm for CS [14]. Naturally, people engage in multi-turn interactions when seeking information. SAUR aims to comprehend a user's requirements by fetching answers based on user feedback. According to SAUR, systems that can dynamically ask appropriate questions can better understand user needs, which is one of the essential aims of CSS and ConvQA. SAUR integrates sequential modeling and attention via a multi-memory network architecture with a personalized version for CS and recommendation. This approach to CS and recommendation focuses on feature sets for the CS to manage and control user acceptance to comprehend user needs. However, this presents a scenario in which, given a question practically identical to one asked historically, the system rehashes the same process of asking clarifying questions to identify the user's informational needs rather than relating the user to related past conversations. Also, the user may ask follow-up questions that do not represent the same intent as the previous utterances, which starts a new search altogether. ConvQA suggests that to understand the current informational needs of the user, the model should be able to handle the conversation history of the current conversational search session. The approach used in this system is capable of understanding user intent through context inference based on question similarity, and from the inferred context it can determine or predict the user's informational needs.

    Some SSE methodologies used Reinforcement Learning (RL) in user chatbots; the task of SSE is addressed as recommending relevant questions that users might be interested in [15,16]. The approach models SSE as a Markov decision process and implements RL to find the best recommendation methods. Unlike other existing techniques, which predict the list of items likely to be of interest to users by depending on the immediate benefit rather than the long-term benefit of each recommendation, this analysis reviews the inter-relationships of user dynamics and recommends questions using an N-step sequential decision process. The model suggests and adds a sensible question to the recommendation list at each turn. The model helps to understand clicks and user satisfaction by re-ranking its top-N recommendations based on user behaviour patterns and question popularity. The approach demonstrates the SSE task by generating better guidelines.

    The approach using attentive history selection for question answering in CSS introduces a history attention mechanism, called "soft selection," to select conversation histories based on attention weights in order to understand and answer the question. Each turn in a conversation is allocated a different weight based on its usefulness in answering the user's current question. Applying attention weights to utterances within a conversation allows the model to capture the importance of history turns. Furthermore, the method incorporates positional information to examine the importance of turn position in conversation history modeling. This work realizes the need to learn to answer current questions based on past conversations to limit the interactive process to a single turn. Another related yet different approach is neural re-entry prediction combining context and user history. It uses a neural network framework that focuses on predicting re-entry into a given conversation by combining context and user history. The model learns meaningful representations from both the conversation context and user history to predict whether users will return to a conversation they once participated in. That work illustrates the importance of historical conversational context in understanding user utterances. Our approach focuses on utilizing BERT contextual representations for conversation context modeling and SSE. This model focuses on the conversational context of past conversations and the similarity of user questions, an essential aspect for CSS and ConvQA.

    3 Proposed Methodology

    3.1 Problem Statement

    Given a question $q_i$ from the user, the task is to relate $q_i$ with CS from past sessions to find and infer the past conversational context for $q_i$ based on the highest semantic similarity score for question understanding, and then generate the answer $a_i$ to $q_i$, where $c_i^k$ is the $i$-th conversation consisting of $k$ utterances after data modeling. Fig. 2 shows the system flow of this model approach.
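    In compact form (our shorthand; the paper states this in prose), the pipeline first selects the stored context most similar to the question and then reads the answer out of it:

    $$\hat{c} = \operatorname*{arg\,max}_{c_i^k \in \mathcal{K}} \operatorname{sim}(q_i, c_i^k), \qquad a_i = \operatorname{QA}(q_i, \hat{c})$$

    where $\mathcal{K}$ denotes the knowledge base of modeled past conversations.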

    Figure 2: The flow of SSE, CSS, and Q&A

    3.2 Overview

    3.2.1 BERT Encoding for Intent Semantic Similarity Estimation and Question Answering

    This approach utilizes the BERT model to encode the question $q_i$ and the inferred conversation context $c$ into contextualized text representations for SSE. BERT is a cutting-edge, pre-trained language model for NLU that employs transformers to learn deep bidirectional representations. Given a training instance $(q_i, c)$, the question and the conversation context are paired into a single sequence. The input sequences are fed into the BERT encoder, and BERT generates contextualized representations for each sequence based on the token, segment, and position embeddings. BERT is well suited to understanding the given textual information and deriving answers from the text. To understand textual information for question answering, BERT was trained on the Stanford Question Answering Dataset (SQUAD), consisting of questions paired with a span of text from particular textual data. The BERT model for SSE and ConvQA converts the MSDialog data structure to that of the SQUAD data. Utterances from each conversation are treated as contextual information for that particular conversation. The contextual information provides the BERT model with the features needed to understand the context related to the question. The BERT model for ConvQA is limited to text no longer than 512 tokens per sequence, which is still long enough to handle the sequential data from the multi-turn interactions associated with each conversation; when the sequential data exceeds 512 tokens per sequence, however, understanding the data becomes a challenge.
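    As a minimal sketch of this encoding step, shown with the current Hugging Face transformers API (the paper used the older pytorch-transformers package) and with illustrative strings in place of MSDialog data:

        # Pack a (question, context) pair into one BERT input sequence:
        # [CLS] question [SEP] context [SEP], truncated to the 512-token limit.
        from transformers import BertTokenizer, BertModel

        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        model = BertModel.from_pretrained("bert-base-uncased")

        question = "Microsoft Edge is not responding."            # illustrative
        context = "Try repairing Edge from Apps & Features, then restart it ..."

        # token_type_ids distinguish the question segment from the context
        # segment, giving BERT the token, segment, and position embeddings.
        inputs = tokenizer(question, context, max_length=512,
                           truncation=True, return_tensors="pt")
        token_reprs = model(**inputs).last_hidden_state  # one vector per token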

    3.2.2 Semantic Similarity Estimation in the Conversational Question Answering Framework

    The system presents a modularized design framework for SSE in ConvQA, shown abstractly in Fig. 3. The framework mainly focuses on three key components: SSE (for determining user intent), context inference, and answer prediction or generation. Given a training instance $(q_i, a_i)$, the SSE module chooses the conversation context that is semantically most similar to the given question $q_i$. The selected context is passed to the model, which then learns the start and end vectors of the answer span from the inferred conversational context. This is based on the intuition that highly similar questions often go through the same context to understand the user's informational needs. The conversational contextual data modeling and the SSE model implementations are introduced in the following sections. In this research, the model employs conversational context inference as its primary method, in which the most relevant conversational context based on semantic similarity is inferred for the user's current question for intent prediction. This is based on the intuition that similar questions often result in the same answer or solution; so, instead of asking the user the same clarification questions, this process minimizes user input and infers past related conversational context for the current question. A sketch of how the modules compose follows the figure below.

    Figure 3: Framework for BERT Model
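    A minimal sketch of how the three modules compose (the helper functions are hypothetical stand-ins for the components detailed in Sections 3.2.3 and 3.2.4):

        def answer_question(question, knowledge_base):
            # 1) SSE: score every stored conversation context against the question.
            scores = [semantic_similarity(question, ctx) for ctx in knowledge_base]
            # 2) Context inference: infer the highest-scoring past context.
            inferred = knowledge_base[scores.index(max(scores))]
            # 3) Answer prediction: extract the answer span from that context.
            return predict_answer_span(question, inferred)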

    3.2.3 Semantic Similarity Estimation and Context Inference

    Given a question from the user, the model performs SSE by comparing the similarity of the question with the contextual representations of each past CS in the knowledge base. By converting the text (the question and the contextual data) into Term Frequency-Inverse Document Frequency (TF-IDF) vectors, we compute the cosine similarity between the question utterance and each contextual conversation data item in the knowledge base, as shown in Fig. 4. Similarity determines how close or related the given question $q_i$ is to each conversation context $c_i$ in the knowledge base in terms of meaning or context. The question is represented in vector form, whereas the contextual data are represented in matrix form (e.g., TF-IDF): $\text{tf-idf}(t,c) = \text{tf}(t,c) \times \text{idf}(t)$. The cosine similarity of the question and the conversation contextual data ranges from 0 to 1, where a score of 1 means that the two vectors are highly similar. Eq. (1) is the cosine similarity between two non-zero vectors used by this context inference module:

    $$\operatorname{sim}(q, c) = \frac{q \cdot c}{\|q\|\,\|c\|} \tag{1}$$

    SSE and context inference are based on the cosine similarity between the question TF-IDF vector and the conversational context TF-IDF features.This similarity function allows the model to rank the conversation contexts in the knowledge base and infer the context with the highest score to the question posed by the user.After SSE (selecting the most relevant conversation contextual data), the model infers that particular data to the question utterance and sends them to the question-answering module.
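    A small sketch of this ranking step with scikit-learn (an assumption, since the paper does not name its TF-IDF implementation; the two-entry knowledge base is illustrative):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        knowledge_base = [                 # modeled past conversation contexts
            "Edge crashes on startup after the latest Windows update ...",
            "Cannot change the pivot chart type when data is aggregated ...",
        ]
        question = "Microsoft Edge is not responding."

        vectorizer = TfidfVectorizer()
        context_matrix = vectorizer.fit_transform(knowledge_base)  # one row per context
        question_vec = vectorizer.transform([question])

        scores = cosine_similarity(question_vec, context_matrix)[0]  # Eq. (1), in [0, 1]
        inferred_context = knowledge_base[scores.argmax()]
        # inferred_context is then paired with the question for the QA module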

    Figure 4: SSE and conversation context inference module

    3.2.4 Question Answering

    An essential aspect of finding the answer to a given question lies in inferring the most relevant conversational context for the question and using that context to determine user intent. From the inferred contextual data, this model can then predict the answer text span. For example, given a user question $q_i$ and the inferred conversational context $c_i^k$, a different approach is used to find the answer. For the training model, the input is the original question of the conversation $c_i$ and the modeled utterances of the same conversation as context; the output is the probability of context tokens being the start or end tokens of the answer span. The model finds the most probable answer from the inferred contextual data by computing the START/END probabilities of the answers. For each contextual data item in the knowledge base, the likelihood of a token being the START/END token of the answer span is computed with the softmax function with respect to the given question. Given word $i$ and its hidden vector $T_i$, the likelihood of the word being the START/END of the answer span is computed as Eq. (2):

    $$P_i = \frac{e^{V \cdot T_i}}{\sum_{j} e^{V \cdot T_j}} \tag{2}$$

    where $V$ is the learned START or END vector introduced below.

    The task is to predict the answer using the inferred context. As shown in Fig. 5, in the answer prediction task, the model represents the input question and inferred context as a single paired sequence, with the user's current question using the Q embedding and the inferred context using the C embedding. The model represents the final hidden vector for input token $i$ as $T_i \in \mathbb{R}^H$ and introduces a start vector $S \in \mathbb{R}^H$ and an end vector $E \in \mathbb{R}^H$. The dot product between $T_i$ and $S$ is used to calculate the probability of word $i$ being the start of the answer text span, followed by a softmax over all the words in the inferred context, Eq. (3):

    $$P_i^{\text{start}} = \frac{e^{S \cdot T_i}}{\sum_{j} e^{S \cdot T_j}} \tag{3}$$

    The same formula is used for computing the end of the answer text span, Eq. (4):

    $$P_j^{\text{end}} = \frac{e^{E \cdot T_j}}{\sum_{k} e^{E \cdot T_k}} \tag{4}$$

    The score of a candidate span from position $i$ to position $j$ is defined as Eq. (5):

    $$\operatorname{score}(i, j) = S \cdot T_i + E \cdot T_j \tag{5}$$

    and for prediction, this model uses the maximum-scoring text span where $j \geq i$. The training objective is the sum of the likelihoods of the correct START/END positions. In the model architecture illustrated in Fig. 5, the question and the conversational context are packed into a single sequence and fed to the model, and a representation is generated for each token from the token, segment, and position embeddings. Next, a vector representation for the START/END position is learned, which is used to compute the answer span for the given question. The loss is computed as the average of the cross-entropy losses for the START/END positions. The model then produces the following interactive interface showing output for the given question based on past conversations.
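    A minimal PyTorch sketch of Eqs. (3)-(5) (the tensor sizes are illustrative assumptions; in the real model $T$ comes from the BERT encoder):

        import torch
        import torch.nn.functional as F

        H = 768                                 # BERT hidden size
        T = torch.randn(40, H)                  # token vectors for the packed input
        S, E = torch.randn(H), torch.randn(H)   # learned start/end vectors

        start_probs = F.softmax(T @ S, dim=0)   # Eq. (3)
        end_probs = F.softmax(T @ E, dim=0)     # Eq. (4)

        # Eq. (5): score every candidate span (i, j), keeping only j >= i.
        span_scores = (T @ S).unsqueeze(1) + (T @ E).unsqueeze(0)
        valid = torch.triu(torch.ones_like(span_scores, dtype=torch.bool))
        span_scores = span_scores.masked_fill(~valid, float("-inf"))
        i, j = divmod(int(span_scores.argmax()), span_scores.size(1))
        # tokens i through j form the predicted answer span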

    Figure 5: ConvQA using conversation context inference

    Fig. 6 shows the question "Microsoft Edge is not responding." The model successfully inferred relevant conversation context for the question and predicted the answer based on the inferred context. Only a single turn was needed to answer the question.

    Figure 6: Output showing the ConvQA based on past conversations

    4 Experiments

    This section first describes the MSDialog dataset and how it applies to the research problem, then describes the experimentation approach for SSE, and lastly presents the evaluation results.

    4.1 Data Set

    We conduct the model experiment on ConvQA based on SSE using the MSDialog dataset. This dataset contains interactive dialogues between users seeking technical support and answer providers. The answer providers are, in this case, Microsoft staff and some experienced Microsoft product users. The answer reflects the user's intent: the information the user seeks about Microsoft products. The dataset consists of 35,000 technical support conversational interactions, with over 2,000 dialogues selected for user intent annotations. Each dialogue comprises at least 2 to 3 turns, 2 participants, and 1 correct answer. Tabs. 1 and 2 give the description and statistics of the dataset, respectively.

    4.2 Simulation

    The model uses PyTorch as the Deep Learning framework (https://pytorch.org) with the uncased pre-trained BERT model and the PyTorch-transformers package from Hugging Face (https://github.com/huggingface/pytorch-transformers), which includes utilities and scripts for performing different NLP tasks such as ConvQA. The pre-trained BERT model comes with its own vocabulary of words; therefore, extracting words from the current dataset is unnecessary. BERT comes in uncased and cased variants, and for this work we use the uncased model, which is not case sensitive. The model also uses ConvQA annotation for defining the answer spans from the conversation contexts. The list of conversations from the MSDialog dataset is split into training and validation sets. Different optimizers are tried to find the model with the best performance (e.g., the BertAdam optimizer), and early stopping is applied based on the validation set. The model applies gradient clipping with a max-norm of 1.0. The batch size for the training process is 2. For all models, the maximum length of the input text sequence is set to 384 tokens, the maximum answer length is set to 512 tokens, the document stride is set to 128, and the maximum sequence length is set to 512 tokens. The learning rate of the model is set to $2 \times 10^{-5}$. Checkpoints are taken at every iteration step and tested on the validation set.
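    A brief sketch of one training step under the stated settings (the data pipeline is omitted; torch.optim.AdamW stands in here for the optimizer variants compared in Section 5, and train_loader is an assumed DataLoader of batch size 2):

        import torch
        from transformers import BertForQuestionAnswering

        model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
        optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

        for batch in train_loader:
            # batch holds input_ids, token_type_ids, attention_mask, and the
            # gold start/end positions of the answer span within the context
            loss = model(**batch).loss            # mean START/END cross-entropy
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            optimizer.step()
            optimizer.zero_grad()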

    Table 1: MSDialog data description and classification

    Table 2: MSDialog data statistics

    4.3 Evaluation Metrics

    The evaluation of this model is based on two metrics: Exact Match and F1 score. The Exact Match measure is binary: it checks whether the answer from this model exactly matches the answer from the validation set. The F1 score, on the other hand, is less strict; it computes the average overlap between the BERT model's response and the answer from the validation set. This score is taken as the harmonic mean of the precision and recall of the answers, where precision is the ratio of words in the model's answer that also appear in the ground truth answer, and recall is the ratio of words in the ground truth answer that appear in the model's answer. For example, if the actual answer is "You cannot use that chart type if the data is already aggregated" and this model predicted "You cannot use that chart type", the prediction would have high precision but lower recall; if it predicted "You cannot use that chart type if the data is already aggregated in Excel", it would have high recall but lower precision. This example also shows why F1 scores are necessary, as answers can be presented in more than one way. Both example answers are allocated an exact match score of 0 even though they overlap the ground truth answer. However, the predicted answer spans are largely correct; hence this work focuses more on improving the F1 score of the model than the exact match.
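    The two metrics can be sketched as follows (SQuAD-style token-level F1; a minimal illustration rather than the exact evaluation script):

        from collections import Counter

        def exact_match(pred: str, truth: str) -> int:
            # binary: 1 only when the strings match exactly (case-insensitive here)
            return int(pred.strip().lower() == truth.strip().lower())

        def f1(pred: str, truth: str) -> float:
            p, t = pred.lower().split(), truth.lower().split()
            overlap = sum((Counter(p) & Counter(t)).values())  # shared tokens
            if overlap == 0:
                return 0.0
            precision, recall = overlap / len(p), overlap / len(t)
            return 2 * precision * recall / (precision + recall)

        truth = "You cannot use that chart type if the data is already aggregated"
        print(exact_match("You cannot use that chart type", truth))   # 0
        print(round(f1("You cannot use that chart type", truth), 2))  # 0.67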

    4.4 Baselines

    In this section, we compare the evaluation metrics against previous work as baseline models. In addition to analyzing baseline performance, we analyze the performance of the proposed model on the MSDialog dataset. Several models with different model parameters are considered as baselines for conversational question answering using SSE. The methods used for comparison are described as follows:

    • BERT with History Answer Embedding (HAE): a history answer embedding for ConvQA

    • Attentive History Selection for ConvQA: uses a History Attention Mechanism (HAM) for ConvQA

    • HAM using BERT-Large

    • BERT + Context Inference: this model implements ConvQA with BERT and predicts the user's intent by inferring contextual data from past CS for intent determination and answer finding

    • BERT + BertAdam

    • BERT + AdamW

    • BERT + FusedAdam

    5 Result and Discussion

    Several experiments were conducted on the MSDialog dataset for ConvQA based on SSE using different parameters. The experiments compared the Exact Match and F1 scores of the proposed methods and the baseline models. Tab. 3 shows the results of the experiments.

    Table 3: Each row displays validation/test scores for the respective model

    Figure 7: Each row displays validation/test scores for the respective model

    The BERT model is evaluated based on two evaluation metrics, Exact Match (EM) and F1 scores, and compared to the baseline models. Fig. 7 shows that HAM (BERT-Large) brings slightly higher performance than HAM and BERT + HAE, achieving the best results among the baselines. This suggests that history attention methods are essential for conversation history modeling and answer embedding. Furthermore, our proposed model achieved much higher accuracy by using past CS than the baseline models.

    As shown in Fig. 8, the proposed model, BERT + Context Inference, obtains substantially higher performance on answer prediction than the baseline methods, showing the strength of our model on that task. The performance of our model is affected only slightly by using different optimizers, as shown in Figs. 9 and 10 for the F1 and EM scores, respectively; the model sees no significant differences when using different optimization functions. Increasing the sequence length of the query and the maximum input sequence significantly improves the F1 scores, suggesting that a model that can take more than 512 tokens as input could achieve even better results. We also train the model using different model parameters. Experimental results from our models as well as the other state-of-the-art models are shown in Tab. 3, where the first model uses BERT-HAE, the second model uses HAM, the third model uses HAM on BERT-Large, and the rest of the models represent the proposed BERT model with context inference implemented across different model parameters.

    Figure 8: Each bar displays F1/EM-scores for the respective model

    As shown in Tab. 3, the proposed model using AdamW optimization achieved an F1 score of 83.63 after 10 epochs using default parameters, and it performed better than the BERT models with the same settings except for the AdamW optimization. In particular, on the MSDialog dataset, the model using AdamW optimization improves by 0.07% EM and 1.06% F1 over the FusedAdam models, and by 0.0% EM and 0.479% F1 over the BertAdam models. Leveraging the BERT-Large model makes multi-passage BERT even better on the MSDialog dataset.

    Figure 9: Exact match scores based on different optimizers at different epoch thresholds

    Figure 10: Accuracy of the model using different optimizers at different epoch thresholds

    Fig. 11 shows the results with different optimization functions. In all cases, inferring past conversation contextual data was effective; however, using different optimizers had little effect on accuracy. The results show that the best accuracy was obtained when using the AdamW optimizer. This model uses past conversation contextual representations trained over the question-and-answer pairs from previous CS. The new method captured the user's informational needs based on interactions from past CS and achieved reasonably high accuracy. Pre-trained BERT was trained with two input segments, which makes it suitable for this task. However, providing accurate answers is highly dependent on the richness of the knowledge base in terms of contextual data availability. Past contextual data in the knowledge base (MSDialog) can include context unrelated to the current question. Since this model infers contextual data based on similarities between the current question and past contextual data, the context with the highest score may not be related to the given question, which can decrease accuracy. It is therefore essential to update the knowledge base with new context from CS. Future work will address selecting only context related to the current question and using context from other knowledge bases for better performance.

    Figure 11: Comparison of the proposed model using different optimizers

    6 Conclusion and Future Work

    This paper introduced SSE and context inference, a method for determining user intent and providing answers from partial/limited user information based on past CS without engaging in multi-turn interactions with the system. The system can understand limited information from user questions, infer relevant contextual data from past conversations, and at the same time provide exact answer spans for questions from the same domain area. Modelling utterances as conversational context using BERT delivers a significant improvement in ConvQA over the existing model. Using BERT for contextual representations enhanced the performance of this model. When it comes to multi-turn interactions in CS, available datasets offer little precise contextual information for building good models; however, the MSDialog dataset provided the necessary multi-turn information-seeking conversations for the proposed work. This work should create more research interest in intent prediction and CSS, which are critical for simulating human-human interactions in CA.

    In the future, we plan to combine our history modeling methodology with a learned history selection algorithm for ConvQA.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
