
    BERT for Conversational Question Answering Systems Using Semantic Similarity Estimation

Computers, Materials & Continua, March 2022

Abdulaziz Al-Besher, Kailash Kumar, M. Sangeetha and Tinashe Butsa

1 College of Computing and Informatics, Saudi Electronic University, Riyadh 11673, Kingdom of Saudi Arabia

2 Department of Information Technology, SRM Institute of Science and Technology, Kattankulathur, India

3 Department of Information Technology, Harare Institute of Technology, Belvedere, Harare, Zimbabwe

Abstract: Most questions from users lack the context needed to thoroughly understand the problem at hand, making them impossible to answer directly. Semantic Similarity Estimation (SSE) relates a user's question to the context of previous Conversational Search Systems (CSS) sessions to provide answers without requesting further context from the user, which reduces the time needed to produce an answer. The proposed model enables the use of contextual data associated with previous Conversational Searches (CS). On receiving a question in a new conversational search, the model determines which past CS the question most closely refers to. The model then infers the past contextual data related to the given question and predicts an answer based on the inferred context, without engaging in multi-turn interactions or requesting additional data from the user. This model shows the ability to use the limited information in user queries for best-context inference, based on Closed-Domain CS and Bidirectional Encoder Representations from Transformers (BERT) for textual representations.

    Keywords: Semantic similarity estimation; conversational search; multi-turn interactions; context inference; BERT; user intent

    1 Introduction

Conversational search is one of the most critical areas in Natural Language Processing (NLP); hence, researchers aim to understand user intent in multi-turn conversations to simulate human-to-human interaction in Conversational Assistants (CA). Conversational search can be defined as an approach to finding information in a multi-turn conversation, and it has long been associated with information retrieval systems. The adoption of CA in Conversational Search Systems (CSS) is currently rising, which has attracted much attention from researchers. The most common framework for CA mainly focuses on Natural Language Understanding (NLU) [1] to design and develop systems that can better understand human language. The objective is to identify users' informational needs (user intent) from natural language by analysing textual information.

In any CSS, the critical task is understanding user intent from the utterances of that conversation [2]. Recent NLP models such as Bidirectional Encoder Representations from Transformers (BERT) [3], Robustly Optimized BERT Pretraining Approach (RoBERTa), Generalized Autoregressive Pretraining for Language Understanding (XLNet), and Generative Pretrained Transformer 2 (GPT-2) have been outperforming humans on competition datasets such as the Stanford Question Answering Dataset (SQUAD) and the General Language Understanding Evaluation (GLUE) benchmark. These advancements created much interest in conversational search regarding concepts such as identifying user intent from utterances and providing answers or solutions based on users' questions. Tasks involving questioning and answering in multi-turn environments using datasets like SQUAD are best solved using models like BERT. SSE and context inference allow the model to deal with partial, limited, or incomplete questions from users who do not know how to express their informational needs, as well as with questions that chatbot designers did not expect, by searching the whole knowledge base for an answer based on question similarity [4]. There is a strong belief that CSS and Conversational Question Answering (ConvQA) systems should provide helpful information even when given limited information by the user. Most CSS are not capable of understanding input with partial information or input spanning multiple turns. The ambiguous nature of user questions often requires additional information for clarification, which creates a challenge in ConvQA. The essential aspects of conversational search in question answering are determining user intent, NLU in multi-turn interactions, and CA.

Determining user intent is one of the key features of question answering in multi-turn conversations. In multi-turn interactions, the intent represented by the user's initial question determines the flow of the conversation and how the CA will process later utterances. Modelling multi-turn interactions between CA and users requires accurately identifying user intent [5]. A single conversational search session can consist of several utterances, with each utterance representing a different user intent. Determining user intent in such scenarios becomes a challenge when providing the most suitable answer to the user. Depending on the NLP task, there are many classifications of user intent. Some recent research treats user intent identification as the classification of statements in multi-turn contexts [6]; for example, the user's initial utterance (question) is classified as the Original Question (OQ), and utterances that represent additional data from the user are classified as Further Details. Other works define user intent as the intention referenced in an utterance; for example, the utterance "I would like to buy a laptop" denotes an intention to purchase [7]. For this research, user intent is the information the user intends to get as a response to a given question.

NLU refers to extracting meaning from natural language [8]. Input from users is not always straightforward. Most user input is non-factoid and will often trigger multi-turn interactions between the agent and the user. For example, the query "How do we upgrade to Windows 10?" does not contain all the information needed to provide the most appropriate answer and requires the user to supply additional information to get the most suitable response. Most often, CAs have to keep track of changes in user intent throughout the conversation, which is a challenging task, especially for conversations with several turns. Previous research addresses NLU challenges for single utterances by extracting user intent and classifying core features called slots from that single utterance using slot filling [9]. To understand the informational needs of the user from multi-turn conversations, more complex NLU techniques are required. A CA should understand the context of each utterance in the conversation, elicit relevant knowledge for continuous evaluation of the user's informational demand, and enhance previous answers to improve present ones. Modeling contextual representations from past conversations and inferring them for new questions based on a similarity algorithm will help determine user intent more accurately and quickly. It will, in turn, reduce the number of turns needed to understand the user's informational needs.

The potential of CA to simulate human conversations in their natural forms, such as text or speech, enhances Semantic Similarity Estimation (SSE). Simulating human conversations should allow question-answering chatbots to provide the most accurate answers to user questions [10]. This can be achieved by analyzing and identifying specific keywords and phrases from both text and speech. By focusing on conversation flow, the CA should analyze the contextual data of the conversation to learn the relationships between words and utterances when producing answers. Utterances within a conversation usually represent different intent types, and by analyzing these utterances, the CA will understand the user's intent. The CA must be trained on a large domain knowledge base for higher-quality language understanding in multi-turn conversations. Training the CA on a variety of conversations with different informational needs should improve its performance on question answering. CA design rests on several key aspects: mode of interaction, CA usage, modeling techniques used, and the knowledge base or domain. By considering these aspects, the CA determines the contextual conversation data used for identifying the user's informational needs through NLU [11]. Emulating how people look for information requires understanding the two types of domains related to CSS and ConvQA systems, namely Closed-Domain Systems (CDS) and Open-Domain Systems (ODS).

In CDS conversational search, questions are limited to predefined domains and domain knowledge (e.g., tech support questions based only on Microsoft products). CDS Conversational Search Systems find information based on context from a predefined domain since they are trained to answer domain-specific questions. Generally, however, CSS should answer a wide variety of user questions using contextual data from different domains. Since CDS Conversational Search Systems are limited to answering questions from specific domains, researchers also focus on search systems that can answer a broader range of user questions. ODS Conversational Search Systems can generalize, answering questions from different domains. ODS can use contexts from different knowledge bases to meet the user's informational needs and provide the most accurate answers. ODS can be helpful, especially when users do not know the particular domain to which their question is related. The main challenge of such systems is narrowing down the candidate context for question answering; arriving at the answer may constrain the time efficiency of the model in providing the answer to a user.

CSS features natural conversations with users. The generation of responses is mainly based on the level of confidence obtained from the context provided by users, and the sequence of dialogue contexts is considered when finding information. Interactions between users and CA can be divided into two classes, single-turn and multi-turn interactions, as illustrated in Fig. 1. Single-turn interactions provide answers based on the immediate user question (utterance) and do not require additional information to answer it (i.e., a single utterance just before the answer). Multi-turn interactions, on the other hand, generate a response based on multiple exchanges between the user and the system. Utilizing SSE for question-context mapping in CSS and ConvQA systems allows CAs to figure out the user's informational needs before recommending an answer. A typical CSS is one in which the user initiates the conversation with an intent-based question. The system asks for additional information through follow-up questions to understand the user's informational needs. When the system is confident enough, it suggests or retrieves the appropriate information for the user. The system thus retrieves the answer iteratively throughout the interaction process, where it takes more than two turns for the agent to understand the user's informational needs and generate the appropriate answer. This form of multi-turn interaction opens up new possibilities for CSS.

    Figure 1: User-agent interactions in CSS

SSE in CSS allows a system to understand user intent without engaging in multiple rounds of message exchanges. Instead of asking for additional clarification, the system infers conversational context for the user's question based on similarity computations. The model leverages the user's recent question and intent, looking into past conversational searches to provide an answer as a recommendation with minimal user input and incomplete details. Each question from the user relies on inferring context from a single past session to connect intent with the question. This work aims to understand user intent by utilizing BERT contextual representations for SSE to infer past conversation context onto the current question using limited information from the user. Given a recent question from the user, the system understands the user intent by computing the semantic similarity between the representation of the current question and the contextual representations of previous conversations, and then inferring detailed contextual information for the question at hand. User intent can be defined in many ways in the field of CS. This framework treats it as the user's intention to obtain information for a particular question. Predicting user intent stems from the need to understand the user's informational needs so as to provide the most accurate answer without additional user input. Like humans, CSS must learn to identify closely related or highly similar questions in order to refer to historical context based on question similarity. By referring to the historical conversational context, the system understands the user's informational needs without repeating the process of requesting additional information for clarification, especially for similar questions. This approach helps provide fast solutions with minimal user input. We utilize BERT for language representation and understanding because of its ability to capture long-term dependencies in large texts. BERT is a state-of-the-art NLP model for language representation from Google AI Language [12].

BERT achieves bidirectionality by pre-training on masked language modeling and next sentence prediction, making it suitable for obtaining the best contextual representations of each conversation for language understanding. BERT for ConvQA works and performs well on relatively long text, making it suitable for understanding multi-turn interactions in CSS. However, BERT for ConvQA is trained on SQUAD data, i.e., summary paragraphs and related questions, not on multi-turn dialogues. This model aims to adapt BERT for intent prediction and question answering in order to understand language in a multi-turn environment, which typically involves several turns. We conduct our experiments on predicting user intent in CSS using the MSDialog data [13]. The data contain interactive dialogues between users seeking technical support and answer providers. Most user questions are non-factoid and require further conversational interactions to build a solid understanding of the user's needs. The answer providers are, in this case, Microsoft staff and some experienced Microsoft product users (human agents). The user intent is the information the user seeks in a question related to Microsoft products.

    2 Related Works

Several CSS and NLU advancements have created new research interests in CSS and ConvQA systems over the years. Despite these advancements, understanding the nature of conversational search is still a difficult task. There remains a challenge in understanding the user's informational needs (user intent) in an interactive environment. One focus of ConvQA is to model change in user intent in multi-turn interactions. The intuition is based on handling conversation history between turns in a multi-turn environment. This is achieved by selecting a subset of past turns (previous answers) based on their level of importance using a rule-based method. The model then embeds past answers for ConvQA. Given a question q_t from the user, history modeling expects the agent to refer to the previous answer a_{t-1} for q_{t-1} to understand the informational needs of the recent question. The critical aspect of ConvQA is using history turns to understand the informational needs of the user. ConvQA performs history answer embedding for the recent question in a given conversational session to understand the user's informational needs. History answer embedding allows the model to understand the user's intent through conversation history modeling for a particular conversational session. Combining earlier answers with the recent question from the user enables the agent to determine user intent.

The ConvQA method is suitable for understanding intent based on previous utterances within a particular conversation session. However, this approach is associated with multi-turn interaction, which constrains the time complexity of the model when generating an answer. Furthermore, in a multi-turn setting, using the sequential order of question-answer sets to understand user intent may have a detrimental impact on CSS and ConvQA systems because user intent tends to change from one turn to the next. In such scenarios, understanding user intent for answer generation becomes difficult. For the same question from a different user, the ConvQA system may again go through several multi-turn interactions to understand user intent, making the process redundant, seeing that an answer for that same question was already generated in a previous session. The approach in this paper focuses on inferring past conversational context for the current user question based on the degree of similarity between the current question and the context of past CS. The contextual conversation data are modeled using BERT's next sentence prediction task. By inferring context from previous similar conversations for the current question, this model understands the user's informational needs and provides answers without requiring additional information. The approach performs conversational contextual data modeling, which indirectly deals with unexpected changes in user intent. Context modeling is performed based on the intent represented by the original question of conversation c_i. By focusing on utterances representing the same intent as that of the original question, this model infers the most accurate past conversational context for the question.

Existing approaches use a System Ask-User Respond (SAUR) paradigm for CS [14]. Naturally, people engage in multi-turn interactions when seeking information. SAUR aims to comprehend users' requirements by fetching answers based on user feedback. According to SAUR, systems that can dynamically ask appropriate questions can better understand user needs, which is one of the essential aims of CSS and ConvQA. SAUR integrates sequential modeling and attention via a multi-memory network architecture and a personalized version for CS and recommendation. This approach to CS and recommendation focuses on feature sets for the CS to manage, using user feedback to comprehend user needs. However, when a user asks a question practically identical to one asked historically, such a system rehashes the same process of asking clarifying questions to identify the user's informational needs rather than relating the user to related past conversations. Also, the user may ask follow-up questions that do not represent the same intent as the previous utterances, and this will start a new search altogether. ConvQA suggests that to understand the current informational needs of the user, the model should be able to handle the conversation history of the current conversational search session. The approach used in this system is capable of understanding user intent through context inference based on question similarity, and from the inferred context, it can determine or predict the user's informational needs.

Some approaches to SSE have used Reinforcement Learning (RL) in user chatbots, framing the SSE task as recommending relevant questions that users might be interested in [15,16]. The approach models SSE as a Markov decision process and implements RL to find the best recommendation policy. Unlike other existing techniques, which predict the list of items likely to be of interest to users by depending on the immediate benefit rather than the long-term benefit of each recommendation, this analysis reviews the inter-relationships between user dynamics and recommends questions using an N-step sequential decision process. The model suggests and adds a sensible question to the recommendation list at each turn. The model helps capture clicks and user satisfaction by re-ranking its top N-step recommendations based on user behaviour patterns and question popularity. The approach demonstrates the SSE task by generating better recommendations.

The approach using attentive history selection for question answering in CSS introduces a history attention mechanism that selects conversation histories based on attention weights to understand and answer the question, called "soft selection." For each turn in a conversation, different weights are allocated based on their usefulness in answering the current question. Applying attention weights to utterances within a conversation allows the model to capture the importance of history turns. Furthermore, the method incorporates position information to examine the importance of turn position in conversation history modeling. This work recognizes the need to learn to answer current questions based on past conversations so as to limit the interactive process to a single turn. Another related yet different approach is neural re-entry prediction combining context and user history. It uses a neural network framework that predicts re-entry into a given conversation by combining context and user history. The model learns meaningful representations from both the conversation context and user history to predict whether users will return to a conversation they once participated in. That work illustrates the importance of historical conversational context in understanding user utterances. Our approach focuses on utilizing BERT contextual representations for conversation context modeling and SSE. This model focuses on the conversational context of past conversations and the similarity of user questions, which are essential aspects for CSS and ConvQA.

    3 Proposed Methodology

    3.1 Problem Statement

Given a question q_i from the user, the task is to relate q_i with CS from past sessions to find and infer past conversational context c_i^k to q_i based on the highest semantic similarity score for question understanding, and then generate the answer a_i to q_i, where c_i^k is the i-th conversation consisting of k utterances after data modeling. Fig. 2 shows the system flow of this model approach.
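Concretely, the two steps can be written as a short sketch in LaTeX notation; here \mathcal{K} (the knowledge base of modeled past conversations) and \mathrm{sim} (the cosine similarity of Section 3.2.3) are labels we introduce for illustration:

```latex
% Step 1: infer the past conversational context most similar to q_i.
% Step 2: predict the answer span from the question-context pair.
c^{*} = \arg\max_{c_i^k \in \mathcal{K}} \, \mathrm{sim}\!\left(q_i, c_i^k\right),
\qquad
a_i = \mathrm{BERT}_{\mathrm{QA}}\!\left(q_i, c^{*}\right)
```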

    Figure 2: The flow of SSE, CSS, and Q&A

    3.2 Overview

    3.2.1 BERT Encoding for Intent Semantic Similarity Estimation and Question Answering

This approach utilizes the BERT model to encode the question q_i and the inferred conversation context c into contextualized text representations for SSE. BERT is a cutting-edge, pre-trained language model for NLU that employs transformers to learn deep bidirectional representations. Given a training instance (q_i, c), the question and the conversation context are paired into a single sequence. The input sequences are fed into the BERT encoder, and BERT generates contextualized representations for each sequence based on the token, segment, and position embeddings. BERT is well suited to understanding the given textual information and deriving answers from the text. To understand textual information for question answering, BERT was trained on the Stanford Question Answering Dataset (SQUAD), consisting of questions paired with a span of text from that particular textual data. The BERT model for SSE and ConvQA converts the MSDialog data structure to that of the SQUAD data. Utterances from each conversation are treated as contextual information for that particular conversation. The contextual information provides the BERT model with the features needed to understand the context-related question. The BERT model for ConvQA is limited to sequences of at most 512 tokens, which makes the long sequential data from multi-turn interactions a challenge: when the sequential data exceed 512 tokens per sequence, understanding the data becomes difficult.
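As an illustration of this pair encoding, here is a minimal sketch using the Hugging Face transformers API; the model name, example strings, and variable names are our assumptions, not the paper's exact code:

```python
from transformers import BertModel, BertTokenizer

# Uncased pre-trained BERT, matching the variant described in Section 4.2.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

question = "Microsoft edge is not responding."  # q_i (example from Fig. 6)
context = "Try repairing Edge from the Apps and Features settings ..."  # inferred context c

# Pack (q_i, c) into one paired sequence; BERT caps input at 512 tokens,
# so only the context (second segment) is truncated when the pair is too long.
inputs = tokenizer(question, context,
                   truncation="only_second", max_length=512,
                   return_tensors="pt")

# Contextualized representation per token, built from the token,
# segment, and position embeddings.
outputs = model(**inputs)
token_reps = outputs.last_hidden_state  # shape: (1, sequence_length, hidden_size)
```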

    3.2.2 Semantic Similarity Estimation in the Conversational Question Answering Framework

The system presents a modularized design framework for SSE in ConvQA, shown abstractly in Fig. 3. The framework mainly focuses on three key components: SSE (for determining user intent), context inference, and answer prediction or generation. Given a training instance (q_i, a_i), the SSE module chooses the conversation context c_i^k that is semantically most similar to the given question q_i. The selected context is passed to the model, which then learns the start and end vectors of the answer span from the inferred conversational context. This is based on the intuition that highly similar questions often go through the same context to understand the user's informational needs. The conversational contextual data modeling and SSE model implementations are introduced in the following sections. In this research, the model employs conversational context inference as its primary method, in which the most relevant conversational context based on semantic similarity is inferred for the user's current question for intent prediction. It is based on the intuition that similar questions often result in the same answer or solution; instead of asking the user the same clarifying questions, this process minimizes user input and infers past related conversational context for the current question.

    Figure 3: Framework for BERT Model

    3.2.3 Semantic Similarity Estimation and Context Inference

Given a question from the user, the model performs SSE by comparing the similarity of the question with the contextual representations of each past CS in the knowledge base. By converting text (the question and the contextual data) into Term Frequency-Inverse Document Frequency (TF-IDF) vectors, we compute the cosine similarity between the question utterance and each conversation's contextual data in the knowledge base, as shown in Fig. 4. Similarity determines how close or related the given question q_i is to each conversation context c_i in the knowledge base in terms of meaning or context. The question is represented in vector form, whereas the contextual data are represented in matrix form, with each term weighted as tf-idf(t, c) = tf(t, c) × idf(t). The cosine similarity of the question and the conversation contextual data ranges from 0 to 1, where a score of 1 means that the two vectors are highly similar. Eq. (1) gives the cosine similarity used by this context inference module for two non-zero vectors:

cos(q_i, c_i) = (q_i · c_i) / (‖q_i‖ ‖c_i‖)    (1)

    SSE and context inference are based on the cosine similarity between the question TF-IDF vector and the conversational context TF-IDF features.This similarity function allows the model to rank the conversation contexts in the knowledge base and infer the context with the highest score to the question posed by the user.After SSE (selecting the most relevant conversation contextual data), the model infers that particular data to the question utterance and sends them to the question-answering module.
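A minimal sketch of this SSE and inference step with scikit-learn follows; the toy knowledge base and the function name infer_context are illustrative assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def infer_context(question, contexts):
    """Rank past conversation contexts by cosine similarity and return the best one."""
    vectorizer = TfidfVectorizer()
    context_matrix = vectorizer.fit_transform(contexts)  # one TF-IDF row per conversation
    question_vec = vectorizer.transform([question])      # TF-IDF vector for q_i
    scores = cosine_similarity(question_vec, context_matrix)[0]  # Eq. (1), values in [0, 1]
    best = scores.argmax()
    return contexts[best], float(scores[best])

# Example: rank a toy knowledge base against an incoming question.
kb = [
    "User: Edge keeps freezing after the update. Agent: Repair the browser ...",
    "User: My Excel chart type is greyed out. Agent: Aggregated data cannot ...",
]
context, score = infer_context("Microsoft edge is not responding.", kb)
```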

    Figure 4: SSE and conversation context inference module

    3.2.4 Question Answering

An essential aspect of finding the answer to the given question lies in inferring the most relevant conversational context and using that context to determine user intent. From the inferred contextual data, the model can then predict the answer text span. Given a user question q_i and the inferred conversational context c_i^k, a span-based approach is used to find the answer. For the training model, the input is the original question of conversation c_i together with the modeled utterances of the same conversation as context, and the output is the probability of context tokens being the start or end tokens of the answer span. The model finds the most probable answer from the inferred contextual data by computing the START/END probabilities of the answers. For each item of contextual data in the knowledge base, the likelihood of being a START/END token of the answer span is computed with a softmax function with respect to the given question. Given a word i and its hidden vector T_i, the likelihood of the word being the START/END of the answer span is computed as in Eq. (2):

P_i = e^{v·T_i} / Σ_j e^{v·T_j},    (2)

where v is the learned START or END vector, instantiated in Eqs. (3) and (4).

The task is to predict the answer using the inferred context. As shown in Fig. 5, in the answer prediction task, the model represents the input question and inferred context as a single paired sequence, with the user's current question using the Q embedding and the inferred context using the C embedding. The model represents the final hidden vector for input token i as T_i ∈ R^H and introduces a start vector S ∈ R^H and an end vector E ∈ R^H. The dot product between T_i and S is used to calculate the probability of word i being the start of the answer text span, followed by a softmax over all of the words in the inferred context, Eq. (3):

P_i^start = e^{S·T_i} / Σ_j e^{S·T_j}    (3)

The same formula is used for computing the end of the answer text span, Eq. (4):

P_i^end = e^{E·T_i} / Σ_j e^{E·T_j}    (4)

The score of a candidate span from position i to position j is defined as Eq. (5):

score(i, j) = S·T_i + E·T_j    (5)

and for prediction, this model uses the maximum-scoring text span where j ≥ i. The training objective is the sum of the log-likelihoods of the correct START/END positions. In the model architecture illustrated in Fig. 5, the question and the conversational context are packed into a single sequence and fed to the model, and a representation is generated for each token from the token, segment, and position embeddings. Next, vector representations for the START/END positions are learned and used to compute the answer span for the given question. The loss is computed as the average of the cross-entropy losses for the START/END positions. The model then produces an interactive interface showing output for the given question based on past conversations.
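The span selection of Eqs. (3)-(5) can be sketched as follows; T is assumed to be the matrix of final hidden vectors for the inferred context tokens, S and E the learned start/end vectors, and the 30-token answer cap is an illustrative choice, not a value from the paper:

```python
import torch

def best_answer_span(T, S, E, max_answer_len=30):
    """T: (L, H) token vectors; S, E: (H,) start/end vectors. Returns (i, j, score)."""
    start_logits = T @ S  # S . T_i for every token i
    end_logits = T @ E    # E . T_j for every token j
    # Softmax over all context tokens gives Eqs. (3) and (4); since softmax is
    # monotone, the raw logits can be used directly for ranking candidate spans.
    start_probs = torch.softmax(start_logits, dim=0)
    end_probs = torch.softmax(end_logits, dim=0)
    # Maximize score(i, j) = S.T_i + E.T_j subject to j >= i (Eq. (5)).
    best = (0, 0, float("-inf"))
    for i in range(T.size(0)):
        for j in range(i, min(i + max_answer_len, T.size(0))):
            score = (start_logits[i] + end_logits[j]).item()
            if score > best[2]:
                best = (i, j, score)
    return best  # (start_index, end_index, span_score)
```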

    Figure 5: ConvQA using conversation context inference

Fig. 6 shows the question "Microsoft edge is not responding." The model successfully inferred relevant conversation context for the question and predicted the answer based on the inferred context. Only a single turn was needed to answer the question.

    Figure 6: Output showing the ConvQA based on past conversations

    4 Experiments

This section first describes the MSDialog dataset and how it applies to the research problem, then describes the experimentation approach for SSE, and lastly presents the evaluation results.

    4.1 Data Set

We conduct the model experiment on ConvQA based on SSE using the MSDialog dataset. This dataset contains interactive dialogues between users seeking technical support and answer providers. The answer providers are, in this case, Microsoft staff and some experienced Microsoft product users. The user intent is the information the user seeks in a question related to Microsoft products. The dataset consists of 35,000 technical support conversational interactions, with over 2,000 dialogues selected for user intent annotation. Each dialogue comprises at least 2 to 3 turns, 2 participants, and 1 correct answer. Tabs. 1 and 2 give the description and statistics of the dataset, respectively.

    4.2 Simulation

The model uses PyTorch as the Deep Learning framework (https://pytorch.org), together with the uncased pre-trained BERT model and the PyTorch-transformers package from Hugging Face (https://github.com/huggingface/pytorch-transformers), which includes utilities and scripts for performing different NLP tasks such as ConvQA. The pre-trained BERT model comes with its own vocabulary of words; therefore, extracting words from the current dataset is unnecessary. BERT comes in uncased and cased variants, and for this work we use the uncased model, which is not case sensitive. The model also uses ConvQA annotation for defining the answer spans from the conversation contexts. The list of conversations from the MSDialog dataset is split into training and validation sets. We try different optimizers to find the model with the best performance (e.g., the BertAdam optimizer) and apply early stopping based on the validation set. The model applies gradient clipping using a max norm of 1.0. The batch size for the training process is 2. For all models, the maximum length of the input text sequence is set to 384 tokens, the maximum answer length is set to 512 tokens, the document stride is set to 128, and the maximum sequence length is set to 512 tokens per sequence. The learning rate of the model is set to 2 × 10^-5. Checkpoints are taken at every iteration step and tested on the validation set.
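A minimal training-step sketch mirroring these settings follows; the batch layout and the use of torch.optim.AdamW in place of the legacy BertAdam are our assumptions:

```python
import torch
from transformers import BertForQuestionAnswering

model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # learning rate from Section 4.2

def train_step(batch):
    """One optimization step over a batch of packed (question, context) sequences."""
    outputs = model(
        input_ids=batch["input_ids"],            # packed question/context pairs
        attention_mask=batch["attention_mask"],
        token_type_ids=batch["token_type_ids"],  # segment ids: question vs. context
        start_positions=batch["start_positions"],
        end_positions=batch["end_positions"],
    )
    loss = outputs.loss  # average cross-entropy over the START/END positions
    loss.backward()
    # Gradient clipping with max norm 1.0, as reported above.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```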

    Table 1: MSDialog data description and classification

    Table 2: MSDialog data statistics

    4.3 Evaluation Metrics

The evaluation of this model is based on two metrics: Exact Match and F1 scores. The Exact Match calculation is a binary measure: it checks whether the answer from this model exactly matches the answer from the validation set. The F1-score, on the other hand, is less strict; it computes the average overlap between the BERT model's response and the answer from the validation set. This score is taken as the harmonic mean of the precision and recall of the answers, where precision is the ratio of words in the model answer that also appear in the ground truth answer, and recall is the ratio of words in the ground truth answer that also appear in the model answer. For example, if the actual answer is "You cannot use that chart type if the data is already aggregated" and this model predicted "You cannot use that chart type," this would have high precision but lower recall; if it predicted "You cannot use that chart type if the data is already aggregated in Excel," this would have high recall but lower precision. This example also shows why F1 scores are necessary, as answers can be presented in more than one way. Both answers would be allocated an exact match score of 0 despite their overlap with the ground truth answer, even though the predicted answer spans are largely correct; hence this work focused more on improving the F1-score of the model than the exact match.
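The two metrics can be computed as in the following sketch; the whitespace tokenization and simple lowercasing are simplifying assumptions (SQUAD-style evaluation additionally strips punctuation and articles):

```python
from collections import Counter

def exact_match(prediction, truth):
    """Binary: 1 only when the normalized strings match exactly."""
    return int(prediction.strip().lower() == truth.strip().lower())

def f1(prediction, truth):
    """Harmonic mean of word-overlap precision and recall."""
    pred_tokens = prediction.lower().split()
    truth_tokens = truth.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(truth_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)  # fraction of predicted words that are correct
    recall = overlap / len(truth_tokens)    # fraction of ground-truth words recovered
    return 2 * precision * recall / (precision + recall)

truth = "You cannot use that chart type if the data is already aggregated"
print(exact_match("You cannot use that chart type", truth))  # 0
print(f1("You cannot use that chart type", truth))           # high precision, lower recall
```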

    4.4 Baselines

In this section, we compare evaluation metrics against previous work as baseline models. In addition to analyzing baseline performance, we analyze the performance of the proposed model on the MSDialog dataset. We consider several models with different parameters as baselines for conversational question answering using SSE. The methods used for comparison are described as follows:

• BERT with History Answer Embedding (HAE): embeds history answers for ConvQA

• Attentive History Selection for ConvQA: uses a History Attention Mechanism (HAM) for ConvQA

• HAM using BERT-Large

• BERT + Context Inference: the proposed model, which implements ConvQA with BERT and predicts the user's intent by inferring contextual data from past CS for intent determination and answer finding

• BERT + BertAdam

• BERT + AdamW

• BERT + FusedAdam

    5 Result and Discussion

Several experiments were conducted on the MSDialog dataset for ConvQA based on SSE using different parameters. The experiments compared the Exact Match and F1 scores of the proposed methods and the baseline models. Tab. 3 shows the results of the experiments.

    Table 3: Each row displays validation/test scores for the respective model

    Figure 7: Each row displays validation/test scores for the respective model

The BERT model is evaluated on two metrics, Exact Match (EM) and F1 scores, and compared to the baseline models. Fig. 7 shows that HAM (BERT-Large) brings slightly higher performance than HAM and BERT + HAE, achieving the best results among the baselines. This suggests that history attention methods are essential for conversation history modeling and answer embedding. Furthermore, our proposed model achieved much higher accuracy by using past CS than the baseline models.

As shown in Fig. 8, the proposed model, BERT + Context Inference, obtains substantially higher performance on answer prediction than the baseline methods, showing the strength of our model on that task. The performance of our model is affected only slightly by using different optimizers, as shown in Figs. 9 and 10 for the F1 and EM scores, respectively; the model sees no significant differences between optimization functions. Increasing the sequence length of the query and the maximum input sequence significantly improves the F1 scores, suggesting that a model that can take more than 512 tokens as input could achieve even better results. We also train the model using different model parameters. Experimental results from our models as well as the other state-of-the-art models are shown in Tab. 3, where the first model uses BERT-HAE, the second model uses HAM, the third model uses HAM on BERT-Large, and the remaining models represent the proposed BERT model with context inference across different model parameters.

    Figure 8: Each bar displays F1/EM-scores for the respective model

As shown in Tab. 3, the proposed model using AdamW optimization achieved an F1-score of 83.63 after 10 epochs with default parameters, and it performed better than BERT models with the same settings except for the AdamW optimization. In particular, on the MSDialog dataset, the model using AdamW optimization improves by 0.07% EM and 1.06% F1 over the FusedAdam models, and by 0.0% EM and 0.479% F1 over the BertAdam models. Leveraging the BERT-Large model makes multi-passage BERT even better on the MSDialog dataset.

    Figure 9: Exact match scores based on different optimizers at different epoch thresholds

    Figure 10: Accuracy of the model using different optimizers at different epoch thresholds

Fig. 11 shows the results with different optimization functions. In all cases, inferring past conversation contextual data was effective. Using different optimizers had little effect on accuracy; the best accuracy was obtained with the AdamW optimizer. This model uses past conversation contextual representations trained over the question-and-answer pairs from previous CS. The new method captured the user's informational needs based on interactions from past CS and achieved reasonably high accuracy. Pre-trained BERT was trained with two input segments, which makes it suitable for this task. However, providing accurate answers is highly dependent on the richness of the knowledge base in terms of contextual data availability. Past contextual data in the knowledge base (MSDialog) can include context unrelated to the current question. Since this model infers contextual data based on similarities between the current question and past contextual data, the context with the highest score may not be related to the given question, which can decrease accuracy. Notably, it is essential to update the knowledge base with new context from CS. Future work will look at selecting only context related to the current question and using context from other knowledge bases for better performance.

    Figure 11: Comparison of the proposed model using different optimizers

    6 Conclusion and Future Work

This paper introduced SSE and context inference, a method for determining user intent and providing answers from partial or limited user information based on past CS, without engaging in multi-turn interactions. This system can understand limited information from user questions, infer relevant contextual data from past conversations, and at the same time provide exact answer spans for questions in the same domain area. Modelling utterances as conversational context using BERT delivers a significant improvement in ConvQA over the existing models. Using BERT for contextual representations enhanced the performance of this model. When it comes to multi-turn interactions in CS, available datasets lack precise contextual information for building good models; however, the MSDialog dataset provided the multi-turn information-seeking conversations necessary for the proposed work. This work should create more research interest in intent prediction and CSS, which are critical for simulating human-human interactions in CA.

In the future, we plan to combine our context inference methodology with a learned history selection algorithm for ConvQA.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
