
AMDnet: An Academic Misconduct Detection Method for Authors' Behaviors

Computers, Materials & Continua, 2022, Issue 6

Shihao Zhou, Ziyuan Xu, Jin Han, Xingming Sun and Yi Cao

1 Nanjing University of Information Science & Technology, Nanjing, 210044, China

2 Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing, 201144, China

3 Nanjing University (Suzhou) High and New Technology Research Institute, Suzhou, 215123, China

4 Jiangsu Union Technical Institute, Wuxi, 214145, China

5 Department of Electrical and Computer Engineering, University of Windsor, ON, N9B 3P4, Canada

Abstract: In recent years, academic misconduct has been frequently exposed by the media, with serious impacts on the academic community. Current research on academic misconduct focuses mainly on detecting plagiarism in article content through the application of character-based and non-text element detection techniques over the entirety of a manuscript. For the most part, these techniques can only detect cases of textual plagiarism, which means that potential culprits can easily avoid discovery through clever editing and alteration of text content. In this paper, we propose an academic misconduct detection method based on scholars' submission behaviors. The model can effectively capture an author's atypical behavioral patterns and operations. As such, it is able to detect various types of misconduct, thereby improving the accuracy of detection when combined with a text content analysis. The model learns by forming a dual network group that processes text features and user behavior features to detect potential academic misconduct. First, the effect of scholars' behavioral features on the model is considered and analyzed. Second, the Synthetic Minority Oversampling Technique (SMOTE) is applied to address the problem of imbalanced samples of positive and negative classes among contributing scholars. Finally, the text features of the papers are combined with the scholars' behavioral data to improve recognition precision. Experimental results on the imbalanced dataset demonstrate that our model achieves highly satisfactory accuracy and recall.

Keywords: Academic misconduct; neural network; imbalanced dataset

    1 Introduction

In the last few years, scientific research around the world has become increasingly open, and the resulting stream of excellent academic papers has contributed positively to social development. However, due to the low-risk, high-reward incentives of academic misconduct, an increasing number of violations of academic regulations are being exposed, attracting heightened attention from governments and the public. As cited in [1], the Office of Science and Technology Policy of the United States defines academic misconduct as "the fabrication, tampering with, or plagiarizing when recommending, conducting, or reporting research, or when seriously deviating from the accepted rules of the scientific community, excluding honest errors or discrepancies in data interpretation or assessment." This definition has been widely recognized and accepted by scholars. In May 2019, the China Press and Publication Administration issued the "Academic Publishing Standards: Definition of Academic Misconduct in Journals" [2]. This standard provides a detailed classification and definition of many different forms of academic misconduct, further regulating academic production activities.

Despite these regulations, many studies have found that academic misconduct is still on the rise. Since 1975, the number of papers withdrawn due to some kind of misconduct has increased nearly tenfold as a percentage of published articles. Grieneisen et al. [3] analyzed databases of multiple disciplines and collected 4,449 retracted articles across 4 disciplines. Their research confirmed that the trend of withdrawals is on the rise, with most withdrawals due to misconduct. Although retracted articles still account for a small proportion of the total number of publications, many researchers believe that the academic fraud exposed so far is only the tip of the iceberg. Therefore, how to prevent and detect academic misconduct in time has become an increasingly important topic.

Fig. 1 depicts the number of article retractions in different countries as counted on Retraction Watch from 2015 to 2020. It can be observed that although most of the 9 countries experienced a decline in the number of retracted manuscripts during this period, the overall number of withdrawn manuscripts is still on the rise. 2020 is a special year, as the outbreak of Corona Virus Disease 2019 (COVID-19) may be the main reason for the decline. While papers related to COVID-19 have increased, such as the telemedicine system proposed by Abdulkareem et al. [4] and the use of machine learning models to predict confirmed cases by Antor et al. [5], the overall trend in 2020 is still a decline.

Figure 1: Trends of paper withdrawals on Retraction Watch in different countries from 2015 to 2020

The current mainstream techniques of character-based or non-text element detection only consider the content of papers and can only detect cases of a single plagiarism type. At the same time, these methods cannot take into account dynamic features external to the manuscript. In this paper, academic misconduct is captured not only through paper content, but also through the user's external behavior. The model learns by forming a dual network group that processes text features and user behavior features to detect potential academic misconduct. This method is shown experimentally to improve accuracy and recall, and the model's performance is further improved after combining text features.

    In summary, this study proposes an academic misconduct detection network (AMDnet) which includes the following innovative features:

1) The academic misconduct detection method adopts a fusion model combining user behaviors and text content, in which the user behavior module captures abnormal operations and the text content analysis module calculates the probability of paper plagiarism. The proposed model overcomes the limitation that only a single plagiarism type can be detected.

2) For the first time, we use real submission data for testing, and filter a large amount of data to obtain effective learning features.

    3) The AMDnet framework learns the representation of cross-domain features by employing a network group.

The rest of this article is structured as follows: Section 2 reviews previous work on detecting academic misconduct; Section 3 presents the neural network method for detecting academic misconduct; Section 4 describes the dataset processing, the experimental schemes, and the experimental results; and Section 5 concludes the paper.

    2 Related Work

Analyzing data from the Retraction Watch Database, Wu et al. [6] found that papers retracted due to plagiarism accounted for the majority, reaching nearly 61%. Hence, research on plagiarism, one type of academic misconduct, has occupied the mainstream, while the discovery and detection of other types of academic misconduct have mainly relied on human experience.

As shown in Fig. 2, from a technical perspective, plagiarism detection techniques are classified into two main categories: external plagiarism detection and internal plagiarism detection. External plagiarism detection methods compare suspicious documents with a collection of documents assumed to be genuine (the reference collection), and retrieve all documents showing similarities that exceed a threshold as potential sources [7]. Generally speaking, the document collection used by this method is very large, so it is computationally unfeasible to compare the input article with all the documents in the collection. To improve detection efficiency, most external plagiarism detection methods are divided into two stages: candidate retrieval and detailed analysis [8]. The main task of the candidate retrieval stage is to retrieve documents that share content with the input so as to reduce the amount of calculation in the next stage. In the detailed analysis stage, a careful document comparison is performed to identify parts of the input document that are similar to the source document.

The concept of internal plagiarism detection was first proposed by Eissen et al. [9]. This method assumes that each author has their own writing style and uses this to identify text written by different authors. Internal plagiarism detection includes two tasks [10]: style vulnerability detection, which detects paragraphs with different styles, and author identification, which identifies the author of a document or paragraph. The main difference between internal and external plagiarism detection is that the internal method does not require any reference documents.

Figure 2: Plagiarism detection technology classification system

In the past few decades, scholars have proposed many specific techniques to realize external or internal plagiarism detection. Character-based detection is a frequently used technology and is most suitable for identifying copy-and-paste plagiarism. Grozea et al. [11] used 16-grams to match sequences of 16 consecutive tokens and detect similar content. Tschuggnall et al. [12] detect suspicious places in a text document by analyzing the grammar of the sentences. Elhadi et al. [13] use syntactic tags to represent text structure as the basis for analysis, in which documents containing the same tag features are used to identify the source of plagiarism. Semantic-based detection methods are currently of growing importance. These methods can find the relevance between words, between sentences, and between paragraphs, so as to detect the similarity of papers at the semantic level, thereby improving detection accuracy. AlSallal et al. [14] proposed a new weighting method and used Latent Semantic Analysis (LSA) as the style feature for internal plagiarism detection. Resnik et al. [15] used the WordNet model to calculate semantic similarity. Salvador et al. [16] improved the weighting process by using skip-grams, and then applied graph similarity measures to generate semantic similarity scores for documents.

All of the above detection and recognition methods are based on analyses of the text content. Foltynek et al. [17] refer to techniques that analyze non-text elements to identify academic misconduct as idea-based methods. This type of method is an important supplement and expansion of text-based analysis methods, and enriches the technical ideas for detecting various types of academic misconduct. Gipp et al. [18] proposed citation-based plagiarism detection and analyzed the citation patterns in academic literature, checking, for instance, whether the same citations appear in a similar order in two documents. Meuschke et al. [19] proposed a detection method based on mathematical expressions, and showed through experiments that mathematical expressions are effective features independent of the text. Acuna et al. [20] analyzed the graphic elements in the literature and used image similarity detection algorithms to find a large number of cases of image reuse and plagiarism.

Current research focuses mainly on the detection of plagiarism-type academic misconduct, while systematic research on other types of misconduct is insufficient. However, research has been carried out on detecting various abnormal user behaviors in social networks, such as building neural network models to detect deceptive comments [21] and detecting malicious social bots through graph networks [22]. Taking those approaches into account, this paper analyzes user information and behavior data in the Tech Science Press (TSP) online submission system, and finds that some users engage in repeated submissions, multiple submissions, and other problematic behaviors. Thus, we propose an academic misconduct user classification model. This research method provides a new idea for academic misconduct detection and further supplements idea-based methods.

    3 Method

The main purpose of this paper is to identify scholars who have a high probability of academic misconduct by building a neural network that can analyze scholars' non-text behavioral data. The classification model for academic misconduct has three main tasks: data preprocessing, in which behavioral data and text data are processed separately; data sampling, which resamples the imbalanced and unevenly distributed data to improve classification accuracy; and result output, where the final result is calculated by a multilayer perceptron. The flow chart of the model is shown in Fig. 3.

Figure 3: Flowchart of the author classification model for academic misconduct

    3.1 Data Sampling

The small proportion of true misconduct among all scholar users means that the data we use for experiments form an imbalanced dataset, reflected in a 1:13 ratio of positive to negative cases (see Section 4.1 below). The disparity between the positive and negative class sample sizes affects the performance of the classifier to some extent, so it is necessary to use resampling techniques to overcome this problem.

Random oversampling is a method of increasing the number of minority class samples. SMOTE [23] is an improved version of random oversampling. We use this technique to enrich the positive class samples instead of simply replicating them. The data processing flow is as follows.

    · Calculation of the K nearest neighbors of each minority sample

First, set the oversampling magnification to determine the number of synthesized samples. Then select a sample X arbitrarily from the positive sample set, calculate the distance from this sample to all other positive samples using the Euclidean distance, and sort the distances from smallest to largest to obtain the top K nearest neighbors of sample X. Here, K is a hyperparameter.

    · Linear interpolation

For each positive sample X, randomly select M samples from its K nearest neighbors, then interpolate on the line segment between X and each of the M selected neighbors; that is, a new sample is synthesized at a random point on the segment. The interpolation formula is as follows:

X_new = X + rand(0,1) × (X̃ − X)

Here, rand(0,1) denotes a value drawn uniformly at random from the interval (0,1), and X̃ denotes a neighbor selected at random from the K nearest neighbors of sample X.

    · Generation of new dataset

The positive samples generated in this way are not copies of the original samples but can be regarded as new samples that are similar to them in feature space. We merge the original positive samples with the synthesized samples to form a new positive class sample set. The ratio of positive to negative samples is thereby expanded from the original 1:13 to 1:1.
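As a minimal sketch of this oversampling step, the snippet below uses the SMOTE implementation from the imbalanced-learn package; the synthetic feature matrix, the labels, and the choice of k_neighbors=5 are illustrative assumptions rather than values prescribed by the paper.

```python
# Minimal SMOTE oversampling sketch (assumes imbalanced-learn is installed).
# X: behavioral feature matrix, y: labels (1 = misconduct, 0 = normal).
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(1400, 6))            # toy stand-in for the 6 behavioral features
y = np.array([1] * 100 + [0] * 1300)      # roughly the 1:13 imbalance described above

# k_neighbors corresponds to the hyperparameter K; sampling_strategy=1.0
# rebalances the classes to the 1:1 ratio used in the paper.
smote = SMOTE(k_neighbors=5, sampling_strategy=1.0, random_state=0)
X_resampled, y_resampled = smote.fit_resample(X, y)
print(np.bincount(y_resampled))           # both classes now have 1300 samples
```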

    3.2 Neural Network Method

The proposed model needs to process two different types of data, so a neural network model based on word vectors and a multilayer perceptron is used to build the whole framework. The initial word vector is converted by the Word2vec model into a low-dimensional vector for subsequent semantic analysis. The multilayer perceptron takes the computed text data and the preprocessed behavioral data as input and obtains the output vector through multiple layers of neural units. The overall structure of the model is shown in Fig. 4.

Figure 4: Neural network framework combining scholar behavior and text features

    3.2.1 Word Vector Model Module

In observing and analyzing the dataset, we found that some academic users modify the title and abstract of a paper to 'skin the paper', i.e., repackage it superficially, for the purpose of misconduct. The related findings are as follows:

1) Authors repeatedly submit similar papers to different journals or sections to obtain a higher probability of acceptance. Generally, these papers have previously been rejected for publication by editors.

    2) Authors sometimes submit a slightly revised paper to a different journal or section as a completely new paper in order to deceive editors.

In response to these phenomena, we decided to use the text data from the authors' submissions for analysis. By calculating the semantic similarity of the text data and analyzing the repetition ratio, similar papers can be identified quickly. The hierarchical structure of the word-vector-based model is shown in Fig. 5.

Figure 5: Hierarchy of the text semantic analysis module

First, paper titles and abstracts are stored in pairs, and the component words are normalized in the data preprocessing stage; they are then fed into the model as input. Subsequently, each word is weighted with Term Frequency-Inverse Document Frequency (TF-IDF) weights so that keywords in the text receive different levels of attention, which improves similarity matching accuracy. The weighted word vectors are then fed into the pre-trained Word2vec model, which maps each token into a low-dimensional word vector space, where it is represented by a new vector. In this space, the distance between words with similar semantics becomes shorter, whereas the distance between words with more distant relationships becomes longer, producing a natural clustering effect. Finally, we average the word embedding vectors of all tokens to obtain a representation of the entire text in vector space, and then use the cosine of the angle between two text vectors to measure the similarity between texts.
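The snippet below is a minimal sketch of this pipeline (an IDF-weighted average of Word2vec embeddings followed by cosine similarity), assuming gensim and scikit-learn; the tiny corpus, the vector size, and the helper functions text_vector and cosine are illustrative assumptions rather than the authors' exact implementation.

```python
# Sketch: IDF-weighted average of Word2vec embeddings + cosine similarity.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

titles = [
    "deep learning for plagiarism detection",
    "plagiarism detection with deep learning methods",
    "weather forecasting using numerical models",
]
tokenized = [t.split() for t in titles]

# Train a small Word2vec model (the paper uses a pre-trained model).
w2v = Word2Vec(tokenized, vector_size=50, min_count=1, seed=0)

# IDF weights (a simplification of the paper's TF-IDF weighting) give
# keywords more influence in the averaged text vector.
tfidf = TfidfVectorizer()
tfidf.fit(titles)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def text_vector(tokens):
    """Weighted average of word vectors; unknown words are skipped."""
    vecs = [w2v.wv[w] * idf.get(w, 1.0) for w in tokens if w in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

v0, v1, v2 = (text_vector(t) for t in tokenized)
print(cosine(v0, v1), cosine(v0, v2))  # the first pair should be more similar
```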

    3.2.2 Multilayer Perceptron Module

This paper introduces the innovation of using data on scholars' submission behavior to assess the probability of misconduct, and constructs a multilayer neural network that performs calculations on the two kinds of data to classify scholars. The processing in Section 3.2.1 produces data encoded into a computable form for the neural network. The two kinds of data are combined through the aggregation layer to generate the final input set, and the output vector is returned after the multilayer perceptron calculation. The structure of the neural network model incorporating multiple features is shown in Fig. 6.

Figure 6: Hierarchy of the neural network fused with multiple features

The multilayer perceptron network consists of three layers. The first layer is the input layer and is composed of six neural units. Each neuron processes a different feature, such as the number of submissions, the frequency of submissions, or title similarity. After testing, five neurons are set in the hidden layer: too many neurons raise the risk of overfitting, while too few lead to a decrease in prediction accuracy. To keep the weights updating continuously and to accelerate convergence, we use a Parametric Rectified Linear Unit (PReLU) [24] as the activation function in this layer. The specific formula is as follows:

PReLU(x) = x if x > 0, and PReLU(x) = a·x otherwise, where a is a learnable parameter.

For the binary classification problem, only one output unit needs to be set in the output layer of the neural network. The SoftMax activation function is then used to obtain the output vector.
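As a minimal sketch of this perceptron branch, assuming PyTorch: the layer sizes follow the description above, while the two-logit output with a softmax applied afterwards is an equivalent formulation of the single-output binary classifier, chosen so the sketch runs as written.

```python
# Sketch of the multilayer perceptron branch: 6 input features, a 5-unit
# hidden layer with PReLU, and a softmax-normalized output.
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(6, 5),   # input layer (6 features) -> hidden layer (5 neurons)
    nn.PReLU(),        # PReLU activation with a learnable negative slope
    nn.Linear(5, 2),   # output logits for the two classes
)

x = torch.randn(4, 6)                 # a toy batch of 4 scholars
probs = torch.softmax(mlp(x), dim=1)  # SoftMax gives class probabilities
print(probs)
```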

    4 Experiments

In this section, we present the details of the experiments and propose different experimental schemes based on the experimental data. The experimental results are then analyzed and discussed.

The relevant parameters of the model in this study were determined after many tests. Text lengths are measured in words: the length of the paper title may not exceed 50 words, the abstract may not exceed 500 words, and the keywords may not exceed 30 words. To facilitate calculations, the difference between paper numbers is used as a measure of the interval between paper submissions. The activation function of the hidden layer in the neural network is the PReLU function, and the activation function of the output layer is the SoftMax function. The model is trained with the Adam optimizer [25], with the learning rate set to 0.01, the batch size to 200, and the number of epochs to 100.
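A minimal training-loop sketch with these hyperparameters is shown below, again assuming PyTorch; the synthetic tensors and the cross-entropy loss are illustrative assumptions, since the paper does not specify the loss function or the data loading code.

```python
# Sketch of the training setup: Adam optimizer, learning rate 0.01,
# batch size 200, 100 epochs. Data and loss function are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

mlp = nn.Sequential(nn.Linear(6, 5), nn.PReLU(), nn.Linear(5, 2))

features = torch.randn(2600, 6)        # stand-in for the resampled feature matrix
labels = torch.randint(0, 2, (2600,))  # stand-in for the balanced 0/1 labels
loader = DataLoader(TensorDataset(features, labels), batch_size=200, shuffle=True)

optimizer = torch.optim.Adam(mlp.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()        # softmax is folded into this loss

for epoch in range(100):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(mlp(xb), yb)
        loss.backward()
        optimizer.step()
```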

    4.1 Dataset Introduction

The experimental data in this study come from the submission system used by TSP, a publisher with which we cooperate. While the publisher publishes a large number of academic papers every year, it has also encountered many malicious submissions, such as multiple submissions and duplicate submissions. We collected 25,238 items of behavioral data, basic information on scholars, and summary data of papers from 1,823 users between 2020 and 2021, including paper titles and submission behavior data.

Many records had little or no relevance to this experiment, either because of low correlation with misconduct or because their values were too evenly distributed (for instance, scholar username, scholar registration time, etc.). We therefore removed these fields from the data, kept only contributors who submitted more than twice, and constructed the dataset for this experiment after re-screening. As shown in Tab. 1, the scholar users are divided into positive and negative cases, with a ratio of positive to negative cases of about 1:13.
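A minimal sketch of this screening step is shown below, assuming the records are available as a pandas DataFrame; the file name and the column names (user_id, paper_id, username, register_time, label) are hypothetical placeholders, since the paper does not publish its data schema.

```python
# Sketch of the dataset screening: drop low-relevance fields and keep only
# contributors with more than two submissions. Column names are hypothetical.
import pandas as pd

records = pd.read_csv("tsp_submissions.csv")                    # hypothetical export
records = records.drop(columns=["username", "register_time"])   # low-relevance fields

counts = records.groupby("user_id")["paper_id"].count()
active_users = counts[counts > 2].index                         # submitted more than twice
dataset = records[records["user_id"].isin(active_users)]

print(dataset["label"].value_counts())                          # roughly 1:13 positive:negative
```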

    Table 1: Scholar submission dataset

    4.2 Evaluation Metrics

In order to test the performance of the model, we use accuracy, precision, and recall as the evaluation metrics, as expressed in the formulae below. False Positives (FP) denotes the number of true negative samples predicted as positive, True Positives (TP) denotes the number of true positive samples predicted as positive, True Negatives (TN) denotes the number of true negative samples predicted as negative, and False Negatives (FN) denotes the number of true positive samples predicted as negative:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

Due to the imbalance of the sample set, the Receiver Operating Characteristic (ROC) curve is used to reflect the sensitivity and accuracy of the model under different thresholds. The Area Under the Curve (AUC) [26] is the area under the ROC curve, which measures the generalization performance of the model and gives a more intuitive indication of classification effectiveness. The False Positive Rate (FPR), defined in Eq. (7), is used as the horizontal coordinate of the ROC curve, and the True Positive Rate (TPR), defined in Eq. (6), as the vertical coordinate; the AUC is calculated by the rank formula in Eq. (8):

TPR = TP / (TP + FN)    (6)

FPR = FP / (FP + TN)    (7)

AUC = (Σ_{s_i ∈ positive} rank_{s_i} − M(M + 1)/2) / (M × N)    (8)

In the above, M and N are the numbers of positive and negative samples respectively, s_i denotes the i-th sample, rank_{s_i} denotes the rank of the score obtained by the i-th sample, and s_i ∈ positive restricts the sum to the positive samples.
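A minimal sketch of Eq. (8) on toy data is shown below, cross-checked against scikit-learn's roc_auc_score; the labels and scores are illustrative values, not results from the paper.

```python
# Sketch: computing AUC with the rank formula above and cross-checking
# against scikit-learn. Scores and labels are toy values.
import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import roc_auc_score

labels = np.array([1, 1, 0, 0, 0, 1, 0, 0])
scores = np.array([0.9, 0.5, 0.4, 0.3, 0.6, 0.8, 0.2, 0.7])

M, N = labels.sum(), (1 - labels).sum()        # numbers of positive / negative samples
ranks = rankdata(scores)                       # rank of each sample's score (1 = lowest)
auc_rank = (ranks[labels == 1].sum() - M * (M + 1) / 2) / (M * N)

print(auc_rank, roc_auc_score(labels, scores))  # the two values agree
```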

    4.3 Experimental Results

    4.3.1 Experimental Scheme Using Behavioral Data

Here, we describe the innovative attempt to use scholar behavior data to analyze the probability of academic misconduct.

To better test the effect of the data on the classification results, we specified and limited the range of data values. An excessive number of submitted papers or an excessive submission frequency is treated as an indicator of abnormal tendencies; paper acceptance and rejection rates measure the scientific level of the authors; and the number of different journals or special issues to which papers are submitted is used as a feature to calculate the probability of duplicate and multiple submissions. In this experiment, 70% of the samples are used as the training set and the remaining 30% as the test set. The experimental data are shown in Tab. 2.

    Table 2: Scholar behavior data

Fig. 7 shows the results of the model evaluation when using scholar behavior data only. The results show that scholars with academic misconduct can be effectively detected by using behavioral data as features. The upper left panel shows the loss values during training and testing. As the epochs increase, the loss values gradually stabilize at around 0.38 and 0.3 respectively. The upper right panel evaluates the accuracy of the model. After 100 iterations, the accuracy on the test set is close to that on the training set, reaching about 84%. Because the dataset is imbalanced and contains far fewer positive samples than negative samples, accuracy alone is not a good reflection of the model's performance. The recall rate in the lower left panel measures the proportion of positive samples identified by the classifier; the average recall of the model reaches about 88%. The lower right panel shows the ROC curve, where the dashed black line represents a random classifier and the solid red curve represents the model. The AUC value of 0.9 shows that the model performs well.

Figure 7: Model performance using behavioral features as input (loss, accuracy, recall, and ROC curve)

    4.3.2 Experimental Scheme Combining Behavioral and Text Data

The text data used by mainstream plagiarism detection methods are now added to our behavioral data to test the combined effect of the two data types on the classification results. Here again, the text data length is restricted to filter noise, as shown in Tab. 3.

Table 3: Text data of authors' papers

To test whether detection performance can be further improved by adding text features, the experiment shown in Fig. 7 is repeated with text data added, and the results are shown in Fig. 8. In terms of loss, both the test set and the training set show better performance, indicating that the model is better trained. The accuracy improves considerably, from an original peak of less than 90% to a level that stabilizes at about 90%. On the other hand, the recall rate does not change significantly and remains basically at the original level, indicating that adding text data does not have a significant effect on recall. The AUC value of the ROC curve, however, is 0.925, larger than the 0.901 in Fig. 7, which shows that combining text features with behavioral features enables the model to perform better.


Figure 8: Model performance combining behavioral and text features (loss, accuracy, recall, and ROC curve)

The analysis and mining of text data can detect plagiarism effectively, but has difficulty finding other forms of academic misconduct. The experimental results here show that, by using website data on scholars' behavior in the submission process, we can detect multiple submissions and duplicate submissions effectively, while the identification rate can be further improved by adding text data features.

    5 Conclusion

With academic misconduct becoming a growing problem, it is far from sufficient to detect it merely by analyzing article contents. Many problems of academic misconduct are manifested in the behaviors of authors and related stakeholders. Just as He et al. [27] use abnormal trading behavior to predict outcomes, we integrate behavioral features to assist detection. This paper builds a neural network model that automatically updates the weights of behavioral features and classifies scholars according to the probability of behavioral misconduct. Experimental results prove that using author behaviors is an effective method for detecting problems such as multiple submissions and duplicate submissions. Moreover, the organic combination of behavioral data and text data brings a considerable improvement in the accuracy and recognition rate of the model, and indicates that the comprehensive analysis of different types of data can help improve the model's performance.

Funding Statement: This work is supported by the National Key R&D Program of China under grant 2018YFB1003205; by the National Natural Science Foundation of China under grants U1836208 and U1836110; by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund; and by the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) fund, China.

Conflicts of Interest: The authors declare no conflicts of interest regarding the present study.
