
    Detection of Behavioral Patterns Employing a Hybrid Approach of Computational Techniques

    Computers, Materials & Continua, 2022, Issue 7

    Rohit Raja, Chetan Swarup, Abhishek Kumar, Kamred Udham Singh, Teekam Singh, Dinesh Gupta, Neeraj Varshney and Swati Jain

    1Department of Information Technology, Guru Ghasidas Vishwavidyalaya, Bilaspur, 495009, India

    2Department of Basic Science, College of Science & Theoretical Studies, Saudi Electronic University, 13316, Saudi Arabia

    3Department of Computer Science & IT, JAIN (Deemed to be University), Bangalore, 560069, India

    4Computer Science and Information Science, Cheng Kung University, 621301, Taiwan

    5School of Computer Science, University of Petroleum and Energy Studies, Dehradun, 248007, India

    6Department of CSE, I K Gujral Punjab Technical University, Jalandhar, 144603, India

    7Department of Computer Engineering and Applications, GLA University, Mathura, 281406, India

    8Department of Computer Science, Government J Yoganandam Chhattisgarh College, Raipur, 492001, India

    Abstract: As far as the present state of detecting the behavioral pattern of humans (subjects) using morphological image processing is concerned, a considerable portion of the study has been conducted using frontal-vision data of human faces. The present research work uses side-vision human-face data to develop a theoretical framework via a hybrid analytical model approach. Here, hybridization combines an artificial neural network (ANN) with a genetic algorithm (GA). We studied the geometrical properties extracted from side-vision human-face data. An additional study was conducted to determine the ideal number of geometrical characteristics to pick while clustering. Minimum-distance measurements in the close vicinity of these clusters are mapped for proper classification and the decision process of the behavioral pattern. Support vector machines and artificial neural networks are utilized to identify the acquired data. A method known as adaptive unidirectional associative memory (AUTAM) was used to map one side of a human face to the other side of the same subject. The behavioral pattern has been detected based on a two-class classification problem, and the decision process has been carried out using a genetic algorithm with best-fit measurements. The algorithm developed in the present work has been tested on a dataset of 100 subjects and on standard databases such as FERET, Multi-PIE, the Yale Face database, RTR, and CASIA. The complexity measures have also been calculated under worst-case and best-case situations.

    Keywords: Adaptive-unidirectional-associative-memory technique; artificial neural network; genetic algorithm; hybrid approach

    1 Introduction

    Detecting the behavioral pattern of any subject (human) is a most challenging task, especially in the defense field. The current study examines this challenge using a side-view perspective of human-face data. According to the literature, only a few researchers have used side visions of human faces to identify behavioral traits. Most research has been conducted using frontal-vision data of human faces, either for face recognition or as a biometric characteristic assessment. Until now, very few studies have been carried out to detect behavioral patterns. Several significant improvements have been made in identifying human faces from the side (parallel to the picture plane), using a five-degree switching mechanism in a regressive or decreasing step method. Bouzas et al. [1] used a similar method for dimensional-space reduction, with the switching amount based on the mutual information required between the altered data and their associated class labels. Later on, [2] enhanced the performance by using descriptors to describe human-face pictures and a clustering algorithm to choose and classify variables for human-face recognition.

    Furthermore, Chatrath et al. [3] used facial emotion to enable interaction between people and robots by employing human-face front-vision data. Zhang et al. [4] also carried out some frontal-vision related work where the target is at a distance. Many researchers have suggested better regression-analysis classification techniques based on the frontal perspective of human-face data. Zhao et al. [5] showed that learning representations to predict the placement and shape of face images may boost emotion detection from human images. Similarly, Wang et al. [6,7] suggested a technique of interactive frontal processing and segmentation for human-face recognition. The literature analysis indicated that relatively few scholars carried out work to discover behavioral patterns from human-face data. Most of the earlier study relied on statistical methodologies and classic mathematical techniques, although some artificial neural network components and other statistical approaches have achieved significant and satisfactory results [8-10].

    A subsequent study was conducted to recognize the subject when the human face is aligned parallel to the picture plane, using a hybrid methodology [11,12]. The current research study was also conducted employing hybrid cloud computing.

    In the same year, algorithms were proposed for secured photography using a dual camera. This method helps to identify issues such as authentication, forgery detection, and ownership management. The algorithm was developed for Android phones with dual cameras for security purposes [13].

    Reference [14] introduced a fuzzy-logic-based facial expression recognition system that identifies seven basic facial expressions: happy, anger, sad, neutral, fear, surprise, and disgust. This type of system is used in the intelligent selection of areas in a facial expression recognition system. An algorithm was proposed for use in a video-based face recognition system; it can compare a still image with a video and match videos with videos. A three-stage approach is used to optimize the rank list across video frames for effective matching of videos and still images [15]. A method was also introduced for exploring facial asymmetry using optical flow: in terms of shape and texture, the human face is not bilaterally symmetric, and the attractiveness of human facial images can be increased by artificial reconstruction and facial beautification, with optical flow used to reconstruct the image according to the needed symmetry [16]. An effective, efficient, and robust method was proposed for face recognition based on image sets (FRIS), known as locally Grassmannian Discriminant Analysis (LGDA). A novel accelerated proximal-gradient-based learning algorithm is used to find the optimal set of local linear bases, and LGDA is combined with the linearity-constrained nearest neighborhood (LCNN) clustering technique to express the manifold by a collection of local linear models (LLMs).

    An algorithm was proposed to change the orientation of the subject's face from parallel (zero degrees) to the picture plane to diagonal (45 degrees) to the image plane. In this research, artificial neural networks (ANN) and genetic algorithms (GA) were used. The detailed research is divided into two parts: in the first part, features are obtained from the frontal face and a database is built; in the second part, a test face image with all feasible alignments is developed and a hybridized forward computing approach is performed for proper identification of the subject's face. A perfectly matched classification-decision procedure must be performed utilizing the datasets generated in the current research activity. Other datasets, such as FERET, were examined for an acceptable optimization method. An algorithm was designed to identify cognitive qualities and the subject's physiological attributes to support the biometric safety system, and specific case analyses must also be conducted to support it. Development has been carried out over a wide range of datasets, and a suitable comparison methodology has been analyzed [16]. These studies reveal various features with varying performance [17]. Another work was structured for biometric study: using a deep CNN with genetic segmentation, it proposes a method for autonomous detection and recognition of animals. Standard recognition methods such as SU, DS, MDF, LEGS, DRFI, MR, and GC are compared to the suggested work, and a database containing 100 different subjects, two classes, and ten photos is produced for training and examining the suggested task [18]. The CBIR algorithm examined visual image characteristics such as colour, texture, and shape; the non-visual aspects also play a key role in image recovery. The image is extracted using a neural network, which enables the computation to be improved, using the Corel dataset [19,20]. A further paper presents a new age-function modelling technique based on the fusion of local features: image normalization is initially performed, followed by a feature extraction process, and an Extreme Learning Machine (ELM) classifier is used to evaluate output pictures for the respective input images [21]. The proposed algorithm has a higher recall value and accuracy and a lower error rate than previous algorithms. A new 5-layer SegNet-based encoder enhances the accuracy on various dataset benchmarks; the detection rate was up to 97 percent and the runtime is reduced to one second per image [22].

    Modeling of Datasets

    In the present work, how the modeling of datasets is done is described briefly. The complete work has been carried out in two phases: the modeling phase and the understanding phase. In the first phase, a knowledge-based model called the RTR database model has been formed as a corpus over human-face images. The strategies applied for the formation of the corpus are the image warping technique (IWT) and an artificial neural network (ANN). The model has been formed after capturing the human-face image through a digital camera or through scanning the human-face image (Refer Appendix-B). The human images have also been collected from different standard databases (Refer Appendix-A). How the human-face images have been captured in the present work is depicted in Fig.1 below.

    Figure 1: Functional block diagram for capturing the human face image

    From Fig.1, a known human-face image is captured through hardware, i.e., a camera or a scanner. During the capture of an image, a feedback control mechanism is applied manually, and adjustments for two factors are made: resolution and distance. A fixed resolution has been kept while capturing a known human-face image, and the distance between face and camera has been fixed at 1 meter. The second factor has been further compensated by proper scaling and rectification of the image; this process is jointly called the image warping technique (IWT). After proper adjustment, the image is stored in a file in jpg (Joint Photographic Experts Group) format.
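    A minimal sketch of the scaling-and-rectification step described above, assuming OpenCV is available; the target resolution, file paths, and the affine correction used here are illustrative placeholders rather than the exact parameters of the original IWT, which is adjusted manually.

    import cv2
    import numpy as np

    def warp_and_store(src_path, dst_path, target_size=(256, 256)):
        # Illustrative image-warping step: rescale to a fixed resolution and
        # rectify the captured face image before storing it in jpg format.
        image = cv2.imread(src_path, cv2.IMREAD_GRAYSCALE)
        if image is None:
            raise FileNotFoundError(src_path)

        # Fixed resolution, as required during capture.
        resized = cv2.resize(image, target_size, interpolation=cv2.INTER_AREA)

        # Simple rectification placeholder: histogram equalisation plus an
        # identity affine warp standing in for the manual adjustment.
        rectified = cv2.equalizeHist(resized)
        affine = np.float32([[1, 0, 0], [0, 1, 0]])
        rectified = cv2.warpAffine(rectified, affine, target_size)

        cv2.imwrite(dst_path, rectified)  # stored with the .jpg extension
        return rectified

    # Example usage (paths are hypothetical):
    # face = warp_and_store("subject01_left.jpg", "corpus/subject01_left_iwt.jpg")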

    The objectives and highlights of the research work are represented by the following steps:

    • Enhanced and compressed image (human-face image) has to be obtained.

    • Segmentation of the face image has to be done.

    • Relevant features have to be extracted from the face image.

    • Modeling of face features using the artificial neural network (ANN) technique, wavelet transformation, fuzzy c-means and k-means clustering techniques, and forward-backward dynamic programming (a minimal clustering sketch follows this list).

    • Development of an algorithm for the formation of the above model.

    • Understanding of the above-framed model for automatic human face recognition (AHFR) using the genetic algorithm method and classification using fuzzy set rules or theory.

    • Development of an algorithm for the understanding of the human face model for AHFR.
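    As referenced in the modeling step above, a minimal sketch of clustering extracted geometric features with k-means; the feature values, number of clusters, and scikit-learn usage are illustrative assumptions, not the exact configuration used in this work.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical geometric features per face image:
    # [eye-to-nose distance, nose-to-chin distance, face width, jaw slope]
    features = np.array([
        [34.2, 51.0, 120.5, 0.42],
        [33.8, 50.1, 118.9, 0.40],
        [41.5, 60.3, 131.2, 0.55],
        [40.9, 59.7, 130.4, 0.53],
    ])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
    print("cluster labels:", kmeans.labels_)        # cluster assignment per image
    print("cluster centres:\n", kmeans.cluster_centers_)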

    The planned study comprises the following sections: Section 2 provides the solution methodology with mathematical formulations, Section 3 discusses the actual results and discussions, Section 4 finishes with concluding remarks and an expanded area of study, and Section 5 contains the references.

    2 Solution Methodology with a Mathematical Formulation


    The mathematical formulations of the current study and their practical execution are described in the succeeding subsections.

    As far as the present situation in the field of morphological image processing is concerned, a great deal of human-face identification research has been carried out using a 90-degree orientation of the subject in the imaging plane. The majority of that work was done using statistical methods and traditional mathematical approaches, although a number of soft-computing components and other statistical methods have yielded remarkable results. Little effort has been made to recognize the subject when the human face is parallel to the image plane using hybrid approaches of soft-computing technology. A soft-computing hybrid means a combination of soft-computing tools. The current research has also been done using advanced computing techniques: high-end technology involving the combination of soft computing and symbolic computing. Soft computing includes artificial neural networks, fuzzy set theory, and genetic algorithms. Symbolic computing works on a special type of data known as symbolic objects; such computing enables researchers to perform mathematical operations without numerical value calculation. Analytical symbolic calculations include differentiation, partial differentiation, definite and indefinite integration, and taking limits, as well as comprehensive transformations over symbolic objects with symbolic variables, symbolic numbers, symbolic expressions, and matrices.

    Very few previous contributions have been made using the neuro-genetic approach to recognize the human face of a side-view subject (parallel to the image plane). The mechanism was applied as a fully 2-D method of transformation with a five-degree switch in reduced-step or regressive strategies. The algorithm was developed for the recognition process in which the orientation of the subject's human face is moved from zero degrees to the diagonal position of the image plane (45 degrees). The techniques used are the artificial neural network (ANN), genetic algorithms (GA), and some useful computing concepts to identify a human face from the side view (parallel to the image plane). The research was done in two phases: the modelling phase and the understanding phase. In the first phase, only the frontal image of the human face is studied to extract relevant features for the development of a corpus called the human face model. In the second phase, a test face image with all possible orientations is captured, and the advanced computing approach is applied with high-end computing in order to properly recognize the identity of the subject. A correct matching-classification-decision process using the data sets created during the present research has been carried out. Other datasets such as FERET, CASIA, and so on were also tested for acceptable performance measures.

    Furthermore, the computation of polynomial complexities with the proper transmission capability has been studied with adequate justification, rather than spatial and time complexities, for improving the performance of secured systems and for the promotion of global cyber safety. An algorithm has been developed to support the overall identification of the subject's behavioral and physiological features in order to justify the biometric security system. A case-based study has also taken into account various datasets to justify the biometric safety system, and a proper comparison model with different characteristics and performance variations is shown. The comparative study and final observations were experimentally tested with at least 10 images of 100 subjects with different ages and updates. The complexity of the developed algorithms was calculated, and their performance measurements were compared with other databases such as FERET, Multi-PIE, Yale Face, and RTR. The corpus and algorithms developed in this work have been found to be satisfactory. The complete flow diagram of the work is shown in Fig.2.
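    The hybridization described above, with the GA driving the search and an ANN (or another classifier) scoring candidates, can be sketched as a feature-subset selection loop. The fitness function, population size, and mutation rate below are illustrative assumptions, not the paper's actual settings; a real system would train an ANN on the selected features and use its validation accuracy as the fitness.

    import random
    import numpy as np

    def ann_fitness(mask, X, y):
        # Placeholder fitness: score a feature subset with a simple class-separability
        # proxy so the sketch stays self-contained (a real ANN would be trained here).
        if mask.sum() == 0:
            return 0.0
        Xs = X[:, mask.astype(bool)]
        mu0, mu1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
        return float(np.linalg.norm(mu0 - mu1) / (Xs.std() + 1e-9))

    def genetic_feature_selection(X, y, n_features, generations=30, pop_size=20, p_mut=0.1):
        pop = [np.random.randint(0, 2, n_features) for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=lambda m: ann_fitness(m, X, y), reverse=True)
            parents = scored[: pop_size // 2]                  # best-fit selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_features)          # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flip = np.random.rand(n_features) < p_mut      # mutation
                child[flip] = 1 - child[flip]
                children.append(child)
            pop = parents + children
        return max(pop, key=lambda m: ann_fitness(m, X, y))    # best feature mask found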

    2.1 Behavioral Pattern Detection

    Let the side-vision human-face data be gathered under the situations stated below:

    • The subject is either standing or sitting idle

    • The outfit of the subject is actual

    • The subject is talking with someone face to face

    Let "ZLnFL1", "ZLnFL2", "ZLnFL3", "ZLnFL4", and "ZLnFL5" be the left-side-vision human-face data at five different time intervals with minimum time lag for the subject 'ZLn.' Similarly, let "ZRnFR1", "ZRnFR2", "ZRnFR3", "ZRnFR4", and "ZRnFR5" be the right-side-vision human-face data at five different time intervals with minimum time lag for the subject 'ZRn,' where 'n' is the subject number, with 1 ≤ n < ∞.

    Normally, the left-side-vision pattern of the human face is "ZLnFLm" and that for the right-side-vision of the human face is "ZRnFRm."

    So, the frontal-vision human-face data "ZRnVLRm" yields,

    (A) Clustering of geometrical features from left-side-vision of human-face data

    Clusters have even and odd elements. To distinguish the clustering of the left-side data, ZLnFLm, into even and odd components, consider 'FZT' training datasets, where 'F' represents human-face data and 'T' represents the total training datasets. The even and odd components are 'FZE' and 'FZO', respectively, for a left-side-vision 'L1'. Hence it yields,

    Figure 2: Complete flow diagram of work

    So, the total number of training images 'T' is the sum of the even training sample images 'E' and the odd training sample images 'O'. Applying mathematical linearity, the combined effect in Eq.(2) gives,

    Thus, the equation for the highly interconnected and poorly interconnected human-face data, ZLnFLm, becomes,

    where ρT is the convolution operator and the second factor is the linearity factor for the total training datasets,

    Now, for the even cluster We, let μe be the mean of the Ne even human-face samples, and for the odd cluster Wo, let μo be the mean of the No odd human-face samples; the overall sample mean is represented by μT,

    The deviation of the projected means on the odd and even training human-face sample images yields,

    Let LFin = {LF1, LF2, ..., LFn} and DFout = {DF1, DF2, ..., DFm} be the input left-side-vision vectors and the output codewords, respectively, both of maximum size. The test data sets BFtest_data_set = {BF1, BF2, ..., BFu} have been matched against the trained data sets AFtrained_data_set = {AF1, AF2, ..., AFq} under the linearity-index condition DFout = DFin,

    Mathematically the relation is,

    The previous metrics provide the closest representation of the human face's left-side image with its tightly crucial elements. Thus, the system's experience-and-understanding database analyzed each extracted feature in the data processing stream, performed C matching, and the best codeword was picked as the one with the minimum average distance. If the unknown vector is inaccessible to any known vector, this condition is considered an OOL (out-of-limit) issue. Attributing values to all database codewords has reduced the OOL issue. The highest vector values thus yield,

    In Eq.(10), CDIFF is the absolute difference over the cropped pattern, and it yields,

    Dividing Eq.(8) by Eq.(10) yields CCMR,

    (B) Clustering of geometrical features from right-side-vision of human-face data

    Similarly, to distinguish the clusters of right-side-vision human-face data, ZRnFRm, into even and odd components, consider 'FZT'. The mathematical formulations for this part follow Eqs.(2) through (11).
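    A minimal sketch of the even/odd split and the cluster means μe, μo, and μT described in part (A) above, under the assumption that the 'even' and 'odd' components simply correspond to alternate training samples; the feature matrix is a hypothetical stand-in for the extracted side-vision features.

    import numpy as np

    # Hypothetical left-side-vision feature vectors, one row per training image.
    FZT = np.random.default_rng(0).normal(size=(10, 4))   # 'T' = 10 training samples

    FZE = FZT[0::2]        # even-indexed components of the training set
    FZO = FZT[1::2]        # odd-indexed components of the training set

    mu_e = FZE.mean(axis=0)   # mean of the even cluster (N_e samples)
    mu_o = FZO.mean(axis=0)   # mean of the odd cluster  (N_o samples)
    mu_T = FZT.mean(axis=0)   # overall sample mean over all T samples

    print(mu_e, mu_o, mu_T)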

    2.2 Gradient or Slope of Human-Face Data with Strongly Connected Components

    Let the slope of the human-face pattern be x∇SCF, where SCF denotes the strongly connected features: "shape", "size", "effort", and "momentum", and the superscript 'x' is the number of strongly connected features. Let x∇LSCF and x∇RSCF be the slopes or gradients of the left-side-vision and right-side-vision human-face data, respectively,
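    A minimal sketch of computing these slopes, assuming the four strongly connected features are sampled at the five time intervals and that the gradient is taken with numpy over equally spaced samples; the feature values are hypothetical.

    import numpy as np

    # Hypothetical values of the four strongly connected features (SCF)
    # for one subject over the five left-side-vision time intervals.
    scf_left = np.array([
        [0.81, 0.82, 0.80, 0.83, 0.82],   # shape
        [1.10, 1.12, 1.11, 1.13, 1.12],   # size
        [0.40, 0.42, 0.45, 0.44, 0.46],   # effort
        [0.05, 0.06, 0.08, 0.07, 0.09],   # momentum
    ])

    slopes = np.gradient(scf_left, axis=1)   # slope of each feature across the intervals
    mean_slope = slopes.mean(axis=1)         # one gradient summary per SCF
    print(mean_slope)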

    3 Results and Discussions

    Practical investigations and discussions regarding identifying behavioral patterns have been conducted following the image pre-processing activities. The performance data were initially processed using a schematic diagram and standardized image processing methods. Signal processing has been achieved utilizing the discrete cosine transform technique, as it has been shown to work flawlessly for real-coded vector analysis. The segmentation procedure was then carried out by statistical analysis. Boundary detection is achieved using morphological methods from digital image processing, such as erosion and dilation. The image warping approach was employed by first picking the region of interest (ROI) and, within it, the objects of interest (OOI). Cropping the object and hence rectifying the image are performed to ensure that the system's efficiency does not suffer. Cropping the picture results in the storage of the cropped picture in a separate file and the extraction of crucial geometrical information. The clusters of these obtained traits are shown in Fig.3.
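    A minimal sketch of the pre-processing chain described above (discrete cosine transform, segmentation, erosion/dilation boundary detection, and ROI cropping), using OpenCV and SciPy; the low-frequency block size, kernel size, and thresholding scheme are illustrative assumptions rather than the paper's tuned settings.

    import cv2
    import numpy as np
    from scipy.fftpack import dct, idct

    def preprocess_face(gray):
        # Discrete cosine transform; keep only a low-frequency block for smoothing.
        coeffs = dct(dct(gray.astype(float), axis=0, norm="ortho"), axis=1, norm="ortho")
        coeffs[32:, :] = 0
        coeffs[:, 32:] = 0
        smoothed = idct(idct(coeffs, axis=1, norm="ortho"), axis=0, norm="ortho")
        smoothed = np.clip(smoothed, 0, 255).astype(np.uint8)

        # Segmentation, then boundary = dilation minus erosion.
        _, binary = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = np.ones((3, 3), np.uint8)
        boundary = cv2.dilate(binary, kernel) - cv2.erode(binary, kernel)

        # Crop the region of interest (ROI) around the segmented object of interest (OOI).
        ys, xs = np.nonzero(binary)
        roi = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1] if ys.size else gray
        return {"boundary": boundary, "roi": roi}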

    Figure 3: Clusters of features extracted from test data of human-face

    As seen in Fig.4, only a few factors exhibit uniformity.As a result, additional analysis was conducted by switching the data set with five degrees of freedom between regressive and advanced modes.The research was graphed and is shown in Fig.4.

    Figure 4: Comparison of the different testing frames with different switching patterns

    As seen in Fig.5, the pattern's typical behavior is very uniform, indicating that the curve's behavior is not highly diverse in origin. The test data set has been subjected further to descriptive statistics. From Fig.5, it has been observed that the clusters of the trained and test data sets of the human face are almost close to linearity. Further, the cumulative distribution of the above test data sets has been computed, as shown in Fig.5.

    Figure 5: Normal distribution of different data sets of human-face

    As seen in Fig.6, the boundary values of both the test and training data sets are exceptionally near to fitness. Therefore, the most acceptable test was done utilizing the genetic algorithm methodology, adjusted for the most vital measurements. If compatibility fails, another subject sample is taken. As a result, further analysis is performed to determine the better measures. This was accomplished by gradually or regressively switching human faces with a five- to ten-degree displacement. Subsequently, it was discovered that most variables follow the regular pattern of the corpus's training samples.

    Figure 6: Boundary of face code detection using unidirectional temporary associative memory(UTAM)

    As a consequence, best-fit measurements were chosen, and further segmentation and detection analyses utilizing the genetic algorithm soft-computing approach were done. Using face-code formation, one-to-one mappings were performed throughout this technique. Fig.7 shows the border for detecting the face-code using unidirectional temporary associative memory. Associative memories are neural network designs that store and retrieve correlated patterns from essential elements when prompted by associated matches; in other words, associative memory is the storage of related patterns in an acceptable form. Fig.7 shows the graphical behavioral pattern matching of the test data sets. "Over act" is defined as "abnormal behavior" while "normal act" is labeled as "normal behavior." Whenever the behavioral characteristic curve is missing, the behavior is as expected. When the behavioral characteristic curve has a large number of interruptions, it is considered under-act behavior; when the characteristic curve has a smaller number of disruptions, it is considered over-act behavior.
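    The one-to-one mapping described above, from a left-side face code to the corresponding right-side code of the same subject, can be sketched as a simple hetero-associative (Hebbian outer-product) memory. The bipolar coding and the recall rule below are common textbook choices and are assumptions for illustration, not the exact AUTAM/UTAM formulation used in this work.

    import numpy as np

    def train_associative_memory(left_codes, right_codes):
        # Hebbian outer-product weights mapping left-side codes to right-side codes.
        # Codes are bipolar vectors (+1/-1), one row per stored subject.
        return left_codes.T @ right_codes

    def recall(weights, left_code):
        # One-way (unidirectional) recall: left code in, right code out.
        return np.sign(left_code @ weights)

    # Example: two subjects with 4-bit left and right face codes (hypothetical).
    L = np.array([[1, -1, 1, -1],
                  [-1, -1, 1, 1]])
    R = np.array([[1, 1, -1, -1],
                  [-1, 1, 1, -1]])
    W = train_associative_memory(L, R)
    print(recall(W, L[0]))   # reproduces R[0] when the stored patterns are well separated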

    The whole behavioral detection system's performance is proportional to the size of the corpus or database. If the unknown pattern is not matched against any known pattern throughout the detection procedure, an OOC (out-of-corpus) error occurs. A substantial corpus has been generated in this study to prevent such issues, thereby resolving the OOC issue. Numerous common databases such as FERET, Multi-Pie, and the Yale face database were also examined in the current study for automated facial recognition. Tab.1 below illustrates the comparison.
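    A minimal sketch of the minimum-average-distance matching used during detection, with a simple out-of-corpus/out-of-limit rejection check; the Euclidean distance, the CDIFF-style absolute difference, and the rejection threshold are illustrative assumptions rather than the exact forms of Eqs.(8)-(11).

    import numpy as np

    def match_codeword(test_vec, codebook):
        # Pick the codeword with the minimum average distance to the test vector.
        dists = np.linalg.norm(codebook - test_vec, axis=1)    # distance to each codeword
        best = int(np.argmin(dists))

        c_match = dists[best]                                  # closest-match distance (Eq.(8) analogue)
        c_diff = np.abs(codebook[best] - test_vec).sum()       # absolute difference (Eq.(10) analogue)
        c_cmr = c_match / (c_diff + 1e-9)                      # ratio of Eq.(8) to Eq.(10)

        # Out-of-corpus / out-of-limit check: flag test vectors far from every stored codeword.
        ooc = bool(c_match > dists.mean() + 2 * dists.std())
        return best, c_cmr, ooc

    # Example usage with a hypothetical 3-codeword database:
    codebook = np.array([[1.0, 2.0, 3.0], [4.0, 4.5, 5.0], [9.0, 9.5, 10.0]])
    print(match_codeword(np.array([4.1, 4.4, 5.2]), codebook))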

    Figure 7:Applying a distinct subject’s face image to the overacting, regular act, and underactingmoods

    Table 1: Performance measure comparison for automatic face recognition using FERET, Multi-Pie,Yale face, RTR face database

    The obtained corpus (RTR database) was evaluated in combination with FERET, Multi-Pie, and the Yale face database, and the results were determined to be almost adequate. Fig.8 illustrates the comparison graphically.

    As seen in Fig.8, the corpus created in this research produces findings that are quite similar to those obtained with the FERET database. Additionally, it has been shown that after selecting two features, the behavioral detection system's performance increases, and the whole evaluation procedure stays positive with seven features picked at the highest difficulty levels. Additionally, Tab.2 illustrates the overall performance for behavioral detection when the maximum number of characteristics is included.

    Fig.9 represents the full behavioral detection system results graphically, with an average detection rate of 93 percent for the Normal behavioral pattern.

    Fig.10 illustrates the developed algorithms' behavioral performance metrics and their comparability to other algorithms that use human-face patterns. In addition to the findings and comments in the current study, Fig.10 depicts the general behavioral pattern for the training and testing datasets.

    Figure 8: Comparative study and analysis as per complexity measures

    Table 2: Performance measures of behavioral detection

    Figure 9: Graphical representations of performance measures

    Figure 10: Overall behavioral pattern of trained and test data sets

    As seen in Fig.11, when the appropriate set of attributes is found using the genetic algorithm methodology, the behavior of the training and test data sets shows a similar pattern. For the same dataset, the actual result was 93 percent recognition accuracy for usual behavior, as shown in Fig.11.

    The algorithm used to obtain and describe the given findings is presented here, along with its complexity.

    Figure 11: Outcome of normal behavioral pattern of the test data set

    Developed Algorithm HCBABD (Hybrid Computing based Automatic Behavior Detection):

    Algorithm 1: The developed algorithm, HCBABD (Hybrid Computing based Automatic Behavior Detection), is depicted below.

    Main program {starts}
    Step 1. GEN1: Initial input data1 streams X(o) = (X1, X2, ..., XM) and Y(o, p) = (Y1, Y2, ..., YN)
    Step 2. READ1: CorpusRTR (Max_Size Z); the counter q is set to 0
    Step 3. DO WHILE (b ≤ Z)
        GEN1: An input data1 Xr(b) & Yr(b)
        GENNEXT1: State input data1 X(b+1) and Y(b+1) for Xr(b) & Yr(b)
        COMPUTE1: WEIGHTn condition for Linearity Index, INPUTn == OUTPUTn
        COMPUTE1: Gradient_n or Slope_n for {geometrical feature: HFM}
        INR: Increment the size or length, b = b + 1, and the counter, q = q + 1
        FITNESS_TEST1: f(Xi) & f(Yi) of each data stream Xi & Yi
        MAPPING: AUTAM (left_side_shape, right_side_shape)
        IF (TRUE), THEN
            {BEHAVIOUR1: Display "Normal and Acceptable", placed in category "NOT BOL"}
        ELSE
            {BEHAVIOUR1: Display "Abnormal and Not Acceptable", placed in category "BOL", and Display "Data is OOL (Out-Of-Limit)"}
    Main program {ends}
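    A minimal, self-contained Python sketch of the HCBABD control flow above; the fitness test, the AUTAM mapping check, the corpus contents, and the thresholds are hypothetical stand-ins rather than the paper's tuned components.

    import numpy as np

    def fitness(x, y):
        # Placeholder best-fit measurement between a data stream and its reference.
        return float(1.0 / (1.0 + np.linalg.norm(x - y)))

    def autam_map(left_shape, right_shape, tol=0.25):
        # Placeholder AUTAM mapping check: left and right side shapes must agree.
        return bool(np.abs(left_shape - right_shape).mean() < tol)

    def hcbabd(corpus, fit_threshold=0.5):
        # Hybrid-computing based automatic behaviour detection over an RTR-style corpus.
        results = []
        for left, right in corpus:                                  # DO WHILE over the corpus
            grad_l, grad_r = np.gradient(left), np.gradient(right)  # slope of geometrical features
            ok_fit = fitness(grad_l, grad_r) >= fit_threshold       # FITNESS_TEST1
            ok_map = autam_map(left, right)                         # MAPPING: AUTAM
            if ok_fit and ok_map:
                results.append("Normal and Acceptable (NOT BOL)")
            else:
                results.append("Abnormal and Not Acceptable (BOL / OOL)")
        return results

    # Example usage with two hypothetical subjects (left-side and right-side feature vectors):
    corpus = [(np.array([0.8, 1.1, 0.4, 0.1]), np.array([0.8, 1.1, 0.4, 0.1])),
              (np.array([0.8, 1.1, 0.4, 0.1]), np.array([0.2, 0.9, 0.9, 0.6]))]
    print(hcbabd(corpus))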

    Complexity measures for the developed system: In the worst-case assumption, let 'p' denote the total number of training data. The complexity is proportional to the number of loop executions divided by the total number of events. In the worst-case scenario, the loop will execute 'p+4' times; thus, the worst-case complexity measure is '(p+4)/p'. Similarly, in the best case, the smallest number of features necessary for the mapping procedure is one, which reduces the execution time; thus, the best-case complexity measure is '(p+1)/p'. Current automatic emotion recognizers typically assign category labels to emotional states, such as "angry" or "sad," relying on signal processing and pattern recognition techniques. Efforts involving human emotion recognition have mostly relied on mapping cues such as speech acoustics (for example, energy and pitch) and/or facial expressions to some target emotion category or representation. The comparative analysis of algorithms is shown in Tab.3 and the time complexity in Tab.4.
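    For instance, with p = 100 training samples, the worst-case measure evaluates to (100 + 4)/100 = 1.04 and the best-case measure to (100 + 1)/100 = 1.01; both ratios approach 1 as p grows.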

    Table 3: Comparative analysis of some recent algorithms on each dataset


    Table 4: Comparison of time complexity and memory consumption for different datasets

    4 Conclusion and Further Work

    In this present study, two behavioral patterns, namely normal and aberrant, have been categorized. The categorization is based on four geometrical characteristics taken from the left- and right-side-vision human-face data. These characteristics were then grouped, and a correct mapping method was applied to make decisions about detecting behavioral patterns. The gradient of each of the human-face features extracted from the left- and right-side visions has been computed, and, when plotted, a uniformity index attribute is generated. The dispersion of the gradients has been calculated, providing either positive or negative values: for normal behavior the decision is positive, while for aberrant behavior it is negative. The efficiency of the proposed approach has been determined. In the worst-case scenario, the complexity of the suggested method is "(p+4)/p"; in the best-case scenario, it is "(p+1)/p," where 'p' is the total frequency of occurrence. The current work might be extended to incorporate identification and comprehension of human-brain signals and human-face-speech patterns to establish a trimodal biometric security system. The task might be broadened to include diagnosing various health concerns related to breathing, speaking, brain function, and heart function. Furthermore, this technology might be utilized to further the development of a global multi-modal biometric network security system.

    Acknowledgement:The authors extend their appreciation to the Saudi Electronic University for funding this research work.

    Funding Statement: The authors thank the Saudi Electronic University for financial support to complete the research.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
