
    Detection of Behavioral Patterns Employing a Hybrid Approach of Computational Techniques

    Computers, Materials & Continua, 2022, Issue 7

    Rohit Raja, Chetan Swarup, Abhishek Kumar, Kamred Udham Singh, Teekam Singh, Dinesh Gupta, Neeraj Varshney and Swati Jain

    1Department of Information Technology, Guru Ghasidas Vishwavidyalaya, Bilaspur, 495009, India

    2Department of Basic Science, College of Science & Theoretical Studies, Saudi Electronic University, 13316, Saudi Arabia

    3Department of Computer Science & IT, JAIN (Deemed to be University), Bangalore, 560069, India

    4Computer Science and Information Science, Cheng Kung University, 621301, Taiwan

    5School of Computer Science, University of Petroleum and Energy Studies, Dehradun, 248007, India

    6Department of CSE, I K Gujral Punjab Technical University, Jalandhar, 144603, India

    7Department of Computer Engineering and Applications, GLA University, Mathura, 281406, India

    8Department of Computer Science, Government J Yoganandam Chhattisgarh College, Raipur, 492001, India

    Abstract: As far as the present state of detecting the behavioral pattern of humans (subjects) using morphological image processing is concerned, a considerable portion of the existing work has been conducted using frontal-vision data of human faces. The present research work has used side-vision human-face data to develop a theoretical framework via a hybrid analytical model approach. Here, hybridization combines an artificial neural network (ANN) with a genetic algorithm (GA). We studied the geometrical properties extracted from side-vision human-face data, and an additional study was conducted to determine the ideal number of geometrical characteristics to pick while clustering. Minimum-distance measurements in the close vicinity of these clusters are computed and mapped for proper classification and the decision process of the behavioral pattern. Support vector machines and artificial neural networks are utilized to identify the acquired data. A method known as adaptive unidirectional associative memory (AUTAM) was used to map one side of a human face to the other side of the same subject. The behavioral pattern has been detected based on a two-class classification problem, and the decision process has been carried out using a genetic algorithm with best-fit measurements. The algorithm developed in the present work has been tested on a dataset of 100 subjects as well as on standard databases such as FERET, Multi-PIE, the Yale Face database, RTR, and CASIA. The complexity measures have also been calculated under worst-case and best-case situations.

    Keywords: Adaptive-unidirectional-associative-memory technique; artificial neural network; genetic algorithm; hybrid approach

    1 Introduction

    Detecting the behavioral pattern of any subject (human) is a most challenging task, especially in the defense field. The current study examines this challenge using a side-vision perspective of human-face data. According to the literature, only a few researchers have used side visions of human faces to identify behavioral traits. Most research has been conducted using frontal-vision data of human faces, either for face recognition or for biometric characteristic assessment. Until now, very few studies have been carried out to detect behavioral patterns. Several significant improvements have been made in identifying human faces from the side (parallel to the picture plane), using a five-degree switching mechanism in a regressive or decreasing-step method. Bouzas et al. [1] used a similar approach for dimensional-space reduction, switching the amount based on the mutual information required between the altered data and their associated class labels. Later on, [2] enhanced this work by using descriptors to describe human-face pictures and a clustering algorithm to choose and classify variables for human-face recognition.

    Furthermore, Chatrath et al. [3] used facial emotion to enable interaction between people and robots by employing human-face front-vision data. Zhang et al. [4] also achieved some frontal-vision related work where the target is at a distance. Many researchers have suggested improved regression-analysis classification techniques based on the frontal perspective of human-face data. Zhao et al. [5] have shown that learning representations to predict the placement and shape of face images may boost emotion detection from human images. Similarly, Wang et al. [6,7] suggested a technique of interactive frontal processing and segmentation for human-face recognition. The literature analysis indicated that relatively few scholars have worked on discovering behavioral patterns from human-face data, and most of that work relied on statistical methodologies and classic mathematical techniques. Formerly, some artificial-neural-network components and other statistical approaches have achieved significant, satisfactory results [8-10].

    A subsequent study was conducted to recognize the subject when the human face is aligned parallel to the picture plane using a hybrid methodology [11,12]. The current research study has also been conducted employing hybrid computing techniques.

    In the same year, algorithms were proposed for secured photography using dual cameras. This method helps to address issues such as authentication, forgery detection, and ownership management. The algorithm was developed for Android phones having dual cameras, for security purposes [13].

    A fuzzy-logic-based facial expression recognition system was introduced which identifies seven basic facial expressions: happy, anger, sad, neutral, fear, surprise, and disgust. This type of system is used for the intelligent selection of areas in a facial expression recognition system [14]. An algorithm was proposed for video-based face recognition systems; it can compare a still image with a video and match videos with videos. A three-stage approach is used to optimize the rank list across video frames for effective matching of videos and still images [15]. A method was also introduced for exploring facial asymmetry using optical flow. In terms of shape and texture, the human face is not bilaterally symmetric, and the attractiveness of human facial images can be increased by artificial reconstruction and facial beautification; using optical flow, the image can be reconstructed according to the needed symmetry [16]. An effective, efficient, and robust method for face recognition based on image sets (FRIS), known as Locally Grassmannian Discriminant Analysis (LGDA), has further been proposed. A novel accelerated proximal-gradient-based learning algorithm is used to find the optimal set of local linear bases, and LGDA is combined with the linearity-constrained nearest neighborhood (LCNN) clustering technique to express the manifold by a collection of local linear models (LLMs).

    An algorithm was proposed to change the orientation of the subject's face from parallel (zero degrees) to the picture plane to diagonal (45 degrees) to the image plane. In this research, artificial neural networks (ANN) and genetic algorithms (GA) were used. The detailed research is divided into two parts: in the first part, features are obtained from the frontal face and a database is built; in the second part, a test face image with all feasible alignments must be developed and a hybridized forward-computing approach performed for proper identification of the subject's face. A properly matched classification-decision procedure must be performed utilizing the datasets generated in the current research activity. Other standard datasets such as FERET were also examined for an acceptable optimization method. An algorithm was designed to identify cognitive qualities and the subject's physiological attributes to support the biometric safety system, and specific case analyses must also be conducted to support it. The development has been validated on widely used datasets, and a suitable comparison methodology has been analyzed [16]. These studies reveal various features with varying performance [17]. The work was structured for biometric study. Using a deep CNN with genetic segmentation, one study proposes a method for autonomous detection and recognition of animals; standard recognition methods such as SU, DS, MDF, LEGS, DRFI, MR, and GC are compared to the suggested work, and a database containing 100 different subjects, two classes, and ten photos is produced for training and examining the suggested task [18]. The CBIR algorithm examined visual image characteristics such as colour, texture, and shape; non-visual aspects also play a key role in image retrieval. The image features are extracted using a neural network, which enables the computation to be improved using the Corel dataset [19,20]. Another paper presents a new age-function modelling technique based on the fusion of local features: image normalization is initially performed, followed by a feature extraction process, and an Extreme Learning Machine (ELM) classifier is used to evaluate output pictures for the respective input images [21]. The proposed algorithm has a higher recall value and accuracy and a lower error rate than previous algorithms. A new 5-layer SegNet-based encoder algorithm enhances the accuracy on various dataset benchmarks; the detection rate was up to 97 percent and the processing time is reduced to one second per image [22].

    Modeling of Datasets

    This section briefly describes how the modeling of the datasets has been done. The complete work has been carried out in two phases: the modeling phase and the understanding phase. In the first phase, a knowledge-based model called the RTR database model has been formed as a corpus over human-face images. The strategies applied for the formation of the corpus are the image warping technique (IWT) and an artificial neural network (ANN). The model has been formed after capturing the human-face image through a digital camera or by scanning the human-face image (refer to Appendix B). The human images have also been collected from different standard databases (refer to Appendix A). How the human-face images have been captured in the present work is depicted in Fig. 1 below.

    Figure 1: Functional block diagram for capturing the human face image

    As shown in Fig. 1, a known human-face image has been captured through hardware, that is, a camera or a scanner. During capture, a feedback control mechanism has been applied manually, and two factors have been adjusted: resolution and distance. A fixed resolution has been kept while capturing a known human-face image, and the distance between face and camera has been fixed at 1 meter. Residual variation in the second factor has been overcome by proper scaling and rectification of the image. This process is jointly called the image warping technique (IWT). After proper adjustment, the image has been stored in a file in jpg (Joint Photographic Experts Group) format.
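    As an illustration of the scaling-and-rectification step described above, a minimal sketch (assuming OpenCV is available; the target resolution and file paths are placeholder assumptions, since the paper only specifies a fixed resolution and a 1-meter capture distance) might look as follows:

    import cv2

    TARGET_SIZE = (256, 256)  # assumed fixed resolution; the paper does not state the value

    def warp_and_store(input_path, output_path):
        """Rescale a captured face image to a fixed resolution and store it as JPEG.
        This approximates the IWT step: the resolution factor is handled by resizing,
        and rectification is approximated here by a plain resize (a real pipeline
        could add an affine correction)."""
        image = cv2.imread(input_path)
        if image is None:
            raise FileNotFoundError("Could not read " + input_path)
        rectified = cv2.resize(image, TARGET_SIZE, interpolation=cv2.INTER_AREA)
        cv2.imwrite(output_path, rectified)  # stored in .jpg format

    # Example usage (hypothetical paths):
    # warp_and_store("subject01_left.jpg", "corpus/subject01_left_rectified.jpg")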

    The objectives and highlights of the research work are represented by the following steps:

    • An enhanced and compressed image (human-face image) has to be obtained.

    • Segmentation of the face image has to be done.

    • Relevant features have to be extracted from the face image.

    • Modeling of face features using the artificial neural network (ANN) technique, wavelet transformation, fuzzy c-means and k-means clustering techniques, and forward-backward dynamic programming (a minimal clustering sketch follows this list).

    • Development of an algorithm for the formation of the above model.

    • Understanding of the above-framed model for automatic human face recognition (AHFR) using the genetic algorithm method and classification using fuzzy set rules or theory.

    • Development of an algorithm for the understanding of the human face model for AHFR.
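    As referenced in the modeling step above, a minimal clustering sketch is given below; it assumes the geometric features have already been extracted into a numeric array (the array contents and the cluster count are illustrative, not values from the paper) and uses k-means from scikit-learn:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical matrix of geometric features: one row per face image,
    # one column per extracted geometrical characteristic.
    features = np.random.rand(100, 4)  # 100 images x 4 features (illustrative only)

    # Group the feature vectors into clusters; the number of clusters is a tunable
    # choice, echoing the study of how many characteristics to pick while clustering.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = kmeans.fit_predict(features)

    print("Cluster centers:\n", kmeans.cluster_centers_)
    print("Cluster assignment of first five samples:", labels[:5])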

    The planned study comprises the following sections: Section 2 provides the solution methodology with mathematical formulations, Section 3 presents the results and discussions, Section 4 finishes with concluding remarks and an expanded area of study, and Section 5 contains the references.

    2 Solution Methodology with a Mathematical Formulation


    The mathematical formulations of the current study and their practical execution are described in the succeeding subsections.

    As far as the present situation in the field of morphological image processing is concerned, a great deal of human-face identification research has been carried out using a 90-degree orientation of the subject in the imaging plane. The majority of this work was done using statistical methods and traditional mathematical approaches, and a number of soft-computing components and other statistical methods have yielded remarkable results. Little effort has been made to recognize the subject when the subject's face is parallel to the image plane using hybrid approaches of soft-computing technology. A soft-computing hybrid means a combination of soft-computing tools. The current research has also been done using advanced computing techniques: high-end technology involves combining soft computing and symbolic computing. Soft computing includes artificial neural networks, fuzzy set theory, and genetic algorithms. Symbolic computing operates on a special type of data known as symbolic objects; such computing enables researchers to perform mathematical operations without numerical value calculation. Analytical symbolic calculations include differentiation, partial differentiation, definite and indefinite integration, and the evaluation of limits, as well as comprehensive transformations over symbolic objects with symbolic variables, symbolic numbers, symbolic expressions, and matrices.

    Very few previous contributions have used the neuro-genetic approach to recognize the human face of a side-view subject (parallel to the image plane). The mechanism was applied as a fully 2-D transformation method with a five-degree switch in reducing-step or regressive strategies. The algorithm was developed for the recognition process in which the orientation of the subject's face is moved from zero degrees to the diagonal position of the image plane (45 degrees). The techniques used are the artificial neural network (ANN), genetic algorithms (GA), and some useful computing concepts to identify a human face from the side view (parallel to the image plane).

    The research was done in two phases: the modelling phase and the understanding phase. In the first phase, only the frontal image of the human face is studied to extract relevant features for the development of a corpus called the human-face model. In the second phase, a test face image with all possible orientations is captured, and the advanced computing approach with high-end computing is applied in order to properly recognize the subject. A correct matching-classification-decision process has been carried out using the data sets created during the present research. Other datasets such as FERET and CASIA were also tested for acceptable performance measures. Furthermore, the computation of polynomial complexities with proper transmission capability has been studied with adequate justification, rather than only spatial and time complexities, for improving the performance of secured systems and the promotion of global cyber safety. An algorithm has been developed to support the overall identification of the subject's behavioral and physiological features in order to justify the biometric security system. A case-based study considering various datasets has also been carried out to justify the biometric safety system, and a proper comparison model with different characteristics and performance variations was shown. The comparative study and final observations were experimentally tested with at least 10 images each of 100 subjects of different ages. The complexity of the developed algorithms was calculated, and their performance measurements were also compared with other databases such as FERET, Multi-PIE, Yale Face, and RTR. The corpus and algorithms developed in this work have been found to be satisfactory. A complete flow diagram of the work is given in Fig. 2.
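    Since the paper describes a genetic algorithm with best-fit measurements but not its implementation, the following is only a generic GA sketch for selecting a subset of geometric features; the population size, rates, and fitness function are illustrative assumptions:

    import random

    def genetic_feature_selection(fitness, n_features, pop_size=20, generations=50,
                                  crossover_rate=0.8, mutation_rate=0.05):
        """Generic GA sketch: evolve binary masks over feature indices.
        `fitness` is any caller-supplied function mapping a 0/1 mask to a score;
        the paper's best-fit measurement would be plugged in here."""
        population = [tuple(random.randint(0, 1) for _ in range(n_features))
                      for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population, key=fitness, reverse=True)
            parents = scored[:pop_size // 2]            # selection: keep the fitter half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                if random.random() < crossover_rate:     # single-point crossover
                    cut = random.randrange(1, n_features)
                    child = a[:cut] + b[cut:]
                else:
                    child = a
                # bit-flip mutation with small probability per gene
                child = tuple(bit ^ (random.random() < mutation_rate) for bit in child)
                children.append(child)
            population = parents + children
        return max(population, key=fitness)

    # Illustrative fitness: prefer masks selecting around 7 features, echoing the
    # observation that up to seven selected features remain useful.
    best_mask = genetic_feature_selection(lambda m: -abs(sum(m) - 7), n_features=10)
    print("Selected feature mask:", best_mask)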

    2.1 Behavioral Pattern Detection

    Let the side-vision human-face data be gathered under the situations stated below:

    • The subject is either standing or sitting idle.

    • The outfit of the subject is actual.

    • The subject is talking with someone face to face.

    Let "ZLnFL1", "ZLnFL2", "ZLnFL3", "ZLnFL4", and "ZLnFL5" be the left-side-vision human-face data at five different time intervals with minimum time-lag for the subject 'ZLn'. Similarly, let "ZRnFR1", "ZRnFR2", "ZRnFR3", "ZRnFR4", and "ZRnFR5" be the right-side-vision human-face data at five different time intervals with minimum time-lag for the subject 'ZRn', where 'n' is the subject number, with range ∞ ≥ n ≥ 1.

    In general, the left-side-vision pattern of the human face is "ZLnFLm" and that for the right-side-vision of the human face is "ZRnFRm."

    So, the frontal-vision human-face data "ZRnVLRm" will yield to,

    (A) Clustering of geometrical features from left-side-vision of human-face data

    Clusters have even and odd elements. To distinguish the clustering of the left-side data, ZLnFLm, into even and odd components, consider 'FZT' training datasets, where 'F' represents human-face data and 'T' represents the total training datasets. The even component and the odd component are 'FZE' and 'FZO', respectively, for a left-side-vision 'L1'. Hence it yields,

    Figure 2: Complete flow diagram of work

    So, the total number of training images 'T' is the sum of the even training sample images 'E' and the odd training sample images 'O'. Applying mathematical linearity, the combined effect in Eq. (2) gives,

    Thus, the equation for highly interconnected and poorly interconnected human-face data is represented by ZLnFLm, and it becomes,

    where ρT is the convolution operator and the remaining symbol is the linearity factor for the total training datasets,

    Now, for the even cluster We with mean μe over the Ne even human-face samples, and for the odd cluster Wo with mean μo over the No odd human-face samples, the overall sample mean is represented by μT,

    The dispersion of the projected means over the odd and even training human-face sample images yields,
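    A minimal numerical sketch of the even/odd split and the associated means and dispersion, assuming the training feature vectors are stored row-wise in an array (the data here are random placeholders):

    import numpy as np

    # Hypothetical training feature vectors, one row per training face image.
    training = np.random.rand(10, 4)          # T = 10 samples, 4 geometric features

    even_cluster = training[0::2]             # We: even-indexed training samples
    odd_cluster = training[1::2]              # Wo: odd-indexed training samples

    mu_e = even_cluster.mean(axis=0)          # mean of the Ne even samples
    mu_o = odd_cluster.mean(axis=0)           # mean of the No odd samples
    mu_T = training.mean(axis=0)              # overall sample mean over all T samples

    # Dispersion of the projected cluster means around the overall mean.
    dispersion = np.linalg.norm(mu_e - mu_T) + np.linalg.norm(mu_o - mu_T)
    print(mu_e, mu_o, mu_T, dispersion)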

    Let LFin = {LF1, LF2, ..., LFn} and DFout = {DF1, DF2, ..., DFm} be the input left-side-vision data and the output codewords, respectively, which are of maximum size. The test data set BFtest_data_set = {BF1, BF2, ..., BFu} has been matched against the trained data set AFtrained_data_set = {AF1, AF2, ..., AFq} under the linearity-index condition DFout = DFin,

    Mathematically the relation is,

    The preceding metrics provide the closest representation of the human face's left-side image with its most crucial elements. Thus, each extracted feature in the data-processing stream was analyzed against the system's experience and understanding database during matching, and the best codeword was picked as the one with the minimum average distance. If the unknown vector cannot be matched to a known vector, this condition is considered an OOL (out-of-limit) issue. Attributing values to all database codewords has reduced the OOL issue. The highest vector values thus yield,

    In Eq. (10), CDIFF is the absolute difference for the cropped pattern, and it yields,

    Dividing Eq. (8) by Eq. (10) yields CCMR,
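    A hedged sketch of this minimum-average-distance codeword matching, including an out-of-limit (OOL) check and a simple matching ratio in the spirit of CDIFF and CCMR (the threshold and the exact ratio definition are assumptions, not the paper's formulas):

    import numpy as np

    def match_codeword(test_vector, codebook, ool_threshold=1.0):
        """Nearest-codeword matching sketch: picks the codeword with the minimum
        average absolute difference to the test vector (a stand-in for CDIFF);
        the derived ratio plays the role of a matching ratio such as CCMR."""
        codebook = np.asarray(codebook, dtype=float)
        diffs = np.abs(codebook - test_vector).mean(axis=1)   # absolute differences
        best = int(np.argmin(diffs))
        if diffs[best] > ool_threshold:                       # no codeword close enough
            return None, diffs[best]                          # treated as an OOL case
        cmr = 1.0 / (1.0 + diffs[best])                       # illustrative matching ratio
        return best, cmr

    codebook = np.random.rand(50, 4)        # 50 stored codewords, 4 features each
    test = np.random.rand(4)
    index, score = match_codeword(test, codebook)
    print("Matched codeword:", index, "score:", score)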

    (B) Clustering of geometrical features from right-side-vision of human-face data

    Similarly, to distinguish the clusters of the right-side-vision human-face data, ZRnFRm, into even and odd components, consider 'FZT'. The mathematical formulations for this part of the paper follow Eq. (2) through (11).

    2.2 Gradient or Slope of Human-Face Data with Strongly Connected Components

    Let the slope of the human-face pattern be ∇xSCF, where SCF denotes the strongly connected features: "shape", "size", "effort", and "momentum". The superscript 'x' is the number of strongly connected features. Let ∇xLSCF and ∇xRSCF be the slopes or gradients of the left-side-vision and right-side-vision human-face data, respectively,
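    For illustration, the slope of each strongly connected feature over time can be approximated numerically as below; the five time samples and four features mirror the notation above, while the data values are placeholders:

    import numpy as np

    # Hypothetical time series of the four strongly connected features
    # ("shape", "size", "effort", "momentum") sampled at five instants for the
    # left- and right-side views of one subject.
    left_scf = np.random.rand(5, 4)
    right_scf = np.random.rand(5, 4)

    # Slope/gradient of each feature over time (rows = time, columns = features).
    grad_left = np.gradient(left_scf, axis=0)
    grad_right = np.gradient(right_scf, axis=0)

    # A simple uniformity check: small spread of the gradients suggests the
    # near-linear, uniform behaviour associated with normal patterns.
    print("Left-side gradient spread:", grad_left.std(axis=0))
    print("Right-side gradient spread:", grad_right.std(axis=0))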

    3 Results and Discussions

    Practical investigations and discussions regarding the identification of behavioral patterns have been conducted following the image pre-processing activities. The performance data were initially processed using a schematic diagram and standardized image processing methods. Signal processing has been achieved utilizing the discrete cosine transform technique, as it has been shown to work flawlessly for real-coded vector analysis. The segmentation procedure was then carried out by statistical analysis. Boundary detection is achieved using morphological methods of digital image processing, such as erosion and dilation. By initially picking the region of interest (ROI) and hence the objects of interest (OOI), the image warping approach was employed. Cropping of the object and image rectification are performed to ensure that the system's efficiency does not suffer. Cropping the picture results in the storage of the cropped picture in a separate file and the extraction of crucial geometrical information. The clusters of these obtained features are shown in Fig. 3.

    Figure 3: Clusters of features extracted from test data of human-face
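    A minimal sketch of the pre-processing chain described above (DCT-based filtering followed by morphological boundary extraction), assuming OpenCV; the working resolution, DCT truncation, and kernel size are placeholder choices:

    import cv2
    import numpy as np

    def preprocess_face(gray_image):
        """Illustrative pre-processing: DCT filtering, then a morphological
        boundary (dilation minus erosion), mirroring the steps described above."""
        gray = cv2.resize(gray_image, (128, 128))       # even size required by cv2.dct
        dct = cv2.dct(np.float32(gray))
        dct[32:, :] = 0                                  # crude low-frequency keep
        dct[:, 32:] = 0
        smoothed = np.uint8(np.clip(cv2.idct(dct), 0, 255))

        kernel = np.ones((3, 3), np.uint8)
        dilated = cv2.dilate(smoothed, kernel, iterations=1)
        eroded = cv2.erode(smoothed, kernel, iterations=1)
        return cv2.subtract(dilated, eroded)             # boundary image

    # Example usage with a hypothetical cropped ROI:
    # roi = cv2.imread("subject01_roi.jpg", cv2.IMREAD_GRAYSCALE)
    # edges = preprocess_face(roi)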

    As seen in Fig. 4, only a few factors exhibit uniformity. As a result, additional analysis was conducted by switching the data set with five degrees of freedom between regressive and advanced modes. The results of this analysis are graphed in Fig. 4.

    Figure 4: Comparison of the different testing frames with different switching patterns

    In Fig. 5, the pattern's typical behavior is very uniform, which indicates that the curve's behavior is not highly diverse in origin. The test data set has been subjected further to descriptive statistics. From Fig. 5, it has been observed that the clusters of the trained and test data sets of the human face are almost close to linearity. Further, the cumulative distribution of the above test data sets has been computed, as shown in Fig. 5.

    Figure 5: Normal distribution of different data sets of human-face

    As seen in Fig. 6, the boundary values of both the test and training data sets are exceptionally near to fitness. Therefore, the best-fit test was done utilizing the genetic algorithm methodology, adjusted for the most vital measurements. If compatibility fails, another subject sample is taken. As a result, further analysis is performed to determine the better measures. This was accomplished by gradually or regressively switching the human faces with a five- to ten-degree displacement. Subsequently, it was discovered that most variables follow the regular pattern of the corpus's training samples.

    Figure 6: Boundary of face code detection using unidirectional temporary associative memory (UTAM)

    As a consequence, best-fit measurements were chosen, and further segmentation and detection analyses utilizing the genetic algorithm soft-computing approach were done. Using face-code formation, one-to-one mappings were performed throughout this technique. Fig. 7 shows the border for detecting the face-code using unidirectional transitory associative memory. Associative memories are neural network designs that store and retrieve correlated patterns of essential elements when prompted by associated matches; in other words, associative memory is the storage of related patterns in an acceptable form. Fig. 7 shows the graphical behavioral pattern matching of the test data sets. "Over act" is defined as "abnormal behavior," while "normal act" is labeled as "normal behavior." Whenever the behavioral characteristic curve shows no deviation, the behavior is as expected. When the behavioral characteristic curve has a large number of interruptions, it is considered under-act behavior; when the characteristic curve has a smaller number of disruptions, it is considered over-act behavior.
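    To illustrate the idea of mapping one side of the face to the other, the sketch below uses a textbook one-directional (Hebbian outer-product) associative memory as a stand-in; it is not the paper's exact AUTAM formulation, and the face codes are illustrative:

    import numpy as np

    class LinearAssociativeMemory:
        """Minimal one-directional associative memory sketch: stores
        (left-code, right-code) pairs via an outer-product weight matrix and
        recalls a right-side code from a left-side code."""

        def __init__(self, in_dim, out_dim):
            self.weights = np.zeros((out_dim, in_dim))

        def store(self, left_code, right_code):
            self.weights += np.outer(right_code, left_code)   # Hebbian update

        def recall(self, left_code):
            return np.sign(self.weights @ left_code)          # thresholded recall

    # Bipolar face codes (+1/-1) for one subject, purely illustrative.
    left = np.array([1, -1, 1, 1, -1])
    right = np.array([-1, 1, 1, -1, 1])
    memory = LinearAssociativeMemory(in_dim=5, out_dim=5)
    memory.store(left, right)
    print("Recalled right-side code:", memory.recall(left))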

    The whole behavioral detection system's performance is proportional to the size of the corpus or database. If the unknown pattern is not matched against any known pattern throughout the detection procedure, an OOC (out-of-corpus) error occurs. A substantial corpus has been generated in this study to prevent such issues, thereby resolving the OOC issue. Numerous common databases such as FERET, Multi-Pie, and the Yale Face database were also examined in the current study for automated facial recognition. Tab. 1 below illustrates the comparison.

    Figure 7: Applying a distinct subject's face image to the overacting, regular act, and underacting moods

    Table 1: Performance measure comparison for automatic face recognition using the FERET, Multi-Pie, Yale Face, and RTR face databases

    The obtained RTR corpus database was evaluated in combination with FERET, Multi-Pie, and Yale Face, and the results were determined to be almost adequate. Fig. 6 illustrates the comparison graphically.

    As seen in Fig. 8, the corpus created in this research endeavor produces findings that are quite similar to those found with the FERET database. Additionally, it has been shown that after selecting two features, the behavioral detection system's performance increases, and the whole evaluation procedure stays positive with up to seven features picked at the highest difficulty levels. Additionally, Tab. 2 illustrates the overall performance for behavioral detection when the maximum number of characteristics is included.

    Fig. 9 represents the full behavioral detection system results graphically, with an average detection rate of 93 percent for the normal behavioral pattern.

    Fig. 10 illustrates the developed algorithms' behavioral performance metrics and their comparability to other algorithms that use human-face patterns. In addition to the findings and comments in the current study, Fig. 10 depicts the general behavioral pattern for the training and testing datasets.

    Figure 8: Comparative study and analysis as per complexity measures

    Table 2: Performance measures of behavioral detection

    Figure 9: Graphical representations of performance measures

    Figure 10: Overall behavioral pattern of trained and test data sets

    As seen in Fig. 11, when the appropriate set of attributes is found using the genetic algorithm methodology, the behavior of the training and test data sets shows a similar pattern. For the same dataset, the actual result was 93 percent recognition accuracy for usual behavior, as shown in Fig. 11.

    The method used to obtain the reported findings is given below, along with its complexity.

    Figure 11: Outcome of normal behavioral pattern of the test data set

    Developed Algorithm HCBABD (Hybrid Computing based Automatic Behavior Detection):

    Algorithm 1: The developed algorithm, called HCBABD (Hybrid Computing based Automatic Behavior Detection), is depicted below.

    Main program {starts}
    Step 1. GEN1: Initial input data1 stream X(o) = (X1, X2, ..., XM) and Y(o, p) = (Y1, Y2, ..., YN)
    Step 2. READ1: CorpusRTR (Max_Size Z) and counter q is set to 0
    Step 3. DO WHILE (b ≤ Z)
        GEN1: An input data1 Xr(b) & Yr(b)
        GENNEXT1: State input data1 X(b+1) and Y(b+1) for Xr(b) & Yr(b)
        COMPUTE1: WEIGHTn condition for Linearity Index, INPUTn == OUTPUTn
        COMPUTE1: Gradient_n or Slope_n for {geometrical feature: HFM}
        INR: Increment the size or length, b = b + 1, and the counter, q = q + 1
        FITNESS_TEST1: f(Xi) & f(Yi) of each data stream Xi & Yi
        MAPPING: AUTAM (left_side_shape, right_side_shape)
        IF (TRUE), THEN
        {
            BEHAVIOUR1: Display "Normal and Acceptable", placed in category "NOT BOL"
        }
        ELSE
        {
            BEHAVIOUR1: Display "Abnormal and Not Acceptable", placed in category "BOL", and Display "Data is OOL (Out-Of-Limit)"
        }
    Main program {ends}
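    A schematic Python rendering of the listing above is given below; every domain-specific step is left as a stub or a caller-supplied function, since the algorithm specifies the control flow but not the internal implementations:

    def hcbabd(corpus, fitness, autam_map, ool_limit=1.0):
        """Schematic rendering of Algorithm 1 (HCBABD); `fitness` and `autam_map`
        stand in for the fitness test and the AUTAM left/right mapping."""
        results = []
        for b, (x_b, y_b) in enumerate(corpus):            # DO WHILE (b <= Z)
            fx, fy = fitness(x_b), fitness(y_b)            # FITNESS_TEST1
            mapped_ok = autam_map(x_b, y_b)                # MAPPING: AUTAM(left, right)
            if mapped_ok and max(fx, fy) <= ool_limit:     # IF (TRUE) branch
                results.append((b, "Normal and Acceptable", "NOT BOL"))
            else:                                          # ELSE branch: BOL / OOL case
                results.append((b, "Abnormal and Not Acceptable", "BOL / OOL"))
        return results

    # Example usage with trivial stand-ins for the fitness and mapping functions:
    # outcomes = hcbabd(corpus=[(0.2, 0.3), (1.5, 0.1)],
    #                   fitness=lambda v: abs(v),
    #                   autam_map=lambda left, right: True)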

    Complexity measures for the developed system: In the worst-case assumption, let 'p' denote the total number of training data items. The complexity is proportional to the number of loop executions divided by the total number of events. In the worst-case scenario, the loop will execute 'p+4' times; thus, the worst-case complexity measure is '(p+4)/p'. Similarly, in the best case, the smallest number of features necessary for the mapping procedure is one, which minimizes the execution time; thus, the best-case complexity measure is '(p+1)/p'. Current automatic emotion recognizers typically assign category labels to emotional states, such as "angry" or "sad," relying on signal processing and pattern recognition techniques. Efforts involving human emotion recognition have mostly relied on mapping cues such as speech acoustics (for example, energy and pitch) and/or facial expressions to some target emotion category or representation. The comparative analysis of algorithms is shown in Tab. 3, and the time complexity is presented in Tab. 4.
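    As a quick worked example of these measures, assuming p = 100 training samples (matching the dataset size used here):

    p = 100                       # total number of training data items (assumed)
    worst_case = (p + 4) / p      # loop executes p + 4 times in the worst case
    best_case = (p + 1) / p       # only one feature is needed in the best case
    print(worst_case, best_case)  # 1.04 and 1.01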

    Table 3: Comparative analysis of some recent algorithms with each dataset


    Table 4: Comparison of time complexity and memory consumption for different datasets

    4 Conclusion and Further Work

    In the present study, two behavioral patterns, namely normal and aberrant, have been categorized. The categorization is based on four geometrical characteristics taken from the left- and right-side-vision human-face data. To make decisions about detecting behavioral patterns, these characteristics were clustered, and a correct mapping method was applied. The gradient of each of the features extracted from the left- and right-side visions of the human face has been computed, and, when plotted, a uniformity-index attribute feature is generated. The dispersion of the gradients has been calculated, providing either positive or negative values: for normal behavior the decision is positive, and for aberrant behavior the decision is negative. The efficiency of the proposed approach has been determined. In the worst-case scenario, the complexity of the suggested method is "(p+4)/p"; in the best-case scenario, it is "(p+1)/p," where 'p' is the total frequency of occurrence. The current work might be extended to incorporate the identification and comprehension of human-brain signals and human face-speech patterns to establish a tri-modal biometric security system. The work might also be broadened to include diagnosing various health concerns related to breathing, speaking, brain function, and heart function. Furthermore, this technology might be utilized to further the development of a global multi-modal biometric network security system.

    Acknowledgement:The authors extend their appreciation to the Saudi Electronic University for funding this research work.

    Funding Statement: The authors thank the Saudi Electronic University for financial support to complete the research.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
