
    Video Analytics Framework for Human Action Recognition

Computers, Materials & Continua, 2021, Issue 9

Muhammad Attique Khan, Majed Alhaisoni, Ammar Armghan, Fayadh Alenezi, Usman Tariq, Yunyoung Nam and Tallha Akram

1 Department of Computer Science, HITEC University Taxila, Taxila, 47080, Pakistan

2 College of Computer Science and Engineering, University of Ha'il, Ha'il, Saudi Arabia

3 Department of Electrical Engineering, College of Engineering, Jouf University, Sakaka, Saudi Arabia

4 College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Khraj, Saudi Arabia

5 Department of Computer Science and Engineering, Soonchunhyang University, Asan, Korea

6 Department of Computer Science, COMSATS University Islamabad, Wah Campus, 47040, Pakistan

Abstract: Human action recognition (HAR) is an essential but challenging task for observing human movements. This problem encompasses the observation of variations in human movement and the identification of activities by machine learning algorithms. This article addresses the challenges in activity recognition by implementing and evaluating an intelligent framework for segmentation, feature reduction, and feature selection. A novel approach is introduced for the fusion of segmented frames, and multi-level features of interest are extracted. An entropy-skewness based feature reduction technique is implemented, and the reduced features are converted into a codebook by serial-based fusion. A custom-made genetic algorithm is applied to the constructed feature codebook to select the strongest features, which are then exploited by a multi-class SVM for action identification. Comprehensive experiments are conducted on four action datasets, namely Weizmann, KTH, Muhavi, and WVU multi-view, achieving recognition rates of 96.80%, 100%, 100%, and 100%, respectively. The analysis reveals that the proposed action recognition approach is efficient and accurate compared to existing approaches.

Keywords: Video analytics; action recognition; feature classification; entropy; data analytics

    1 Introduction

Action recognition based on human movements has drawn considerable interest due to its emerging applications in video analytics [1,2]. An emerging trend is video labeling for various actions in sports such as football, swimming, and paragliding, and even in typical daily life movements [3]; applications such as forensic analysis require recognition at several levels of abstraction [4]. Interactive applications [5] already involve human-computer interaction, for which a substantial amount of work has been done covering a broad range of topics [6]. In the literature, most works have addressed very specific problems in action recognition, including human body movement, facial expression, image labeling, and the perception of human-object interaction [7]. Some authors have also focused on introducing feature selection algorithms for distance-based similarity measures and SVMs [8]. Many techniques have recently been introduced for HAR, which may be categorized as graph based, trajectory based, codebook based, and feature extraction based [9], to name a few [10]. Wu et al. [11] presented a HAR method with graph-based visual saliency and space-time nearest points. Gao et al. [12] introduced a hypergraph-based method to compute the distance between two objects in a multi-view scenario. In these methods, the vertices and edges of the objects are defined in a cluster view: edges join multiple points, and weights are assigned to each edge based on the relationships between any two views in the group. Yi et al. [13] introduced trajectory-based HAR, which addresses the problem of motion information between distinct motion regions; the method makes use of trajectory-based covariance features and performs better than histograms of oriented gradients and their variants. Althloothi et al. [14] presented a HAR technique based on motion and shape features extracted using spherical harmonics.

The authors of [15] introduced a new feature referred to as the local surface geometric feature (LSGF). LSGF features for human body posture and expression are extracted and further utilized in covariance matrix computation and feature vectorization. Chen et al. [16] presented depth motion maps (DMM). This method consists of four major steps: depth map generation, feature extraction utilizing DMM, feature reduction, and recognition. PCA is used for dimensionality reduction and improves recognition efficiency. A few other methods are a 16-layer CNN [17], fusion of features [18], a weighted segmentation based approach [19], and fusion of deep and handcrafted features [19], to name a few more [20].

However, most recent contributions based on feature selection have not addressed frame enhancement, which we believe is a crucial step in making the foreground object more visible. For instance, the optical flow algorithm proposed in [21] fails to segment the foreground object due to low-resolution videos and variation in motion speed. Similarly, various feature selection and extraction techniques, such as [22], do not consider optimization of local and global features, which usually leads to lower classification accuracy [23]. We believe that a sound frame enhancement technique coupled with an efficient feature optimization mechanism would increase classification accuracy. To this end, a novel framework is proposed that implements segmented frame fusion and Entropy-Skewness (ES) based feature reduction. The primary contributions of the proposed work, which also describe our research methodology in order, are as follows:

· Construction of an enhanced HSI color space, utilizing a hybrid color transformation technique that incorporates refinement of RGB channels, bottom-hat filtering, and NTSC transformation.

· An implementation of a novel maximum-region based segmentation technique in which pixel fusion is performed using the proposed saliency-mapped frame.

· Extraction of a hybrid set of features and their dimensionality reduction using the entropy-skewness control method.

· Construction of a feature codebook of size 1×470 using serial-based feature fusion, followed by a genetic algorithm for prominent feature selection. The selected features have a dimension of 1×210.

Finally, extensive experimentation and comparison have been performed between the proposed and existing methods by implementing two use cases.

    2 Proposed Framework

The proposed architecture consists of five major steps: a) frame enhancement using a new series of steps; b) a maximum-region based segmentation technique integrating frame fusion with a novel saliency map; c) extraction of texture, local, and global features using SFTA, LBP, and HOG; d) a novel feature reduction technique based on the Entropy-Skewness (ES) control method, followed by serial-based feature fusion to construct a feature codebook of size 1×470; e) a custom-made genetic algorithm for selecting the most optimal features prior to multi-class SVM classification. Fig. 1 shows the details of the proposed method.

    Figure 1:System architecture of the proposed action recognition approach

    2.1 Preprocessing

Foreground visibility is a major issue in this area and is addressed in this section. Frame enhancement is an important preprocessing step for accurate segmentation of foreground objects because we are dealing with raw input video data [24]. This data contains many distorted, noisy, and dull images (weak edges and fused boundaries). To obtain improved images and quality information, these frames must be enhanced, and this is the main motivation behind our frame enhancement approach. Moreover, in a few recent studies, an optical flow algorithm failed to segment the foreground object due to low-resolution videos. To handle this problem, a new technique named BHNTSC-S is implemented, which incorporates two fundamental steps: bottom-hat filtering and color space transformations. The complete process is performed in parallel. First, a conventional RGB frame is enhanced with bottom-hat filtering, which is subsequently utilized in the segmentation phase.

The fusion relation between the bottom-hat filter and the NTSC-transformed frame is given as follows. Let φ_I(x, y) represent an original RGB frame of dimensions 256×256×3. Bottom-hat filtering is applied to φ_I(x, y) to enhance the brightness of the foreground object and to reduce the background contrast with respect to black pixels. The bottom-hat filter works effectively on tiny objects against a scattered background:

φ_l(x, y) = (φ_I(x, y) • s_e) − φ_I(x, y),  l = [1 2 3]

where l = [1 2 3] is an index over the red, green, and blue channels, respectively, • denotes morphological closing with structuring element s_e, and φ_R, φ_G, φ_B are the modified red, green, and blue channels. The green channel is utilized for the Gaussian mixture model (GMM) segmentation.

The NTSC transformation of the enhanced frame is then computed as

φ_YIQ(x, y) = T_NTSC · [φ_R(x, y), φ_G(x, y), φ_B(x, y)]^T

where φ_YIQ is the NTSC frame and T_NTSC is the standard RGB-to-YIQ transform matrix. The enhanced NTSC frame is further improved with a Gaussian function and is utilized for the novel saliency segmentation. Finally, the HSI transformation is applied to φ_I(x, y) for maximal region segmentation, as shown in Fig. 1. The visibility results were tested on each channel; the saturation channel produced the best results, so it is selected for maximal region segmentation. The saturation channel is defined as

S = 1 − [3 / (R + G + B)] · min(R, G, B)

where the S channel is set as input for maximal region segmentation. The results of the preprocessing step are shown in Fig. 2: the original frames are first processed in the green channel, followed by bottom-hat filtering and the NTSC transformation; the frames are then reconstructed, and a saturation frame is obtained for further processing.
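A minimal sketch of this preprocessing chain is given below, using OpenCV and NumPy. The structuring-element size, the Gaussian parameters, and the way the bottom-hat response is added back to the frame are illustrative assumptions; the paper does not report these values.

```python
import cv2
import numpy as np

def preprocess(frame_bgr):
    """Sketch of the BHNTSC-S enhancement chain (illustrative parameters)."""
    frame_bgr = cv2.resize(frame_bgr, (256, 256))

    # Bottom-hat filtering per channel (closing minus input), which highlights
    # small dark foreground structures against a scattered background.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    bottom_hat = cv2.morphologyEx(frame_bgr, cv2.MORPH_BLACKHAT, kernel)
    enhanced = cv2.add(frame_bgr, bottom_hat)

    # NTSC (YIQ) transformation of the enhanced frame, then Gaussian smoothing.
    rgb = cv2.cvtColor(enhanced, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    t_ntsc = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]], dtype=np.float32)
    yiq = cv2.GaussianBlur(rgb @ t_ntsc.T, (5, 5), sigmaX=1.0)

    # HSI saturation channel: S = 1 - 3*min(R,G,B)/(R+G+B).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-6)

    green = enhanced[..., 1]   # green channel, kept for the GMM segmentation
    return green, yiq, saturation
```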

    2.2 Frames Segmentation

In this section, we segment the foreground objects to identify their activities. The optical flow algorithm is used to identify motion regions in the frame. We then construct a novel saliency method, which is fused with a new maximal region segmentation technique. The optical flow algorithm is executed in parallel with the novel saliency method, as shown in Fig. 1. The purpose of frame fusion is to obtain maximum accuracy and reduce the error rate.

    Figure 2:Effects of preprocessing step.(a) Original frame;(b) green channel;(c) bottom hat filter frame;(d) NTSC frame;(e) enhanced frame after reconstruction;(f) combined histogram of reconstructed frame (RGB);(g) saturation frame;(h) histogram of saturation frame

Saliency map: Let ψ represent the optical flow function with three parameters for the horizontal, vertical, and time dimensions (h, v, t). ψ is executed in parallel with the 3-dimensional enhanced frame to give motion information of the foreground object in the current frame. A chi-square distance function is then applied to the resultant frame to calculate the distance between motion pixels. Motion pixels at minimum distance are considered part of the salient object, and pixels at maximum distance represent the background. The chi-square distance is calculated as follows:

X_d² = Σ_i (x_i − y_i)² / (x_i + y_i)   (4)

where T represents the selection between a salient object and the background. If X_d² is minimal, the chi-square distance between pixels is minimal, which labels the pixel as part of the salient object; otherwise it is considered background. Color features of the salient object are then extracted, which are effective for saliency estimation. RGB and LAB color spaces are used for feature extraction, and the mean, range, and entropy are calculated for each channel. The cumulative mean and standard deviation are calculated for the color frame. The mean value is used as a threshold for frame binarization, and the center value of the frame is computed from σ as follows:

The center value is subtracted from the color frame to obtain a new mapped frame:

φ(map) = φ_i(color) − Center   (7)

We then apply an activation function to remove noise in the salient frame and make the object more visible.

where H denotes the number of neighbor pixels and μ is the mean of the mapped frame. The noise removal function F(R) is applied to the mapped frame φ(map) to obtain a new improved salient frame, defined as:

where Im(sal) represents the improved salient frame. Sample results are shown graphically in Fig. 3.

    Figure 3:Sample results of a mapped frame (a) color frame;(b) mapped frame;(c) Noise removal frame;(d) 3-D constructed frame;(e) histogram of the noise removal frame

Finally, we set a threshold function to obtain the binary image as follows:

where μ denotes the cumulative mean value computed from the color frame. Some morphological operations are performed to remove extra regions from the segmented frame. The saliency-based method is described in Algorithm 1, and the results are shown in Fig. 4.

    Figure 4:Final saliency results.(a) Initial salient frame;(b) morphological operations frame;(c) 3-D constructed frame;(d) contour frame;(e) mapped on original frame

Segmentation Based on Maximum Region Extraction: Maximum region extraction has two primary steps. First, a mask of the input saturation channel is generated; second, a threshold value is obtained automatically using the object magnitude for the most significant regions in the masked frame. Afterwards, a few morphological operators are utilized to remove unused regions.

Algorithm 1: Saliency Estimation
Input: Motion vector ψ and φ_YIQ frame.
Output: Saliency frame F_in(sal).
i = 1; N = number of pixels
Step 1: For i = 1:N {
Step 2:   Calculate the chi-square distance X_d² using Eq. (4); set the threshold for minimum distance.
Step 3:   Extract color features.
Step 4:   Calculate μ and σ for the center value.
Step 5:   Construct the initial map using Eq. (7); perform the activation function using Eq. (8).
Step 6:   Construct the final saliency map using Eqs. (9) and (10). }
Step 7: END
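As a companion to Algorithm 1, the sketch below walks through the same steps in Python with OpenCV and NumPy. Farneback optical flow stands in for ψ, and the median-based motion threshold and the μ − σ form of the center value are assumptions, since the paper does not specify them.

```python
import cv2
import numpy as np

def saliency_frame(prev_gray, curr_gray, color_frame):
    """Sketch of Algorithm 1 (illustrative thresholds, not the paper's values)."""
    # Motion information via dense optical flow (Farneback as a stand-in for psi).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Chi-square distance, Eq. (4), of each pixel's motion vector to the mean
    # motion; pixels at small distance are treated as the salient object.
    a = np.abs(flow)
    ref = a.reshape(-1, 2).mean(axis=0)
    dist = 0.5 * np.sum((a - ref) ** 2 / (a + ref + 1e-6), axis=2)
    moving = dist < np.median(dist)

    # Mapped frame phi(map) = phi(color) - Center, binarised against the mean.
    gray = color_frame.astype(np.float32).mean(axis=2)
    mu, sigma = gray[moving].mean(), gray[moving].std()
    mapped = gray - (mu - sigma)              # assumed form of the center value
    salient = (mapped > mu).astype(np.uint8) * 255

    # Morphological cleanup of spurious regions (Step 6 refinement).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(salient, cv2.MORPH_OPEN, kernel)
```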

Mask generation: To generate the mask of the saturation frame, we create a zero matrix of size 256×256 whose entries are set to 1 according to the following condition:

where p and q denote the number of pixels in one frame. We then set a dynamic threshold value and store the pixel values of the extracted object in the zero matrix. The threshold is set as follows:

where φ_v(T) is the thresholded frame and T1 is the threshold value, which is selected automatically depending on the object magnitude. Closing and filling operations are then performed to make the segmented image more accurate. Algorithm 2 explains the segmentation process based on maximum region extraction; sample results are shown in Fig. 6d.
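The mask-plus-threshold procedure can be sketched as follows. Because the paper's magnitude-based rule for T1 is not given in closed form, Otsu's method is used here purely as a stand-in automatic threshold; the closing and hole-filling steps mirror the morphological refinement described above.

```python
import cv2
import numpy as np

def max_region_mask(saturation):
    """Sketch of Algorithm 2: mask, automatic threshold, closing and filling."""
    sat8 = np.zeros((256, 256), dtype=np.uint8)          # the zero matrix, Eq. (11)
    sat8[:] = np.clip(saturation * 255, 0, 255).astype(np.uint8)

    # Automatic threshold selection; the paper derives T1 from the object
    # magnitude, so Otsu is used here purely as an illustrative substitute.
    _, mask = cv2.threshold(sat8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Closing and hole filling to refine the segmented regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    filled = closed.copy()
    h, w = filled.shape
    flood = np.zeros((h + 2, w + 2), dtype=np.uint8)
    cv2.floodFill(filled, flood, (0, 0), 255)            # flood from the border...
    return closed | cv2.bitwise_not(filled)              # ...and invert to plug holes
```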

Frames Fusion: Frame fusion is the process of combining the comparative information of two frames into a single frame. The fused frame is more accurate than its parent frames and contains more comprehensive information than any single segmented frame. In this article, we implement a novel frame fusion technique based on similar pixel values, as shown in Fig. 5. The proposed fusion technique is simple but more effective than the approaches discussed above. The fusion process follows the additive law of probability, which overcomes the problem of over-smoothing/over-sharpening and provides statistically balanced segmented frames. Moreover, the enhancement procedure strengthens weak edges, leading to appropriate segmentation with clear boundaries.

    Figure 5:Representation of the fusion of two frames based on their similar pixels values

Let Δ denote all pixel values of both segmented frames, s1 the pixel values of the saliency frame F_in(sal), s2 the pixel values of the maximal region segmentation frame φ_v(T), and s3 the common points of F_in(sal) and φ_v(T). The frame fusion is computed as follows:

where ξ denotes the number of occurrences of frame pixels, ∩ denotes the common pixels between the two segmented frames, c denotes the complement of the segmented frames, and φ(fused) is the fused segmented frame. Detailed segmentation and fusion results are shown in Fig. 6, which demonstrates the value of the fusion method.
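Since the full fusion rule is not recoverable from the text, the sketch below implements the additive-law reading: the fused foreground is the union of the two segmentations, with the common pixels s3 counted once.

```python
import numpy as np

def fuse_segmentations(saliency_bin, region_bin):
    """Sketch of the frame fusion via the additive law of probability:
    |s1 ∪ s2| = |s1| + |s2| - |s1 ∩ s2| (common pixels counted once)."""
    s1 = saliency_bin.astype(bool)       # F_in(sal)
    s2 = region_bin.astype(bool)         # phi_v(T)
    s3 = s1 & s2                         # common pixels of both frames
    fused = s1 | s2                      # union; s3 contributes a single count
    return fused.astype(np.uint8) * 255  # phi(fused) as a binary frame
```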

    Figure 6:Segmentation and ROI detection results using KTH dataset.(a) Original frame;(b)initial mapped frame;(c) saliency frame;(d) maximal region frame;(e) refined frame of (d);(f)fused frame;(g) ROI detection frame

Algorithm 2: Segmentation based on maximum region extraction
Input: φ_S frame.
Output: Threshold frame φ_v(T).
Z ← total frames
Step 1: For i = 1:Z {
Step 2:   Calculate the mask frame φ(mask) using Eq. (11).
Step 3:   Calculate φ_v(T) using Eq. (12) prior to morphological operations.
Step 4:   Perform morphological operations such as closing and filling. }
Step 5: END

    2.3 Feature Descriptors

Feature extraction is very important for object representation [25]. In this section, we deal with raw video data, which may contain faces, textured coatings, background objects, etc., with a variety of artificial makeup. To deal with this assortment, we use a combination of features: shape, SFTA, and LBP descriptors are extracted. The grayscale frames and the proposed fused segmented frames φ(fused) are used in the feature extraction phase. The SFTA features are extracted in three steps. First, the fused segmented frame φ(fused) is used to create a set of binary frames. Second, the fractal features are calculated using 8-neighborhood pixels.

Finally, we calculate the mean (μ) and size (in pixels) of the segmented frame. Using 8-neighborhood pixels, a 1×45 dimensional feature vector (FV) is obtained. For LBP feature extraction, a binary code is calculated for each pixel in the frame by comparing whether each neighbor's intensity is greater or less than the current pixel's intensity. A histogram is then computed to count the number of occurrences of each binary code. The LBP features are defined as:

LBP = Σ_{m=0}^{n} s(g_m − g_c) · 2^m,  s(u) = 1 if u ≥ 0, 0 otherwise,

where n = 7 and m runs over the 8 neighbors g_m of the central pixel g_c. The final LBP FV has dimension 1×59 for each frame, which is later utilized for fusion. Finally, the HOG features are extracted from the fused segmented frame, yielding a vector of size 1×3780. The proposed feature reduction technique, entropy-skewness, is then applied to these features.
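The LBP and HOG stages can be approximated with scikit-image as below. SFTA has no standard scikit-image implementation and is omitted here. The 'nri_uniform' LBP variant and the 64×128 HOG window are assumptions chosen so that the output dimensions match the paper's 1×59 and 1×3780 vectors.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from skimage.transform import resize

def lbp_hog_features(gray):
    """Sketch of the LBP and HOG descriptors (SFTA omitted)."""
    # LBP with 8 neighbours, radius 1; 'nri_uniform' yields exactly 59 codes,
    # matching the 1x59 histogram used for fusion.
    codes = local_binary_pattern(gray, P=8, R=1, method='nri_uniform')
    lbp_hist, _ = np.histogram(codes, bins=59, range=(0, 59), density=True)

    # HOG on a 64x128 window (9 bins, 8x8 cells, 2x2 blocks) produces
    # exactly 15 * 7 * 36 = 3780 features.
    window = resize(gray, (128, 64), anti_aliasing=True)
    hog_vec = hog(window, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), block_norm='L2-Hys')
    return lbp_hist, hog_vec          # shapes: (59,), (3780,)
```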

Features Reduction using Entropy Skewness: A large number of features negatively affects the accuracy and increases the computational time of the system [26,27]. PCA is used in the literature for dimensionality reduction. In this article, we compare our proposed feature reduction technique with PCA in terms of five performance measures. The workflow of the ES method is shown in Fig. 7. Features of the same size are used to analyze the information on frames of the same dimensions to obtain a high similarity index before the classification phase. For the proposed method, entropy and skewness values are calculated for all three types of extracted features. The entropy value for the features of one frame is calculated as follows:

E(F_t) = − Σ_{i=1}^{F_t} P(f_i) · log_b P(f_i)

where F_t denotes the total number of extracted features for one frame, P denotes the probability of occurrence of the features, and b = 10. Similarly, the mean and standard deviation are calculated for the features of each frame to obtain the skewness value. The skewness is computed from the mean and SD, defined as μ = (1/F_t) Σ f_i and σ = √[(1/F_t) Σ (f_i − μ)²]. Hence,

S = (1/F_t) Σ_i [(f_i − μ)/σ]³

where μ, σ, and S denote the mean, standard deviation, and skewness of the extracted features, respectively. We then add the entropy and skewness values:

Υ(+) = E(F_t) + S

Finally, 40, 30, and 400 features are selected from LBP, SFTA, and HOG, respectively, based on their mean value; the features are reduced by thresholding against the mean value of Υ(+). The remaining features are then fused by serial-based fusion to build a codebook of dimension 1×470. Serial-based fusion is simple but effective. Let f(HOG), f(LBP), and f(SFTA) be the three reduced feature vectors of dimensions 1×400, 1×40, and 1×30, respectively. These features are concatenated as:

φ(CB) = [f(HOG), f(LBP), f(SFTA)]

    Figure 7:Workflow of the proposed entropy skewness based reduction and selection

Finally, we obtain the codebook φ(CB) of size 1×470. The constructed feature codebook is optimized by a custom-made genetic algorithm (GA), which selects the best features for action recognition.
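A sketch of the ES scoring and serial fusion is given below. Treating the normalized feature magnitudes as the probabilities P, and keeping the top-scoring columns across frames, are assumed readings of the reduction rule; the kept sizes (400/40/30) are from the paper.

```python
import numpy as np
from scipy.stats import skew

def es_score(features, b=10):
    """Entropy + skewness score Y(+) for one feature column."""
    p = np.abs(features) / (np.abs(features).sum() + 1e-12)  # assumed P
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]) / np.log(b))
    return entropy + skew(features)

def reduce_and_fuse(hog_mat, lbp_mat, sfta_mat):
    """ES reduction of each descriptor, then serial fusion into the codebook.
    Inputs are (n_frames x dim) matrices; output is n_frames x 470."""
    def top_k(mat, k):
        # Rank feature columns by their ES score and keep the best k.
        scores = np.apply_along_axis(es_score, 0, mat)
        return mat[:, np.argsort(scores)[::-1][:k]]

    reduced = [top_k(hog_mat, 400), top_k(lbp_mat, 40), top_k(sfta_mat, 30)]
    return np.concatenate(reduced, axis=1)    # serial-based fusion, phi(CB)
```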

Features Selection: Feature selection is performed on the fused vector φ(CB) to identify the most relevant and uncorrelated feature data. For best feature selection, we opted for a genetic algorithm, which can handle large search spaces even when the objective function is stochastic. In our work, the input to the genetic algorithm is the extracted codebook φ(CB) of size 1×470, and the optimized features are the output given to the classifier. The GA comprises the standard steps of population initialization, fitness calculation, crossover, mutation, and selection. Among several existing crossover techniques, we opted for uniform crossover with a crossover rate of 0.7. The crossover is φ_c = CrossOver(y1, y2), where y1 = α × x1 + (1 − α) × x2 and y2 = α × x2 + (1 − α) × x1, and x1, x2 are the selected parents. For mutation, a uniform approach with rate 0.1 is applied. For selection, we adopted the roulette wheel, defined over the sorted population S_p, where W_c is the last member of the population. The parent-pressure parameter β1 is set to 8. For our case, the fitness function is defined as the mean of the chromosome, F = mean(C_m); in our case, this function guarantees an optimized solution. The newly generated feature sets are used in the classification phase, where a multi-class SVM is employed for final feature classification. The labeled results are shown in Fig. 8.

Algorithm 3: Features selection
Input: Codebook of size 1×470.
Output: Optimized feature vector of size 1×210.
Step 1: Initialize GA parameters:
  M1 ← 1000; Np ← 10; Cp ← 0.7; MR ← 0.1; β1 ← 8; t ← 0
  POp0 ← initialize with population size Np
  Evaluate POp0
Step 2: For t < M1
Step 3:   Parents (Xpar) ← select Xpar from POpt
Step 4:   Offspring (Xoff) ← Crossover(Xoff, Cprob)
Step 5:   Mutation(Xoff, Mprob)
Step 6:   Evaluate Xoff
Step 7:   POpt+1 ← actual population through replacement
Step 8:   POpt+1 ← POpt & X′off; t = t + 1
Step 9: END
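The sketch below mirrors Algorithm 3 with the stated parameters (M1 = 1000, Np = 10, crossover rate 0.7, mutation rate 0.1, roulette-wheel selection, fitness = mean(C_m)). Real-valued chromosomes and the decoding of the best chromosome into 210 feature indices are assumptions, since the paper does not spell out the encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_select(n_features=470, n_select=210, pop_size=10,
                   generations=1000, c_rate=0.7, m_rate=0.1):
    """Sketch of Algorithm 3: GA over real-valued feature-weight chromosomes."""
    pop = rng.random((pop_size, n_features))
    for _ in range(generations):
        fitness = pop.mean(axis=1)                      # F = mean(C_m)
        probs = fitness / fitness.sum()                 # roulette-wheel selection
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        alpha = rng.random()
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < c_rate:                   # arithmetic crossover:
                x1, x2 = parents[i], parents[i + 1]     # y1 = a*x1 + (1-a)*x2
                children[i]     = alpha * x1 + (1 - alpha) * x2
                children[i + 1] = alpha * x2 + (1 - alpha) * x1
        mutate = rng.random(children.shape) < m_rate    # uniform mutation
        children[mutate] = rng.random(mutate.sum())
        pop = children
    best = pop[pop.mean(axis=1).argmax()]
    return np.argsort(best)[::-1][:n_select]            # indices of 210 features
```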

    Figure 8:Proposed action recognition results using Weizmann dataset:(a) Original frame;(b) segmented fused frame;(c) recognized frame;(d) original frame;(e) segmented fused frame;(f) recognized frame

    3 Experimental Setup and Results

The computational complexity of the proposed framework is linearly dependent on the input. For each pixel, the cost is N²r²q²(n₁² + n₂²), where N², q², and r² represent the size of the input, the search window, and the patch, respectively. This expression captures the total steps and operations performed in this work; the sum n₁² + n₂² represents the total operations required during the fusion step.

    3.1 Selected Datasets

Weizmann dataset: The Weizmann dataset [28] is considered a flexible and comprehensive action recognition dataset. It was built in an indoor environment and contains a total of 90 videos covering 10 classes of different actions, described in Tab. 1. Every action is performed by 9 actors in each class.

KTH dataset: The KTH action dataset [28] includes a total of 599 videos of 6 action classes, described in Tab. 1. Each action class is performed by 25 actors in four distinct scenarios: outdoors, outdoors with scale variation, outdoors with different clothes, and indoors with lighting variations.

Muhavi dataset: The Muhavi action dataset [28] involves a total of 17 actions, each performed by 14 persons. Eight cameras located at different views record the human actions. A total of 10 actions are considered in this work for classification, depicted in Tab. 1.

WVU multi-view dataset: The WVU multi-view action dataset [28] consists of a total of 780 action videos covering 12 human actions, each performed by 2 persons. Eight cameras with different views are used for human action recording. Tab. 1 depicts the selected actions for classification.

    Table 1:Description of action classes of the selected datasets.The L denotes class label

    3.2 Evaluation Methods

The proposed framework is validated on four large action datasets: Weizmann, KTH, Muhavi, and WVU. The selected action classes and their respective labels are depicted in Tab. 1. To assess the performance of the proposed method, 10-fold cross-validation is performed on all datasets. The MC-SVM is used for action recognition, and its performance is compared with eight other classification algorithms: fine KNN, weighted KNN, ensemble boosted tree (EBT), subspace discriminant analysis, DT, QDA, logistic regression, and Q-SVM. To measure the authenticity of the proposed algorithm, we employ five statistical measures: FNR, precision, sensitivity, FPR, and correct recognition rate (CCR). The proposed performance is compared with a PCA based feature reduction model, followed by a comparison with existing methods. MATLAB 2019b based simulations are carried out on a personal computer.
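The evaluation protocol can be reproduced in outline with scikit-learn, as sketched below; the RBF kernel and C value are assumptions, since the paper does not report the SVM hyperparameters.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate(X, y):
    """10-fold cross-validation of the multi-class SVM, as in Section 3.2.
    X holds the 1x210 GA-selected vectors (one row per clip), y the labels."""
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel='rbf', C=10, decision_function_shape='ovr'))
    scores = cross_val_score(clf, X, y, cv=10)   # CCR per fold
    print(f"CCR: {scores.mean():.2%} +/- {scores.std():.2%}")
    return scores
```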

    3.3 Results and Discussion

The proposed framework is a workflow of five major steps: preprocessing, segmentation of ROI, feature extraction and reduction, feature selection, and recognition, where each step is a series of sub-steps as shown in Fig. 1. The framework is evaluated in two stages: a) feature reduction is carried out by PCA, and the result is sent to the MC-SVM for recognition; b) feature reduction is performed by the novel ES method, and the GA-selected features are then provided to the MC-SVM for recognition. A detailed description of each of these modules is given in Fig. 7. Four publicly available datasets, namely Weizmann, KTH, Muhavi, and WVU multi-view, are selected for evaluation. A 50:50 strategy is adopted for testing and training. A comprehensive comparison of the proposed algorithm is performed with eight classifiers, and their performance is evaluated by five measures: sensitivity, precision, FPR, FNR, and CCR. Additionally, we compare our proposed method with existing works on the selected datasets to support our claim of achieving the best accuracy, even against the most recent articles.

Fig. 9 summarizes the results of feature reduction by PCA with the multi-class SVM. The multi-class SVM achieved best recognition results of 91.7%, 98.9%, 99.8%, and 99.90% on the Weizmann, KTH, Muhavi, and WVU multi-view datasets, respectively. Moreover, the average recognition execution time of the PCA based reduction approach over the selected datasets is 51.729 s.
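For reference, the PCA baseline of this comparison can be sketched as follows, reusing the evaluate() helper from the previous sketch; matching n_components to the GA output size (210) is an assumption.

```python
from sklearn.decomposition import PCA

def pca_baseline(X_codebook, y):
    """PCA reduction of the fused 1x470 codebook before the same MC-SVM."""
    X_pca = PCA(n_components=210).fit_transform(X_codebook)
    return evaluate(X_pca, y)   # same 10-fold protocol as the proposed method
```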

    Figure 9:Classification results using PCA based reduction approach

The results of the proposed ES based feature reduction and GA based feature selection are shown in Tab. 2. It is evident that the proposed method achieved best recognition results of 96.80%, 100%, 100%, and 100% on the Weizmann, KTH, Muhavi, and WVU multi-view datasets, respectively. The recognition rate of the proposed method is explained by the confusion matrix in Tab. 3. The selected classifiers W-KNN, Q-SVM, F-KNN, and QDA also achieved a maximum recognition rate of 100% on the WVU multi-view dataset. The average recognition execution time for the proposed ES based reduction and GA based selection is 23.901 s, significantly lower than that of PCA.

    Table 2:Proposed results using entropy-skewness (ES) based features reduction and GA based features selection

    Table 3:Confusion matrix of the proposed approach

Finally, the results of the proposed method are compared with existing HAR methods for all selected datasets, as given in Tab. 4. On the Weizmann dataset, the proposed method achieved a recognition accuracy of 96.80%, which shows improved performance over existing approaches such as [29]. Second, the proposed recognition accuracy on the KTH dataset is 100%, which is quite good compared to [30]. Similarly, the recognition performance of the proposed algorithm on the WVU and Muhavi datasets is 100%, which is significantly robust compared to [31,32]. From the experimental results, it is quite evident that the proposed feature selection approach performs better than PCA based feature selection, and that our proposed algorithm outperforms existing techniques in terms of recognition rate. The visual results are shown in Fig. 8, where we can observe the binary results and, in turn, the accurate labels.

    Table 4:Comparison of proposed algorithm with recent techniques using selected datasets

    4 Conclusion

In this article, we have introduced an Entropy-Skewness (ES) based feature reduction and classification approach together with segmentation of regions of interest. The reduced features are optimized by a custom-made genetic algorithm, and the prominent features are selected and provided to a multi-class SVM (MC-SVM) for the classification of multiple action classes. The ES based feature reduction technique performs far better than PCA. The proposed system is evaluated on four publicly available datasets: Weizmann, KTH, Muhavi, and WVU. Excellent results have been obtained, with recognition accuracies of 96.80%, 100%, 100%, and 100%, respectively. We noticed that the proposed algorithm performs significantly better for a limited number of testing samples, demonstrating the scalability and efficiency of the proposed approach. The main limitation of this work is the limited number of training and testing samples. In future work, we will focus on more complex action recognition challenges, such as detecting suspicious behavior and forensic analysis of moving objects; to achieve this, we will investigate deep learning features to recognize complex movements accurately and efficiently.

    Funding Statement:This research was supported by Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724,The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
