
    Optimizing Big Data Retrieval and Job Scheduling Using Deep Learning Approaches


Bao Rong Chang, Hsiu-Fen Tsai and Yu-Chieh Lin

1 Department of Computer Science and Information Engineering, National University of Kaohsiung, Kaohsiung, Taiwan

2 Department of Fragrance and Cosmetic Science, Kaohsiung Medical University, Kaohsiung, Taiwan

ABSTRACT Big data analytics platforms in business intelligence often lack effective data retrieval methods and job scheduling, which causes execution inefficiency and low system throughput. This paper aims to enhance data retrieval and job scheduling to speed up the operation of big data analytics and thereby overcome these inefficiency and low-throughput problems. First, integrating a stacked sparse autoencoder with Elasticsearch indexing yields fast data searching and distributed indexing, which reduces the search scope of the database and dramatically speeds up data searching. Next, exploiting a deep neural network to predict the approximate execution time of a job enables prioritized job scheduling based on shortest job first, which reduces the average waiting time of job execution. As a result, the proposed data retrieval approach outperforms the previous method using a deep autoencoder and Solr indexing, improving the speed of data retrieval by up to 53% and increasing system throughput by 53%. In addition, the proposed job scheduling algorithm beats both the first-in-first-out and the memory-sensitive heterogeneous early finish time scheduling algorithms, shortening the average waiting time by up to 5% and the average weighted turnaround time by 19%, respectively.

KEYWORDS Stacked sparse autoencoder; Elasticsearch; distributed indexing; data retrieval; deep neural network; job scheduling

    1 Introduction

In recent years, the rapid growth in the amount of data, coupled with the declining cost of storage equipment, the evolution of software technology, and the maturity of the cloud environment, has led to the rapid development of big data analytics [1]. When the amount of data is enormous and the data flows quickly, traditional methods can no longer deal with data storage, computing, and analysis in time, and excessively large-scale access will also cause severe I/O delay in a system. Faced with such explosive growth of data, implementing large-scale distributed computing [2] with data clustering and not-only-SQL (NoSQL) storage technology has become a popular solution in recent years. Apache Hadoop and Spark [3] are currently the most widely known big data analytics platforms in business intelligence with decentralized computing capabilities. Each of them is well suited to typical extract-transform-load (ETL) workloads [4] due to its large-scale scalability and relatively low cost. Hadoop uses first-in-first-out (FIFO) [5] scheduling by default to prioritize jobs in the order in which they arrive. Although this kind of scheduling is relatively fair, it is likely to cause low system throughput. On the other hand, the response time of data retrieval using a traditional decentralized system is lengthy, causing inefficient job execution. Therefore, how to improve the efficiency of data retrieval and job scheduling becomes a crucial issue for big data analytics in business intelligence.

In terms of artificial intelligence development, deep learning [6] has advanced rapidly in recent years. AlphaGo [7], developed by Google DeepMind in London, UK, has defeated many Go masters, and AI has become a hot research topic once again. The use of machine learning [8] or deep learning in various research areas and related applications continues to flourish. IBM developed a novel deep learning technology that mimics the working principle of the human brain and can significantly reduce the time needed to process large amounts of data. Deep learning is a branch of artificial intelligence, and today's technology giants Facebook, Amazon, and Google focus their development on it for many innovations. In this era, the explosive growth of data will continue, and how to process and analyze large amounts of data effectively has become an important topic.

Nowadays, data retrieval and job scheduling research mainly focuses on using the Hadoop and Spark open-source big data platforms in business intelligence systems to improve efficiency. Generally speaking, this study considers encoding and indexing technologies to realize fast data retrieval and increase system throughput in big data analytics, while developing high-performance job scheduling to reduce the time jobs wait for execution in a queue. The objective of this paper is to develop an advanced deep learning model together with a high-performance indexing engine that significantly beats the previous method, which integrates a deep autoencoder [9] (DAE) and Solr indexing [10] (abbreviated DAE-SOLR), in the speed of big data retrieval. Based on a deep learning model, this paper explores integrating a stacked sparse autoencoder (SSAE) [11] and Elasticsearch indexing [12], abbreviated SSAE-ES, to create a fast approach to data searching and distributed indexing, which reduces the search scope of the database and dramatically speeds up data searching. The proposed SSAE-ES outperforms the previous method DAE-SOLR with higher data retrieval efficiency and system throughput for big data analytics in business intelligence. On the other hand, this paper exploits deep neural networks (DNN) [13] to predict the approximate execution time of jobs and give prioritized job scheduling based on shortest job first (SJF) [14], which reduces the average waiting time of job execution and the average weighted turnaround time [15].

    2 Related Work

    2.1 Literature Review

The demand for real-time data processing and analysis is getting higher and higher for big data analytics in business intelligence. To pursue better data analysis performance, job scheduling plays a vital role in improving the performance of big data analytics. For example, Yeh et al. [16] proposed that the user can dynamically assign priorities to each job to speed up its execution. However, their scheme lacks automatic functions to get jobs started and must obtain permission from the user every time. Thangaselvi et al. [17] mentioned that the user could import the self-adaptive MapReduce (SAMR) algorithm into Hadoop, which adjusts its parameters by recalling the historical information saved on each node, thereby dynamically finding slow jobs. However, SAMR works based on a model established by K-means [18]. A K-means approach usually makes specific assumptions about the data distribution, while a DNN with hierarchical feature learning makes no explicit assumptions about the data. Therefore, a DNN can model a more complex data distribution than K-means and obtain better prediction results.

Many studies even apply deep learning to distributed computing nodes. For example, Marquez et al. [19] used the concept of deep cascade learning to combine Spark's distributed computing with multi-layer perceptrons to build a model that can perform large-scale data analysis in a short time [20]. However, there is still a long waiting time for job execution because this model relies on FIFO and lacks job scheduling optimization. Likewise, Lee et al. [21] performed data clustering with a deep autoencoder and Solr indexing to significantly improve query performance, but that system lacks suitable job scheduling. Besides, Chang et al. [22] proposed the memory-sensitive heterogeneous early finish time (MSHEFT) algorithm to improve job scheduling. Unfortunately, when different jobs have the same data size, it performs scheduling just like FIFO, and nothing is improved at all.

Nature-inspired optimization can give some searching advantages by using meta-heuristic approaches rather than deep learning models. Abualigah et al. proposed a novel population-based optimization method called Aquila Optimizer (AO) [23], inspired by the Aquila's behavior while catching prey. Abualigah et al. [24] also proposed a novel nature-inspired meta-heuristic optimizer called the Reptile Search Algorithm (RSA), motivated by the hunting behavior of crocodiles. Regarding arithmetic optimization algorithms used for searching target data, Abualigah et al. [25] presented a comprehensive survey of the Internet of Drones (IoD) and its applications, deployments, and integration, covering privacy protection, security authentication, neural networks, blockchain, and optimization-based methods. However, the time consumed becomes a crucial cost problem when people consider meta-heuristic approaches or arithmetic optimization algorithms for searching in significant data retrieval issues.

    2.2 Encoder for Data Clustering

A deep autoencoder (DAE) is an unsupervised learning model of neural networks. Its architecture comprises two fully connected feedforward networks, called the encoder and the decoder, which perform compression and decompression. After training the autoencoder, the user keeps the encoder and inputs the data into it. The encoder outputs a point located in three-dimensional space, as shown in Fig. 1. The three-dimensional space divides itself into eight quadrants according to the X, Y, and Z axes, and the quadrant numbers from 1 to 8 are inserted into the last column of the table, as shown in Fig. 2.

Figure 1: Data set mapping to 3D coordinates

Figure 2: Encoder for data clustering

2.3 Stacked Sparse Autoencoder (SSAE)

Since training an autoencoder activates the neurons of a hidden layer too frequently, the autoencoder may easily overfit because its degrees of freedom are considerable. A sparse constraint is added to the encoder part to reduce the number of activations of each neuron for every input signal. A specific input signal can only activate some neurons and will probably leave the others inactive. In other words, each input signal of the sparse autoencoder cannot activate all of the neurons every time during the training phase, as shown in Fig. 3.

Figure 3: Sparse autoencoder model

The autoencoder with sparse constraints is called a sparse autoencoder (SAE) [26]. SAE defines its loss function in Eq. (1), where $L$ represents the loss function without the sparse constraint, $\beta$ weights the sparsity penalty, $KL$ stands for KL divergence [27], $\rho$ denotes the expected activation degree of the neurons in the network (if the activation function is a Sigmoid function, SAE sets its value to 0.05, which means that most of the neurons are not activated), and $\hat{\rho}_j$ is the average activation degree of the $j$th neuron:

$$L_{sparse} = L + \beta \sum_{j} KL(\rho \,\|\, \hat{\rho}_j) \tag{1}$$

$KL$ is defined in Eq. (2) and measures the similarity between the average activation output of the hidden-layer nodes and the sparsity $\rho$:

$$KL(\rho \,\|\, \hat{\rho}_j) = \rho \log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j} \tag{2}$$

Eq. (3) defines the average activation degree over the training sample set, in which $m$ represents the number of training samples and $a_j(x_i)$ stands for the response output of the $j$th node in the hidden layer to sample $x_i$:

$$\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} a_j(x_i) \tag{3}$$
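As a concrete illustration, the following PyTorch sketch implements Eqs. (1)–(3) for a single SAE stage; the layer sizes, the penalty weight beta, and the framework choice are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """One SAE stage: Sigmoid encoder/decoder with a KL sparsity penalty."""

    def __init__(self, n_in, n_hidden, rho=0.05, beta=1e-3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())
        self.rho, self.beta = rho, beta  # rho = 0.05 as in the text; beta is assumed

    def forward(self, x):
        h = self.encoder(x)
        return h, self.decoder(h)

    def loss(self, x, x_hat, h):
        mse = F.mse_loss(x_hat, x)                     # L in Eq. (1), without sparsity
        rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)  # Eq. (3): mean activation per neuron
        kl = (self.rho * torch.log(self.rho / rho_hat)
              + (1 - self.rho) * torch.log((1 - self.rho) / (1 - rho_hat))).sum()  # Eq. (2)
        return mse + self.beta * kl                    # Eq. (1)
```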

After training multiple SAEs, tuning each SAE layer by layer, the user finally stacks the SAEs up into a stacked sparse autoencoder (SSAE), as shown in Fig. 4. The encoded output of the previous stage acts as the input of the next stage. After training the first SAE in Stage 1, the user obtains the first hidden layer with m neurons, denoted Hidden 1. Likewise, by cloning Hidden 1 into the second SAE in Stage 2, the user obtains the second hidden layer with n neurons, denoted Hidden 2. Finally, the user stacks the two SAEs to form a stacked sparse autoencoder (SSAE). Based on the hidden layer ahead of it, the current hidden layer generates a new set of features. The user can obtain more hidden layers by layer-by-layer training in further stages.

Figure 4: Layer-wise pre-training for two SAEs and stacking them up
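A minimal sketch of the layer-wise procedure in Fig. 4, reusing the SparseAutoencoder class from the sketch above; the sizes (64 to 32 to 3), epoch count, and random training set are placeholders for illustration only.

```python
import torch

def train(sae, data, epochs=50, lr=1e-3):
    """Train one SAE stage on its own reconstruction-plus-sparsity loss."""
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    for _ in range(epochs):
        h, x_hat = sae(data)
        loss = sae.loss(data, x_hat, h)
        opt.zero_grad(); loss.backward(); opt.step()

data = torch.rand(1000, 64)          # placeholder training set (m = 64 input columns)
sae1 = SparseAutoencoder(64, 32)     # Stage 1: learns Hidden 1
sae2 = SparseAutoencoder(32, 3)      # Stage 2: learns Hidden 2 from Hidden 1 codes

train(sae1, data)
with torch.no_grad():
    h1, _ = sae1(data)               # encoded output of Stage 1 feeds Stage 2
train(sae2, h1)

# Stack the two trained encoders: data -> Hidden 1 -> Hidden 2 (the SSAE encoder)
ssae_encoder = torch.nn.Sequential(sae1.encoder, sae2.encoder)
```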

    2.4 Data Retrieval by Indexing

According to the quadrant value in the last field, the data is divided into different files and then sent to Solr for data indexing. For large-scale data retrieval, searching takes longer when the index volume is enormous, so distributed indexes can speed up the search. SolrCloud relieves the pressure of single-machine processing by having multiple servers complete the indexing together. Its architecture is shown in Fig. 5.

Figure 5: Solr architecture

After creating an index (collection) of data, Solr divides the index into multiple shards, and each shard has a leader for job distribution, as shown in Fig. 6. When a replica completes a job, it sends the result back to the leader, and the leader then sends the result back to SolrCloud for the final result. The user can upload a file to any replica. If that replica is not the leader, it forwards the request to the leader of the same shard, and the leader gives the file path to each replica in the shard. If the file does not belong to that shard, the leader transfers it to the leader of the corresponding shard, which likewise gives the file path to each replica in its shard. The user manually uploads the data to Solr after completing data clustering with the encoder for subsequent node use. The flowchart of data preprocessing illustrates how data clustering and indexing are prepared, as shown in Fig. 7.

Figure 6: SolrCloud architecture

Figure 7: Data preprocessing flow

    2.5 SQL Interface Selection

Developing automatic SQL interface selection lets users choose an appropriate SQL interface [28], such as Hive, Impala, or SparkSQL, to perform SQL commands with the best efficiency. The appropriate SQL interface is selected according to the remaining memory size of a cluster node. After trial and error, we found two critical points, labeled L1 and L2, where L1 represents about 5 GB of memory and L2 about 10 GB. In this study, we set each node's memory to 20 GB, which defines three memory zones: 0~5 GB, 5~10 GB, and 10~20 GB, as shown in Fig. 8. When the remaining memory size is less than L1, the system automatically selects the Hive interface to perform SQL commands; between L1 and L2, the system uses the Impala interface; above L2, the system chooses the SparkSQL interface, as shown in Fig. 8. We note that the Impala and SparkSQL interfaces need more memory to run jobs with large amounts of data. Given sufficient memory, SparkSQL with its in-memory computing capability runs most efficiently in big data analytics. On the contrary, the Hive interface can run any job at a low memory level but is time-consuming.

Figure 8: SQL interface selection
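A minimal sketch of this selection rule, assuming free memory is reported in GB and using the two critical points L1 = 5 GB and L2 = 10 GB from Fig. 8; the function name is hypothetical.

```python
L1_GB, L2_GB = 5, 10   # critical points found by trial and error (Fig. 8)

def select_sql_interface(free_memory_gb: float) -> str:
    """Pick the SQL interface by the node's remaining memory size."""
    if free_memory_gb < L1_GB:
        return "Hive"       # low memory: slow but always completes the job
    if free_memory_gb < L2_GB:
        return "Impala"     # moderate memory: Impala is most prominent here
    return "SparkSQL"       # ample memory: in-memory computing runs best

print(select_sql_interface(12.5))   # -> "SparkSQL"
```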

    2.6 Job Scheduling Algorithm MSHEFT

Heterogeneous early finish time (HEFT) [29] is a heuristic for scheduling a set of dependent jobs onto a network of heterogeneous workers that takes communication time into account. HEFT is a list-based scheduling algorithm that first establishes a priority list and then assigns each job to the appropriate CPU, following the sorted priority list, to complete the job as soon as possible. Chang et al. modified the HEFT algorithm into the memory-sensitive heterogeneous early finish time (MSHEFT) algorithm [22], which first considers the job's priority, then the data size, and finally the remaining memory size to choose the appropriate SQL command interface automatically. The flow chart of job processing with the MSHEFT algorithm and automatic SQL interface selection is shown in Fig. 9.

    2.7 Two-Class Caching in Data Retrieval

Two-class caching is usually used to maximize a distributed system's performance and optimize data retrieval and job scheduling. It consists of an in-memory cache and an in-disk cache, which save the excessive hardware resource consumption caused by repeated searches. The flow of two-class caching is described in the following sections and shown in Fig. 10.

    2.8 In-Memory Cache Mechanism

For the in-memory cache design, Memcached [30], a distributed high-speed memory system, stores data temporarily. Memcached keeps data in memory in key-value form: a colossal hash table is constructed in memory so that users can quickly look up the corresponding data value through its key. Therefore, it is often used as a cache to speed up the access performance of a system. However, the Memcached system has specific restrictions on its use: the default maximum key length is only 250 characters, and a stored value cannot exceed 1 MB. Therefore, search results must first be divided before being stored in the cache.

Figure 9: Job scheduling with MSHEFT algorithm and SQL interface selection

Figure 10: The flow of two-class caching in data retrieval

After the user enters a SQL command, the system first uses the MD5 algorithm [31] to convert the command into a 16-byte hash value and uses that hash value as the unique identification code (i.e., key) for this search job, as shown in Fig. 11. If the same SQL requirement arrives later, it still yields the same hash value through the MD5 algorithm, so the system can obtain the data through this unique identification code, realizing the cache mechanism.
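A sketch of the keying and chunking described above, assuming the pymemcache client library; the 250-character key and 1 MB value limits follow the Memcached defaults mentioned earlier, while the chunk size, key scheme, and server address are illustrative assumptions.

```python
import hashlib
from pymemcache.client.base import Client  # assumed Memcached client library

CHUNK = 1_000_000                      # stay under Memcached's 1 MB value limit
client = Client(("localhost", 11211))  # hypothetical Memcached address

def cache_key(sql: str) -> str:
    """MD5 turns the SQL command into a 16-byte (32 hex char) unique identifier."""
    return hashlib.md5(sql.encode("utf-8")).hexdigest()

def store_result(sql: str, result: bytes) -> None:
    """Split an oversized result into chunks keyed by '<md5>:<i>'."""
    key = cache_key(sql)
    chunks = [result[i:i + CHUNK] for i in range(0, len(result), CHUNK)]
    client.set(key, str(len(chunks)))          # remember how many chunks exist
    for i, chunk in enumerate(chunks):
        client.set(f"{key}:{i}", chunk)

def load_result(sql: str):
    """Return the cached result for an identical SQL command, or None on a miss."""
    key = cache_key(sql)
    n = client.get(key)
    if n is None:
        return None
    return b"".join(client.get(f"{key}:{i}") for i in range(int(n)))
```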

    2.9 In-Disk Cache Mechanism

As for the in-disk cache, this paper uses the HDFS distributed file system as its storage platform. Since HDFS does not have storage capacity limitations like the Memcached system, it is only necessary to save a copy of the search result and upload it to HDFS. The same unique identification code as in the in-memory cache names the file for identification, as shown in Fig. 12. Since files in HDFS do not expire automatically, users would otherwise have to delete them manually on a regular basis. Therefore, this paper also provides an active clearance function: the user only needs to enter the "purge x" command in the CLI, and the system automatically deletes the cached data that the user has not accessed within x days.
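A sketch of how such a "purge x" handler could work, assuming cached files live under a hypothetical /cache directory in HDFS and using the standard hdfs dfs CLI; parsing by modification time is an assumption, since HDFS access-time tracking is often disabled.

```python
import subprocess, time

def purge(days: int, cache_dir: str = "/cache") -> None:
    """Delete cached result files in HDFS older than `days` days (hypothetical layout)."""
    cutoff = time.time() - days * 86400
    out = subprocess.run(["hdfs", "dfs", "-ls", cache_dir],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        parts = line.split()
        if len(parts) < 8:                 # skip the "Found N items" header line
            continue
        # hdfs dfs -ls columns: perms, repl, owner, group, size, date, time, path
        mtime = time.mktime(time.strptime(f"{parts[5]} {parts[6]}", "%Y-%m-%d %H:%M"))
        if mtime < cutoff:
            subprocess.run(["hdfs", "dfs", "-rm", parts[7]], check=True)
```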

Figure 11: In-memory cache flow

Figure 12: Cached files saving in HDFS

    3 Method

    3.1 Improved Big Data Analytics

As Fig. 13 shows, improving the performance of big data analytics in a business intelligence system concerns three aspects: data retrieval, job scheduling, and the different SQL command interfaces. The first aims to speed up data searching in big data. Therefore, this study must reduce the scope of data searching in the database and then implement data retrieval as fast as possible. This study introduces an approach that reduces the search scope using data clustering with a stacked sparse autoencoder, followed by fast query response using distributed data indexing with Elasticsearch, and then stores the tables in HDFS. The speed of data retrieval also depends on the SQL command interface, so we explore how to find the appropriate interface for the current state of a node. This study introduces three SQL command interfaces: Hive, Impala, and SparkSQL [28].

Figure 13: The optimization of big data analytics flow

On the other hand, a deep neural network predicts the execution time of each job to optimize job scheduling in big data analytics. By giving the job with the shortest predicted execution time the higher scheduling priority, the system reduces the average waiting time in a queue. Finally, users only need to enter SQL commands through the command-line interface (CLI), and the system automatically detects the remaining memory size of a node and selects the appropriate SQL interface to carry out the jobs, which also speeds up job execution; this is called automatic interface selection. In particular, a SQL query can save its search result in an in-memory or in-disk cache. Therefore, when retrieving the same data repeatedly within a short time, the system can serve it directly from the cache without launching the SQL command interfaces, which speeds up data retrieval dramatically.

    3.2 Data Clustering by Mapping

The stacked sparse autoencoder (SSAE) model can be applied to data set clustering. The dimension of the SSAE input and output layers equals the number of initial data columns, denoted m. The SSAE constructs l layers of neurons as an encoder and the mirror layers as a decoder with unsupervised learning. The output of each layer is connected to the input of the following layer to reduce the data dimension, finally yielding an n-dimensional vector in the middle layer; the decoder of the SSAE then runs in reverse order to restore the initial input data columns. An example of an SSAE is shown in Fig. 14, where the loss function is MSE, the activation function is Sigmoid, and the optimizer is Adam.

Figure 14: Data clustering using stacked sparse autoencoder

The visualized results of running 50 training epochs for AE and SSAE are shown in Fig. 15. Since AE has no sparsity restriction, some of the different classes of data become entangled around the same area, resulting in uneven data distribution and a worse clustering effect. In Fig. 15a, data sets mapped to the same area have entangled, with no apparent separation between the classes; for example, the gray class is tangled with the purple class, and the red class with the brown class. The SSAE with sparse constraints effectively solves this problem, making the sampled data of the same class closer together and the separation between different classes more pronounced, as shown in Fig. 15b.

Figure 15: Data visualization results between AE and SSAE training

Here, we separate the data of a specific data set into eight classes. The output of the last hidden layer of the encoder of a trained SSAE model maps each row of data to a point in three-dimensional space. According to the X, Y, and Z axes of that space, the SSAE automatically divides the data into eight classes. The corresponding octant numbers from 1 to 8 are inserted into the last column of the original data table, as shown in Figs. 1 and 2. In this study, the user applies the SSAE to data clustering instead of the AE. After data clustering, the user can upload the tables to Elasticsearch for distributed indexing.
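A sketch of the octant assignment: encoder outputs are centered per axis and labeled 1 to 8 by the signs of their X, Y, and Z coordinates. The centering step and the exact numbering scheme are assumptions; the paper only states that the eight octants are numbered 1 to 8.

```python
import numpy as np

def octant_labels(points: np.ndarray) -> np.ndarray:
    """Map N x 3 encoder outputs to octant numbers 1..8."""
    centered = points - np.median(points, axis=0)   # assumed centering of the 3-D codes
    signs = (centered >= 0).astype(int)             # 1 where the coordinate is non-negative
    return 1 + signs[:, 0] * 4 + signs[:, 1] * 2 + signs[:, 2]

codes = np.random.rand(5, 3)      # stand-in for SSAE encoder outputs
labels = octant_labels(codes)     # e.g. array([3, 8, 1, 5, 8]), appended as the last column
```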

    3.3 Data Retrieval by Quick Indexing

The system generally spends longer I/O time to cope with large-scale data retrieval. Distributed indexing using Elasticsearch can speed up data searching and resolve the time-consuming bottleneck of single-machine processing. Multi-server processing in a cluster implements the indexing function so that data searches run as fast as possible. When each node is started, the system automatically joins it to the cluster and designates one of the nodes as the master node. Accordingly, Elasticsearch, as shown in Fig. 16, indexes the data and then stores it in the HDFS distributed file system.

Figure 16: Cluster by Elasticsearch

After clustering the input data in three-dimensional space, the user can manually upload the data set to Elasticsearch for distributed indexing, as shown in Fig. 17. When an input data set needs to be indexed, the indexing tool starts if the current node is the master node; otherwise, the request is forwarded to the master node. In Fig. 18, Elasticsearch first checks whether the data set was indexed before. If not, Elasticsearch starts building the index, writes the results into the Lucene index, and uses a hash algorithm to ensure that the indexed data is evenly distributed and stored in the designated primary shard and replica shards. Meanwhile, Elasticsearch creates a corresponding version number and stores it in the translog. If the data set was indexed earlier, Elasticsearch compares the existing version number with the new one to check for conflicts. If there is no conflict, indexing can start; if there is, Elasticsearch returns an error result, which is written to the translog. Finally, the indexed data stored in HDFS serves the jobs issued as SQL commands, as shown in Fig. 13.

Figure 17: Data clustering and indexing flow

Figure 18: Indexing flow of Elasticsearch
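For context, a minimal sketch of uploading clustered rows for indexing, assuming the official Elasticsearch Python client (v8 API); the cluster address, index name, and field names are hypothetical, and the paper's own upload path follows the flow in Figs. 17 and 18.

```python
from elasticsearch import Elasticsearch  # assumed official Python client (v8)

es = Elasticsearch("http://localhost:9200")   # hypothetical cluster address

def index_rows(rows, index_name="clustered_data"):
    """Index clustered rows; the stored 'quadrant' field narrows later searches."""
    for i, row in enumerate(rows):
        es.index(index=index_name, id=str(i), document=row)

# A search can then filter on the quadrant label before matching other fields:
# es.search(index="clustered_data",
#           query={"bool": {"filter": [{"term": {"quadrant": 3}}],
#                           "must": [{"match": {"title": "alice"}}]}})
```

Filtering on the quadrant label first is what reduces the search scope of the database, as described in Section 3.1.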

    3.4 Deep Neural Network Prediction for Job Scheduling Optimization

To apply the SJF job scheduling algorithm, users can train a deep neural network (DNN) to predict the approximate job execution time. The input layer of this DNN has a six-dimensional vector: data size, the number of data rows, the number of data columns, the time complexity of program execution, the SQL interface environment, and the remaining memory size. The output layer has a one-dimensional vector (label), the predicted time for performing data retrieval, as shown in Fig. 19. Users can collect real data from the web to train the DNN model. Three SQL interfaces, Hive, Impala, and SparkSQL, are used to test the DNN time prediction applied to the data retrieval function under different remaining memory sizes. The activation function of the DNN is ReLU, the loss function is MSE, and the optimizer is Adam [32]. The DNN model architecture and the loss curve during training are shown in Figs. 20 and 21, respectively.

Figure 19: Column "Label(s)" indicating the output of a deep neural network (DNN)
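A minimal sketch of such a predictor in PyTorch, matching the stated design (six inputs, one output, ReLU, MSE, Adam); the hidden-layer sizes are illustrative assumptions, since the exact architecture is given in Fig. 20.

```python
import torch
import torch.nn as nn

# Six inputs: data size, row count, column count, time complexity of execution,
# SQL interface (encoded), remaining memory size; one output: predicted seconds.
model = nn.Sequential(
    nn.Linear(6, 64), nn.ReLU(),     # hidden sizes are illustrative (see Fig. 20)
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """One gradient step on a batch of (features, measured execution time)."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```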

In addition, we combined deep neural network prediction with shortest-job-first scheduling to optimize job scheduling. When a job enters the queue, the system first considers its execution priority and then predicts its approximate execution time as a scheduling condition. Finally, the system considers the remaining memory size to select the appropriate SQL interface to carry out the input SQL command. In this way, the system achieves significant throughput. Fig. 22 describes the flow of the proposed method, and a sketch of the queue follows the figure captions below.

Figure 20: Deep neural network (DNN) architecture

Figure 21: Loss curve during DNN training phase

Figure 22: The flow of job scheduling optimization
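The sketch below illustrates this combination under stated assumptions: a heap ordered first by user priority and then by the DNN-predicted execution time, with FIFO order as the tie-breaker. The class and method names are hypothetical, not the paper's implementation.

```python
import heapq

class DnnSjfQueue:
    """Jobs ordered by (user priority, DNN-predicted execution time): a DNNSJF sketch."""

    def __init__(self, predict):
        self._heap, self._count, self._predict = [], 0, predict

    def submit(self, job, priority: int, features) -> None:
        eta = self._predict(features)        # DNN time prediction (seconds)
        self._count += 1                     # tie-breaker keeps FIFO order
        heapq.heappush(self._heap, (priority, eta, self._count, job))

    def next_job(self):
        priority, eta, _, job = heapq.heappop(self._heap)
        return job, eta
```

Among jobs of equal priority, the shortest predicted job runs first, which is how the average waiting time in the queue shrinks even when jobs share the same data size.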

3.5 Execution Commands and Their Flow

The programs designed in this study contain many application functions. Users issue commands through the command-line interface (CLI) to input SQL commands. All program commands and their functions are listed in Table 1, and Fig. 23 gives the execution flow.

Table 1: Execution commands

Figure 23: Job execution flow

    4 Experiment Results and Discussion

    4.1 Experimental Environment

    The experiment uses the dynamic and adjustable resource characteristics of Proxmox VE to set up experimental environments with different memory sizes for the nodes in a cluster.The test environment is listed in Table 2.

    Table 2: Test environment

    4.2 Data Sets

The experiments run all jobs on real data sets collected from the web to verify that the proposed approaches can effectively improve business intelligence performance in big data analytics. The user inputs SQL commands related to a certain real data set and measures the job execution time consumed in a specific SQL interface. The real data sets collected from the web include: (1) world-famous books [33], (2) production machine load [34], (3) semiconductor product yield [35], (4) temperature, rainfall, and electricity consumption related to people's livelihood [36,37], (5) forest flux station data [38], (6) traffic violations/accidents [39], (7) analysis of obesity factors [40], and (8) airport flight data [41]. The detailed information about the real data sets is as follows:

(1) World-famous books

This test first reads the full text of the following world-famous books: Alice's Adventures in Wonderland, The Art of War, Adventures of Huckleberry Finn, Sherlock Holmes, and The Adventures of Tom Sawyer. After that, the word count task counts the number of occurrences of each word and sorts the words from most to least frequent. An example of the plain text file of Alice's Adventures in Wonderland is shown in Fig. 24.

Figure 24: Plain text file of Alice's Adventures in Wonderland

(2) Production machine load

Fig. 25 shows a product production process report provided by a large packaging and testing factory in Taiwan. The content includes the product number, the responsible employee, the name of the production machine, etc., in .xls file format. The purpose is to find production machines used too frequently or too rarely in the production schedule and provide decision-making analysis for the person in charge of future scheduling. The test counts the number of times each production machine is used and calculates the overall sample standard deviation. Following the concept of the normal distribution, data outside the "mean ± 2 * standard deviation" range are treated as machines that may be overloaded or underloaded, as the sketch after the figure caption shows.

Figure 25: Records of production machine loading
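A minimal sketch of that outlier rule, assuming usage counts per machine are collected in a dict and using the sample standard deviation:

```python
import numpy as np

def flag_abnormal_machines(usage_counts: dict) -> dict:
    """Flag machines whose usage count falls outside mean ± 2 * sample std dev."""
    counts = np.asarray(list(usage_counts.values()), dtype=float)
    mean, std = counts.mean(), counts.std(ddof=1)   # ddof=1: sample standard deviation
    lo, hi = mean - 2 * std, mean + 2 * std
    return {name: n for name, n in usage_counts.items() if n < lo or n > hi}

# Machines outside the band are candidates for overloading (> hi) or underloading (< lo).
```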

(3) Semiconductor product yield

In Fig. 26, the product test data provided by a major packaging and testing company in Taiwan includes various semiconductor test items and PASS or FAIL results. The file is in standard .csv format (comma-separated). The purpose is to calculate the yield rate of the product to see whether it meets the company's yield standard (99.7%).

Figure 26: Records of semiconductor product

(4) Temperature, rainfall, and electricity consumption related to livelihood

The website of the Taiwan meteorological bureau provides rainfall and temperature data, as shown in Fig. 27, and the TAIPOWER website provides livelihood electricity data, as shown in Fig. 28. For both, the data collection period is from January 01, 2007 to April 30, 2020. The purpose is to find the correlation between rainfall, temperature, and electricity consumption in Taiwan. The correlation coefficient between the data is calculated and reported as positive, negative, or no correlation. A linear correlation exists between two variables when 0 < |r| < 1. The closer |r| is to 1, the stronger the linear relationship between the two variables; conversely, the closer |r| is to 0, the weaker the linear relationship. Generally, three levels are distinguished: |r| < 0.4 is a low-degree linear correlation; 0.4 ≤ |r| < 0.7 is a significant correlation; 0.7 ≤ |r| < 1 is a high-degree linear correlation. A sketch of this bucketing follows the figure captions below.

Figure 27: Records of temperature and rainfall

Figure 28: Records of livelihood electricity consumption
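A minimal sketch of the correlation bucketing, assuming two aligned NumPy series (e.g., daily temperature and electricity consumption):

```python
import numpy as np

def correlation_level(x, y):
    """Pearson r between two series, bucketed by the thresholds used in the text."""
    r = np.corrcoef(x, y)[0, 1]
    if abs(r) < 0.4:
        level = "low-degree linear correlation"
    elif abs(r) < 0.7:
        level = "significant correlation"
    else:
        level = "high-degree linear correlation"
    return r, level
```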

(5) Forest flux station data

In Fig. 29, the EU Open Data Portal provides forest flux station data that contains various flux information, including time and location, illuminance, soil information, and atmospheric flux. The file is in standard .csv format (comma-separated). The purpose is to calculate the correlation coefficients between the CO2 flux coefficient and the light intensity, temperature, and humidity, respectively, to examine the degree of correlation.

Figure 29: Records of forest flux station

(6) Traffic violations/accidents

Fig. 30 gives information about traffic violations/accidents recorded in the state of Maryland, USA. The data are in standard .csv format, i.e., each item separated by commas. The task calculates frequency statistics of monthly traffic violations and accident locations.

Figure 30: Records of traffic violation/accident

(7) Analysis of obesity factors

The U.S. government has published information about obesity factors from 2011 to 2019, as shown in Fig. 31. The data are in standard .csv format, i.e., each item separated by commas. The objective of this test is to analyze the relationship between age, weekly exercise status, vegetable and fruit intake, and BMI through data statistics, so that people can understand whether these factors affect body overweight.

(8) Airport flight data

Fig. 32 shows the airport flight information recorded at New York airports in the USA. The data are in standard .csv format, i.e., each item separated by commas. This test calculates the proportion of the airports' flights to Taiwan among the total flights in the year.

Figure 31: Records of obesity factor

Figure 32: Records of airport flight information

    4.3 Data Retrieval Experiment

This experiment tested each data set in different experimental environments, executed the jobs according to the issued SQL commands, and compared their performance using different approaches. Table 3 lists the different approaches applied in the experiments.

    Table 3: Test method


In Table 3, MSHEFT, DAE-SOLR+MSHEFT, and SSAE-ES+DNNSJF are different scheduling algorithms, so the order of job execution differs between them. In particular, when a job analyzes a large amount of data but the remaining memory size is insufficient, only the Hive interface can complete the mission, while the Impala or SparkSQL interface cannot work successfully. We note that the performance of the Impala interface is more prominent when the remaining memory size is moderate. When the remaining memory size is significantly large, the SparkSQL interface performs best due to its in-memory computing capability.

With automatic interface selection, the system selects the appropriate SQL interface and accepts the SQL command from a user to perform the corresponding job under the different scheduling algorithms MSHEFT, DAE-SOLR+MSHEFT, and SSAE-ES+DNNSJF. If the data set stored in HDFS has completed data retrieval optimization in advance, the optimized data clustering and indexing significantly reduce the average execution time of data searching. The experimental results show each job's execution time in Figs. 33–35 and the average job execution time and system throughput in Tables 4–6.

Figure 33: Job execution time of various methods in Environment I

Figure 34: Job execution time of various methods in Environment II

Figure 35: Job execution time of various methods in Environment III

    Table 4: Average job execution time of various methods in Environment I

    Table 5: Average job execution time of various methods in Environment II

    Table 6: Average job execution time of various methods in Environment III

    4.4 Job Scheduling Experiment

Since this study uses automatic SQL interface selection, the appropriate SQL interface can be selected automatically to perform data analysis according to the available memory size. Therefore, the experiments test the appropriate SQL interface selected to perform data analysis in different environments and compare the average job waiting time for execution among different scheduling algorithms.

To comply with the requirements of the experimental tests hereafter, all data sets use SSAE and Elasticsearch to execute data retrieval together with the various job scheduling algorithms listed in Table 7. We tested experimental Environments I, II, and III, as listed in Table 2. Since FIFO, MSHEFT, and DNNSJF are different job scheduling algorithms, the jobs execute in different orders.

According to the experimental results, different data analysis jobs take different times. Generally speaking, the job scheduling performance of MSHEFT is good. However, the MSHEFT algorithm does the same job scheduling as the FIFO algorithm when different jobs have the same data set size. The experimental results show that the proposed DNNSJF algorithm obtains the shortest average waiting time and average weighted turnaround time for each job execution, followed by the MSHEFT scheduling algorithm, which schedules jobs based on data size, while the FIFO algorithm performs worst. The experimental results show the waiting time for each job execution in Figs. 36–38, and the average job waiting time for execution and the average weighted turnaround time in Tables 8–10.

    Table 7: Job scheduling algorithm

Figure 36: Job waiting time of various methods in Environment I

Figure 37: Job waiting time of various methods in Environment II

Figure 38: Job waiting time of various methods in Environment III

    Table 8: Average job waiting time of various methods in Environment I

    Table 9: Average job waiting time of various methods in Environment II

    Table 10: Average job waiting time of various methods in Environment III

    4.5 Hypothesis Testing

The data tested in this study cannot be directly assumed to obey a specific distribution, so this study tests them using a nonparametric statistical method: the Wilcoxon signed-rank test [42], a nonparametric hypothesis test for a single sample. In Table 11, the first hypothesis test runs the Wilcoxon signed-rank test between the previous method DAE-SOLR+MSHEFT and the proposed approach SSAE-ES+DNNSJF, where the test sampled 30 valid data points from Experiments I, II, and III of data retrieval. Next, the second one runs the Wilcoxon signed-rank test between the previous method SSAE-ES+MSHEFT and the proposed approach SSAE-ES+DNNSJF, where the test sampled nine valid data points from Experiments I, II, and III of job scheduling.

    Table 11: Wilcoxon signed-rank test

Assuming a one-tailed test with α = 0.05, the null hypothesis and the alternative hypothesis are as follows:

$$H_0: M_P \le M_C, \qquad H_1: M_P > M_C \tag{4}$$

In Eq. (4), $M_P$ and $M_C$ are the median time consumption of the previous method and the proposed approach, respectively. With $T^+$ and $T^-$ denoting the sums of the ranks of the positive and negative differences, T is the smaller of the two sorted sums, as expressed in Eq. (5):

$$T = \min(T^+, T^-) \tag{5}$$

$E(T)$ in Eq. (6) is the expected value of the random variable T, and $VAR(T)$ in Eq. (7) is its variance, where n is the number of samples:

$$E(T) = \frac{n(n+1)}{4} \tag{6}$$

$$VAR(T) = \frac{n(n+1)(2n+1)}{24} \tag{7}$$

When the number of samples is large enough (n ≥ 20), Z stands for the test statistic in Eq. (8). If the number of samples is too small (n < 20), $Z^*$ represents the continuity correction of the test statistic Z in Eq. (9):

$$Z = \frac{|T - E(T)|}{\sqrt{VAR(T)}} \tag{8}$$

$$Z^* = \frac{|T - E(T)| - 0.5}{\sqrt{VAR(T)}} \tag{9}$$

As a result, looking up the Z table, the critical Z value for α = 0.05 is 1.65, which is less than both 4.78 (the Z value) and 2.61 (the Z* value), as shown in Table 11. The decision therefore rejects the null hypothesis H0, indicating that the sampled data obtain significant results in the Wilcoxon signed-rank test for the proposed approaches in this paper.
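For reproducibility, recent SciPy versions provide this test directly; the sketch below uses placeholder timings, not the paper's measurements, and assumes the SciPy wilcoxon function with its one-sided alternative.

```python
from scipy.stats import wilcoxon  # assumes a recent SciPy with the `alternative` option

# prev / prop: paired execution times (seconds) for the same jobs under the
# previous method and the proposed approach; the values here are placeholders.
prev = [412, 388, 351, 296, 270, 455, 330, 310, 289, 405]
prop = [240, 215, 200, 180, 160, 260, 190, 185, 170, 230]

# One-sided test: H1 says the proposed approach's times are lower (differences > 0).
stat, p = wilcoxon(prev, prop, alternative="greater")
print(f"T = {stat}, p = {p:.4f}")   # reject H0 at alpha = 0.05 if p < 0.05
```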

    4.6 Discussion

According to the data retrieval experiments, the SSAE-ES approach successfully partitions the vast data into eight data zones, adds sparse constraints to enhance searching capability, and implements an indexing function for the large-scale database to realize fast indexing of target data. Compared with the previous method DAE-SOLR, the SSAE-ES approach effectively improves the speed of data retrieval by 44~53% and increases system throughput by 44~53%. As a result, the proposed SSAE-ES approach outperforms the other alternatives in data retrieval, and the experimental results show its strengths. Besides, a statistical test using the Wilcoxon signed-rank test supports the claim of improved results obtained with the proposed approach. Given considerable memory size, SparkSQL performs data retrieval better than Impala or Hive in the experiments.

Next, according to the job scheduling experiments, the MSHEFT algorithm performs job scheduling based on the data size of each job, which can decrease the waiting time of job execution in a queue. However, when different jobs have the same data size, the MSHEFT algorithm proceeds with job scheduling just like FIFO scheduling, which is a bad situation. The proposed DNNSJF approach effectively overcomes this problem because it can infer different execution times for many different jobs of the same data size. Compared with the FIFO and MSHEFT job scheduling algorithms, the DNNSJF approach shortens the job's average waiting time by 3~5% and 1~3%, and the average weighted turnaround time by 0.8~9.9% and 16~19%, respectively. The results of the experiments show the strengths of the proposed DNNSJF approach. Besides, a statistical test using the Wilcoxon signed-rank test confirms the significance of the results obtained with the proposed approach.

The experiments found that the in-memory cache loses part of the retrieval data when a block of data is larger than 100,000 bytes. Memcached may cause data loss when it writes a large amount of retrieval data back to the in-memory cache in a single stroke. The experiments thus showed the weaknesses of the Memcached system used for the in-memory cache.

    5 Conclusion

In this paper, the theoretical implication is to improve the efficiency of big data analytics in two respects: speeding up data retrieval in big data analytics to increase system throughput concurrently, and optimizing job scheduling to shorten the waiting time for job execution in a queue. There are two significant findings in this study. First, this study explores advanced searching and indexing approaches that improve the speed of data retrieval by up to 53% and increase system throughput by 53% compared with the previous method. Next, this study exploits a deep learning model to predict job execution time and arrange prioritized job scheduling, shortening the average waiting time by up to 5% and the average weighted turnaround time by 19%. As a result, big data analytics and its application to business intelligence can achieve high performance and high efficiency based on the approaches proposed in this study. However, the system has shortcomings regarding the limits of in-memory cache operations. When a single in-memory cache block holds a large amount of data (more than 100,000 bytes), part of the block is lost and the retrieval results are not written entirely into the in-memory cache. Moreover, the system can only write a single stroke to the in-memory cache sequentially, which is time-consuming. We have to find ways to improve both problems in the future.

According to the themes discussed in this paper, some aspects are worth exploring more profoundly in the future. The first is to find a model with better precision in job execution time prediction; another deep learning model, e.g., a long short-term memory, is technically feasible instead of the DNN model. Secondly, a JDBC interface can be written to connect with the main program to extend the other SQL interfaces. Finally, we have to find a better solution to the limits of in-memory cache operations regarding block size and single sequential-stroke writing, to avoid losing part of the blocks while writing.

Author Contributions: B.R.C. and Y.C.L. conceived and designed the experiments; H.F.T. collected the experimental dataset and proofread the paper; B.R.C. wrote the paper.

Funding Statement: This paper is supported and granted by the Ministry of Science and Technology, Taiwan (MOST 110-2622-E-390-001 and MOST 109-2622-E-390-002-CC3).

Conflicts of Interest: The authors declare that there is no conflict of interest regarding the publication of the paper.
