
    A Secure and Cost-Effective Training Framework Atop Serverless Computing for Object Detection in Blasting Sites


Tianming Zhang, Zebin Chen, Haonan Guo, Bojun Ren, Quanmin Xie, Mengke Tian and Yong Wang

1 Department of Computer Science, Shanghai Jiaotong University, Shanghai, 200240, China

2 Aerospace System Engineering Shanghai, Shanghai, 200240, China

3 State Key Laboratory of Precision Blasting, Jianghan University, Wuhan, 430056, China

4 Beijing Microelectronics Technology Institute, Beijing, 100076, China

ABSTRACT Data analysis of blasting sites has long been a goal of researchers in the field. The rise of mobile blasting robots has drawn many researchers' interest to machine learning methods for object detection in blasting scenarios. Serverless computing can provide a variety of computing services to people without hardware resources or rich software development experience, which has raised interest in how to use it in the field of machine learning. In this paper, we design a distributed machine learning training application based on the AWS Lambda platform. Based on data parallelism, it effectively realizes data aggregation and training synchronization in Function as a Service (FaaS). It also encrypts the data set, effectively reducing the risk of data leakage. We rent a cloud server and Lambda functions, and then conduct experiments to evaluate our application. Our results indicate the effectiveness, rapidity, and economy of distributed training on FaaS.

KEYWORDS Serverless computing; object detection; blasting

    1 Introduction

Nowadays, in the era of big data, various industries are collecting and analyzing massive amounts of data to further improve production or service efficiency, for example, by using deep learning in the Industrial Internet of Things (IIoT) [1] or healthcare systems [2]. In the field of blasting, planning blasting operations by analyzing data from blasting sites can significantly improve blasting quality and ensure human safety. Today, mobile blasting robots [3–5] are usually used to place explosives at blasting sites instead of manual operation. Accurately identifying the work site within a complex blasting environment is the fundamental and indispensable first step for a mobile blasting robot. In a demolition blasting scene, there are thousands of blast holes in the buildings to be demolished. Without accurate identification, it is impossible to correctly judge the explosive quantity and placement position, leading to excessive consumption of blasting materials and a significant increase in potential safety hazards.

Among many dangerous tasks, mobile blasting robots need to quickly analyze data about the surrounding environment, such as detecting nearby objects, so that they can sense danger sources in advance and raise alarms before an explosion event. Deep learning methods are widely applied in these fields. Object detection [6] has always been an essential topic in image processing, and object detection research has developed rapidly with the advancement and broad application of deep learning. As machine learning is introduced into more and more fields, it has become a consensus to solve object detection through machine learning, alongside related problems such as image segmentation [7], image super-resolution [8], and 3D object detection [9]. However, training deep learning models is a typical compute-intensive task, which often requires the support of hardware devices with parallel capabilities. Therefore, many machine learning practitioners turn to cloud computing for the computing power to train deep models.

Recently, serverless computing [10–12] has emerged as a new paradigm of computation infrastructure, which can support large-scale, elastically expanded data analysis and has been offered by major cloud service providers (e.g., AWS Lambda, Azure Functions, and Google Cloud Functions). By implementing the training tasks of deep learning models on a serverless platform, more deep learning practitioners without hardware facilities can use cloud power to train complex models. Many developers favor serverless computing because it lifts the burden of provisioning and managing cloud computation resources (e.g., with auto-scaling). Therefore, training machine learning (ML) models on serverless infrastructure has also attracted increasingly intensive attention from academia [13–15].

However, there is still a long way to go in training deep learning models using serverless computing, mainly due to the following three challenges. The first challenge is that cloud computing providers are not necessarily trusted, yet the data for model training is private or valuable, so data owners are unwilling to expose it to the cloud platform. When data utilized for model training is uploaded to a third-party platform, the security of that data becomes a paramount concern for its owners. Common solutions are federated learning [16] or methods based on hardware encryption [17]. The second challenge is the gap between the memory, network bandwidth, and other resources required for model training and the resources provided by the cloud computing platform. In the ML training process, memory consumption increases with the model size and the activation size, and the latter is proportional to the batch size. Today's serverless platforms, e.g., AWS Lambda, offer up to 10 GB of memory for a serverless function, often falling short for training with large batch sizes. The third challenge is that a large amount of intermediate data is generated during ML model training, which requires high communication capability to transfer. Serverless functions have minimal communication capability, which does not meet the growing communication demand of training ML models. Moreover, serverless functions lack direct inter-function communication, forcing recent serverless-based training frameworks to resort to two-hop communication via intermediary cloud storage such as Amazon S3.

Our work attempts to train machine learning models for object detection in blasting sites on a serverless platform while addressing the above challenges. We tackle data security and model volume through two main approaches, namely data encryption and distributed training. During data set partitioning, the image data is encrypted before being uploaded to the public storage service, which resolves the data security concern. Our critical insight is that we can split the data into partitions and disperse them to multiple Lambda functions for training. Distributed training effectively reduces the platform's memory requirements and thus enables large-volume object detection model training. Moreover, data parallelism also effectively reduces the communication burden during training. On top of distributed training, we implement a parameter server architecture using serverless computing to address the communication challenge, yielding a cloud-native machine learning training solution.

To measure the overall performance of our application, we configure the relevant environments on AWS Lambda and AWS EC2. Then, we train different models on both platforms and record indicators such as the loss during training, the Intersection over Union (IoU) on the test set, and the estimated cost. We also verify the efficiency and security of our encryption algorithm by comparing it with other commonly used encryption algorithms. In short, we make the following main contributions:

• We design a distributed training application for object detection tasks in blasting sites, which can assist blasting operations with cloud computing resources.

• We implement the training application in a serverless computing manner on the AWS Lambda platform, realizing data transmission and synchronized distributed training on Function as a Service (FaaS) and providing enough memory (up to 10 GB per Lambda function) for classical Deep Neural Network (DNN) model training.

• We propose applying AES encryption to slice data when the data set is split and uploaded to the public bucket, which effectively reduces the risk of exposing user-sensitive data stored on a third-party platform.

• We extensively evaluate the overall performance of our application. The experimental results show that our application can train ML models more quickly, economically, and safely while approaching the performance of full data set training.

The paper is organized as follows: In Section 2, we introduce the overall architecture of our application, the implementation of distributed training, and the application of encryption algorithms. In Section 3, we present the evaluation of our method. In Section 4, we summarize and analyze related studies. In Section 5, we conclude the paper.

    2 Design

We implement a prototype FaaS-based machine learning application for object detection built on Amazon Lambda. In this section, we first introduce the overall design of our application and then introduce two important components: an effective distributed method to train models on the serverless platform and an encryption algorithm to guarantee the security of the training data set.

    2.1 Overview

Challenges. When developing the distributed machine learning training application for blasting data models on a serverless platform, we mainly consider two challenges: (1) the distributed training method and relevant techniques, and (2) data security.

The current FaaS infrastructure does not allow direct communication between stateless functions. It only allows functions to read/write intermediate state information generated during iterative training through specific storage channels. Therefore, we must design our distributed training method, handle the synchronization between functions, and realize intermediate data communication under this constraint.

The original blasting data is usually confidential information belonging to enterprises or units with a military background. When training on a third-party serverless platform, encryption of the original data must therefore also be considered.

Framework. Fig. 1 shows the framework of the application. Before executing the application, users need to enable the related services on AWS in advance and deploy the application on the AWS Lambda platform accordingly. The user first submits the specified configuration (data set location, public bucket location, training model, model parameters, etc.) to the trigger function. The trigger function then splits the data according to the number of workers and starts the worker functions to conduct distributed training and aggregation of the network. Each running instance is a function running on AWS Lambda. The training data is partitioned and stored in S3, a distributed storage service in AWS. Each worker keeps a copy of its partition of the training data set in its local temporary storage. After training its network weights locally, a worker uploads them to the public bucket. Designated workers pool and aggregate the network weights according to the data set size.

Figure 1: The framework of the application. The training process corresponds to the job execution part. The Train Engine is the functional module for distributed training that we wrote for the workers in the application
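As an illustration of the fan-out described above, the trigger function can invoke workers asynchronously through the AWS SDK for Python (boto3). The following is a minimal sketch under assumed names: split_and_upload, the "worker" function name, and the payload fields are hypothetical placeholders, not the paper's actual identifiers.

    import json
    import boto3

    lambda_client = boto3.client("lambda")

    def trigger(config):
        """Split the data set and invoke one worker function per partition."""
        # split_and_upload is a hypothetical helper that partitions the data
        # set, uploads each partition to S3, and returns the object keys.
        keys = split_and_upload(config["dataset"], config["num_workers"],
                                config["bucket"])
        for rank, key in enumerate(keys):
            payload = {"rank": rank, "world_size": config["num_workers"],
                       "partition_key": key, "model": config["model"],
                       "lr": config["lr"], "epochs": config["epochs"]}
            # InvocationType 'Event' is an asynchronous invoke, so all
            # workers start in parallel rather than sequentially.
            lambda_client.invoke(FunctionName="worker",
                                 InvocationType="Event",
                                 Payload=json.dumps(payload))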

Job Execution. A training job in our application consists of the steps below (a simplified worker-side sketch follows the list):

1. Partition data and trigger workers. The trigger function reads the configuration parameters entered by the user, splits the original data set according to the number of workers, uploads the partitions to the specified public bucket, and calls the worker functions deployed on AWS Lambda via Function Invoke.

2. Load data. Each worker loads its corresponding partition of training data from S3.

3. Calculate weights. Each worker uses PyTorch to create the specified ML model and uses its allocated split of the training data and its local model parameters to calculate the model parameters for this iteration.

4. Upload weights. Each worker uploads its model weights to the designated S3 bucket.

5. Aggregate weights. Based on the idea of average pooling, the model weights of all workers, treated as intermediate state, are aggregated by taking the average of each worker's weights to generate the global model weights. Note that aggregation is conducted synchronously because each worker has similar performance and the data set is partitioned equally.

6. Update model. Each worker reads the merged model weights from S3 and updates its local model with this state. If the required number of training iterations or the loss criterion is not yet met, return to step 3 and run the next iteration.
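Steps 2 through 6 map onto a per-worker loop. The sketch below is a simplified illustration, not the paper's Train Engine: build_model, load_partition, upload_weights, and wait_and_aggregate are hypothetical helpers (the last two are sketched in Section 2.2), and the loss handling assumes torchvision-style detection models that return a dict of losses in training mode.

    import torch

    def worker(event):
        """One Lambda worker's view of steps 2-6 (simplified sketch)."""
        model = build_model(event["model"])                      # hypothetical factory
        data = load_partition(event["partition_key"])            # step 2: S3 download
        opt = torch.optim.SGD(model.parameters(), lr=event["lr"])
        for t in range(event["epochs"]):
            for images, targets in data:                         # step 3: local SGD
                loss = sum(model(images, targets).values())      # detection losses
                opt.zero_grad()
                loss.backward()
                opt.step()
            upload_weights(event["rank"], t, model.state_dict()) # step 4
            merged = wait_and_aggregate(t, event["world_size"])  # step 5: barrier
            model.load_state_dict(merged)                        # step 6: refresh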

    2.2 Distributed Training

Deploying Distributed Deep Neural Networks (DDNN) on stateless serverless platforms for efficient and easy training is a popular research direction. The general process of deep neural network training generates a large amount of intermediate data. An efficient distributed training method must therefore operate within constrained function running times, store data only temporarily during function execution, and work without direct communication between functions on a serverless platform. Realizing distributed training of a deep learning model on a serverless platform requires solving how to implement the optimization algorithm and how to complete the aggregation of data.

Data Parallelism. For a given data set D and loss function J, each iteration of the training process can be represented as:

θ_{t+1} = θ_t − η · ∇J(θ_t; D)   (1)

In Eq. (1), t represents the t-th training iteration, θ is the model weights, J(·) is the loss function, and η is the learning rate. Distributed training means that computing and storage requirements are distributed across multiple training devices, and data parallelism is a parallel strategy to achieve this. Data parallelism follows the principle of Single Program Multiple Data; that is, the training task is divided among multiple processes (devices). The data set D is partitioned equally into n parts, and the i-th worker holds the partial data set D_i (i ∈ [1, n]), where n is the number of workers. Each process maintains the same model parameters and the same computing task but processes different batches of data. In this way, the data and computation under the same global batch are split across different serverless instances, reducing the computation and storage pressure on a single instance. The t-th iteration of the i-th worker can be represented as:

θ^i_{t+1} = θ^i_t − η · ∇J(θ^i_t; D_i)   (2)

In the scenario of object detection, the loss function is set to IoU, which is computed as follows:

IoU = |P ∩ G| / |P ∪ G|   (3)

In Eq. (3), P is the predicted area and G is the ground-truth area. There are many ways to realize data parallelism. The data parallelism of this framework is based on Distributed Synchronous SGD, which is the implementation method of data parallelism in current mainstream deep learning training frameworks.
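For reference, Eq. (3) for axis-aligned boxes given as (x1, y1, x2, y2) can be computed as follows; this is a generic sketch, not the paper's exact implementation:

    def iou(p, g):
        """Intersection over Union of predicted box p and ground-truth box g."""
        ix1, iy1 = max(p[0], g[0]), max(p[1], g[1])   # intersection corners
        ix2, iy2 = min(p[2], g[2]), min(p[3], g[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_p = (p[2] - p[0]) * (p[3] - p[1])
        area_g = (g[2] - g[0]) * (g[3] - g[1])
        return inter / (area_p + area_g - inter)      # |P ∩ G| / |P ∪ G|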

Distributed SGD. Owing to the success of deep neural networks, stochastic gradient descent (SGD) may be the most popular optimization algorithm in the world today. When implementing SGD in a distributed way, we consider the following variant: gradient average (GA). We divide the training data evenly and let one worker take charge of one partition. Each worker runs mini-batch SGD independently and in parallel while sharing and updating the global ML model at synchronization barriers set by users (for example, after one or several iterations). The way the global model is updated is what distinguishes our variant from other distributed SGD schemes. GA updates the global model in each iteration by collecting and aggregating the updated gradients from workers:

θ_{t+1} = θ_t − η · (1/n) Σ_{i=1}^{n} ∇J(θ^i_t; D_i)   (4)

For most multi-layer ML models, this method of updating the global model is applicable.
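Concretely, the aggregation in Eq. (4) reduces to element-wise averaging of tensors gathered from the n workers. A minimal PyTorch sketch follows; representing each worker's contribution as a dict of tensors (a state dict) is our assumption for illustration:

    import torch

    def gradient_average(states):
        """Average n workers' tensors key by key, as in Eq. (4)."""
        merged = {}
        for key in states[0]:
            # Stack the n copies of this tensor and take the mean over workers.
            merged[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
        return merged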

A FaaS-Based Data Aggregation. We design a data aggregation communication scheme that uses the persistent storage service (S3) provided by the AWS platform as the storage location for intermediate data. The entire communication process contains the following steps (a code sketch of the upload-then-poll pattern follows the list):

1. Each worker acts as an aggregator, so the model weights of the iteration are divided into as many shards as there are workers.

2. Each worker uploads the weight shards it is responsible for to S3 as temporary files.

3. Each worker downloads from S3 the shards uploaded by the other workers that fall under its charge, aggregates them, and writes the merged result back to S3.

4. All workers read the merged model weight data from S3.

5. All workers refresh their local models with the information read from the merged file and then enter the next iteration, or exit training once the set requirements are reached.
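Because Lambda functions cannot talk to each other directly, every exchange above is a put/get pair against S3. Below is a hedged boto3 sketch of the upload-then-poll pattern, implementing the upload_weights and wait_and_aggregate helpers assumed in the earlier worker sketch. The bucket name and key layout are assumptions, and for brevity each worker averages the full weight files rather than per-worker shards:

    import io
    import time
    import boto3
    import torch

    s3 = boto3.client("s3")
    BUCKET = "public-training-bucket"   # hypothetical bucket name

    def upload_weights(rank, step, state):
        """Serialize a state dict and put it under this step's prefix."""
        buf = io.BytesIO()
        torch.save(state, buf)
        s3.put_object(Bucket=BUCKET, Key=f"weights/{step}/{rank}.pt",
                      Body=buf.getvalue())

    def wait_and_aggregate(step, world_size, poll_s=1.0):
        """Synchronization barrier: poll until all workers' files exist."""
        while True:
            resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"weights/{step}/")
            if resp.get("KeyCount", 0) >= world_size:
                break
            time.sleep(poll_s)
        states = []
        for obj in resp["Contents"]:
            body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
            states.append(torch.load(io.BytesIO(body)))
        return gradient_average(states)   # element-wise average from Section 2.2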

    2.3 Security Guarantee Algorithm

The AES encryption algorithm [18] is the most popular computer encryption algorithm. Because it uses the same key for encryption and decryption, it is a single-key (symmetric) encryption algorithm. AES offers high security and can resist square attacks, side-channel attacks, penetration attacks, meet-in-the-middle attacks, statistical analysis attacks, power analysis attacks, etc.

AES Algorithm Encryption and Decryption. The encryption and decryption process of the AES algorithm is shown in Fig. 2. The AES algorithm operates on bytes, and each 16-byte data block is represented as a 4×4 matrix according to the specified byte arrangement. The whole encryption process first performs AddRoundKey on the input 4×4 plaintext matrix P to get the state matrix S, then performs ten rounds of round functions, and finally outputs the 4×4 ciphertext matrix C. SubBytes, ShiftRows, MixColumns, and AddRoundKey are performed in turn in each of the first nine rounds; the tenth round omits MixColumns. Decryption reverses the process and the order of the round keys.

Figure 2: Details of the encryption and decryption process of the AES algorithm. The row vector represents 16-byte data. The three 4×4 matrices represent the plaintext matrix, state matrix, and ciphertext matrix in turn. The flow chart at the bottom is the encryption process of the plaintext

Data Encryption. The application partitions the original data set and uploads the partitions to the public bucket, so the split data needs to be encrypted. The application partitions the data into blocks of 128 bits and then encrypts them in Cipher Block Chaining (CBC) mode; every 128 bits of data is represented as a stream in Line 3 of Algorithm 1. Unlike traditional CBC mode, which chains each block to the previous ciphertext block, we randomly generate an Initialization Vector (IV) for each data block to minimize the retention of original features. Because the IV may be public, we splice the generated IV onto the ciphertext block and save them together. After AES encryption, the cipher data is encoded into ASCII with Base64 for data transmission. Algorithm 1 shows how to encrypt an original image completely.

Data Decryption. Since we use the AES algorithm in CBC mode, the decryption process in the application is essentially the reverse of encryption. After a worker downloads its data block from the S3 bucket, the data is first Base64-decoded. Then, it decrypts the data every 144 bytes according to the key provided by the user, where the first 128 bits of each unit are the original data and the last 16 bits are the IV. Finally, the decrypted data is spliced back together.
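The following is a minimal sketch of the per-block scheme described above, assuming the pycryptodome package (the paper does not name its AES implementation). Each 16-byte block gets a fresh random IV, the IV is spliced onto the ciphertext block, and the result is Base64-encoded; accordingly, decryption walks 32-byte units (16-byte ciphertext plus 16-byte IV), which is our reading of the scheme rather than the paper's 144-byte figure. The zero-padding of the tail block is also an assumption:

    import base64
    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes

    BLOCK = 16  # AES block size: 128 bits

    def encrypt_image(data: bytes, key: bytes) -> bytes:
        """Encrypt raw image bytes block by block (hypothetical helper)."""
        if len(data) % BLOCK:                        # pad tail to a full block
            data += b"\x00" * (BLOCK - len(data) % BLOCK)
        out = bytearray()
        for i in range(0, len(data), BLOCK):
            iv = get_random_bytes(BLOCK)             # fresh random IV per block
            cipher = AES.new(key, AES.MODE_CBC, iv)
            out += cipher.encrypt(data[i:i + BLOCK]) + iv   # splice IV after block
        return base64.b64encode(bytes(out))          # ASCII-safe for transmission

    def decrypt_image(blob: bytes, key: bytes) -> bytes:
        """Reverse of encrypt_image: Base64-decode, then decrypt each unit."""
        raw = base64.b64decode(blob)
        out = bytearray()
        for i in range(0, len(raw), 2 * BLOCK):      # ciphertext block + its IV
            block, iv = raw[i:i + BLOCK], raw[i + BLOCK:i + 2 * BLOCK]
            out += AES.new(key, AES.MODE_CBC, iv).decrypt(block)
        return bytes(out)

Note that splicing a 16-byte IV onto every 16-byte block doubles the ciphertext size relative to the plaintext, which is consistent with the efficiency discussion in Section 3.3.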

    3 Evaluation

In this section, we design two scenarios of experiments to verify the overall performance and data security of our application. The first trains different models with different training configurations. The second evaluates the efficiency and security of different encryption algorithms.

    3.1 Experiment Setup

Testbed. Our assessment uses the popular Function as a Service (FaaS) platform AWS Lambda and the AWS EC2 ECS platform. AWS EC2 ECS serves as a local training baseline that takes the full data set, while AWS Lambda is used to test distributed training performance. In our evaluation, AWS Lambda provides a maximum of 3008 MB of memory for each serverless function. Its corresponding cloud storage service, S3, grants unlimited bandwidth for concurrent access. According to Amazon's official guidance, every 1 GB of running memory allocated to a function on Lambda is equivalent to allocating 0.6 vCPU. AWS EC2 ECS provides many different configurations to meet various needs; the instance type purchased for our evaluation is the compute-enhanced t3.xlarge, configured with 4 vCPUs and 16 GiB of memory.

Dataset & Models. The data set used in our evaluation is PennFudanPed [19], a classic data set in the field of object detection. PennFudanPed includes original images, masks, and annotations. The position and information of each person in an image are indicated in the annotation file.

We use the following ML models in our evaluation. Fast R-CNN is built on R-CNN: it eliminates the SVM classifier and bbox linear regression, placing them in an integrated network, and uses the ROI-pooling layer to convert region proposals of different sizes into the same size so that classification and regression are completed in one pass. Mask R-CNN combines Fast R-CNN with the semantic segmentation algorithm FCN; the former completes the object detection task, and the latter accurately completes the semantic segmentation task. We also use R-CNN with a MobileNet v2 backbone; MobileNet v2 uses Inverted Residuals and Linear Bottlenecks and performs well in small-sample object detection. The batch size of the overall performance experiment is set to one. In addition, we conduct another experiment on R-CNN with MobileNet under different batch sizes to examine performance. We set the optimal learning rate for each ML model to 0.005 and set the stop condition of training as a fixed number of iterations so that network performance can be compared under the same number of iterations. The number of epochs is set to 3, which is enough for comparing different configurations.

    3.2 Overall Performance

We compare overall performance by running the distributed ML training application on Lambda and training the ML models on ECS. Since the trained ML models include Fast R-CNN, which consumes more memory, the running memory on Lambda is uniformly set to the maximum of 3008 MB to prevent memory shortage. The number of workers is set to 5, 7, and 9, which is equivalent to the computing power of 3 vCPUs, 4.2 vCPUs, and 5.4 vCPUs, respectively.

Loss. In Fig. 3a, we plot the training loss of each epoch for the three ML models. The horizontal axis is the epoch, and the vertical axis is the average loss. On ECS, due to the large amount of training data, the loss of each ML model stabilizes after one epoch. On Lambda, the data set is partitioned, so the data volume per worker is relatively small; the loss is relatively large early in training but decreases quickly. The figure shows that the more workers there are, the greater the loss at the beginning of training; however, as iterations progress, it soon decreases. Partitioning the data set does not harm the training loss of the model: after multiple iterations, it converges to the level of full data set training.

Figure 3: Comparison of the overall performance of different configurations. The loss in Fig. 3a refers to the average loss of each batch, measured as the gap between the model's predicted value and the actual value. The Intersection over Union (IoU) in Fig. 3b is a concept used in object detection: the overlap rate of the generated candidate box and the original marker box, that is, the ratio of their intersection to their union, a common standard for measuring the confidence of detection results. The evaluation index used in the experiment is the Average Precision at IoU from 0.50 to 0.95

Accuracy. In Fig. 3b, we plot the global IoU of the three models at confidence levels of 50% to 95%. Regardless of the model, the best IoU is obtained by training on ECS. This is not counterintuitive, because ECS has the complete training data set, whereas each Lambda has only a partial training set, and the weights of the overall model are an integration of all local models. In addition, the figure shows that the more workers, the lower the overall accuracy. This is because, as the per-worker data set shrinks, each worker trains the model with less data, which degrades overall accuracy. However, the results on Lambda are not much different from those on ECS, which is acceptable.

Time. As shown in Fig. 3c, the distributed ML training application performs much better than ECS in both training time and total time. Theoretically, the time saved is linearly related to the number of workers. Considering aggregation time, synchronization time, and other necessary start-up times, distributed training does not reduce running time strictly linearly. However, it is easy to see that distributed training still significantly reduces training time and total run time. As the number of workers increases, more overall computing resources can be utilized to train the model, leading to shorter training time and overall time.

Cost. According to the official pricing manual, our estimated cost is shown in Table 1. The cost on Lambda mainly comprises runtime pricing and temporary storage pricing. When the running memory is 3072 MB, the runtime cost is 5.0×10^-8 USD per 1 ms, and temporary storage costs 3.7×10^-8 USD per GB per second. At present, Amazon does not charge for the storage space and communication of the S3 bucket, so it does not need to be included in the bill. The t3.xlarge EC2 instance we rented is priced at 0.2176 USD per hour. The traffic cost of EC2 instances is 0.01 USD per GB for the first 10 TB, and general-purpose SSD storage costs 0.08 USD per GB per month. The size of the data set is 50 MB, and the storage volume we set is 10 GB. Since on Lambda we can focus on implementing a specific function without paying for other hardware facilities, our distributed application achieves better cost performance.

    Table 1: Cost of different scenarios
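To make the comparison concrete, the quoted prices translate into a simple cost model. The sketch below is illustrative only; the worker count, runtime, and storage figures in the example call are made-up inputs, not the measurements behind Table 1:

    # Back-of-the-envelope cost comparison using the prices quoted above
    # (assumed current as of the paper; actual AWS pricing varies by region).
    LAMBDA_COMPUTE_PER_MS = 5.0e-8      # USD per ms at 3072 MB of memory
    LAMBDA_STORAGE_PER_GB_S = 3.7e-8    # USD per GB-second of temporary storage
    EC2_T3_XLARGE_PER_HOUR = 0.2176    # USD per hour

    def lambda_cost(workers: int, runtime_s: float, tmp_gb: float) -> float:
        """Total cost of one training job across all Lambda workers."""
        compute = workers * runtime_s * 1000 * LAMBDA_COMPUTE_PER_MS
        storage = workers * runtime_s * tmp_gb * LAMBDA_STORAGE_PER_GB_S
        return compute + storage

    def ec2_cost(runtime_s: float) -> float:
        """Cost of running the t3.xlarge baseline for the given duration."""
        return runtime_s / 3600 * EC2_T3_XLARGE_PER_HOUR

    # e.g., 9 workers running 10 minutes each with 0.05 GB of partitioned data,
    # versus one hour of the EC2 baseline:
    print(f"Lambda: ${lambda_cost(9, 600, 0.05):.4f}, EC2: ${ec2_cost(3600):.4f}")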

Batch Size. As shown in Fig. 4, as the batch size increases, the training process converges more slowly, which is consistent with theory. Besides, Fig. 4 shows that training with a larger data set achieves a better loss, which corresponds with the analysis in the overall performance experiment. Note that in the experiment with batch size set to 4, the memory demand exceeds the memory constraint of AWS Lambda, so the result is set to 0 in the figure. This is intuitive: as the number of workers decreases, each worker needs more memory to store its growing partial data set.

Figure 4: Loss of training R-CNN with MobileNet with different numbers of workers under different batch size configurations

    3.3 Data Security

Efficiency. To verify the advantages of our AES encryption in terms of encryption and decryption efficiency, we test the speed of three algorithms on the same data: the commonly used RSA-1024 algorithm, a widely used encryption algorithm based on the hardness of factoring products of large primes; the DES-64 algorithm, a classical symmetric encryption method; and the AES-128 algorithm. The results are shown in Table 2. AES and DES are both symmetric encryption algorithms. Our AES scheme produces ciphertext twice the size of the plaintext, which leads to a commensurate increase in decryption time relative to encryption. In contrast, DES keeps ciphertext and plaintext the same size, resulting in equivalent encryption and decryption speeds. Note that RSA is an asymmetric encryption algorithm. RSA decryption is relatively slow, as it involves exponentiation over large ciphertexts, which requires more computational resources and time; encryption is faster because it requires only one modular exponentiation. The speed of RSA and DES for encrypting data is clearly inferior to that of AES. Besides, as the file length increases, RSA's decryption time grows rapidly, making it even slower. AES, by contrast, demonstrates notably fast encryption and decryption compared with DES and RSA, irrespective of file size.

    Table 2: Comparison of algorithm encryption/decryption speed

Security. To measure and compare the security of the encryption algorithms, we use brute-force attacks to test the resistance of the RSA and AES algorithms. As Table 3 shows, the AES algorithm performs slightly worse than the RSA algorithm, which is known for its security. However, considering its encryption and decryption efficiency, these security losses are acceptable. The DES algorithm performs worst, making it unsuitable for the data encryption process of our method.

    4 Related Work

Deep Learning for Blasting Applications. The extensive application of deep learning has greatly promoted productivity in all walks of life, including some dangerous blasting industries. Literature [20] proposed utilizing deep learning models to predict the remaining time to close tap holes for blast furnaces; they adopt skip-dense layers to outperform LSTM-based baselines in accuracy and performance. A Mask R-CNN deep learning model [21] trained on images captured from real blasting sites in the Nui Phao open-pit mine was developed to evaluate blasting results, expanding the possibility of automated measurement of blast fragmentation. Literature [22] combined a hybrid deep learning-based computer vision method with a VMD algorithm for security detection of blast furnace bearings, achieving remarkable calculation speed and accuracy in bearing fault diagnosis. Literature [23] constructed a multi-hidden-layer neural network model and an LSTM neural network model based on the PyTorch and Keras frameworks to predict the failure mode of reinforced concrete (RC) columns under blast loading. Literature [24] adopted a convolutional neural network (CNN) to train models that distinguish tectonic earthquakes from quarry blasts; they also apply different strategies for different data sizes and achieve high accuracy in seismic event discrimination from raw waveforms. Literature [25] decreased blast-induced vibration in tunnel excavation by deciding the initial setting of the MSP (multi-setting smart investigation of the ground and pre-large hole boring) machine with a deep learning-based prediction model; to avoid overfitting while training the model, they applied several techniques such as dropout, early stopping, and pretraining. Literature [26] applied five artificial intelligence algorithms with deep learning models to predict the flyrock phenomenon that frequently appears during explosions in mining or construction projects; they also assess the performance of the five models, and the Harris Hawks optimization-based MLP (HHO-MLP) achieves the best score. Our work aims to address object detection tasks in blasting sites to improve blasting performance. To this end, we design and implement a distributed machine learning training application for object detection models on a commercial serverless computing platform.

Distributed Machine Learning. Due to the complexity of deep learning models and the explosive growth of training data, the requirements of improving training speed and reducing model convergence time can no longer be met by a single GPU on one machine. Therefore, distributed machine learning has been proposed to accelerate model convergence using parallel computing. Distributed machine learning can be divided into data parallelism, which partitions the data, and model parallelism, which partitions the model. The parameter server [27] is a typical representative of data parallelism: the training data is split into partitions and dispersed across workers, each worker keeps a complete copy of the model for local updates, and the central server is responsible for weight aggregation by collecting the local weights from workers. However, due to the memory limitation of a single GPU, some giant models cannot be trained on one GPU, so dividing the model across different machines or GPUs for training is necessary, which is why model parallelism [28,29] plays an important role. Federated learning has gradually attracted attention and become a new paradigm of distributed machine learning for its privacy guarantees [30]. Moreover, compared with implementations on uniformly managed device clusters, building stronger aggregate computing power from numerous low-configuration edge devices to realize parallel model training has been favored by some researchers [31,32]. OSTTD [33] trains its task-offloading model with the help of serverless services; it is a novel task-offloading method for multi-tier computing networks that addresses the challenges of offloading splittable tasks with topological dependence in complex and dynamic systems. Our work implements a distributed DNN model training application on a serverless computing platform, AWS Lambda, for the convenience of deploying and applying DNN applications at dangerous industrial sites.

Serverless Computing. In the past decades, thanks to the development of virtualization technology, cloud computing has gradually moved from heavyweight Infrastructure-as-a-Service (IaaS) to lightweight Function-as-a-Service (FaaS). Serverless computing has gradually attracted the attention of researchers and engineers; its main advantages include simplified development, operation, and maintenance processes, automatic scalability, and a "pay-as-you-use" billing mode. Serverless computing has been widely used for compute-intensive or traffic-bursty tasks, such as machine learning training [14,34] and video processing applications [35–38]. Sprocket [35] is a scalable serverless framework deployed on AWS Lambda for multistage video processing, including video encoding, decoding, and classification. Llama [36] extends Sprocket by enabling automatic parameter determination to optimize resource configuration and by supporting heterogeneous hardware (GPU) to accelerate video processing. For machine learning, serverless computing can satisfy the demand of machine learning applications for parallel computing with high performance. FaaShark [39] is an end-to-end network traffic analysis system based on a serverless computing platform that provides valuable insights into using serverless platforms for network traffic analysis. Siren [34] is a distributed machine learning training framework deployed on AWS Lambda that mainly utilizes reinforcement learning to guide resource configuration, including the worker number and memory quota. Besides model training, serverless computing can be applied to model serving because its automatic scalability handles workload variation. SSC [40] is a pre-warming and automatic resource allocation framework designed explicitly for serverless workflows, which can reduce the cold-start rate significantly. Batch [37] provides adaptive batch size specification when serving inference requests to improve system throughput and resource utilization. Gillis [38] adopts model partitioning to serve large models given the memory limitations of serverless platforms such as AWS Lambda and Aliyun Function; it provides two partition schemes to optimize execution time or cost, respectively. In addition, introducing cloud computing into IoT devices has become a new direction of current research. Our work applies AES encryption to data uploaded to a public bucket in a serverless computing setting, which significantly improves data security.

    5 Conclusion

We design a distributed ML model training application for object detection in the blasting field on FaaS, covering its communication mode, optimization algorithm implementation, data storage, and synchronization mode. We also encrypt the data used in training, effectively reducing the exposure risk of user data stored on third-party platforms. We implement our distributed ML training application on Amazon Lambda and conduct a series of experiments to evaluate its overall performance. Our results indicate that our application can complete ML model training faster and more economically, with performance close to full data set training. Our serverless DNN training framework is not only suitable for object detection in blasting sites but can also be extended to other DNN tasks in different dangerous circumstances, for example, image classification and image segmentation. Besides, as serverless computing vendors improve their services, the framework will be able to support larger models with more parameters, such as YOLOv3 and SSD.

Acknowledgement: The authors wish to express their appreciation to the reviewers for their helpful suggestions, which greatly improved the presentation of this paper.

Funding Statement: The authors received no specific funding for this study.

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: T. Zhang, Z. Chen, M. Tian; software: T. Zhang, B. Ren; data collection: T. Zhang, Q. Xie; analysis and interpretation of results: T. Zhang, H. Guo, B. Ren; draft manuscript preparation: T. Zhang, Z. Chen; manuscript review: H. Guo, Q. Xie, M. Tian, Y. Wang. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: Publicly available data sets were analyzed in this study. This data can be found here: https://www.cis.upenn.edu/~shi/ped_html/.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
