
A Deep Two-State Gated Recurrent Unit for Particulate Matter (PM2.5) Concentration Forecasting

Computers, Materials & Continua, 2022, Issue 5

Muhammad Zulqarnain, Rozaida Ghazali*, Habib Shah, Lokman Hakim Ismail, Abdullah Alsheddy and Maqsood Mahmud

1 Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Johor, Malaysia

2 College of Computer Science, King Khalid University, Abha, Saudi Arabia

3 College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia

4 Department of Management Information Systems, College of Business Administration, Imam Abdulrahman Bin Faisal University, 31441, Dammam, Saudi Arabia

Abstract: Air pollution is a significant problem in modern societies since it has a serious impact on human health and the environment. Particulate Matter (PM2.5) is a type of air pollution that consists of suspended particles with a diameter less than or equal to 2.5 μm. For risk assessment and epidemiological investigations, a better knowledge of the spatiotemporal variation of PM2.5 concentration in a continuous space-time domain is essential. Conventional spatiotemporal interpolation approaches commonly rely on strong assumptions, limiting interpolation algorithms to those with explicit and simple mathematical expressions and ignoring a plethora of hidden but crucial influencing factors. Many advanced deep learning approaches have been proposed to forecast PM2.5, and the recurrent neural network (RNN) is one of the popular deep learning architectures widely employed in PM2.5 concentration forecasting. In this research, we propose a Two-State Gated Recurrent Unit (TS-GRU) for monitoring and estimating PM2.5 concentrations. The proposed algorithm is capable of considering both spatial and temporal hidden influencing factors simultaneously. We tested our model using daily PM2.5 measurements taken in the contiguous southeastern area of the United States in 2009. In the experiments, three evaluation metrics were utilized to compare the overall performance of each algorithm: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE). The experimental results revealed that the proposed TS-GRU model outperformed the other deep learning approaches in terms of forecasting performance.

Keywords: Deep learning; PM2.5 forecasting; air pollution; two-state GRU

    1 Introduction

Particulate matter (PM) levels have recently become a global issue. Atmospheric aerosols are groupings of solid or liquid particles suspended in the air that arise from a variety of sources and come in different shapes and sizes. Furthermore, the majority of particulate matter is formed in the lowest layer of the atmosphere. Fine particles with aerodynamic diameters less than 10 and 2.5 μm are referred to as PM10 and PM2.5, respectively. Many epidemiological studies have demonstrated that PM is extremely harmful to people, especially at high concentrations [1]. PM2.5 is still a major public health concern [2], and it has been related to a number of health consequences, such as cancer, respiratory and cardiovascular illnesses, and mortality [3]. Environmental exposure analysis has advanced significantly due to progress in geospatial technologies, particularly Geographic Information Systems (GIS). An adequate knowledge of PM2.5 in a continuous space-time domain is necessary for a useful evaluation of the quantifiable link between adverse health effects and PM2.5 concentrations. Because air pollution data is frequently obtained at discrete or restricted sample locations, it is often essential to estimate air quality intensity at unobserved locations within the region spanned by a finite set of existing data points, which is referred to as interpolation in quantitative simulation. On this basis, spatial interpolation approaches have been widely examined over the years, including ordinary spatial interpolation methods such as Inverse Distance Weighting (IDW) [4], trend surfaces [5] and splines [6].

Most well-known interpolation approaches, such as IDW and Kriging, constrain the interpolation method to one that can be described with a clear and simple mathematical expression. In contrast to conventional spatial interpolation, spatiotemporal interpolation requires consideration of an additional time dimension. There are some effective and efficient interpolation techniques for complicated spatiotemporal datasets. Some spatiotemporal approaches [7] treat time and space individually and reduce the spatiotemporal interpolation problem to a series of multivariate spatial snapshots. A few other spatiotemporal approaches [8,9] consider time as an additional dimension in space and integrate both temporal and spatial aspects at the same time. Unfortunately, none of these studies provided adequate methodologies for incorporating the time dimension so that the temporal measurement is handled "equitably" in comparison to the spatial dimensions. Samal et al. [10] referred to this issue as the "time scale issue", and later Fioravanti et al. [11] revised the scaling ratio, now known as the "spatiotemporal anisotropy parameter". Only a few basic approaches have been proposed to estimate this parameter; the fundamental reason is that there is a dearth of strong theoretical support for determining the relationship between the time and space dimensions. A black-box strategy, such as an artificial intelligence algorithm, is a reasonable concept and an auspicious way to estimate the spatiotemporal anisotropy parameter in such scenarios. Furthermore, Badli et al. [12] presented an effective parallel machine learning model to address this challenge and determine appropriate spatiotemporal anisotropy parameters.

Through a hierarchical learning process, deep neural network approaches extract high-level features from data for further processing [13]. This hierarchical learning mechanism was inspired by the deep layered learning of the core sensory fields of the human brain's neocortex, which pulls features and abstractions from the raw input [14,15]. Since its inception, deep learning has been effectively used in time series prediction [16], object detection [17], natural language processing [18], medical image analysis [19], multi-class skin cancer classification [20], and sentiment analysis [21]. The deep recurrent neural network (DRNN) [22] is one of the most suitable deep learning models for time series prediction and sequence modelling because it perceives not only the current input but also a trace of previously observed data through its recurrent structure, which permits a direct treatment of sequential relationships and other hidden dependencies.

In 2017, Fan et al. [23] introduced a DRNN-based comprehensive forecasting architecture for air pollution levels; it is a helpful forecasting approach but cannot be employed for general interpolation purposes. In 2018, Qi et al. [24] proposed a broad and efficient technique that addresses fine-grained air quality interpolation, forecasting, and feature analysis in one model, again with the RNN as the major ingredient. These previous RNN-based deep learning approaches relied solely on historical data. Furthermore, they assumed that the present air pollution intensity at the point of interest is driven solely by its previous information and the present air quality of its spatial neighbours. This argument is flawed because it ignores the time relation between spatial neighbours.

This research intends to create a unique spatiotemporal interpolation technique for predicting PM2.5 based on a Two-State Gated Recurrent Unit, which considers both past and future spatiotemporal relationships between geographical neighbours. Our methodology enables the creation of more precise air pollution estimates over a vast geographic area and a long period of time. Generally, an RNN's memory of previously acquired patterns fades with time, resulting in a training difficulty known as the vanishing gradient [25]. We therefore explored an advanced variant of the recurrent neural network, the Gated Recurrent Unit (GRU) [26], which handles this problem by retaining an internal flow of information and establishing routes along which the gradient can flow for a long period of time. Specifically, we used the Two-State GRU (TS-GRU) to train our prediction model for air pollution concentrations. In particular, the two-state principle divides the neurons of the developed TS-GRU into two directions, allowing for simultaneous consideration of both past and future information. We assessed the performance of our proposed model using ground PM2.5 measurements from the US Environmental Protection Agency (EPA)'s Air Quality System (AQS). In order to examine the influence of the temporal dimension, we also compared our developed approach to the existing GRU RNN model.

    2 Related Works

2.1 Spatiotemporal Interpolation

Despite the fact that PM2.5 concentrations are routinely observed in many countries, including the USA, the number of sensors and their geographic coverage remain restricted. Moreover, the raw measurements alone do not provide a strategy for predicting pollution over the next several months or for identifying associated factors. With ever-increasing levels of air pollution, it is critical to develop efficient air quality monitoring simulations based on data provided by pollution sensors. Such algorithms may help predict the concentration of particles and provide an assessment of air pollution at each location. As a result, air quality assessment and forecasting has become a significant study area. An advanced spatiotemporal interpolation technique is critical for gaining a good understanding of the observed air pollutants, because it can have a significant influence on the precise assessment of human exposure to PM2.5 and provide a more consistent analysis of the correlation between PM2.5 and disease outcomes over time [26]. Assume that in an area A there are n monitoring stations {S_1, ..., S_n}. The measurement at a particular station S_i at a certain timestamp t can be defined as a tuple x_{i,t} = (lon_i, lat_i, t, v_i), where v_i is the reported air pollutant concentration and lon_i and lat_i are the longitude and latitude of station S_i, respectively. As a result, the input dataset can be viewed as n time series {ts_1, ..., ts_n}, where each time series ts_i = x_{i,1}, ..., x_{i,T} is the sequence of observations at a single station S_i. Based on these time series, the basic purpose of this research is to estimate the concentration v at any position in A at any time. The local air quality is frequently affected by nearby places in the spatial dimension, as air pollutants can disseminate or spread across the atmosphere with the wind [27].
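To make this data model concrete, the following minimal Python sketch (not taken from the paper; station IDs, coordinates and values are hypothetical) groups raw records x_{i,t} = (lon_i, lat_i, t, v_i) into one chronologically ordered time series per monitoring station:

```python
# Minimal sketch: grouping AQS-style records into per-station time series.
# The sample records and field layout are hypothetical illustrations.
from collections import defaultdict

records = [
    # (station_id, lon, lat, (year, month, day), pm25_value)
    ("S1", -81.66, 30.33, (2009, 1, 1), 7.9),
    ("S1", -81.66, 30.33, (2009, 1, 2), 10.4),
    ("S2", -80.19, 25.76, (2009, 1, 1), 6.1),
]

# series[S_i] holds the ordered observations x_{i,1}, ..., x_{i,T} of station S_i,
# i.e. one member of the input collection {ts_1, ..., ts_n}.
series = defaultdict(list)
for station, lon, lat, date, value in records:
    series[station].append((lon, lat, date, value))

for station in series:
    series[station].sort(key=lambda obs: obs[2])  # order by date (year, month, day)

print({station: len(obs) for station, obs in series.items()})  # observations per station
```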

Historical air pollution levels can influence present and future levels in the temporal dimension. For example, the pollution level of the previous hour will have an impact on the pollution levels of the following hours during the observation process. Furthermore, it has been observed in recent years that the atmosphere tends to behave similarly during the same periods of time. In addition to the influencing factors mentioned above, many other factors, including weather, human activities and traffic flow, can cause changes in air quality in both the spatial and temporal domains, affecting air pollutant concentrations. It is difficult to construct a comprehensive mathematical model to estimate air quality levels due to the lack of an available dataset, and only three influencing parameters, namely longitude, latitude, and time, have been used in recent studies. The GRU is one of the most effective approaches and has been applied to the prediction of various types of particulate matter (PM) levels.

2.2 Gated Recurrent Unit (GRU)

The GRU is a more advanced and simplified variant of recurrent neural networks such as the LSTM, and was first proposed for statistical machine translation by [28]. The GRU is based on the LSTM but uses an update gate z_t and a reset gate r_t to handle the information flow inside the unit without a separate memory cell. As a result, the GRU can capture the mapping relationship between time-series data [29,30], while also offering appealing benefits such as reduced complexity and a faster computational procedure. Fig. 1 demonstrates the GRU computational structure, which shows the connection between the update and reset gates. Furthermore, the GRU uses its internal memory to retain the filtered information and combines the input and forget gates into a single update gate operating on the previous state h_{t-1} and the candidate state h̃_t. The update gate, reset gate, and candidate state are the three major components of the GRU, and their equations are summarized as follows:

z_t = φ(V_xz x_t + U_hz h_{t-1} + B_z)

r_t = φ(V_xr x_t + U_hr h_{t-1} + B_r)

h̃_t = tanh(V_xh x_t + U_hh (r_t * h_{t-1}) + B_h)

h_t = (1 − z_t) * h_{t-1} + z_t * h̃_t

Figure 1: GRU architecture

where V_xz, V_xr and V_xh are the weight matrices between the input layer and the update gate, reset gate and candidate vector, while the weight matrices U_hz, U_hr and U_hh represent the corresponding recurrent connections. φ is the nonlinear activation function of the update and reset gates, * denotes element-wise multiplication, and B_z, B_r and B_h are the associated biases.
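As an illustration of how these gate equations act on a single timestep, here is a small NumPy sketch; the weight names mirror the notation above, while the shapes and random initialization are assumptions made for the example:

```python
# Minimal NumPy sketch of one GRU step following the gate equations above.
# Sizes and initialization are illustrative assumptions, not the paper's settings.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, p):
    z_t = sigmoid(p["Vxz"] @ x_t + p["Uhz"] @ h_prev + p["Bz"])              # update gate
    r_t = sigmoid(p["Vxr"] @ x_t + p["Uhr"] @ h_prev + p["Br"])              # reset gate
    h_cand = np.tanh(p["Vxh"] @ x_t + p["Uhh"] @ (r_t * h_prev) + p["Bh"])   # candidate state
    return (1.0 - z_t) * h_prev + z_t * h_cand                               # new hidden state

n, m = 4, 8  # input and hidden sizes (illustrative)
rng = np.random.default_rng(0)
params = {name: rng.normal(scale=0.1, size=(m, n) if name.startswith("Vx") else (m, m))
          for name in ("Vxz", "Vxr", "Vxh", "Uhz", "Uhr", "Uhh")}
params.update({b: np.zeros(m) for b in ("Bz", "Br", "Bh")})

h = gru_step(rng.normal(size=n), np.zeros(m), params)
print(h.shape)  # (8,)
```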

3 The Proposed Model: Two-State GRU Mechanism

The GRU is a recent refinement of the traditional RNN that is used particularly for sequence modeling. A standard recurrent layer takes an input vector x_t ∈ R^n at each timestep t and computes the hidden state h_t by applying the recurrent procedure:

h_t = f(W x_t + U h_{t-1} + b)

where W ∈ R^{m×n} and U ∈ R^{m×m} are weight matrices, b ∈ R^m is the bias vector, and f is an element-wise nonlinearity. Training long-term dependencies with an RNN is very complicated due to the vanishing and exploding gradient problems [31]. By applying the gating architecture, the GRU can maintain memory substantially better than the traditional RNN [32]. However, based on the existing literature, we observed that when a GRU processes a sequence element it only includes the forward (past) information, so it is impossible for the GRU to learn the backward (future) context. We also observed that, in sequence modelling, the processing of an element is affected not only by the forward information but also by the backward context. Therefore, in this study, we propose the Two-State GRU (TS-GRU) to solve the aforementioned issue. The proposed TS-GRU model consists of two processes, one positive pass known as the "forward pass" and one negative pass known as the "backward pass", as presented in Fig. 2. The two-state GRU can efficiently learn the context in both directions.

Figure 2: The proposed Two-State GRU architecture

The TS-GRU is inspired by the bidirectional recurrent neural networks (BRNNs) in [33]. It consists of two separate recurrent networks, a forward pass (left to right, for future information) and a backward pass (right to left, for past information), which are trained jointly and finally merged to produce the output layer. The formulas for the update gate z_t, reset gate r_t, candidate state h̃_t, and final output activation h_t of the forward and backward GRUs are as follows:

Forward pass (superscript f denotes the forward direction):

z_t^f = φ(V_xz^f x_t + U_hz^f h_{t-1}^f + B_z^f)

r_t^f = φ(V_xr^f x_t + U_hr^f h_{t-1}^f + B_r^f)

h̃_t^f = tanh(V_xh^f x_t + U_hh^f (r_t^f * h_{t-1}^f) + B_h^f)

h_t^f = (1 − z_t^f) * h_{t-1}^f + z_t^f * h̃_t^f

Additionally, we implemented a backward pass in the proposed approach to explore more valuable information.

Backward pass (superscript b denotes the backward direction, which runs from t+1 to t):

z_t^b = φ(V_xz^b x_t + U_hz^b h_{t+1}^b + B_z^b)

r_t^b = φ(V_xr^b x_t + U_hr^b h_{t+1}^b + B_r^b)

h̃_t^b = tanh(V_xh^b x_t + U_hh^b (r_t^b * h_{t+1}^b) + B_h^b)

h_t^b = (1 − z_t^b) * h_{t+1}^b + z_t^b * h̃_t^b

The activation at time t is obtained by combining both directions, h_t = [h_t^f ; h_t^b]. For an arbitrary sequence (x_1, x_2, ..., x_n) containing n elements, each element at time t is represented as a fixed-dimensional vector. The forward GRU computes h_t^f, which captures the left-to-right context of the sequence, whereas the reverse GRU computes h_t^b, which captures the right-to-left context. The forward and backward context descriptions are then combined into a single context representation. In general, Backpropagation Through Time (BPTT) is a gradient-based methodology and a variation of the conventional backpropagation method that can be used to train the DRNN (Chauvin and Rumelhart 1995) [34]. BPTT starts by unfolding the RNN in time so that each timestep has one input, one copy of the network, and one output.
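This combination of a left-to-right and a right-to-left GRU whose hidden states are concatenated at every timestep can be expressed with off-the-shelf layers. The sketch below uses the tf.keras API purely as an illustration of the mechanism; the layer width and input dimensions are assumptions, and it is not the authors' exact implementation:

```python
# Minimal sketch of the two-state idea: a Bidirectional wrapper runs one GRU
# left-to-right and another right-to-left, then concatenates h_t^f and h_t^b
# at each timestep.
from tensorflow.keras.layers import Bidirectional, GRU, Input
from tensorflow.keras.models import Model

timesteps, n_features = 6, 4          # e.g. 6 influencing days x 4 features (assumed)
inputs = Input(shape=(timesteps, n_features))
two_state = Bidirectional(GRU(32, return_sequences=True),
                          merge_mode="concat")(inputs)   # 2 * 32 units per timestep
model = Model(inputs, two_state)
model.summary()   # each timestep now carries both past and future context
```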

The system flow diagram of the proposed TS-GRU is presented in Fig. 3. To avoid the training being excessively biased toward any particular dimension, the original dataset is first normalized, that is, the data points of all dimensions are constrained to the range 0 to 1. Furthermore, the normalized data is divided into two sections: training data and testing data. Only the training data is used throughout training to maintain the impartiality of the performance evaluation. When the training data is fed into the TS-GRU, a loss value is produced, and the optimizer adjusts the parameters of the TS-GRU using the backpropagation method. The forecasting performance of the TS-GRU becomes more precise as the number of training iterations increases. The testing data is entered into the TS-GRU when the learning is finished, and to evaluate the performance of the TS-GRU the testing results and the real values are compared. Overfitting can happen when there is not enough training data or when the model is trained for too long.

Figure 3: The system flow diagram of the proposed TS-GRU

However, overfitting can be avoided using several methods, including regularization [35], data augmentation [36], dropout [37], and dropconnect [38]. Regularization can be divided into two types, L1 regularization and L2 regularization, both of which are commonly employed in deep learning. To avoid overfitting, both of these strategies keep the weight values of the neural network as small as possible. The goal of data augmentation is to enlarge the dataset as much as possible, for example by adding random bias or noise, in order to diversify the training data and improve training results. Dropout and dropconnect are similar in that the former pauses a neuron's operation at random, while the latter removes individual connections at random. The early stopping technique is also implemented in this paper [39].
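As a hedged illustration of these countermeasures, the following Keras-style snippet combines L2 weight regularization, dropout, and early stopping; the layer sizes and hyperparameter values are illustrative only, not taken from the paper:

```python
# Sketch of the overfitting countermeasures named above, using the tf.keras API.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.regularizers import l2
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    Dense(64, activation="sigmoid", kernel_regularizer=l2(1e-4), input_shape=(12,)),
    Dropout(0.10),                     # randomly pauses a fraction of neurons per update
    Dense(1),
])
model.compile(optimizer="adam", loss="mae")

# Early stopping halts training once the validation loss stops improving.
early_stop = EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])
```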

    4 Experimental Design

In this section, we briefly explain the experimental settings, performance measures and empirical results of the developed two-state GRU approach.

    4.1 Data Set Description

We investigated the daily PM2.5 data set for Florida in 2009 to illustrate the performance and efficiency of our developed approach. This data was collected by the United States Environmental Protection Agency's Air Quality System (AQS) monitoring process and can be accessed via the EPA's website. In this dataset, each entry is a tuple (t, lon, lat, v), where lon and lat refer to the longitude and latitude coordinates of the monitoring station, t = (year, month, day) represents the date when a PM2.5 measurement was taken, and v is the measured PM2.5 value. Tab. 1 shows sample entries from one monitoring station. The features extracted from the dataset form the collection of n time series {ts_1, ..., ts_n} from n monitoring locations. Each time series ts_i = x_{i,1}, ..., x_{i,T} is a sequential observation of data at a single station S_i, and x_{i,t} = (t, lon, lat, v) represents one measurement from station S_i at a certain time step t. We can notice from the sample data that the ranges of the raw features differ greatly. For example, the range of the month feature is [1, 12], whereas the range of PM2.5 values is (0, 210]. As a result, we scale the informative features so that all values fall between 0 and 1. Moreover, Ioffe et al. [40] have also shown that gradient descent converges substantially faster when features are scaled.
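The scaling step described above can be performed with a standard min-max transform; the toy array below (month, longitude, latitude, PM2.5) is illustrative and not taken from the AQS data:

```python
# Sketch of min-max scaling of the raw features to [0, 1].
import numpy as np
from sklearn.preprocessing import MinMaxScaler

raw = np.array([
    [1,  -81.66, 30.33,  7.9],    # month, lon, lat, PM2.5 (toy values)
    [6,  -80.19, 25.76, 35.2],
    [12, -82.46, 27.95, 18.0],
])
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(raw)        # every column now lies in [0, 1]
print(scaled.min(axis=0), scaled.max(axis=0))
```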

Table 1: Sample data from one monitoring station

The original dataset contains incorrect entries, indicating that no measurements were taken at a particular location on a specific day. After eliminating all the incorrect entries, there were 6,698 daily measurements at 30 monitoring locations across the 365 days of 2009.

    4.2 Implementation Detail and Parameter Settings

For the temporal interpolation, we assume that regional pollution levels are influenced not just by nearby fields but are similarly associated with past and future information from surrounding places [41]. Our proposed framework uses the TS-GRU (illustrated in Fig. 2) to capture both geographical and temporal relationships. In the proposed framework, two-direction GRU layers and conventional dense layers are stacked in the network. Furthermore, the random uniform approach is used to initialize the parameters of each layer, and the sigmoid nonlinearity is utilized to introduce non-linearity in each layer. We used MAPE as our loss function because of its scale independence and interpretability. Finally, we used the Adam algorithm [42] to train and optimize the entire neural network, which is a numerically efficient technique for fast stochastic optimization. Kingma and Ba showed that the Adam algorithm is suitable for problems with enormous amounts of data and is also suitable for non-stationary objectives [42]. All the simulations of this research were implemented on an Intel Core i7-3770 CPU @ 3.40 GHz with 8 GB of DDR3 RAM under the Windows 10 operating system. We used the Python 3.7 compiler and the high-level neural network API Keras as the development environment, with TensorFlow 1.14 and Keras 2.3 as the required libraries.
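A hedged reconstruction of this setup is sketched below using the tf.keras API: stacked two-direction (bidirectional) GRU layers followed by dense layers, random-uniform initialization, sigmoid activations, MAPE as the loss, and the Adam optimizer. The layer widths and input shape are assumptions made for illustration, not values reported in the paper:

```python
# Illustrative TS-GRU-style network: stacked bidirectional GRU + dense layers.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Bidirectional, GRU, Dense

timesteps, n_features = 6, 4   # influencing days x features per day (assumed)
model = Sequential([
    Bidirectional(GRU(32, return_sequences=True, kernel_initializer="random_uniform"),
                  input_shape=(timesteps, n_features)),
    Bidirectional(GRU(32, kernel_initializer="random_uniform")),
    Dense(16, activation="sigmoid", kernel_initializer="random_uniform"),
    Dense(1),                  # predicted PM2.5 concentration
])
model.compile(optimizer="adam", loss="mean_absolute_percentage_error")
model.summary()
```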

It is usual to run into the overfitting issue when training a neural network algorithm (see Tab. 2 for the details), where the performance on the training set is substantially better than on the testing set. To solve the overfitting issue and enhance the robustness of our approach, we used k-fold cross validation [43] and the dropout technique [37]. For k-fold cross validation, we divide the dataset into k equal-size subsets, then choose one subset as the testing set and train on the remaining k-1 subsets iteratively. In this study, we used the commonly utilized 10-fold cross validation method. As a result, we randomly partition our dataset into a 78% training set, 10% validation set and 12% testing set, and then train our network on the training set using the 10-fold cross validation technique.
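The 10-fold procedure on the training portion can be organized as in the sketch below; build_model() is a placeholder for the TS-GRU constructor and the arrays stand in for the real data:

```python
# Sketch of a 10-fold cross-validation loop with scikit-learn's KFold.
import numpy as np
from sklearn.model_selection import KFold

x_train = np.random.rand(200, 6, 4)    # illustrative stand-in for the real training data
y_train = np.random.rand(200)

kfold = KFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kfold.split(x_train)):
    # model = build_model()                                  # fresh TS-GRU per fold
    # model.fit(x_train[train_idx], y_train[train_idx], epochs=45, batch_size=32)
    # score = model.evaluate(x_train[val_idx], y_train[val_idx])
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation samples")
```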

Table 2: Parameter descriptions of the deep learning models

When training a neural network, the dropout technique works by sampling a "thinned" model. To optimize the model, we randomly select a portion of the nodes in a hidden layer and temporarily delete them from the network, together with all their input and output connections, at each iteration. As a side effect, the dropout technique also improves training efficiency by requiring fewer computations. We also observed that the current air quality is strongly linked to the air quality levels of the past and future days.

    4.3 Evaluation Methods

In this paper, we employed three assessment measures to evaluate the performance of the developed model. When comparing predictions to actual values, the following metrics were calculated: mean absolute error (MAE), root-mean-square error (RMSE), and mean absolute percentage error (MAPE). Smaller values indicate better performance. Large errors are given relatively high weights by the RMSE. The equations are as follows:

MAE = (1/N) Σ_{i=1}^{N} |O_i − P_i|

RMSE = sqrt((1/N) Σ_{i=1}^{N} (O_i − P_i)^2)

MAPE = (100/N) Σ_{i=1}^{N} |O_i − P_i| / O_i

where O_i represents the observed air quality, P_i denotes the predicted air quality, and N is the number of assessment samples. The first two indices measure the absolute error, whereas the third measures the relative error. In other words, the extreme behaviour and error range of the predicted values are expressed by RMSE and MAE, while the accuracy of the average predicted value is represented by MAPE [44].
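For reference, the three metrics can be computed directly from the observed vector O and the predicted vector P, as in this short NumPy sketch (the sample values are illustrative):

```python
# Straightforward implementations of MAE, RMSE and MAPE.
import numpy as np

def mae(O, P):
    return np.mean(np.abs(O - P))

def rmse(O, P):
    return np.sqrt(np.mean((O - P) ** 2))

def mape(O, P):
    return 100.0 * np.mean(np.abs((O - P) / O))   # observed values assumed non-zero

O = np.array([12.0, 35.5, 8.2])
P = np.array([10.5, 33.0, 9.0])
print(mae(O, P), rmse(O, P), mape(O, P))
```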

    5 Results and Discussion

In this section, three experiments were conducted to investigate the spatiotemporal relationships and to illustrate the efficacy of our proposed methodology.

Experiment 1: network architecture. To fit our spatiotemporal dataset, we initially investigated an appropriate deep learning architecture. The purpose of our first experiment is to establish the efficacy of our spatial and temporal interpolation technique by selecting the dropout rate, number of epochs, and batch size. We consider that both the number of closest neighbours and the number of influencing days are 1, i.e., k = 1, t = 1. In this experiment, the training set is separated into a number of constant-size batches during neural network training, and each batch is processed in turn during one learning session. As a result, the batch size determines the gradient and the frequency of weight updates. Smaller batch sizes usually require fewer training epochs, whereas larger batch sizes provide additional parallelism and thus superior computational efficiency, as the separate learning instances within a single batch can be processed in parallel [45]. Because our training data set is rather small, with only about 7,000 entries, we experimented with several batch sizes: {4, 8, 16, 32, 64, 128, 256}. Tab. 3 illustrates the results. Finally, we selected 32 as our batch size to attain a balance between computational efficiency and accuracy. An epoch is a single pass over the complete training set, batch by batch. The drawbacks of neural networks include the possibility of overfitting and a high computational cost.
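The batch-size search can be organized as a simple loop that records the validation MAPE and running time for each candidate size, as sketched below; build_model() and the data arrays are placeholders rather than the authors' code:

```python
# Sketch of the batch-size search in Experiment 1.
import time

batch_sizes = [4, 8, 16, 32, 64, 128, 256]
results = {}
for bs in batch_sizes:
    start = time.time()
    # model = build_model()                               # fresh TS-GRU per run
    # history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
    #                     epochs=60, batch_size=bs, verbose=0)
    # val_mape = min(history.history["val_loss"])         # MAPE is used as the loss
    val_mape = None                                        # placeholder without a model
    results[bs] = (val_mape, time.time() - start)          # (best MAPE, running time)
print(results)
```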

Table 3: For 60 epochs, various batch sizes result in varied measurements and running times

We trained our model over 60 epochs with a batch size of 32, and the training and validation losses over time are reported in Fig. 4. During the training process, we noticed that both kinds of losses generally remain constant once the epoch number exceeds 45. Therefore, for our subsequent experiments, we set the number of epochs to 45. As discussed in the previous section, the dropout technique is employed to enhance the performance of our developed approach by neglecting a small fraction of connections at random. We attempted eight different dropout rates: {0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35}. Tab. 4 contains the statistical data, and Fig. 5 depicts the MAPE performance visually. In this case, we noticed that the MAPE values increase as the dropout rate increases, as shown in Fig. 5, which indicates a weaker model for higher dropout rates.

Figure 4: The training and validation losses

Table 4: Measurements with/without 10-fold cross validation (CV) for various dropout rates. The k-fold cross validation technique increases the performance except for the emphasized cases

Figure 5: MAPE for various dropout rates

To further improve the performance of our developed approach, we adopted another technique, 10-fold cross-validation, during the training procedure. The previous experiment was replicated without the 10-fold cross validation approach, and the findings are summarized in Tab. 4. The 10-fold cross validation clearly enhances the robustness of our developed approach in the vast majority of scenarios.

Experiment 2: number of influencing neighbours and days. We examined various values of k and t at the point of interest to investigate how spatial and temporal neighbours affect air quality. More specifically, we set k ∈ {1, 2, 3, 4, 5, 6} and t ∈ {1, 2, 3, 4, 5, 6}. Tabs. 5 and 6 report the statistical measurements collected in this experiment. We noticed that as the network takes additional geographical neighbours of the site of interest into account during training, the MAPE tends to decrease. A similar reduction applies when the network considers further previous and future days.

Table 5: Assessments for our method based on the number of influencing days

Table 5: Continued

Table 6: Assessments for our method based on various numbers of neighbours

Table 6: Continued

Experiment 3: comparison with a GRU-based RNN. In our final experiment, we compared the developed TS-GRU model with the existing deep GRU to discover whether the present air quality is associated with future outcomes. In this experimental procedure, we built a GRU-based deep RNN that is comparable to the network in Fig. 2, except that the Two-State GRU layers are replaced with the standard GRU architecture. In the GRU, we suppose that the existing level of air pollution is unaffected by future information; in other words, the existing GRU is a spatiotemporal prediction network that uses only historical data. Regarding the experimental results, the left subfigure of Fig. 6 depicts a three-dimensional mesh representation of the MAPE values for the GRU; on the right side, a three-dimensional mesh representation for the TS-GRU is illustrated for comparison. For the existing GRU, the MAPE values also decrease as k or t increases, which is comparable to the observation for the TS-GRU; however, the magnitude of the reduction is much smaller for the GRU. Another interesting observation was made: the GRU attains better results than the TS-GRU when t is small (t ≤ 3), no matter what k is. In contrast, if t is large enough, i.e., t > 3, the TS-GRU achieves remarkably better performance than the traditional GRU for all k values. In particular, historical levels of air pollution have a greater impact on future levels of air pollution.
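A minimal sketch of such a GRU baseline, under the same assumptions as the earlier TS-GRU sketch, simply replaces the bidirectional layers with single-direction GRU layers so that only past information is used:

```python
# Illustrative single-direction GRU baseline for Experiment 3 (sizes assumed).
from tensorflow.keras import Sequential
from tensorflow.keras.layers import GRU, Dense

timesteps, n_features = 6, 4
baseline = Sequential([
    GRU(32, return_sequences=True, input_shape=(timesteps, n_features)),
    GRU(32),
    Dense(16, activation="sigmoid"),
    Dense(1),
])
baseline.compile(optimizer="adam", loss="mean_absolute_percentage_error")
baseline.summary()
```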

Figure 6: MAPE for the traditional GRU and the proposed TS-GRU models

Despite the fact that the TS-GRU model relies on informative content from the future, near-future data brings extra unpredictability or uncertainty into the system, causing the TS-GRU to perform poorly when t is small. The existing GRU approach picks up noise from the past information, whereas the TS-GRU model can calibrate this noise using future information. As a result, the TS-GRU shows excellent performance compared to the existing GRU when t is large enough.

Fig. 7 presents a comparative analysis of four experimental approaches. The real data is shown by the solid blue line. The LSTM and GRU models showed a poor match to the actual data, whereas the hybrid model was more consistent; the hybrid approach mostly performed better than the single approaches. In forecasting PM2.5 levels, both the GRU and LSTM approaches were ineffective at predicting the future peaks and troughs, while the hybrid approach captures the extreme events and commonly outperforms the single approaches. The proposed TS-GRU approach predicted PM2.5 concentration levels remarkably well compared to the hybrid CNN-GRU model over future hours spanning 3 days. The proposed TS-GRU model outperformed the existing approaches and might be used to predict high PM concentrations in the future.

Figure 7: Predicted 3-day PM2.5 concentration; all approaches

    6 Conclusion

In this study, we proposed a novel spatiotemporal technique for interpolating PM2.5 concentrations based on a Two-State gated recurrent unit. This technique builds on recently proposed deep learning techniques and considers both spatial and temporal aspects simultaneously. In order to remember information from the past as well as the future, we used the Two-State GRU to split the neurons of an existing GRU into two directions. The particulate matter (PM2.5) predictions are evaluated using the statistical measures MAE, RMSE and MAPE. The results illustrate that our proposed model performs better than the existing approaches and that the actual and predicted values are very close to each other. To the best of our knowledge, this is the first time that the Two-State GRU has been used in the spatiotemporal interpolation of air pollutant concentrations. Our future research will focus on further investigating this technique with ground PM2.5 measurements as well as auxiliary data such as satellite-derived aerosol optical depth (AOD), land use, roads, elevation, and weather conditions. We will also further investigate the robustness of this strategy for predicting other pollutant concentrations, including ozone (O3) and nitrogen dioxide (NO2), and extend our research field to cover a larger geographical domain. In future research, we will also explore how to speed up the developed model using cluster computing frameworks.

Acknowledgement: The authors thank King Khalid University of Saudi Arabia for supporting this research under grant number R.G.P.1/365/42.

Funding Statement: This research work was supported by King Khalid University of Saudi Arabia under grant number R.G.P.1/365/42.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
