
    Stock Market Trading Based on Market Sentiments and Reinforcement Learning

Computers, Materials & Continua, 2022, Issue 1

K. M. Ameen Suhail, Syam Sankar, Ashok S. Kumar, Tsafack Nestor, Naglaa F. Soliman, Abeer D. Algarni, Walid El-Shafai and Fathi E. Abd El-Samie

1 Department of Computer Science & Engineering, NSS College of Engineering, Palakkad, 678008, Kerala, India

2 Department of Electronics and Communication Engineering, NSS College of Engineering, Palakkad, 678008, Kerala, India

3 Unité de Recherche de Matière Condensée, d'Electronique et de Traitement du Signal (URAMACETS), Department of Physics, University of Dschang, P.O. Box 67, Dschang, Cameroon

4 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 84428, Saudi Arabia

5 Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf, 32952, Egypt

Abstract: The stock market is a place where shares of different companies are traded; it is a meeting point for buyers' and sellers' stocks. In this digital era, analysis and prediction in the stock market have gained an essential role in shaping today's economy. Stock market analysis can be either fundamental or technical. Technical analysis can be performed either with technical indicators or through machine learning techniques. In this paper, we report a system that uses a Reinforcement Learning (RL) network and market sentiments to make decisions about stock market trading. The system uses sentiment analysis on daily market news to spot trends in stock prices. The sentiment analysis module generates a unified score as a measure of the sentiment of the daily news. This score is then fed into the RL module as one of its inputs. The RL section gives decisions in the form of three actions: buy, sell, or hold. The objective is to maximize long-term future profits. We have used Apple stock data from 2006 to 2016 to interpret how sentiments affect trading. The stock price of any company rises when significant positive news becomes available in the public domain. Our results reveal the influence of market sentiments on the forecasting of stock prices.

Keywords: Deep learning; machine learning; daily market news; reinforcement learning; stock market

    1 Introduction

The stock market is a platform consisting of sellers and buyers of stocks. Some shares of public companies are available for public trading. Stock markets are in electronic form, where people can access stock exchanges through their computers to conduct transactions, mainly selling and buying of stocks [1]. Participants can buy or sell at their convenience. Stock exchanges facilitate trading. They provide a real-time interface between buyers and sellers, allowing a systemized matching process between willing buyers and sellers [2].

Investments in stocks that are present in the stock market can lead to financial gains. The choice to buy, sell, or hold a stock is of great importance, and hence there is a need for analysis of stocks. This analysis leads to forecasting of future stock prices, which helps in decision making. Efficient and accurate stock price prediction may lead to enormous profits.

The stock market is often referred to as a peak investment outlet, due to the large volumes of shares traded through it. Studying the behavior of financial markets is a great challenge. According to the Efficient Market Hypothesis (EMH) [3], there is no scope for predicting the stock market. One of the factors that aids stock market analysis is developing well-researched and accurate perspectives, which include both a directional view and information such as the price of stocks, expected risk, and expected reward.

Fundamental analysis, as well as technical analysis [3], helps in developing a good view of the market. In fundamental analysis, a few companies are researched, and based on their performance, future predictions are made and decisions are taken accordingly. On the other hand, in technical analysis, the current market trends are considered, and hence the market is scanned for opportunities. Technical indicators can be used for this analysis.

One must correctly understand the difference between the two types of analysis to succeed in the stock market. Fundamental analysis is preferred for long-term investments, and technical analysis is chosen for short-term investments. An investor can frequently earn a small yet consistent profit in the short term by making trading decisions with technical analysis. The application domains of the two methods of analysis also differ. Technical analysis can be applied to all asset classes such as equities, commodities, fixed income, etc. In contrast, fundamental analysis is specific to each asset class. For example, when dealing with agricultural commodities, fundamental analysis includes rainfall, harvest, and inventory analysis. On the other hand, for metal commodities, different factors are considered. In technical analysis, regardless of the asset, the same technical indicators such as the Moving Average (MA), Moving Average Convergence Divergence (MACD), and Relative Strength Index (RSI) can be used.

Machine learning methods are widely used to find stock chart patterns for technical analysis, predict market prices, and make decisions. Deep learning can be used to find hidden patterns in stock data. Deep learning technology is used to train models on the data for making predictions. Here, there are different layers of abstraction [4] through which the learned features of the data are passed. An appropriate model for multivariate time series analysis [5] is the Deep Learning Neural Network (DLNN), due to its built-in properties. This model is robust to noise in the input data. It can support learning and prediction even with missing values. It can learn both linear and non-linear relationships in the data.

Even if we can predict prices from certain patterns and make decisions using the above-mentioned methods, there is no way to learn from the decisions made and to make better decisions in the future by learning from past actions. That is why RL comes into the picture in stock market trading systems. A machine learning technique in which the system learns from the environment and tries to maximize the rewards is known as an RL technique [6].

    The main contributions of this work are:

    (1) The inclusion of market sentiments in an RL-based system that is used for stock market trading.

    (2) The performance comparison of an RL network with sentiments and an RL network without sentiments.

    (3) The investigation of the influence of market news on deciding stock prices.

The remainder of this paper is organized as follows. In Section 2, a review of the related work is provided. Section 3 gives an overview of sentiment analysis and RL. Section 4 explains the suggested system, and Section 5 gives the investigation outcomes. The conclusion is provided in Section 6.

    2 Related Work

This section provides an overview of the different existing stock market forecasting methods. One of the classic stock price forecasting methods is the Autoregressive Integrated Moving Average (ARIMA), explained by Ariyo et al. [7]. The combination of past errors and past values is considered in ARIMA to form the future value. This model depends on the close price, high price, low price, and open price to predict future values. Even though the ARIMA model is efficient in forecasting time series data, it cannot be used if the data contains a seasonal component. Time-series data is said to have a seasonal component if it contains repetitive cycles. For modelling such data, an advanced ARIMA model known as SARIMA was proposed by Lee et al. [8] and Chong et al. [9]. The SARIMA model is effective in financial forecasting, specifically for the short and medium ranges.

Traditional models such as ARIMA and SARIMA can be used effectively only on stationary time series data. In a live trading scenario, implementing these models is a great challenge, because the newly arriving data may not always form a stationary time series. To overcome this limitation, novel techniques in deep learning can be used. A system that comprises technical analysis, an Artificial Neural Network (ANN), and sentiment analysis has been used to make decisions, as explained by Bhat et al. [10]. Technical indicators such as the Exponential Moving Average (EMA), Moving Average Convergence and Divergence (MACD), and Relative Strength Index (RSI) have been used for technical analysis. The combination of ARIMA and an ANN has been discussed by Zhang [11]. This model is capable of capturing more parameters than the individual models. Both linear and non-linear patterns of the input data can be studied using this model. Unfortunately, it cannot be effectively applied in all scenarios. To overcome this problem, another hybrid structure of the ARIMA model and the Support Vector Machine (SVM) was presented by Chen et al. [12]. Unlike other neural networks, the SVM depends on the Structural Risk Minimization (SRM) principle to minimize a generalized error instead of an empirical error. An ARIMA model that makes use of Generalized Autoregressive Conditional Heteroscedasticity (GARCH) was presented by Guresen et al. [13] and Wang et al. [14]. Past error terms are considered in calculating the variance of the current error terms. The square of the previous error is considered in this model.

A combination of machine learning models has also been used for stock price forecasting. A model has been built by combining deep learning for feature extraction with financial time series data analysis to predict price movements. It is known as the Multi-Filter Neural Network (MFNN), and it was proposed by Long et al. [15]. This model is a combination of a CNN and an RNN, and it has a structure of multiple filters. Nayak et al. [16] explained a model based on supervised machine learning, logistic regression, boosted decision trees, and SVM for predicting daily and monthly stock prices. This model depends on company sentiments and historical data for predicting the daily stock prices, and on historical data alone for predicting the monthly stock prices. A combination of an ANN and a Random Forest (RF) classifier was discussed by Vijh et al. [17] for predicting the daily stock prices of companies from five sectors. This method introduced new variables to improve the prediction. A hybrid model containing an RNN and Long Short-Term Memory (LSTM) was presented by Hiransha et al. [18]. The day-wise closing prices of two types of stocks in the company are considered to predict the future stock price.

A hybrid structure of SVM and KNN was presented by Chen et al. [19]. A feature-weighted SVM and a feature-weighted KNN were used, and the relative importance of each feature was considered. Stock market prediction based on social media data and news about stocks using various machine learning classifiers was discussed by Khan et al. [20]. Gaussian naive Bayes, multinomial naive Bayes, SVM, KNN, logistic regression, RF, regression tree, AdaBoost, extra tree, gradient boosting, and linear discriminant analysis classifiers were considered. A multi-layer perceptron was also used to improve the prediction results. Jena et al. [21] presented a distributed architecture that can be utilized to forecast stock prices using real-time streaming of stock data. Machine learning models such as decision tree, polynomial linear regression, support vector regression, and RF were built using historical data to predict the stock price from the continuous flow of stock data. A CNN for stock market prediction was discussed by Chen et al. [22]. The stock price movement is the output of the model, represented as zero or one. Zero denotes that the stock is down, and one denotes that the stock is up.

Various research works have been conducted to assess the fluctuating financial market behavior within a deep learning framework, with a prime focus on predicting the stock price and thereby minimizing the losses of stockholders [23]. The authors of [24] combined both stock data and news headline sentiments to devise a better trading system. In [25], the authors used a hybrid deep neural network architecture to forecast stock prices and enhance prediction accuracy. In a very recent study [26], the authors used return prediction to optimize portfolio formation. In [27], a multi-layer and multi-ensemble stock trader was discussed, and the authors validated their approach in a real-world trading context.

A technique based on the CNN, called CNNPred, was introduced by Hoseinzade et al. [28] for stock market price prediction. It depends on an advanced CNN model that can be applied in various market conditions, as it supports heterogeneous data. Unsupervised learning methods such as Principal Component Analysis (PCA), the Auto-Encoder (AE), and the Restricted Boltzmann Machine (RBM) have been applied to high-frequency intra-day stocks to predict future market scenarios, as proposed by Chong et al. [9]. Besides, information about past stock returns is considered.

RL is an efficient tool for stock market modeling, representing and learning both future rewards and immediate rewards. Moody and Saffell proposed a direct RL algorithm that can be applied to stock market trading. This algorithm allows a form of Direct Reinforcement Trading (DRT). It considers the decision at the previous time step to give the current decision, using a single-layer Recurrent Neural Network (RNN). The problem with this algorithm is the lack of feature learning of the input data. Deng et al. [29] presented a deep neural network with an AE to learn a deep representation of the input vector. This algorithm depends on a fuzzy network to work in uncertain market conditions. A combination of a Genetic Network Programming (GNP) model with RL and an MLP was presented by Ramezanian et al. [30].

The classification of data and its time series modelling are performed to forecast the stock state. Various technical indicators computed using GNP and the MLP are aggregated to forecast the daily stock return. A classic RL algorithm, called Q-learning, was used to optimize policies by Du et al. [31]. Value functions including the interval profit, the Sharpe ratio, and the derivative Sharpe ratio were maximized, and the model performance was analyzed. Jeong et al. [32] proposed a deep Q-learning network to determine the number of shares used in prediction. Q-values for all actions are calculated at each time step, and the action with the highest Q-value is selected. Instead of using a single agent in decision-making, a multi-agent Q-learning approach was presented by Lee et al. [33]. Four agents are used: a Buy Signal agent, a Sell Signal agent, a Buy Order agent, and a Sell Order agent.

The above-mentioned methods that use RL for decision-making depend only on market statistics. The stock market is a highly volatile environment that does not always depend only on previous statistics. It also depends on several other factors, including market sentiment obtained from stock market news or rumors. Stock market news is an essential tool in forming market sentiment. The news can have an abrupt impact on stock prices, irrespective of how the stock performed previously. The proposed system uses stock market news and market statistics for decision making based on RL. The system's advantage is the ability to have an agent that makes better decisions in stock market trading and considers the volatility of the market due to market sentiments.

    3 Sentiment Analysis and RL

    3.1 Sentiment Analysis

Sentiment analysis is the procedure of computationally distinguishing and classifying opinions conveyed in a piece of text, particularly to find out whether the writer's attitude towards a specific topic or product is positive, negative, or neutral. It involves the use of text analysis, natural language processing, biometrics, and computational linguistics to systematically identify, quantify, extract, and study affective states and subjective information. Open-source software tools and a range of paid and free sentiment analysis tools employ statistics, natural language processing, and machine learning techniques to automate sentiment analysis on large collections of text. An open-source sentiment analysis tool known as VADER (Valence Aware Dictionary and sEntiment Reasoner) is used in this system. It is a lexicon- and rule-based sentiment analysis tool that is precisely attuned to sentiments expressed in social media, and it also works well on text from other domains. VADER determines the sentiment score, whether positive or negative. The input to the VADER system is a piece of text, and the output is a positive score, a negative score, and a compound score. The positive score tells how positive a piece of text is; its value ranges from 0 to 1. The negative score tells how negative a piece of text is; its value ranges from 0 to 1. The compound score is a normalized score that ranges from -1 (extremely negative) to +1 (extremely positive).

Stock market news is a large amount of text from multiple sources, which arrives at an unpredictable velocity on the Internet. Hence, a sentiment analysis system that can analyze the text quickly is required, as the timing of a decision is an essential aspect of stock market trading. The reasons for choosing the VADER system for sentiment analysis of stock market news are its advantages [34], which are given below:

(1) It does not require any training data, as it is constructed from a generalizable, valence-based, human-curated gold-standard sentiment lexicon.

(2) It works exceedingly well on social media text, yet it readily generalizes to multiple domains. It does not severely suffer from a speed-performance trade-off.

(3) It is fast enough to be used online with streaming data.
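To make the scoring concrete, the sketch below implements a toy lexicon-based scorer in the spirit of VADER's output. The tiny lexicon and its word valences are invented for illustration; only the compound normalization x/√(x² + α) with α = 15 follows VADER's actual scheme, and the real system uses NLTK's full VADER tool rather than this stand-in.

```python
import math

# Tiny illustrative lexicon; real VADER ships a large human-curated one.
LEXICON = {"gain": 1.9, "profit": 2.1, "beat": 1.4,
           "loss": -2.0, "drop": -1.6, "lawsuit": -1.8}

def toy_sentiment(text, alpha=15.0):
    """Return pos/neg/compound scores in the spirit of VADER's output."""
    valences = [LEXICON.get(w, 0.0) for w in text.lower().split()]
    pos = sum(v for v in valences if v > 0)
    neg = sum(-v for v in valences if v < 0)
    total = sum(valences)
    # VADER normalizes the summed valence to [-1, 1] via x / sqrt(x^2 + alpha).
    compound = total / math.sqrt(total * total + alpha)
    norm = pos + neg or 1.0
    return {"pos": pos / norm, "neg": neg / norm, "compound": round(compound, 4)}

print(toy_sentiment("apple shares gain after earnings beat"))
print(toy_sentiment("apple faces lawsuit as shares drop"))
```

A headline with positive words yields a compound score near +1 and a negative headline yields one near -1, which is exactly the signal the RL module consumes.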

    3.2 Reinforcement Learning(RL)

RL is an area of machine learning. It aims to take suitable actions to maximize the reward in a particular situation. The components of an RL network are the environment, agent, reward, and policy. The policy is the solution to the RL problem. A policy maps each state of the environment to a specific action.

Fig. 1 gives an overview of the RL technique. An RL agent interacts with its environment in discrete time steps. The agent obtains an observation (St) at every time step (t), which typically includes the reward (Rt). It subsequently selects an action (At) from a set of available actions, which is forwarded to the environment. The environment then shifts to a new state S(t+1), and the reward R(t+1) related to the transition (St, At, S(t+1)) is determined. The purpose of an RL agent is to accumulate as much reward as possible.

    Figure 1:Reinforcement learning

The mathematical formulation of RL problems is called the Markov Decision Process (MDP), where the current state depends only on the previous state. The solution for an MDP is defined using policies. A policy is the set of actions that the agent takes to reach a goal [35]. It is denoted as π. A deterministic policy can be represented as π(s) → a, where a ∈ A and s ∈ S. The policy can also be stochastic, denoted as π(a|s) = P(a|s), the probability of taking action a in state s, where a ∈ A and s ∈ S. An optimal policy, which maximizes the reward, is taken as the solution. The solution to an MDP is denoted as π*, which contains the optimal action for every state to maximize the future reward. To find the optimal actions for each state, an algorithm called Q-learning is used.

    3.3 Q-Learning

In Q-learning, we define a function, Q(s, a), representing the discounted future reward when we perform action a in state s and continue optimally from that point on.

Eq. (1) gives the Q-value function. It represents the quality of a certain action in a given state, i.e., the best possible reward at the end of the game after performing action a in state s at time t, by following a policy π. This helps in making decisions at each state. An optimal policy selects the action with the maximum Q-value at each state, thereby maximizing rewards. For State (s), Action (a), Reward (r), and Next State (s′), the optimal Q-value at each state is approximated using the Bellman equation:

Q(s, a) = r + γ max_a′ Q(s′, a′)    (1)

This equation states that the maximum future reward for state (s) and action (a) is the immediate reward (r) plus the discounted maximum future reward for the following state (s′). γ is the discount factor that determines how much weight should be given to future rewards while calculating the Q-function for the state. The main idea in Q-learning is that we can iteratively approximate the Q-function using the Bellman equation.

    The basic Q-learning algorithm is given below:

· Initialize Q(num_states, num_actions) arbitrarily.

· Observe the initial state s.

· Repeat until terminated:

· Select and carry out an action a.

· Observe reward r and new state s′.

· Update Q[s, a] = r + γ max_a′ Q[s′, a′].

· Set s = s′.
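As a minimal illustration of the algorithm above, the following sketch runs tabular Q-learning on an invented four-state chain environment. The environment, rewards, and episode count are assumptions for demonstration only; they are not part of the trading system.

```python
import random

# Toy deterministic environment: states 0..3, actions 0 (stay) / 1 (advance).
# Reaching state 3 pays reward 1 and terminates; everything else pays 0.
N_STATES, N_ACTIONS, GAMMA = 4, 2, 0.9

def step(s, a):
    s2 = min(s + 1, 3) if a == 1 else s
    r = 1.0 if s2 == 3 and s != 3 else 0.0
    return s2, r, s2 == 3

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
random.seed(0)
for _ in range(200):                      # episodes
    s, done = 0, False
    while not done:
        a = random.randrange(N_ACTIONS)   # explore with a random policy
        s2, r, done = step(s, a)
        # Q-learning update: Q[s,a] = r + gamma * max_a' Q[s',a']
        Q[s][a] = r + GAMMA * max(Q[s2])
        s = s2

# The greedy policy recovered from Q prefers "advance" in every
# non-terminal state, since that is the shortest route to the reward.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy[:3])
```

Because the toy environment is deterministic, the update can overwrite Q directly; the paper's system instead fits a deep network to these targets, as Section 4 describes.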

The Q-values of states and actions are initially random, and they converge after several iterations. The Q-value is updated in each iteration. We use the concept of Q-learning with a deep neural network and the experience replay algorithm, which uses the agent's experience to update the Q-values. The training of the agents using this algorithm is explained in the next section.

    4 Proposed System of RL with Sentiment Analysis

The proposed system trains an agent to decide to buy, sell, or hold a stock when given a state as input. Along with the statistical dimensions, the input vector also has a trend dimension, which results from the sentiment analysis of news regarding that specific stock on the corresponding date. The block diagram of the proposed system is shown in Fig. 2.

    The proposed system contains two essential modules:

(a) Sentiment analysis module. This module takes the news about the stock for each day and converts it to a sentiment score.

(b) RL module. This module decides whether to buy, sell, or hold (do nothing) a stock, given a state as input.

    4.1 Input

    Stock data of a company from a stock API and news about stocks of that company constitute the system input.

    4.1.1 Stock Data

The company stock data consists of the fields Open, Close, High, Low, and Volume. "High" refers to the highest price in the considered period. "Low" refers to the lowest price in the considered period. "Volume" refers to the number of shares traded in a given period. "Open" refers to the opening price in the considered period. "Close" refers to the closing price in the considered period. Here, the considered period is one day. Daily stock data of a company for 10 years is used as input. The stock data is pre-processed to handle missing values and to compute moving averages.
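A minimal pre-processing sketch in pandas, using invented prices in place of the actual Apple data, might look as follows. Forward-filling is one plausible way to handle missing values; the paper does not specify the method used.

```python
import pandas as pd

# Hypothetical daily OHLCV frame standing in for the Apple data used here.
df = pd.DataFrame({
    "Open":   [74.5, 75.1, None, 76.0, 75.8, 76.4, 77.0],
    "Close":  [75.0, 74.8, 75.5, 76.2, 75.9, 76.8, 77.1],
    "Volume": [1.2e6, 0.9e6, 1.1e6, 1.0e6, 1.3e6, 0.8e6, 1.5e6],
})

# Fill a missing value by carrying the previous day's price forward.
df["Open"] = df["Open"].ffill()

# 5-day simple moving average of the closing price (one of the state inputs).
df["MA5"] = df["Close"].rolling(window=5).mean()

print(df[["Open", "MA5"]].round(2))
```

The `MA5` column is undefined (NaN) for the first four days, since a full 5-day window is required; in the trading loop only rows with a defined moving average would feed the state vector.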

    Figure 2:Proposed system

    4.1.2 Stock News

    Stock news for a company is taken from a stock news API.The stock news is used to predict the trend of the stock movement for that company.A sentiment analysis model is used to model the impact of daily news on the stock market.

    4.2 Sentiment Analysis

The input to the sentiment analysis module is the stock news. It can be either company-specific stock news or market-specific stock news. The news undergoes initial pre-processing, where tokenization, removal of unnecessary symbols, and lowercasing are performed. The processed input is passed to the VADER sentiment analysis tool. The VADER output is in the form of polarity scores: a positive score, a negative score, and a compound score. The positive score indicates how positive the piece of text is. The negative score indicates how negative the piece of text is. The compound score is a normalized score between -1 (extremely negative) and +1 (extremely positive). The compound score is used as an input to the RL module. The VADER implementation in the sentiment analysis module of the proposed system is shown in Fig. 3.

    Figure 3:Sentiment analysis module

    4.3 RL Network

To formulate stock trading as an MDP and tackle it with RL, an RL network is constructed, consisting of:

(a) Environment: Stock data of a company.

(b) State: Stock information for a day, along with the sentiment score of the news on that day.

(c) Actions: Sell, Buy, or Hold.

(d) Rewards: Rewards are feedback signals given back to an agent when that agent takes an action. For each action, a positive reward is given when profit is made, and a negative reward is given when the agent's action leads to a loss.

    4.4 Deep Neural Network

A vital part of this module is a deep neural network containing an input layer, three fully-connected hidden layers, and an output layer. The input layer has a size corresponding to the size of the state. The output layer has 3 neurons corresponding to the actions Buy, Sell, and Hold (do nothing). The Rectified Linear Unit (ReLU) is used as the activation function for the hidden layers, and a linear activation function is used in the output layer.

The problem comes down to finding the optimal action at each state of the environment to increase future rewards. We use a deep neural network and an experience replay algorithm to find the optimal action at each state.
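The forward pass of such a network can be sketched in NumPy as follows. The hidden-layer width, the random (untrained) weights, the example state vector, and the action ordering are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_SIZE, HIDDEN, N_ACTIONS = 6, 64, 3   # hidden width is an assumption

# Random weights stand in for a trained network; the shapes mirror the
# described architecture: input -> three fully-connected ReLU layers ->
# linear output layer with one Q-value per action.
Ws = [rng.normal(0, 0.1, (STATE_SIZE, HIDDEN)),
      rng.normal(0, 0.1, (HIDDEN, HIDDEN)),
      rng.normal(0, 0.1, (HIDDEN, HIDDEN)),
      rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))]
bs = [np.zeros(HIDDEN), np.zeros(HIDDEN), np.zeros(HIDDEN), np.zeros(N_ACTIONS)]

def q_values(state):
    """Forward pass: ReLU on the three hidden layers, linear output layer."""
    x = np.asarray(state, dtype=float)
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = np.maximum(x @ W + b, 0.0)     # ReLU activation
    return x @ Ws[-1] + bs[-1]             # linear Q-value estimates

# Hypothetical state: open price, 5-day MA, portfolio value, cash,
# stocks held, sentiment score.
state = [76.0, 75.48, 10000.0, 7500.0, 33.0, 0.42]
q = q_values(state)
action = int(np.argmax(q))                 # index of the chosen action
print(q.shape, action)
```

At decision time, the agent simply takes the argmax over the three output Q-values, mapping the winning index to Buy, Sell, or Hold.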

    4.5 Training of RL Agent

The methodology for training a stock market trading agent is given below. The agent starts with initial open cash and a number of stocks. A batch size is defined. A batch is a set of tuples, each containing State, Action, Reward, and Next State. It is used in the experience replay function for updating the network. The discount factor γ is also defined.

    For each day in the dataset,

(1) The state is defined with the stock opening price, the 5-day moving average price of the stock, stocks held, cash held, and the sentiment score of the previous day.

(2) An action is taken by giving the state as an input to the deep neural network.

(3) The reward is calculated according to the action taken, using the price change percentage given by Eq. (3).

(4) The portfolio value is the net worth of the investor. It is calculated by Eq. (4).

(5) The assets (cash held and stocks held) are recalculated according to the action taken, and then the next state is defined.

(6) The tuple (State, Action, Reward, Next State) is added to the agent memory.

(7) If the agent memory is greater than the defined batch size, the experience replay function is called with the agent memory as input to update the deep neural network. The experience replay algorithm is given below.
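The seven steps above can be sketched as a simplified training loop. The price series, the reward rule (signed price-change percentage, our reading of Eq. (3)), the portfolio formula (our reading of Eq. (4)), and the stubbed-out network and replay functions are all assumptions for illustration.

```python
import random

random.seed(1)
prices = [75.0, 76.5, 74.9, 77.2, 78.0, 77.4, 79.1]   # hypothetical opens
cash, stocks, gamma, batch_size = 10000.0, 0, 0.995, 3
memory = []

def choose_action(state):          # stand-in for the deep network's argmax
    return random.choice(["buy", "sell", "hold"])

def experience_replay(batch):      # stand-in for the network update
    pass

for t in range(len(prices) - 1):
    price, next_price = prices[t], prices[t + 1]
    window = prices[max(0, t - 4):t + 1]
    ma5 = sum(window) / len(window)
    sentiment = 0.0                # the previous day's compound score goes here
    state = (price, ma5, stocks, cash, sentiment)

    action = choose_action(state)
    change_pct = (next_price - price) / price * 100    # assumed Eq. (3)
    if action == "buy" and cash >= price:
        cash -= price; stocks += 1
        reward = change_pct        # profit if the price rises after buying
    elif action == "sell" and stocks > 0:
        cash += price; stocks -= 1
        reward = -change_pct       # profit if the price falls after selling
    else:
        reward = 0.0

    portfolio = cash + stocks * next_price             # assumed Eq. (4)
    next_state = (next_price, ma5, stocks, cash, sentiment)
    memory.append((state, action, reward, next_state))
    if len(memory) > batch_size:
        experience_replay(memory[-batch_size:])

print(round(portfolio, 2), len(memory))
```

One tuple is appended per trading day, and the replay stub fires once the memory exceeds the batch size, mirroring step (7).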

    4.6 Experience Replay Algorithm

This algorithm is used for updating the weights and biases of the deep neural network to optimize the Q-values of states and actions. The input to the algorithm is the agent memory, which consists of tuples, where each tuple consists of State, Action Taken, Reward, and Next State. A batch of tuples is selected from the agent memory.

For each tuple (State (s), Action (a), Reward (r), Next State (s′)) in the batch:

(1) The Q-value of the action (a) for the current state is calculated using the Bellman equation, where the value Q(s′, a′) is calculated by giving the Next State s′ as input to the network.

(2) The Q-value of Action (a) for State (s) is updated in the deep neural network.

(3) The above procedures (Steps 1 to 7 of the training methodology, followed by this replay update) are continued for several episodes (iterations) to allow the Q-values of states and actions to converge. The neural model at the end of training is saved and used for testing. An overview of the RL module is shown in Fig. 4.

    Figure 4:Reinforcement learning module

The input given to the RL module includes the open price, five-day moving average, cash held, sentiment score, and stocks held. The outputs obtained from the module are Action, Reward, and Next State. The Action can be Buy, Sell, or Hold (do nothing). These outputs are used to update the deep neural network weights and biases to take optimal actions for states.
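The replay update of Section 4.6 can be sketched as follows. A dictionary Q-table stands in for the deep neural network (the paper fits the network to the same Bellman targets), and the toy memory tuples are invented for illustration.

```python
import random

GAMMA = 0.995
ACTIONS = ("buy", "sell", "hold")
qtable = {}   # dict stands in for the deep network: (state, action) -> Q

def q(state, action):
    return qtable.get((state, action), 0.0)

def experience_replay(memory, batch_size):
    """Update Q-values toward the Bellman target r + gamma * max_a' Q(s', a')."""
    batch = random.sample(memory, min(batch_size, len(memory)))
    for state, action, reward, next_state in batch:
        target = reward + GAMMA * max(q(next_state, a) for a in ACTIONS)
        qtable[(state, action)] = target   # the network is fit to this target

# Invented (State, Action, Reward, Next State) tuples.
memory = [(("day0",), "buy", 1.5, ("day1",)),
          (("day1",), "hold", 0.0, ("day2",)),
          (("day2",), "sell", -0.8, ("day3",))]
random.seed(0)
for _ in range(5):                         # a few replay passes
    experience_replay(memory, batch_size=3)
print(sorted(qtable))
```

Sampling a random batch from memory breaks the correlation between consecutive trading days, which is the usual motivation for experience replay.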

    5 Results and Discussion

    5.1 Dataset

The initial dataset contains the stock information for each market day for Apple from 2006 to 2016 [36]. Each day's stock information contains the Date, Open Price (price of the stock at the start of that day), Close Price (price of the stock at the end of that day), High Price (highest price of the stock on that day), Low Price (lowest price of the stock on that day), Adjusted Close Price, Volume (number of stocks traded on that day), and News (news about Apple and stock market news on that day).

The open price of Apple stock plotted against each market day is shown in Fig. 5. The X-axis represents the year, and the Y-axis represents the price in dollars. It can be seen that there are rises and falls in the stock price across the years.

    Figure 5:Price chart of Apple stock

    5.2 Sentiment Analysis

This step aims to find the sentiment of the news for each day. The news for each day is pre-processed. The pre-processing step includes conversion of the news to lowercase, removal of stop words and unnecessary symbols, and lemmatization.

Each day's pre-processed news is converted into a sentiment score with values ranging from -1 to +1. The VADER tool of the Natural Language Toolkit (NLTK) is used for sentiment analysis. The input to the VADER tool is a piece of text news, and the output is in the form of polarity scores: positive, negative, and compound scores. The positive score indicates how positive the piece of text is. The negative score indicates how negative the piece of text is. The compound score is a normalized score between -1 (extremely negative) and +1 (extremely positive). The compound score is used as an input to the RL module. The news for each day in the initial dataset is replaced with the corresponding sentiment score.

    5.3 Reinforcement Learning(RL)

The dataset after sentiment analysis is given as input to the RL module. The dataset is split into training and testing sets. The training dataset consists of 1500 rows, and the testing dataset consists of 1000 rows. Three scenarios are considered for comparison:

    (1) Benchmark model (emulation of manual trading).

    (2) RL without sentiment input.

    (3) RL with sentiment input.

These scenarios are compared in terms of the portfolio value at each time. The portfolio value is the investor's net asset value, which is given by adding the cash to the market value of the stocks held by the investor.

    5.4 Benchmark Model on Testing Dataset

This model is used for the emulation of manual trading. The model starts with an initial investment and buys a fixed number of stocks using that cash. The initial cash is $10000, and the number of stocks bought is 33. The remaining cash after buying the stocks on the starting day is $7500. The model sells 10% of the initially held stocks at fixed intervals. Portfolio values at each interval are calculated.
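The benchmark can be sketched as a short simulation. The price path and the selling interval are assumptions for illustration, while the $10000 initial cash, the 33 shares, and the 10%-of-initial-holding selling rule come from the text.

```python
# Hypothetical price path; $10000 buys 33 shares on day 0, then 10% of the
# initial holding is sold at a fixed interval, emulating manual trading.
prices = [75.75, 77.0, 74.5, 78.2, 80.1, 79.0, 82.3, 81.5, 83.0, 85.2]

cash = 10000.0
shares = 33
cash -= shares * prices[0]                 # buy-in on the starting day
sell_chunk = shares // 10                  # 10% of the initial stock held
interval = 2                               # assumed selling interval (days)

portfolio_values = []
for day, price in enumerate(prices):
    if day > 0 and day % interval == 0 and shares >= sell_chunk:
        cash += sell_chunk * price         # scheduled partial sell-off
        shares -= sell_chunk
    # Portfolio value = cash + market value of the stocks still held.
    portfolio_values.append(round(cash + shares * price, 2))

print(portfolio_values[0], portfolio_values[-1])
```

On day 0 the portfolio value equals the initial cash, since buying shares only converts cash into stock; the later values track the assumed price path as shares are gradually sold.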

    5.4.1 Training with RL Without Sentiment Input

    The parameters used for training are:

(a) No. of episodes = 50.

(b) Batch size (for updating the model parameters) = 50.

(c) State size = 5.

(d) State = (open price, five-day moving average price, portfolio value, cash held, stocks held).

(e) Discount factor = 0.995.

Training is performed on the training dataset with the given training parameters. All rows in the dataset are iterated over for training for 50 episodes. The batch size is the number of tuples containing the State, Action, Reward, and Next State used for updating the weights and biases of the model. The model is saved after training for testing purposes.

    5.4.2 Testing with RL Without Sentiment Input

The model generated from the training phase is used in the testing phase. The model starts with initial cash and buys a fixed number of stocks using that cash, as in the benchmark model testing. The initial cash is $10000, and the number of stocks bought initially is 33. The remaining cash after buying the stocks on the starting day is $7500. The graph of the portfolio values is plotted and compared against that of the benchmark model.

The comparison graph of trading with the RL model and the benchmark model is illustrated in Fig. 6. The X-axis of the graph represents the days, and the Y-axis represents the portfolio value in dollars.

    Figure 6:RL model vs. benchmark model

    5.4.3 Training with RL with Sentiment Input

    The parameters used for training are:

    (a) No. of episodes=50.

    (b) Batch size (for updating the model parameters)=50.

    (c) State size=6.

    (d) State=(open price,five-day moving average price,portfolio value,cash held,stocks held,sentiment score).

    (e) Discount factor=0.995.

    Training is performed on the training dataset with the given parameters. All rows in the dataset are iterated over for 50 episodes. The batch size is the number of (State, Action, Reward, Next State) tuples used for updating the weights and biases of the model. The model is saved after training for testing purposes.
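Compared with the five-component state in Section 5.4.1, the only change here is appending the sentiment score as a sixth component. A hypothetical sketch of assembling this state vector (field names are illustrative, not the paper's code):

```python
import numpy as np

def make_state(open_price, ma5, portfolio_value, cash, stocks, sentiment):
    """State = (open price, 5-day moving average, portfolio value,
    cash held, stocks held, sentiment score) -- state size 6."""
    return np.array(
        [open_price, ma5, portfolio_value, cash, stocks, sentiment],
        dtype=np.float64,
    )

state = make_state(100.0, 98.5, 10800.0, 7500.0, 33, 0.42)
print(state.shape)  # (6,)
```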

    5.4.4 Testing with RL with Sentiment Input

    The model generated in the training phase is used in the testing phase. The model starts with initial cash and a fixed number of stocks, like the benchmark model. The portfolio value on each day is calculated; a graph of portfolio values is plotted and compared with that of the benchmark model. The comparison of trading with the benchmark model, the RL model without sentiment, and the RL model with sentiment is illustrated in Fig.7. The X-axis of the graph represents the days, and the Y-axis represents the portfolio value in dollars.

    Figure 7:RL model with sentiment vs. RL model without sentiment and benchmark model

    5.4.5 Inference

    The three models start with a fixed amount of initial cash and a fixed number of trading stocks. From the plot of portfolio values for the three scenarios (benchmark, RL without sentiment input, and RL with sentiment input) in Fig.7, it can be seen that RL without sentiment generates higher portfolio values than the benchmark model, and that RL with sentiment as input generates higher portfolio values than RL without sentiment. It can be inferred that the trading agent makes better decisions and earns more profit when sentiment is used as an input along with the statistics.

    6 Conclusions and Future Works

    Forecasting stock market conditions and predicting stock prices have always been challenging tasks. Different techniques can be used to predict stock prices and make appropriate decisions. Naive approaches, deep learning techniques, and RL techniques have been studied for stock market forecasting. RL is an efficient technique for making automated decisions in the stock market, but it needs high-end processors for efficient real-time performance. The proposed system depends on RL along with market sentiment for decision making. The results show that the inclusion of sentiment helps in making better decisions than trading based on statistics only.

    The proposed system works for a single stock (company). In future work, it could be extended to the case of multiple stocks. Segregation of news for different stocks should be performed, and sentiment analysis should be implemented in a distributed manner. A serious challenge is the detection of fake news for more efficient performance. The proposed system can also be extended to intra-day trading with real-time sentiment analysis of news.

    Acknowledgement: The authors would like to thank the support of the Deanship of Scientific Research at Princess Nourah Bint Abdulrahman University.

    Funding Statement: This research was funded by the Deanship of Scientific Research at Princess Nourah Bint Abdulrahman University through the Fast-track Research Funding Program.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
