
    Mobile Memory Management System Based on User’s Application Usage Patterns

    Computers, Materials & Continua, 2021, No. 9

    Jaehwan Lee and Sangoh Park

    School of Computer Science and Engineering, Chung-Ang University, Dongjak-gu, Seoul, 06974, Korea

    Abstract: Currently, the number of functions to improve user convenience in smartphone applications is increasing. In addition, more mobile applications are being loaded into mobile operating system memory for faster launches, thus increasing the memory requirements for smartphones. The memory used by applications in mobile operating systems is managed using software; allocated memory is freed up by either considering the usage state of the application or terminating the least recently used (LRU) application. As LRU-based memory management schemes do not consider the application launch frequency in a low-memory situation, currently used mobile operating systems can terminate a frequently executed application, thereby increasing its relaunch time. This study proposes a memory management system that can efficiently utilize the main memory space by analyzing application usage information. The proposed system reduces the application launch time by keeping the applications that are most frequently used, or most likely to be run, in the main memory for as long as possible. A performance evaluation conducted on actual smartphone usage records showed that the proposed memory management system increases the number of times applications resume from the main memory compared with the conventional memory management system, and that the average application execution time is reduced by approximately 17%.

    Keywords: Mobile environment; memory management; machine learning; neural nets; user-centered design

    1 Introduction

    Various types of mobile applications are emerging with the development and widespread use of smartphones. Currently, the number of mobile applications registered in the application marketplaces for mobile operating systems such as Android and iOS is approximately 2 million [1,2]. Smartphone users typically install dozens, and sometimes even hundreds, of applications on their devices; additional features are being included in applications to improve user convenience. Accordingly, requirements such as improvements in main memory capacity and computational processing performance are increasing. The increasing demand for main memory can be accommodated with the use of hardware or software. The hardware approach involves expanding the main memory of the smartphone, whereas the software approach involves using the main memory efficiently by following carefully designed memory management policies.

    Mobile operating systems, such as Android or iOS, implement application life cycle management [3], which partially frees up memory used by applications based on their execution state and thus accelerates application launch requests under conditions of limited main memory capacity. Moreover, the operating system caches as many applications as possible in the main memory so that they are loaded rapidly when switched to or relaunched. To prevent main memory shortages, cached applications that are not frequently used are terminated; this prevents users from experiencing reduced system performance when they relaunch frequently used applications. Swap techniques that use secondary storage as part of the main memory space have been considered [4-6] to efficiently utilize the main memory of smartphones. However, owing to the wear-out problems of NAND flash memory-based storage devices, ZRAM [7], which compresses memory space and uses it as swap space, or ZSWAP [8], which is used as a swap cache, can be used instead for smartphone memory management. With the memory management techniques currently applied to smartphones [9], it is difficult to distinguish between frequently and infrequently used applications, because these techniques assume that the most recently executed application is more likely to be executed again than other applications [10]; this problem is exacerbated as users install and use more applications.

    In this paper, we propose a memory management system that efficiently utilizes the main memory and swap space by analyzing the user's application usage patterns and application execution probabilities. The proposed system reduces the execution time of applications by keeping in the main memory the applications that are frequently used or that present a high probability of being relaunched. In low-memory situations, the main memory space allocated to applications is reclaimed by terminating the application least likely to be reused. Consequently, frequently used applications remain in memory.

    The remainder of this article is organized as follows. In Section 2, we summarize existing studies on memory management for mobile operating systems. In Section 3, we introduce the proposed memory management system for efficiently managing the main memory of mobile operating systems. In Section 4, we evaluate and analyze the application launch performance of the proposed and existing memory management systems. Finally, in Section 5, we present conclusions and future research directions.

    2 Related Work

    Unlike PC- and HPC-based operating systems, mobile operating systems deployed in smartphones follow memory management policies that are suitable for mobile environments. Mobile applications can be cached in the main memory to reduce their launch time without the need for a high-performance processor or storage. The number of applications cached in the main memory can be increased with the use of compressed memory. Currently, users typically install dozens, and sometimes even hundreds, of applications on their smartphones [11]; some of these applications are frequently used, such as messengers and browsers, whereas others are rarely used [12]. Mobile operating systems allow users to switch screens by launching other applications while maintaining the previously used application in memory for as long as possible; this ensures that the application resumes from memory when launched again.

    Android is an operating system that holds approximately 70% of the mobile operating system market share [13], making it one of the most widely installed operating systems on smart devices. It includes an operating system and middleware for mobile devices and is based on the Linux kernel. In addition to the existing functionality of the Linux kernel, Android-specific functions, such as power management and memory management, are also included in this system.

    Android hosts applications written in Java and Kotlin in separate processes on the Android runtime virtual machine. It implements an application life cycle management policy [3] that performs memory allocation and release based on the execution state of the applications. While main memory is sufficient, running applications remain in the main memory unless explicitly terminated. The activity manager service (AMS) in the Android framework manages and tracks application status and records the status information in the processes that host each application. When the system experiences a memory shortage, Android's memory manager runs the low memory killer (LMK) [14-16] driver to terminate applications following the least recently used (LRU) [9] policy, thus reclaiming memory. This reclaiming task is repeated until sufficient free space has been recovered by the system.

    Commonly used LRU-based memory management policies assume that recently used applications are likely to be launched again [12]. However, the effectiveness of these policies decreases as the number of applications a user installs increases. Frequently used applications can be selected for termination if several infrequently but recently used applications are present. In this case, the application launch time increases because frequently used applications must be launched from the device storage.

    Other software approaches to memory management feature swap techniques, some of which utilize part of the main memory as compressed storage space. Conventional swap techniques use part of the free space in secondary storage as a swap area, which allows memory resources to be used beyond the physical memory limit of the system. When the main memory space is not sufficient to allocate a page, the memory manager migrates a page from the main memory to the swap area. The swapped-out page is brought back to the main memory when a process references it again, and another page is then selected to be swapped out. This swap-in and swap-out activity is performed frequently if the main memory keeps running low. Therefore, NAND flash memory-based mobile devices do not provide a swap feature that employs secondary storage, in order to prevent storage wear-out. Instead, main memory-based swap techniques, such as ZRAM or ZSWAP, are utilized to compress and store swap data using part of the main memory space as a block device. However, these swap techniques present compression and decompression overhead; the overhead increases with the amount of data swapped out to, or in from, the compressed area.

    In a study that used both ZRAM and storage as the swap area [17], the swap cost of a page was calculated so that frequently referenced pages were swapped out into ZRAM while infrequently used pages were swapped out to storage. Taking the compression ratio [18] or application behavior [19] into consideration when estimating the cost of a page showed benefits for swapping out pages, extended storage lifespan, and higher application launch speeds. In a study that proposed memory management using the average reuse distance [10,14], which indicates the number of other applications launched between the launch and relaunch of an application, an application was considered more likely to be relaunched when its average reuse distance was smaller. Moreover, the total number of swaps was reduced by preventing the swap-out of applications with a small average reuse distance and allowing the swap-out of applications with a large average reuse distance. A cloud-based memory expansion scheme [20] and a device-reserved memory management scheme for mobile devices [21] were also investigated to accommodate increasing memory demands. However, existing memory management techniques present limitations in reflecting application usage patterns that are appropriate for the user, because they do not consider the application launch data and the correlation between applications.

    In this study, we present an approach for estimating the application launch probability by collecting users' application usage information and training a long short-term memory (LSTM) [22-24] model that considers the associations between the various usage patterns of the users. Based on the analysis of this usage pattern information, the proposed memory management system predicts the application to be launched next, so that the system can determine which application should be left in the main memory for a relaunch and which should be terminated. Our approach utilizes the main memory and swap space more efficiently than existing memory management methods, because the applications least likely to be relaunched are terminated in low-memory situations.

    3 AMMS: An Application-Prediction-Based Memory Management System

    In this paper, we propose AMMS, an application-prediction-based memory management system that analyzes the user's application usage patterns and launch probabilities. The overall architecture of AMMS is shown in Fig. 1. The activity/process context generator is a module added to the existing activity manager service to generate application launch information; it passes the generated information to the context management module. When the application usage predictor of the context management module receives the application launch information, it uses the LSTM network to predict the launch probability of the next application. The application usage trainer is a module that receives and stores application launch information and utilizes this information to train the LSTM network. In addition, the launch probability of each application is recorded as process information, which is managed by the process management module in the Linux kernel. The AMMS interface inside file system management is the interface module that transfers data from the Android framework to the Linux kernel. The AMMS reclaimer module of memory management searches for the process with the lowest launch probability when a main memory shortage is experienced; it then terminates that process and frees up memory space. This increases the frequency with which the applications most likely to be launched resume from memory, without removing them from memory. In summary, AMMS improves the application launch speed by utilizing the main memory more efficiently than existing methods.

    3.1 Activity/Process Context Generator

    The activity/process context generator generates contextual information when an application is launched, paused, or terminated in the mobile operating system. It collects usage information such as the package name of the application, the time of launch, process creation information generated for the application's execution, the process ID, and the application package name to which each process belongs. This module is located in the activity manager service, which is responsible for launching and terminating applications. When an application is launched or a process is created, the activity/process context generator delivers the application usage information to the application usage trainer and the application usage predictor in context management. In addition, when a process creation task for launching an application occurs, the relevant context information is transferred to the application usage predictor.

    3.2 LSTM Network

    The LSTM network is located within the context management module. It learns the user's application usage pattern and predicts the application launch probability. The parameters associated with the LSTM network are defined in Tab. 1. LSTM is designed to solve the gradient vanishing problem of recurrent neural networks (RNNs) and to learn the correlation between long-term and short-term data [25]. The forget gate ft, cell input gt, input gate it, memory cell ct, and output gate ot constituting the LSTM model are defined in Eqs. (1)-(6).
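    For reference, the standard LSTM cell equations corresponding to these gates are as follows (a sketch in the usual notation with weight matrices W, U and biases b, numbered here to match the references in the text; the authors' exact parameterization may differ):

    f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)        (1)
    g_t = \tanh(W_g x_t + U_g h_{t-1} + b_g)         (2)
    i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)        (3)
    c_t = f_t \odot c_{t-1} + i_t \odot g_t          (4)
    o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)        (5)
    h_t = o_t \odot \tanh(c_t)                       (6)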

    Figure 1: System architecture

    The parameters used by the LSTM network to predict the application launch probability are defined in Tab. 2. The launch probability of application ui at time t+1 given the application launch history Vt is defined as P(ui_t+1 | Vt). The data for the application launched at time t are converted into the input of the LSTM model using Eq. (7). The function lstm(xt), which performs the operations of the LSTM cell defined in Eqs. (1)-(6), can be expressed as in Eq. (8).

    The softmax function [26] presented in Eq. (9) is used to transform the output of lstm(xt) into a probability for ui_t+1.
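    A plausible reconstruction of Eqs. (7)-(9), assuming the launched application is encoded as a one-hot vector over the m installed applications and that a linear output layer with weights W_y and bias b_y feeds the softmax (both are assumptions, since the exact formulation is not reproduced here):

    x_t = \mathrm{onehot}(u_{i,t}) \in \{0, 1\}^{m}                                        (7)
    h_t = \mathrm{lstm}(x_t)                                                               (8)
    P(u_{i,t+1} \mid V_t) = \mathrm{softmax}(z)_i = \frac{\exp(z_i)}{\sum_{j=1}^{m} \exp(z_j)}, \quad z = W_y h_t + b_y   (9)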

    Table 1: Definition of parameters related to long short-term memory

    3.3 Application Usage Predictor

    The application usage predictor module estimates the launch probability of an application using the LSTM network and delivers the obtained probability value to the Linux kernel. Application usage information is passed in through the activity/process context generator, and the application usage predictor feeds this information to the LSTM network to obtain the application launch probability.

    The application usage predictor reads the application mapping table stored in permanent storage, using the format presented in Tab. 3, and then creates an application process information list called AppInfo, as described in Tab. 4. Application launch and application process creation occur at different times on mobile operating systems such as Android; therefore, the application usage predictor receives both application launch and application process creation information from the activity/process context generator.

    Table 2: Definition of parameters related to application usage prediction

    Table 3: Mapping table of application name to ID

    Table 4: AppInfo managed by the application usage predictor

    Algorithm 1: Executed when an application has been selected to run
    1: procedure onApplicationLaunched(ui_t)
    2:   Generate xt according to Eq. (7)
    3:   ht ← lstm(xt) according to Eq. (8)
    4:   for each ui ∈ U do
    5:     ui's AppInfo.prob ← P(ui_t+1 | Vt) according to Eq. (9)
    6:     for each pid in ui's AppInfo.pidlist do
    7:       if pid exists then
    8:         p ← ui's AppInfo.prob
    9:         i ← p × maximum_integer_value
    10:        write i through procfs
    11:      end if
    12:    end for
    13:  end for
    14: end procedure

    Algorithm 1 describes the task performed at time t when the application usage predictor receives the application launch information. The application relaunch probability acquired from the LSTM network, as shown in lines 2-3, is stored in the prob field by traversing the AppInfo list, as shown in lines 4-5. If the application's process has already been created, the probability value is stored in the kernel's process data structure through file system management, as in lines 6-12. Since the Linux kernel does not support floating-point operations, the probability value is converted to an integer (line 9) before being written to the kernel (line 10).
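    As a concrete illustration of this flow, the following Python sketch mirrors Algorithm 1; the AppInfo structure, the per-process procfs path /proc/<pid>/amms_prob, and the scaling constant are illustrative assumptions rather than the exact interfaces used by AMMS:

    import os
    from dataclasses import dataclass, field

    MAX_INT = 2**31 - 1  # assumed integer scale (the kernel avoids floating point)

    @dataclass
    class AppInfo:                      # simplified stand-in for the structure in Tab. 4
        app_id: int
        prob: float = 0.0
        pidlist: list = field(default_factory=list)

    def on_application_launched(app_infos, probs):
        """Sketch of Algorithm 1: record predicted launch probabilities in the
        kernel through a hypothetical per-process procfs entry."""
        for app in app_infos:                   # lines 4-5: update AppInfo.prob
            app.prob = probs[app.app_id]        # P(ui_t+1 | Vt) from the softmax output
            scaled = int(app.prob * MAX_INT)    # line 9: convert the float to an integer
            for pid in app.pidlist:             # lines 6-12: write for each live process
                path = f"/proc/{pid}/amms_prob" # hypothetical procfs node name
                if os.path.exists(path):        # process still exists
                    with open(path, "w") as f:
                        f.write(str(scaled))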

    Algorithm 2: Executed when a process for an application has been created
    1: procedure onProcessCreated(ui_t, pid)
    2:   add pid to ui's AppInfo.pidlist
    3:   for each pid in ui's AppInfo.pidlist do
    4:     if pid exists then
    5:       p ← ui's AppInfo.prob
    6:       i ← p × maximum_integer_value
    7:       write i through procfs
    8:     end if
    9:   end for
    10: end procedure

    Algorithm 2 describes the task performed when the created application process information is received. When a process is created, the corresponding pid is stored in the pidlist of the AppInfo of the application to which the process belongs.

    3.4 Application Usage Trainer

    The application usage trainer module trains the LSTM network using application usage information; it receives and stores application launch information from the activity/process context generator. The unique IDs corresponding to the applications are stored chronologically. The LSTM network is unfolded according to the length of the data and becomes a feed-forward network, and the backpropagation through time (BPTT) algorithm is used to update the weights of the network. The LSTM network is trained to minimize the mean squared error (MSE) between the predicted and actual values. The MSE, expressed in terms of the application launch probability, is defined in Eq. (10).
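    A likely form of Eq. (10), assuming a one-hot target vector y_{t+1} whose entry for the application actually launched at time t+1 is 1 and whose other entries are 0 (an assumption, since the equation is not reproduced here):

    MSE = \frac{1}{m} \sum_{i=1}^{m} \left( y_{i,t+1} - P(u_{i,t+1} \mid V_t) \right)^{2}    (10)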

    Training the LSTM network in the application usage trainer requires a significant number of computations. Therefore, the network is trained only when the smartphone is charging and not in use.

    3.5 AMMS Interface

    The AMMS interface provides an interface for information delivery so that the application usage predictor can deliver application launch probability information to a process data structure inside the Linux kernel. The information delivery interface is provided by procfs [27]. In Unix-like operating systems, system information, such as process information, is presented in a file-like structure using procfs; procfs is also used to change system parameters at runtime through a common file I/O interface. In the proposed system, the application usage predictor records the launch probability of an application in the process data structure managed by process management via procfs.
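    The AMMS-specific procfs entry is not named here; the sketch below only illustrates the general procfs read/write pattern that such an interface builds on, using a standard Linux entry (/proc/sys/vm/swappiness) as a stand-in:

    def read_swappiness() -> int:
        # procfs exposes kernel state as ordinary files that can be read...
        with open("/proc/sys/vm/swappiness") as f:
            return int(f.read())

    def set_swappiness(value: int) -> None:
        # ...and written at runtime (requires root) through plain file I/O
        with open("/proc/sys/vm/swappiness", "w") as f:
            f.write(str(value))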

    3.6 Process Probability

    Process probability is a data structure for recording the launch probability of an application's process. The prob field is added to the task data structure managed by process management, as shown in Tab. 5. The application usage predictor only estimates the probability of user-launched applications, since the termination of a system process may cause a system crash or failure. The prob value for system processes is initialized with a negative integer value to exclude them from the memory reclaim candidates.

    Table 5: Process information managed by process management

    3.7 AMMS Reclaimer

    Algorithm 3: Executed on a low memory event
    1: procedure onLowMemory()
    2:   tgproc ← null
    3:   minprob ← maximum_integer_value
    4:   for each proc in running_process_list do
    5:     if 0 ≤ proc.prob and proc.prob ≤ minprob then
    6:       tgproc ← proc
    7:       minprob ← proc.prob
    8:     end if
    9:   end for
    10:  terminate tgproc and reclaim memory
    11: end procedure

    The AMMS reclaimer is a module that terminates the process of the application that is least likely to be launched next when the available main memory starts running low. Algorithm 3 describes the memory reclamation process performed by the AMMS reclaimer. The variables for designating the process with the minimum probability are initialized in lines 2-3. The process list is traversed in line 4, followed by the selection of the process with the minimum probability in lines 5-8; the selected process is terminated and its memory reclaimed in line 10.
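    A minimal user-space sketch of the same selection logic follows; the processes list, with .pid and .prob attributes (system processes carrying a negative value), is an assumed input, and in AMMS this runs inside the kernel rather than via os.kill:

    import os
    import signal

    def reclaim_lowest_probability(processes):
        """Sketch of Algorithm 3: pick the non-system process with the lowest
        recorded launch probability and terminate it to free memory."""
        target = None
        for proc in processes:
            if proc.prob >= 0 and (target is None or proc.prob < target.prob):
                target = proc                       # lowest non-negative probability so far
        if target is not None:
            os.kill(target.pid, signal.SIGKILL)     # terminate it so its memory is reclaimed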

    4 Performance Evaluation

    4.1 Design of LSTM Network

    We designed the LSTM model for AMMS by varying its hyperparameters to identify the most efficient structure for predicting mobile application usage. To establish an efficient structure for the LSTM model, both the number of computations and the prediction performance of the model should be considered. The model validation accuracy was determined by changing the number of layers of the LSTM network from 2 to 4 and the number of LSTM neurons per layer from 10 to 100. The number of application usage records was varied from 100 to 10000 in this experiment, which helped determine the robustness of the model prediction accuracy. The application usage records were randomly selected from LiveLab Research's real-world usage data [28].
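    The paper does not specify the implementation framework; a minimal PyTorch-style sketch of the kind of model swept here (a stacked LSTM over one-hot application IDs with a softmax over the installed applications, an assumed architecture rather than the authors' exact implementation) could look as follows:

    import torch
    import torch.nn as nn

    class AppUsageLSTM(nn.Module):
        """Stacked LSTM over one-hot app-ID sequences; outputs a probability
        distribution over the next application to be launched."""
        def __init__(self, num_apps: int, neurons: int = 10, layers: int = 2):
            super().__init__()
            self.lstm = nn.LSTM(input_size=num_apps, hidden_size=neurons,
                                num_layers=layers, batch_first=True)
            self.out = nn.Linear(neurons, num_apps)

        def forward(self, x):               # x: (batch, seq_len, num_apps) one-hot history
            h, _ = self.lstm(x)
            logits = self.out(h[:, -1])     # predict the next application from the last step
            return torch.softmax(logits, dim=-1)

    # Example: a 2-layer, 10-neuron model (l2n10) over 35 applications
    model = AppUsageLSTM(num_apps=35, neurons=10, layers=2)
    probs = model(torch.zeros(1, 8, 35))    # dummy 8-step launch history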

    The model validation accuracy and training loss for each number of layers and neurons are shown in Figs. 2 and 3, respectively. As shown in Fig. 2, the validation accuracy increased as the number of neurons increased. However, the accuracy decreased as the number of layers increased. The accuracy of the 2-layer model was the highest for neuron counts from 20 to 100. The training loss for each number of neurons and layers was examined to identify whether overfitting occurred. As shown in Fig. 3, the training loss increased as the number of layers increased. If overfitting had occurred, the training loss would have decreased while the validation accuracy decreased. Therefore, it can be concluded that overfitting did not occur. The experimental results also show that the deeper the LSTM network structure, the more difficult it is to learn the application usage of a user.

    Figure 2: Comparison of validation accuracy

    To select a model suitable for the mobile environment, it is necessary to identify the most computationally efficient model. We employ the method used in [29] to estimate the number of computations required to perform a single prediction task with an LSTM network. The number of computations is calculated in terms of L, N, and m, which represent the number of layers, the number of LSTM neurons per layer, and the number of elements of xt, respectively.

    Unlike in the previous work, the number of application types in the usage dataset, represented by m, varies depending on the dataset. The number of applications according to the number of samples in a dataset is shown in Fig. 4; the number of applications increases with the number of samples. Furthermore, the accuracy varies as the number of applications changes. The validation accuracy according to the number of samples was collected for each number of layers and neurons. Note that models with 4 layers are excluded, since they were shown to be unsuitable for the application usage dataset. Figs. 2 and 5 show that the validation accuracy tends to increase as the number of neurons increases; however, the same trend is not observed as the number of layers increases. The accuracy of one model is not always higher than that of another; e.g., for the models l2n90 and l3n20, the accuracy of l2n90 is lower than that of l3n20 for sample counts from 100 to 200 and becomes higher when the number of samples exceeds 600.

    Figure 3: Comparison of training loss

    Figure 4: Number of applications vs. the number of samples in the dataset

    Figure 5: Validation accuracy of the 2-layer model vs. the number of samples

    Based on the experimental results presented in Figs. 2, 5, and 6, it is concluded that a model should not be chosen unconditionally just because its accuracy is higher over a specific sample range or its number of computations is smaller. Therefore, we propose an evaluation method to identify a model that provides both high efficiency and good performance. The number of computations required to perform a single prediction task with an LSTM network is determined using Eq. (11) [29].
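    Reconstructed from the comp expressions in Algorithm 4 (an inference rather than a verbatim reproduction), Eq. (11) appears to take the form

    C(L, N, m) = L\left(4N^{2} + 5Nm + 8N\right) + 2m    (11)

    with L layers, N neurons per layer, and m input elements.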

    Figure 6: Validation accuracy of the 3-layer model vs. the number of samples

    Algorithm 4: Determination of model score
    1: procedure getModelScore(l′, n′)
    2:   score ← 0
    3:   for each l ∈ Layers, n ∈ Neurons, s ∈ Samples do
    4:     m ← the number of apps from Fig. 4 with s
    5:     compthis ← l′(4n′² + 5n′m + 8n′) + 2m
    6:     accthis ← validation accuracy of the model with l′, n′ for s samples
    7:     comptarget ← l(4n² + 5nm + 8n) + 2m
    8:     acctarget ← validation accuracy of the model with l, n for s samples
    9:     if compthis < comptarget then
    10:      if accthis ≥ acctarget then
    11:        score ← score + 1
    12:      end if
    13:    end if
    14:  end for
    15:  return score
    16: end procedure

    Algorithm 4 depicts the proposed method to evaluate an LSTM model considering its performance and computational efficiency simultaneously. To evaluate a model with l′ layers and n′ neurons, its validation accuracy for each number of samples s is compared to that of all other models. A model's score indicates how many other model configurations with higher complexity and no higher accuracy exist. Line 3 searches through all other models with l layers, n neurons, and s samples. The computation amount of the model whose score is being calculated is expressed as compthis in line 5, and its accuracy as accthis in line 6. The computation amount and accuracy of the model being compared against are expressed as comptarget and acctarget, respectively, in lines 7-8. As shown in lines 9-13, if a compared model requires more computations than compthis and its accuracy is equal to or lower than accthis, the score count is increased by 1. For example, with 200 samples, if accthis of l2n10 is higher than acctarget of l3n10 and compthis of l2n10 is lower than comptarget of l3n10, the score of l2n10 is increased by 1.
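    The scoring procedure can be written compactly in Python; the accuracy and apps_per_sample lookup tables are assumed inputs standing in for the measured validation accuracies and Fig. 4, respectively:

    def computations(layers, neurons, num_apps):
        # Per-prediction cost as reconstructed from Algorithm 4 / Eq. (11)
        return layers * (4 * neurons**2 + 5 * neurons * num_apps + 8 * neurons) + 2 * num_apps

    def model_score(l_this, n_this, accuracy, apps_per_sample, layers, neurons, samples):
        """Sketch of Algorithm 4: count configurations that are both more
        expensive and no more accurate than the candidate (l_this, n_this)."""
        score = 0
        for l in layers:
            for n in neurons:
                for s in samples:
                    m = apps_per_sample[s]          # number of apps for this sample count
                    if (computations(l_this, n_this, m) < computations(l, n, m)
                            and accuracy[(l_this, n_this, s)] >= accuracy[(l, n, s)]):
                        score += 1
        return score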

    Fig. 7 shows the score of each model for different numbers of layers and neurons. Models with 2 layers outperformed the other models, and the l2n20 model obtained a score of 261, the highest among all models. A score of 261 means that there were 261 cases in which l2n20 was both more accurate and more computationally efficient than another configuration. In the case of l2n10, which involves the smallest number of computations, there were 167 cases in which the model performed better. The score decreased as the number of computations increased. Therefore, the proposed AMMS was implemented with the l2n10 model based on the model evaluation results.

    Figure 7: Comparison of the number of computations

    4.2 Evaluation of AMMS

    We implemented AMMS on an Android smartphone to evaluate the performance of the proposed system. The launch time and characteristics of mobile applications were collected and analyzed. The device used for the performance evaluation was a Google Nexus 6P smartphone [30] with a Snapdragon 810 processor, as shown in Tab. 6.

    Table 6: Target device specification

    Algorithm 5: Determination of application launch type
    1: procedure getLaunchType(ui_t)
    2:   type ← WarmLaunch
    3:   time ← ui_t's launch time from the Android framework
    4:   if time is null then
    5:     type ← HotLaunch
    6:   else if ui_t's AppInfo.pidlist is empty then
    7:     type ← ColdLaunch
    8:   end if
    9:   return type
    10: end procedure

    There are three types of application launches: cold launch, warm launch, and hot launch [31]. A hot launch occurs when all the application's processes have already been created and no new ones need to be created to execute the application. In a warm launch, some processes have been created, but additional ones need to be created to execute the application. In a cold launch, no processes have been created yet, so all the necessary processes need to be created. The performance measurement data were randomly selected from the actual application usage data collected by LiveLab Research [28]. The total number of applications in the application usage history was 35 (see Tab. 7). Ten-fold cross validation was performed with 90% (4500) of the 5000 application usage records used for training and 10% (500) used for validation.

    Android logs the time interval from application launch to application screen rendering, i.e., it provides cold and warm launch information. The time is not logged for hot launches because the process is already created and the application is already rendered. Therefore, we compared the proposed and existing methods in terms of the application launch time information provided by Android to measure the number of launches of each type. Algorithm 5 describes the manner in which the application launch type is determined. A hot launch is identified when the Android framework does not provide the launch time of an application, as shown in lines 3-5. If the framework provides the launch time but no application data is left in the main memory, a cold launch is identified, as shown in lines 6-7. Otherwise, a warm launch is identified. The existing memory management system is labeled Android+ZRAM+LRU to indicate that the system is built from the source code of the Android platform and its kernel, and the proposed memory management system is labeled Android+ZRAM+AMMS.

    In order to evaluate application launch performance, the validation dataset of 500 application records was executed in order, and the number of hot, warm, and cold launch occurrences was measured for the existing and proposed systems. Tab. 8 shows the average number of launches of each type. The average number of cold launches was 76.4 for Android+ZRAM+AMMS, which was approximately 10% lower than the average of 85 for Android+ZRAM+LRU. Moreover, Android+ZRAM+AMMS reduced the average number of warm launches, in which application data and processes remain in the main memory, by approximately 11% (from 14.4 to 12.8) in comparison with Android+ZRAM+LRU. These results indicate that our proposed memory management system predicts the application likely to be launched next more accurately than the existing system.

    Table 8: Average number of launches for each type

    Next, we measured the application launch time, which is the time required to completely load an application, create its processes and activities, and render the initial screen; we evaluated how this measure is affected by the application prediction accuracy of the existing and proposed systems. The application launch time was measured using the measurement tool provided by the Android framework [30]. Since the measurement tool only provides the launch time for cold and warm launches, and hot launches correspond to data and processes already cached in the main memory, the hot launch time was not measured in this study and was treated as 0 ms.

    Tab. 9 shows the cumulative average launch times for cold and warm launches obtained using the aforementioned validation dataset with 500 application records. The cumulative average launch time for warm launches decreased by about 11% (from 7095.8 ms for the existing system to 6302.8 ms for the proposed system). In addition, the launch time for cold launches decreased by about 18% (from 106128.6 ms for the existing system to 86810 ms for the proposed system). The total cumulative launch time (including warm and cold launches) for the proposed system was approximately 17% less than that for the existing system. Since warm and cold launches take more time than hot launches [30], a decrease in the launch time for cold and warm launches implies a decrease in the total launch time.
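    Using the figures in Tab. 9, and counting hot launches as 0 ms, the overall reduction works out as

    \frac{(106128.6 + 7095.8) - (86810 + 6302.8)}{106128.6 + 7095.8} = \frac{20111.6}{113224.4} \approx 0.178,

    i.e., roughly 17-18%, consistent with the approximately 17% quoted above.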

    Table 9:Average launch time in milliseconds

    The results shown in Tab. 9 are graphically compared in Fig. 8a. An increase in the cumulative launch time implies that warm or cold launches have occurred; a flat segment implies that hot launches have occurred. Fig. 8b depicts the difference in cumulative average launch times between the existing and proposed systems. In Fig. 8b, there are sections where the cumulative launch time increased rapidly, e.g., around the launch of the 100th or the 200th application. According to the results, the application launch time improved in the sections where warm and cold launches occurred frequently. Applications that were likely to be launched again remained in memory as long as possible under AMMS, so warm and cold launches were turned into hot launches and the cumulative launch time decreased. However, there are sections where the cumulative launch time for AMMS was higher than that for the existing system. Since the memory reclamation policy of AMMS differs from the existing system's LRU policy, a cold launch occurred in some sections where a hot launch occurred in the existing system. The increased launch time in these sections is negligible compared with the improvements in the other sections.

    Figure 8: Comparison of cumulative launch time performances

    In summary, the proposed system reclaims the memory used by the applications with the lowest launch probability in a low-memory situation, thus increasing the launch speed of more frequently used applications. Consequently, the proposed system utilizes the main memory more efficiently than the existing system.

    5 Conclusion

    As smartphone users install and use performance-demanding applications, a significant amount of memory is needed. Techniques such as ZRAM swapping or swapping to storage were developed to accommodate main memory demands; however, these techniques have limitations, such as compression and decompression overhead for ZRAM and NAND flash memory wear-out problems for swapping to storage, which also render them slower than the main memory. Moreover, the performance of the existing LRU-based application memory management decreases as the number of applications in use increases.

    In this paper, we proposed a memory management system, AMMS, that utilizes the main memory more efficiently. The proposed system predicts application launch probability more accurately by collecting and analyzing the user's application usage information. Frequently used applications reside in memory for as long as possible, thereby reducing the average launch time of applications. Performance evaluation using actual smartphone usage records showed that the proposed system increases the number of launches from main memory (hot launches) by approximately 10%, while reducing the average launch time of applications by approximately 17%. This indicates that the proposed system is superior to the existing system at resuming frequently used applications from the main memory. In future work, we plan to study an online learning version of the prediction model and a generalized prediction model using application categories to improve prediction performance.

    Funding Statement: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean Government (MSIT) under Grant 2020R1A2C1005265.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
