Jiansheng Yao, Chunguang Ma*, Haitao Yu, Yanling Liu, Qi Yuan³
1 College of Computer Science and Technology, Harbin Engineering University, Harbin 150001, China
2 College of Tourism, Guilin University of Technology, Guilin 541006, China
3 College of Communication and Electronic Engineering, Qiqihar University, Qiqihar 161006, China
* Corresponding author, email: machunguang@hrbeu.edu.cn
Opportunistic networks [1] have emerged as one of the most interesting evolutions of MANETs. In MANETs, a continuous link between the message source and destination must be established before a message is forwarded. Unfortunately, in some real MANET scenarios, an end-to-end path may never exist due to node mobility, intrinsic wireless link instability, etc. To address this challenging scenario, opportunistic networks [2,3] turn the obstacle, i.e., mobility, into an opportunity through a new store-carry-and-forward paradigm, which allows nodes to carry messages with them while they move until a suitable next hop appears. Therefore, opportunistic networks enable communication between mobile nodes even if a complete link connecting them never exists, which has important significance for the Internet of Things and pervasive computing [4,5].
In opportunistic networks, efficient buffer management policies are necessary [6]. First, in order to carry messages across disconnected parts of the network, a node needs to store messages in its permanent buffer for long periods of time, until an appropriate forwarding opportunity arises. Second, to further increase the delivery probability, multi-copy transmission technologies are often adopted in opportunistic networks. The combination of long-term storage and message replication therefore results in high storage and bandwidth overhead.
Currently, some routing protocols have taken advantage of scheduling and dropping policies to improve their performance [7,8,9]. However, more and more realistic applications in opportunistic networks, such as diffusing advertisements, propagating news and disseminating traffic information, have no specific destination addresses in their data packets. In fact, data dissemination, which completely decouples data producers and consumers in time, space and control flow, is more suitable for content-centric, intermittently connected networks [10,11]. Therefore, it is more meaningful to design buffer management policies for data dissemination.
In this paper, a utility-based buffer management (UBM) policy is proposed for data dissemination in opportunistic networks. In UBM, we first compute the utility value of caching data based on node interest and data delivery probability, and then, according to this utility, design the overall buffer management policies, including the caching policy, the passive and proactive dropping policies, and the scheduling policy. Specifically, (a) the caching policy lets receivers decide which messages should be cached, so as to improve caching efficiency; (b) the passive dropping policy decides which messages should be discarded when node buffers overflow; (c) under the proactive dropping policy, a message is dropped to vacate buffer space when its utility falls below a threshold; (d) a scheduling policy for senders is indirectly implemented by receivers via the forwarding priority of messages from senders. Simulation results show that, compared with some existing classical buffer management policies, UBM achieves a higher delivery ratio and lower delivery latency at a lower network cost.
The rest of this paper is organized as follows. Section Ⅱ describes the data dissemination model in opportunistic networks. In Section Ⅲ, we propose the utility-based buffer management (UBM) policy based on the model in Section Ⅱ. Section Ⅳ evaluates the performance of UBM via simulation.
Definition 1. Channel is a predefined theme, denoted as C. Let Ω denote the set of channels, namely Ω = {C1, C2, ..., CS}, where S is the number of predefined channels.
Definition 2. Data attribution describes the correlation between data content and the predefined channels. The attribute of data m is denoted by a vector Am = [p1, p2, ..., pS], where pi (1 ≤ i ≤ S, 0 ≤ pi ≤ 1) is a probability value that depicts the relativity of data m and channel Ci.
Definition 3. Node interest describes how much a node is interested in the predefined channels. The interest of node v is denoted by a vector Iv = [p1, p2, ..., pS], where pi (1 ≤ i ≤ S, 0 ≤ pi ≤ 1) describes the degree to which node v is interested in channel Ci.
Definition 4. Interest degree describes how much node v is interested in a given piece of data. In this paper, we compute it with a similarity formula between the node interest vector Iv and the data attribute vector Am.
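The concrete similarity formula is not reproduced in this excerpt. As an illustration only, a minimal sketch using cosine similarity between Iv and Am (the choice of cosine similarity and the function name are our assumptions, not necessarily the formula used in the paper):

```python
import math

def interest_degree(I_v, A_m):
    """Illustrative interest degree: cosine similarity between a node's
    interest vector I_v and a data attribute vector A_m, both of length S
    (one probability entry per predefined channel)."""
    dot = sum(i * a for i, a in zip(I_v, A_m))
    norm_i = math.sqrt(sum(i * i for i in I_v))
    norm_a = math.sqrt(sum(a * a for a in A_m))
    if norm_i == 0 or norm_a == 0:
        return 0.0
    return dot / (norm_i * norm_a)

# A node mostly interested in channel C1 meets data mostly about C1:
print(interest_degree([0.9, 0.1, 0.0], [0.8, 0.2, 0.0]))  # close to 1
```

Any similarity measure between the two vectors (e.g., cosine or weighted inner product) fits the definitions above; higher values mean the data matches the node's interests more closely.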
In the system model, we assume that nodes know the predefined channel set. Publishers generate data but do not need to know their subscribers; they only create the data attribute for their data and inject them into the network. Subscribers likewise do not know where the data they are interested in resides, and only propagate their interests in the network. Relays make caching decisions and forward data toward subscribers according to node interest and data attribute.
In this section, we first propose the utility according to node interest and data delivery probability. Second, we give the calculation method of the metrics used by the utility. Finally, we propose a buffer management policy according to the utility.
Most existing interest propagation methods are mainly based on a gossip method. Despite its simplicity, gossip wastes network resources and storage space and, more importantly, is of little help for making caching decisions. As shown in Fig. 2, node u frequently encounters nodes w and v, but nodes w and v rarely meet. If node w caches data m in which node v is interested, node u can decide to fetch message m from node w for node v. However, how can node w know that it should cache data m for node v and forward data m via node u, if node w does not know that node u frequently meets node v?
Fig. 1 The system model of data dissemination in opportunistic networks
Fig. 2 An illustration of interest propagation
As shown in Fig. 2, after receiving the aggregated interest of node u, node w can also cache data for node v when it caches data for node u. In this aggregation method of propagating interest, node interests are only propagated among frequently contacting nodes within one hop. This decreases the network cost and storage space while conveying more effective information in favor of designing efficient buffer strategies.
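The one-hop aggregation described above can be sketched as follows. This is a minimal illustration, assuming aggregation by element-wise maximum over the interest vectors of frequently contacted neighbors (the aggregation operator and names are our assumptions):

```python
def aggregate_interest(own_interest, neighbor_interests):
    """One-hop interest aggregation (sketch): combine a node's own interest
    vector with those of its frequently contacted neighbors, taking the
    element-wise maximum so the node will also cache data for them."""
    agg = list(own_interest)
    for iv in neighbor_interests:
        agg = [max(a, b) for a, b in zip(agg, iv)]
    return agg

# Node u aggregates its own interest with frequent contacts w and v:
print(aggregate_interest([0.2, 0.7], [[0.9, 0.1], [0.0, 0.4]]))  # [0.9, 0.7]
```

Because only frequent one-hop contacts contribute, the aggregated vector stays small while still telling a relay which data its neighborhood values.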
Although the aggregated interest can help nodes cache data of interest for other nodes, it may still fail to improve the performance of data dissemination if the delivery of that data fails. To further improve the performance, we introduce the data delivery probability.
In our early work [12], we gave a method for computing the data delivery probability for routing scenarios in opportunistic networks. In this section, we briefly recall its key parts for the completeness of this paper.
1) Contact Probability
The process of node contact follows a Poisson distribution [5,6,19], so the contact probability of nodes u and v within time t is

P_{(u,v)}(t) = 1 − e^{−λ_{uv} t},

where λ_{uv} is the average contact rate between nodes u and v.
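Under the Poisson contact model, this probability is straightforward to compute; a minimal sketch (function name and example rate are our own):

```python
import math

def contact_probability(lambda_uv, t):
    """P(u and v meet within time t) = 1 - exp(-lambda_uv * t) under the
    Poisson contact model, where lambda_uv is the pairwise contact rate
    (average number of contacts per unit time)."""
    return 1.0 - math.exp(-lambda_uv * t)

# With an average of 0.5 contacts per hour, probability of meeting within 2 hours:
print(contact_probability(0.5, 2.0))  # ~0.632
```

The rate λ_{uv} is typically estimated from the contact history of the node pair, e.g., the number of past encounters divided by the observation window.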
2) Caching Probability
3) Dropping Probability
Based on the utility value of caching data in Section A, we implement the overall buffer management policies as follows: the caching policy, the scheduling policy, and the passive and proactive dropping policies.
1) Caching policy
Under the caching policy, in order to optimize caching efficiency, receivers decide which messages to download from the node they meet according to the messages' utility values. Specifically, we give the protocol for exchanging data when nodes u and v meet at time t, taking node v as an example:
· Nodes exchange data according to the final requested data list for their contact node, in order, until the link between the two nodes disconnects or one of the nodes has no data to exchange.
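The receiver-driven caching decision can be sketched as follows. This is an illustrative sketch only, assuming unit-size messages and a simple greedy ranking by utility (function and parameter names are our assumptions):

```python
def build_request_list(available, utilities, buffer_free, msg_size=1):
    """Receiver-driven caching decision (sketch): from the data advertised
    by the contacted node, request messages in decreasing utility order
    until the free buffer space is exhausted."""
    ranked = sorted(available, key=lambda m: utilities[m], reverse=True)
    request, used = [], 0
    for m in ranked:
        if used + msg_size > buffer_free:
            break
        request.append(m)
        used += msg_size
    return request

utilities = {"m1": 0.9, "m2": 0.2, "m3": 0.6}
# Receiver with room for 2 messages requests the 2 highest-utility ones:
print(build_request_list(["m1", "m2", "m3"], utilities, buffer_free=2))  # ['m1', 'm3']
```

The returned list is then consumed in order during the contact, so the highest-utility data is transferred first even if the link breaks early.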
2) Scheduling policy
In order to design a data scheduling policy driven by receivers, receivers need to know the forwarding priority of messages of senders. Thus, we give the following concepts.
So that node v knows the forwarding priority with which node u forwards messages, node u provides, for each message, the pair <mid, P>, where mid is the identification of message m and P (P ∈ {0, 1}) is its forwarding priority. If P is equal to 1, it is urgent to forward the message; otherwise it is not urgent.
After learning the forwarding priority from the sender, node v adjusts the order in which messages are to be forwarded: node v forwards message m earlier when its priority is higher.
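The receiver-side ordering can be sketched as follows: a minimal illustration, assuming ties in the sender's P flag are broken by the caching utility (the tie-breaking rule and names are our assumptions):

```python
def forwarding_order(messages, priority):
    """Receiver-side scheduling (sketch): messages flagged P = 1 by the
    sender are forwarded first; within the same flag, higher caching
    utility comes first."""
    return sorted(messages,
                  key=lambda m: (-priority.get(m["mid"], 0), -m["utility"]))

msgs = [{"mid": 1, "utility": 0.3},
        {"mid": 2, "utility": 0.8},
        {"mid": 3, "utility": 0.5}]
priority = {1: 1, 2: 0, 3: 1}   # P in {0, 1}: 1 means urgent to forward
print([m["mid"] for m in forwarding_order(msgs, priority)])  # [3, 1, 2]
```

In this way the sender's scheduling policy is realized indirectly: the receiver pulls urgent messages first without the sender re-ordering its own queue.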
3) Dropping policy
In existing utility-based buffer management policies, the utility value is constant. In fact, the purpose of caching messages is to forward them: once a message has been forwarded, the utility of continuing to cache it also decreases.
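The dynamic utility with proactive dropping can be sketched as follows. This is an illustration under our own assumptions: a multiplicative decay on each forwarding and a fixed drop threshold (the decay factor and threshold values are hypothetical, not taken from the paper):

```python
def update_utility(utility, forwarded, decay=0.5, threshold=0.1):
    """Dynamic caching utility (sketch): each time a cached message is
    forwarded, its utility decays; once it falls below the threshold the
    message is dropped proactively to free buffer space.
    Returns (new_utility, should_drop)."""
    if forwarded:
        utility *= decay
    return utility, utility < threshold

# A message starting at utility 0.8 is forwarded repeatedly until dropped:
u, drop = 0.8, False
while not drop:
    u, drop = update_utility(u, forwarded=True)
print(round(u, 3), drop)  # 0.05 True
```

This gives each message a bounded buffer lifetime proportional to how often it is forwarded, vacating space for fresher, higher-utility data.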
In this section, we evaluate the performance of UBM described in Section III. To this aim,we developed a custom simulator that is extended from the ONE [13]. We first describe our simulation settings including simulation models and performance parameters, and then compare the analytical results we obtain with the simulation results.
In order to perform a realistic evaluation of our work, we use real-world mobility traces, Infocom2006 [14], in our simulation experiments. We removed invalid records whose duration is 0 seconds or that contain external nodes, and merged records whose durations overlap.
Table I The symbols and definitions of metrics
In the simulation, we compare the protocols based on three metrics: delivery rate, delay and network cost. The specific definitions and symbols are shown in Table I.
The calculations of metrics are shown as follows:
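The concrete formulas are not reproduced in this excerpt; as an illustration, the three metrics are commonly computed as below (a sketch under our own assumptions about the exact definitions):

```python
def delivery_rate(delivered, generated):
    """Fraction of generated (subscribed) data that reached interested nodes."""
    return delivered / generated if generated else 0.0

def average_delay(delays):
    """Mean time between a data item's creation and its delivery."""
    return sum(delays) / len(delays) if delays else 0.0

def network_cost(forwarded, delivered):
    """Forwarding overhead: total transmissions per delivered message."""
    return forwarded / delivered if delivered else float("inf")

# 80 of 100 items delivered, with delays 2 h and 4 h, via 300 transmissions:
print(delivery_rate(80, 100), average_delay([2.0, 4.0]), network_cost(300, 80))
```

Higher delivery rate with lower delay and cost indicates a more efficient buffer management policy.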
To verify the performance of UBM, we implement a data dissemination protocol based on Epidemic routing [15], and separately employ different buffer management strategies [6-9]: first in first out (FIFO), last in first out (LIFO), drop oldest (DropO), drop youngest (DropY), drop random (DropR), drop least popular (LestP), drop most popular (MostP) and our UBM. We also give the performance curves when the buffer is unlimited (Unlimit) for reference. In order to compare the performance of the buffer management policies under different buffer sizes, we simulate the performance curves when the buffer size is 4 and 8. The results are shown in Fig. 3 and Fig. 4.
The performance of ER changes only slightly with the different cache strategies in Fig. 3 (TTL ≤ 0.2) and Fig. 4 (TTL ≤ 0.4). The reason is that when the TTL is small, the buffer size can satisfy the demand, so the strategies have little impact on ER.
But when the TTL is greater than 0.2 days (in Fig. 3) and 0.4 days (in Fig. 4), the cache strategies show different performance. This is because, as the TTL increases, the number of messages in the network also increases and the buffers can no longer meet the demand. The performance of the traditional buffer management strategies drops sharply as the TTL keeps increasing, which shows that congestion has occurred.
As shown in Fig. 3 and Fig. 4, the UBM policy proposed in this paper performs best among the buffer management strategies, coming close to ER without cache constraints. This is because UBM implements the overall cache management policies, including the caching policy, the passive and proactive dropping policies, and the scheduling policy, which together improve its cache efficiency.
Fig. 3 The comparison of performance when buffer size is equal to 4
Among the other cache strategies, the simplest first in first out (FIFO) strategy performs best, while the two worst cache policies are dropping the least popular data (LestP) and dropping the most popular data (MostP). This is because the first-arrived data may already have been forwarded multiple times by caching nodes, so backups exist in other nodes' buffers. In fact, FIFO also achieves a dynamic cache utility to a certain extent: the first cached data is forwarded most frequently, so its cache utility is reduced and it is then discarded preferentially. On the contrary, dropping either the most popular or the least popular data results in a whole class of data losing its forwarding opportunities, which decreases the delivery rate. By avoiding these extremes, DropR, which drops data at random, obtains better performance than LestP and MostP.
Fig. 4 The comparison of performance when buffer size is equal to 8
In this paper, we design a utility-based buffer management policy for data dissemination in opportunistic networks. According to node interest and data delivery probability, the policy implements a caching policy, passive and proactive dropping policies, and a scheduling policy. Simulation results on real-world mobility traces show that UBM, compared with some classical passive dropping policies, greatly improves the network performance of data dissemination.
This work is supported by the National Natural Science Fund of China under Grant No. 61472097, by the Education Ministry Doctoral Research Foundation of China (20132304110017), and by the International Exchange Program of Harbin Engineering University for Innovation-oriented Talents Cultivation.
[1] Trifunovic S, Kouyoumdjieva S T, Distl B, et al. “A Decade of Research in Opportunistic Networks: Challenges, Relevance, and Future Directions”[J]. IEEE Communications Magazine, vol.55, no.1, pp 168-173, 2017.
[2] Pelusi L, Passarella A, Conti M. “Opportunistic networking: data forwarding in disconnected mobile ad hoc networks”[J]. IEEE Communications Magazine, vol.44, no.11, 2006.
[3] Xiong Y P, Sun L M, Niu J W, et al. “Opportunistic networks”[J]. Journal of Software, vol.20, no.1, pp 124-137, 2009.
[4] Ma H D, Yuan P Y, Zhao D. “Research progress on routing problem in mobile opportunistic networks”[J]. Journal of Software, vol.26, no.3, pp 600-616, 2015.
[5] Mota V F S, Cunha F D, Macedo D F, Nogueira J M S, Loureiro A A F. “Protocols, mobility models and tools in opportunistic networks: a survey”[J]. Computer Communications, vol.48, no.8, pp 5-19, 2014.
[6] Sati S, Probst C, Graffi K. “Analysis of Buffer Management Policies for Opportunistic Networks”[C]. IEEE 25th International Conference on Computer Communication and Networks (ICCCN), pp 1-8, 2016.
[7] Mansuri S, Shah H, Kosta Y. “A Survey on Buffer Management Policies in Delay Tolerant Networks”[J]. Wireless Communication, vol.4, no.16, pp 940-946, 2012.
[8] Davis J A, Fagg A H, Levine B N. “Wearable Computers as Packet Transport Mechanisms in Highly-Partitioned Ad-hoc Networks”[C]. IEEE International Symposium on Wearable Computing (ISWC), pp 141-148, 2001.
[9] Lindgren A, Phanse K S. “Evaluation of Queueing Policies and Forwarding Strategies for Routing in Intermittently Connected Networks”[C]. IEEE International Conference on Communication System Software and Middleware (Comsware), pp 1-10, 2006.
[10] Guo D, Cheng G, Zhang Y, et al. “Data Distribution Mechanism over Opportunistic Networks with Limited Epidemic”[J]. China Communications, vol.12, no.6, pp 154-163, 2015.
[11] Cc S, Raychoudhury V, Marfia G, et al. “A survey of routing and data dissemination in Delay Tolerant Networks”[J]. Journal of Network & Computer Applications, vol.67, no.C, pp 128-146, 2016.
[12] Yao J S, Ma C G, Yuan Q. “Utility-based barter trade incentive scheme in opportunistic network”[J]. Journal on Communications, vol.37, no.9, pp 102-110, 2016.
[13] Keränen A, Ott J, Kärkkäinen T. “The ONE simulator for DTN protocol evaluation”[C]. 2nd International Conference on Simulation Tools and Techniques (SIMUTools), pp 55-63, 2009.
[14] Hui P, Chaintreau A, Scott J, et al. “Pocket switched networks and human mobility in conference environments”[C]. ACM SIGCOMM Workshop on Delay-Tolerant Networking (WDTN), pp 244-251, 2005.
[15] Vahdat A, Becker D. “Epidemic routing for partially-connected ad hoc networks”. Technical Report CS-200006, Duke University, 2000.