A Simple Epidemic Routing Scenario

Today, we will play around a bit with the ONE simulator, specifically with the Epidemic routing protocol. We will simulate two scenarios and look at the results. A detailed analysis of the results is left out for the time being.


Here are a few common parameters shared by both simulations.
  • Group.movementModel = RandomWaypoint
  • Group.msgTtl = 300 (5 hours)
  • MovementModel.worldSize = 450, 340
  • Scenario.endTime = 14400 (4 hours)

Stats are collected using the MessageStatsReport module from a single simulation run.
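In a ONE settings file, these common parameters look as follows; a minimal sketch (the scenario name is a placeholder; the key names follow the simulator's default_settings.txt conventions):

```
Scenario.name = epidemic-test
Scenario.endTime = 14400
Group.movementModel = RandomWaypoint
Group.msgTtl = 300
MovementModel.worldSize = 450, 340
Report.nrofReports = 1
Report.report1 = MessageStatsReport
```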

Scenario #1 

In the first scenario, we vary the density of the nodes for a single speed range. In particular, we consider:
  • Epidemic routing with 10, 50, 100, 150, 200, 250, and 300 nodes
  • Speed range: 0.5–1.5 m/s
It may be noted here that node density in a simulation can be controlled in different ways, e.g., by varying
  1. The number of nodes, keeping the simulation geography (usually rectangular) size constant
  2. The geography size, keeping the number of nodes constant
  3. Both the number of nodes and the size of the geography
In this case, we take the first approach. Selected results from this simulation scenario are presented below.
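With the first approach, the node-count sweep can be expressed using the ONE's run-index notation, so that a single batch invocation covers all seven densities; a sketch (the rest of the configuration is assumed to be elsewhere in the same settings file):

```
Group.router = EpidemicRouter
Group.nrofHosts = [10; 50; 100; 150; 200; 250; 300]
Group.speed = 0.5, 1.5
```

Running the simulator in batch mode, e.g., `./one.sh -b 7 settings.txt`, then executes one run per value of Group.nrofHosts.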

At first, we look at the average delivery latency (in seconds) of the messages as a function of the number of nodes. As can be observed from the data and the adjoining plot, the latency decreases with increasing node density. The reason is that with more nodes in the network, a message gets more forwarding opportunities and is therefore delivered more quickly. The second set of data and the corresponding plot indicate that the delivery ratio also benefits from density, increasing sharply at first and peaking at around 100 nodes in this run.

Average latency

Nodes   Avg. latency (s)
10      1032.6966
50      887.4016
100     773.9650
150     653.5246
200     610.9147
250     617.7339
300     573.6033


Delivery probability

Nodes   Delivery probability
10      0.1813
50      0.2587
100     0.2912
150     0.2485
200     0.2627
250     0.2525
300     0.2464

Scenario #2

In the second scenario, we consider a constant number of nodes but vary their speeds over different ranges, as enumerated below:
  • Speed ranges (m/s): 0.5–1.5, 1.5–2.5, 2.5–3.5, 3.5–4.5, 4.5–5.5, 5.5–6.5, 6.5–7.5
  • Number of nodes: 100
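This speed sweep, too, can be written with the run-index notation, one speed range per batch run; a sketch (again assuming the common settings appear elsewhere in the file):

```
Group.router = EpidemicRouter
Group.nrofHosts = 100
Group.speed = [0.5, 1.5; 1.5, 2.5; 2.5, 3.5; 3.5, 4.5; 4.5, 5.5; 5.5, 6.5; 6.5, 7.5]
```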
Average latency

Speed range (m/s)   Avg. latency (s)
0.5–1.5             773.9650
1.5–2.5             440.8467
2.5–3.5             453.5573
3.5–4.5             582.5417
4.5–5.5             678.5824
5.5–6.5             899.6848
6.5–7.5             985.1136

Delivery probability

Speed range (m/s)   Delivery probability
0.5–1.5             0.2912
1.5–2.5             0.3055
2.5–3.5             0.2668
3.5–4.5             0.2444
4.5–5.5             0.1853
5.5–6.5             0.1874
6.5–7.5             0.1792

The corresponding results -- average delivery latency and delivery probability -- are shown above.
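For reference, the two metrics reported above are computed as follows. MessageStatsReport provides them directly; this small Python sketch, with a hypothetical record format, merely restates the definitions:

```python
def message_stats(created, delivered):
    """Delivery probability and average latency from a message log.

    created:   IDs of all messages created during the simulation
    delivered: delivered message ID -> (creation_time, delivery_time) in seconds
    """
    # Delivery probability: fraction of created messages that were delivered
    ratio = len(delivered) / len(created)
    # Average latency: mean of (delivery time - creation time) over delivered messages
    latencies = [d - c for (c, d) in delivered.values()]
    avg_latency = sum(latencies) / len(latencies) if latencies else 0.0
    return ratio, avg_latency

# Toy example: three messages created, two delivered
ratio, latency = message_stats(
    created=["M1", "M2", "M3"],
    delivered={"M1": (10.0, 110.0), "M3": (40.0, 240.0)},
)
# ratio is 2/3, latency is 150.0 seconds
```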

What can you infer from these results?

Revision history:

12 Feb 2014: Expanded the discussion


