
A Simple Epidemic Routing Scenario


Today, we will play around a bit with the ONE simulator, specifically with the Epidemic routing protocol. We will simulate two scenarios and look at the results. A detailed analysis of the results is left out for the time being.

Parameters

Here are a few parameters common to both simulations.
  • Group.movementModel = RandomWaypoint
  • Group.msgTtl = 300 (5 hours)
  • MovementModel.worldSize = 450, 340
  • Scenario.endTime = 14400 (4 hours)

Statistics are collected using the MessageStatsReport report module from a single simulation run.
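
For reference, the parameters above map directly onto the simulator's settings file. Below is a minimal sketch of such a file, assuming a single host group running the Epidemic router; only the settings relevant to this post are shown (a complete configuration would also need interface, group size, and message generation settings).

  # Common settings (sketch; not the actual settings file used for the post)
  Scenario.endTime = 14400              # 4 hours, in seconds
  MovementModel.worldSize = 450, 340    # metres

  Group.movementModel = RandomWaypoint
  Group.router = EpidemicRouter
  Group.msgTtl = 300                    # TTL in minutes (5 hours)

  # Collect message statistics
  Report.nrofReports = 1
  Report.report1 = MessageStatsReport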



Scenario #1 

In the first scenario, we vary the density of the nodes for a single speed range. In particular, we consider:
  • Epidemic routing with 10, 50, 100, 150, 200, 250, and 300 nodes
  • Speed range: 0.5 to 1.5 m/s
It may be noted here that node density in a simulation can be controlled in different ways, e.g., by varying
  1. The number of nodes keeping the simulation geography (usually rectangular) size constant
  2. The geography size keeping the number of nodes constant
  3. Both the number of nodes and the size of the geography
In this case, we take the first approach. Certain results from this simulation scenario are presented below.
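
One convenient way to realise such a sweep in the ONE simulator is its run indexing feature: values listed in square brackets, separated by semicolons, are picked one per run when the simulator is started in batch mode. The snippet below is a sketch of how Scenario #1 could be configured this way; it is an assumption about the setup, not the settings file actually used here.

  # Scenario #1 sketch: vary the number of nodes, fixed speed range
  Scenario.nrofHostGroups = 1
  Group.nrofHosts = [10; 50; 100; 150; 200; 250; 300]
  Group.speed = 0.5, 1.5

The seven runs could then be launched in batch mode, e.g., ./one.sh -b 7 scenario1.txt (the file name is hypothetical).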

First, we look at the average delivery latency (in seconds) of the messages as a function of the number of nodes. As can be observed from the data below, the latency decreases with increasing node density. This is because, with more nodes around, a message gets more opportunities to be forwarded and is therefore delivered more quickly. The second set of data indicates that the delivery ratio, too, is generally higher at larger node densities, although it peaks at around 100 nodes in this particular run.

 
Average latency (in seconds) vs. number of nodes:

  Nodes   Avg. latency (s)
  10      1032.6966
  50      887.4016
  100     773.9650
  150     653.5246
  200     610.9147
  250     617.7339
  300     573.6033

 

Delivery probability vs. number of nodes:

  Nodes   Delivery probability
  10      0.1813
  50      0.2587
  100     0.2912
  150     0.2485
  200     0.2627
  250     0.2525
  300     0.2464



Scenario #2

In the second scenario, we consider a constant number of nodes, but vary their speeds over different ranges, as enumerated below:
  • Speed ranges (m/s): 0.5-1.5, 1.5-2.5, 2.5-3.5, 3.5-4.5, 4.5-5.5, 5.5-6.5, 6.5-7.5
  • Number of nodes: 100
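
The speed sweep can be configured in the same way as before, again using run indexing. The following is only a sketch under the same assumptions (single host group, hypothetical file name):

  # Scenario #2 sketch: fixed number of nodes, vary the speed range
  Scenario.nrofHostGroups = 1
  Group.nrofHosts = 100
  Group.speed = [0.5, 1.5; 1.5, 2.5; 2.5, 3.5; 3.5, 4.5; 4.5, 5.5; 5.5, 6.5; 6.5, 7.5]

As before, the seven runs can be executed in batch mode, e.g., ./one.sh -b 7 scenario2.txt.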
 
Average latency (in seconds) vs. speed range:

  Speed range (m/s)   Avg. latency (s)
  0.5-1.5             773.9650
  1.5-2.5             440.8467
  2.5-3.5             453.5573
  3.5-4.5             582.5417
  4.5-5.5             678.5824
  5.5-6.5             899.6848
  6.5-7.5             985.1136
 


Delivery probability vs. speed range:

  Speed range (m/s)   Delivery probability
  0.5-1.5             0.2912
  1.5-2.5             0.3055
  2.5-3.5             0.2668
  3.5-4.5             0.2444
  4.5-5.5             0.1853
  5.5-6.5             0.1874
  6.5-7.5             0.1792



The corresponding results -- average delivery latency and delivery probability -- are shown above.

What can you infer from these results?


Revision history:

12 Feb 2014: Expanded the discussion

