In OMNeT++/INET, under sensornetwork/omnetpp.ini, the following configuration is given, where the packet arrival rate and the rate at which packets are transmitted to the server are treated as the same parameter (sendInterval).
*.sensor*.app[0].sendInterval = 1s
*.sensor*.app[0].startTime = exponential(1s)
*.sensor*.app[0].messageLength = 10Byte
But I need to set the following:
A random packet arrival rate for each node.
A Poisson packet arrival process, where the arrival rate and the rate at which packets are transmitted to the server are two separate parameters.
Would anyone please suggest how to do this?
One cannot directly control the arrival rate; only the sending rate can be controlled. The arrival rate depends on many factors (e.g. load of links, other traffic in nodes, route selection, etc.).
To set a random sending rate, write for example:
*.sensor*.app[0].sendInterval = uniform(0.5s, 1.5s)
The available random distributions are listed in the OMNeT++ Simulation Manual, Chapter 7.4.
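If a Poisson arrival process is what you need, exponentially distributed inter-send times give exactly that, and more specific ini keys let each node use its own mean; a minimal sketch (assuming the sensors are named sensor1, sensor2, ... as the wildcard above suggests):
*.sensor1.app[0].sendInterval = exponential(0.5s)  # mean 2 packets/s for sensor1
*.sensor2.app[0].sendInterval = exponential(1s)    # mean 1 packet/s for sensor2
*.sensor*.app[0].sendInterval = exponential(2s)    # fallback for the remaining sensors
Remember that in omnetpp.ini the first matching entry wins, so the more specific keys must come before the wildcard line.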
I was able to complete the basic tutorial in Veins.
In my simulation, 12 cars broadcast messages to each other. I want to compute the delay associated with every message. I am trying to achieve it in the following manner:
Save the time when the transmission begins and send the packet
...
wsm->setDelayTime(simTime().dbl());
populateWSM(wsm);
sendDelayedDown(wsm, computeAsynchronousSendingTime(1, ChannelType::service));
...
At the Rx side, compute the delay and save it
...
delayVector.record(simTime().dbl()-wsm->getDelayTime());
...
In the picture below you can see the delay w.r.t. node[0]. Two things puzzle me:
Why is the delay in the range of seconds? I would expect it to be in the range of ms.
Why does the delay increase with the simulation time?
Update
I have figured out that since 12 cars broadcast simultaneously, computeAsynchronousSendingTime(1, ChannelType::service) returns a bigger delay for subsequent cars. I can circumvent the issue by using sendDown(wsm). However, in this case not all the messages are delivered, since a car tries to receive a packet while transmitting. So I would like to update the question: how do I simulate the most realistic scenario, with reasonable delay and packet loss?
If somebody comes across a similar issue: computeAsynchronousSendingTime(1, ChannelType::service) returns the absolute simulation time at which a message should be sent. We are interested in a delay, though. Thus, one would have to call sendDelayedDown(wsm, computeAsynchronousSendingTime(1, ChannelType::service) - simTime());
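Put together with the sending code from the question, a minimal sketch of the corrected transmit path looks like this (same calls as above, only the delay argument changes):
wsm->setDelayTime(simTime().dbl());  // timestamp the message at the sender
populateWSM(wsm);
// computeAsynchronousSendingTime() returns an absolute simulation time,
// while sendDelayedDown() expects a relative delay, so subtract simTime():
simtime_t sendAt = computeAsynchronousSendingTime(1, ChannelType::service);
sendDelayedDown(wsm, sendAt - simTime());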
I have set up my environment using OMNeT++, SUMO and Veins in Ubuntu. I want to reduce packet loss in an emergency situation among vehicles and improve packet delivery time and cost. My project is about choosing the suitable processing position among cluster heads (nodes), road side units (RSUs) and the cloud. I want to achieve certain tasks that are needed to implement my Veins project. I have configured 50 nodes and 4 RSUs, set the data rate to about 6 Mbps, and assigned a packet size of up to 2 MB.
Therefore, how can I change the behavior of vehicles (nodes), road side units (RSUs) and the cloud in order to implement the following parameters?
processing rate of clusters (nodes) = 3 Mbps.
processing rate of RSUs = 7 Mbps.
processing rate of cloud = 10 Mbps.
the range of clusters (nodes) = 60 m.
the range of RSU = 120 m.
the range of cloud = 500 m.
If you could help with building these parameters I will appreciate it.
Thank you
If you are talking about the transmission rate, then you can set the bit rate in the ini file (check the Veins example). But if you mean processing delay, then it is usually simulated by scheduling self-messages (check the TicToc example). In terms of transmission range, Veins uses a Free Space Propagation model and the related parameters are set in the ini file, so you can change them to get the required range. Finally, I recommend reading more about Veins and how it deals with the parameters you asked about. There are a lot of answered questions on Stack Overflow about these topics.
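To illustrate the self-message approach to processing delay, here is a minimal sketch (the names ProcessingNode, processingDone, pendingPacket, processingRate and the "out" gate are hypothetical, not part of the Veins API; processingRate would hold the 3/7/10 Mbps values you listed, as a double in bit/s):
// Hold an incoming packet and schedule a self-message that fires when
// "processing" is finished; the delay is the packet size divided by the
// processing rate of the node type (e.g. 7e6 bit/s for an RSU).
void ProcessingNode::handleMessage(cMessage *msg)
{
    if (msg == processingDone) {
        send(pendingPacket, "out");   // processing finished, forward the packet
        pendingPacket = nullptr;
    }
    else {
        // a real implementation would queue packets that arrive while busy
        cPacket *pkt = check_and_cast<cPacket *>(msg);
        pendingPacket = pkt;
        simtime_t delay = pkt->getBitLength() / processingRate;
        scheduleAt(simTime() + delay, processingDone);
    }
}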
Hi, I'm creating an XBee network with 1 coordinator and 20 end nodes transmitting data 8 times per second. (Currently I have just made one of them talk to the coordinator.)
I would like to know how many data packets the coordinator will be able to receive, as I'll be transmitting at a high data rate (20 end nodes × 8 times per second is 160 data packets per second).
Is that feasible? Will I face any problems? What should I be worried about? For that data rate, is there any other protocol I could use?
Thanks
I would say that it isn't feasible. The radio data rate of 802.15.4 networks is 250kbps. Once you're sending that many data packets per second, you'll be getting collisions and retransmissions. It might work if the packets are very small, but you'll still have a lot of overhead for packet headers. If this is a mesh network, you'll be using bandwidth for packet retransmission when a node can't communicate directly with the coordinator.
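As a rough back-of-envelope check (using only the numbers above and ignoring headers, ACKs and CSMA/CA backoff): 250 kbps is about 31,250 bytes of airtime per second, and 31,250 / 160 ≈ 195 bytes per packet, so once protocol overhead and retries are added there is not much room left for payload.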
Is there a reason you need that data frequency? Can the end devices aggregate their data and send a single packet once/second with 8 data samples? Zigbee and 802.15.4 were designed for low data rates and low power.
If you're going to try this out, you'll want to configure the coordinator for at least 230kbps to keep up with the data flow. Do a proof of concept with a single end device (and configure it as a router, since you don't need "sleeping" capability of end devices), and then consider testing with 5 devices sending 4 times as much data (32 packets/second) to see if the coordinator can even keep up with that data flow.
About Storm metrics: I do not understand the relationship between the send queue arrival rate and the receive queue arrival rate.
For example, with acking enabled, if a spout receives one tuple and emits one tuple, will the RQ arrival rate : SQ arrival rate be 1:2?
Also, if the system is not stable, might this equation change?
Spout instances in Storm do not have a receive queue (only a send queue), so I assume you are referring to bolts?
Although it is a little old, this article by Michael Noll gives a good overview of the internal queues within the workers.
To answer your question: the ratio between the queues will not always be 2:1. The disruptor queues report their metrics averaged over the user-configurable topology.builtin.metrics.bucket.size.secs, so this will obscure some of the difference. Also, all metrics are subject to a sample ratio, set by the topology.stats.sample.rate config variable, which by default covers only 20% of transferred tuples; this can also cause the reported numbers to be off.
Also, depending on the code in your bolts, 1 input tuple may produce many output tuples so you would have to take this into account in any ratios you were calculating.
You refer to the stability of an equation in your question. The arrival rate is not based on any queuing theory equation and is simply the number of tuples that are put on the queue in a metric.bucket period divided by the period length in seconds. However, Storm does report a queue sojourn time metric. This is based on a very simple queuing theory equation that is not reliable for unstable queue systems and should be avoided.
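In other words, the reported value is simply arrival_rate = (tuples enqueued during a bucket) / topology.builtin.metrics.bucket.size.secs, with the sampling caveat above applied on top.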
I want to compute an overall delivery ratio as a function of time, using vectors based on OMNeT++ signals. How can I achieve this when there are multiple sources and only one sink?
For example, say that I have 10 mobile nodes that send data to a fixed AP. The delivery ratio is equal to (received/sent packets), but the AP knows only the number of received packets.
I declared the following signals and statistics:
For AP:
@signal[receivedBndl](type="int");
@statistic[receivedBundle](title="ReceivedBundle"; source=receivedBndl; record=count,mean,last,vector);
For Nodes:
@signal[sentBndl](type="int");
@statistic[sentBundle](title="SentBundle"; source=sentBndl; record=count,mean,last,vector);
Is it possible to create another @statistic that computes the delivery ratio as a function of time from these 2 signals?
Thanks,
This is more of a network-wide statistic than something related to a single node, so you have to install your statistic listeners on the top-level network module itself instead of on the actual nodes. OMNeT++ signals propagate up the containment chain, so any signal emitted in a specific node is also delivered to the containing network module. This makes it possible to install the statistics on the network and receive the given signal there, too.
To achieve this, I would rewrite the code to emit the actual sent/received cPacket objects (and not their count as an integer). You can still count the number of packets using the count() function in the statistics.
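A minimal sketch of the emitting side (the member name sentBndlSignal is just an illustration; registration typically happens in initialize()):
// in initialize():
sentBndlSignal = registerSignal("sentBndl");
// wherever a bundle is actually sent:
emit(sentBndlSignal, pkt);   // pkt is the cPacket being sent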
For AP:
@signal[receivedBndl](type=cPacket);
For nodes:
@signal[sentBndl](type=cPacket);
As each actual sent/received packet is now emitted at its sending/receiving module (and seen by every module above it), you can install a statistic in the top-level module and combine the two signals into a single statistic:
@statistic[deliveryratio](source=count(receivedBndl)/count(sentBndl); record=last);
This last line installs two signal listeners on the top-level module, and the statistic recalculates the value every time any module in the network sends or receives a packet.
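For context, a minimal sketch of how this could sit in the top-level network's NED file (the network name SensorNetwork is just a placeholder; adding vector to the record list additionally records the ratio over time, which matches the "as a function of time" requirement):
network SensorNetwork
{
    @statistic[deliveryratio](source=count(receivedBndl)/count(sentBndl); record=last,vector);
    submodules:
        // the AP and the mobile nodes go here
}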