Simulate a Massive Broadcasting Scenario with a Reasonable Delay and Packet Loss - OMNeT++

I was able to complete the basic tutorial in Veins.
In my simulation, 12 cars broadcast messages to each other. I want to compute the delay associated with every message. I am trying to achieve it in the following manner:
Save the time when the transmission begins and send the packet
...
wsm->setDelayTime(simTime().dbl()); // stamp the current simulation time on the message
populateWSM(wsm);
sendDelayedDown(wsm, computeAsynchronousSendingTime(1, ChannelType::service));
...
At the Rx side, compute the delay and save it
...
delayVector.record(simTime().dbl() - wsm->getDelayTime()); // delay = reception time - stamped send time
...
Looking at the recorded delay w.r.t. node[0], two things puzzle me:
Why is the delay in the range of seconds? I would expect it to be in the range of milliseconds.
Why does the delay increase with the simulation time?
Update
I have figured out that since all 12 cars broadcast simultaneously, computeAsynchronousSendingTime(1, ChannelType::service) returns a larger delay for each subsequent car. I can circumvent the issue by using sendDown(wsm). However, in that case not all messages are delivered, since a car cannot receive a packet while it is transmitting. So I would like to update the question: how do I simulate the most realistic scenario, with reasonable delay and packet loss?

If somebody comes across a similar issue: computeAsynchronousSendingTime(1, ChannelType::service) returns the absolute simulation time at which a message should be sent, whereas sendDelayedDown() expects a relative delay. Thus, one has to run sendDelayedDown(wsm, computeAsynchronousSendingTime(1, ChannelType::service) - simTime());
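To make this concrete, here is a minimal sketch of the corrected Tx/Rx pair. It assumes a Veins 5.x application module derived from DemoBaseApplLayer and a message type with a custom delayTime field, as in the question; the class name MyApp and the method sendBroadcast() are illustrative:

// Tx side: compute the absolute sending time once, stamp it on the
// message, and hand sendDelayedDown() only the relative delay.
void MyApp::sendBroadcast()
{
    TraCIDemo11pMessage* wsm = new TraCIDemo11pMessage();
    populateWSM(wsm);
    simtime_t sendTime = computeAsynchronousSendingTime(1, ChannelType::service);
    wsm->setDelayTime(sendTime.dbl()); // stamp the actual send time, not simTime(),
                                       // so the deliberate offset is not counted as delay
    sendDelayedDown(wsm, sendTime - simTime());
}

// Rx side: the delay is the reception time minus the stamped send time.
void MyApp::onWSM(BaseFrame1609_4* frame)
{
    TraCIDemo11pMessage* wsm = check_and_cast<TraCIDemo11pMessage*>(frame);
    delayVector.record(simTime().dbl() - wsm->getDelayTime());
}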

Related

veins how to analyze packet loss rate and delay

I am currently doing research on multi-hop broadcast technology for the Internet of Vehicles. I want to use only Veins (5.0) and SUMO to achieve it, but I have encountered problems:
1. I modified the selection of relay nodes using Veins' example (TraCIDemo11p.cc), but the packet loss rate and delay cannot be counted. So I want to know: can the packet loss rate and delay be counted after only modifying the example, or do I need to use INET? It's been two weeks now; I would really appreciate it if you could help me fix this.
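Not from the original thread, but as a sketch of how this is typically done in plain Veins (no INET needed): record per-message delays with a cOutVector and count sent/received messages in the application module, then derive the loss rate from the totals. The variable names (sentCount, rcvdCount, delayVector) are illustrative:

// In the application module class (e.g. a modified TraCIDemo11p):
//   long sentCount = 0;
//   long rcvdCount = 0;
//   cOutVector delayVector; // one delay value per received message

// Sender side: stamp the message with its creation time and count it.
// cMessage already provides a timestamp field for exactly this purpose.
wsm->setTimestamp(simTime());
sentCount++;
sendDown(wsm);

// Receiver side: record the end-to-end delay of each delivered message.
delayVector.record(simTime() - wsm->getTimestamp());
rcvdCount++;

// At the end of the run, write the counters as scalars; the loss rate can
// then be computed from the sent/received totals across all modules.
void MyApp::finish()
{
    recordScalar("sentCount", sentCount);
    recordScalar("rcvdCount", rcvdCount);
}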

Measure RTT of WebsocketServer: Single Message or bulk of messages?

I recently changed something in the WebsocketServer implementation of one of my projects, that (presumably) decreases the Round-Trip-Time (RTT) of its messages. I want to measure the RTT of both implementations to compare them. For this, I am sending m messages of s bytes from c different connections.
The thing I am wondering now is: should the RTT be measured for each message separately (remembering the sending time of each message and when its response arrives), or should I only remember when the first message was sent and when the response to the last message arrives? Which one is more accurate?
I would probably go for the first approach; what made me wonder is that websocket-benchmark seems to use the latter approach. Is this just an oversight, or is there a reasoning behind it?
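For what it's worth, the first approach is straightforward to implement: tag each message with an id, remember its send time, and match the response when it arrives. A minimal sketch in C++ (the actual WebSocket send/receive calls are elided, and all names are illustrative):

#include <chrono>
#include <cstdint>
#include <unordered_map>

using Clock = std::chrono::steady_clock;

std::unordered_map<uint64_t, Clock::time_point> sendTimes;

// Call when a message leaves: remember its departure time by id.
void onSend(uint64_t messageId)
{
    sendTimes[messageId] = Clock::now();
}

// Call when the matching response arrives: returns this message's RTT in ms.
double onResponse(uint64_t messageId)
{
    auto rtt = Clock::now() - sendTimes.at(messageId);
    sendTimes.erase(messageId);
    return std::chrono::duration<double, std::milli>(rtt).count();
}

The bulk approach (first send to last response) measures total completion time, which folds queuing and throughput effects into one number; the per-message approach yields a full RTT distribution, which is usually what you want when comparing two implementations.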

state change and packet loss

Let's say I want to speed up networking in a real-time game by sending only changes in position instead of absolute position. How might I deal with packet loss? If one packet is dropped the position of the object will be wrong until the next update.
Reflecting on #casperOne's comment, this is one reason why some games go "glitchy" over poor connections.
A possible solution to this is as follows:
Decide on the longest time you can tolerate an object/player being displayed in the wrong location - say xx ms. Put a watchdog timer in place that sends location of an object "at least" every xx ms, or whenever a new position is calculated.
Depending on the quality of the link, and the complexity of your scene, you can shorten the value of xx. Basically, if you are not using all the available bandwidth, start sending the current position of the object that has gone longest without an update being sent.
To do this you need to maintain a list of items in the order you have updated them, and rotate through it.
That means that fast changes are reflected immediately (if an object updates every ms, you will probably get a packet through quite often, so there is hardly any lag), but it never takes more than xx ms before you get another chance at an updated state.
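A sketch of that watchdog/rotation scheme (names and the concrete xx value are illustrative): keep objects in least-recently-sent order, always send an object that has been silent longer than xx ms, and fill any spare bandwidth with the stalest remaining object:

#include <chrono>
#include <list>

using Clock = std::chrono::steady_clock;
const auto kMaxSilence = std::chrono::milliseconds(100); // the "xx ms" above

struct Object {
    int id;
    Clock::time_point lastSent; // when this object's position was last sent
};

void sendPosition(Object* obj); // hypothetical network send

std::list<Object*> sendQueue; // front = object that has waited longest

// Call once per network tick. Overdue objects are always sent (the
// watchdog guarantee); others only when there is spare bandwidth.
void pumpUpdates(bool bandwidthAvailable)
{
    while (!sendQueue.empty()) {
        Object* obj = sendQueue.front();
        bool overdue = Clock::now() - obj->lastSent >= kMaxSilence;
        if (!overdue && !bandwidthAvailable)
            break; // nothing forced and no spare capacity
        sendPosition(obj);
        obj->lastSent = Clock::now();
        sendQueue.pop_front(); // rotate to the back of the list
        sendQueue.push_back(obj);
    }
}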

How do UDP sockets actually work internally?

I am trying to reduce packet manipulation to a minimum in order to improve the efficiency of a specific program I am working on, but I am struggling with the time it takes to send through a UDP socket using sendto/recvfrom. I am using two very basic processes (applications): one is sending, the other one receiving.
I want to understand how Linux works internally when using these function calls...
Here are my observations:
when sending packets at:
10Kbps, the time it takes for the messages to go from one application to the other is about 28us
400Kbps, the time it takes for the messages to go from one application to the other is about 25us
4Mbps, the time it takes for the messages to go from one application to the other is about 20us
40Mbps, the time it takes for the messages to go from one application to the other is about 18us
When using different CPUs, the times are obviously different, but consistent with those observations. There must be some sort of setting that enables some queue reads to be done faster depending on the traffic flow on a socket... how can that be controlled?
When using a node as a forwarding node only, going in and out takes about 8us at a 400Kbps flow; I want to converge to this value as much as I can. 25us is not acceptable and is deemed too slow (it is obviously far less than the delay between packets anyway, but the point is to eventually be able to process a greater number of packets, so this time needs to be shortened!). Is there anything faster than sendto/recvfrom? I must use two different applications (processes); I know I cannot use a monolithic block, so the information has to be sent over a socket.
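For reference, the measurement described above can be reproduced with a process pair along these lines (POSIX UDP on loopback, the sender embedding a timestamp in the payload; port number arbitrary, error handling omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstdio>
#include <cstring>
#include <ctime>
#include <unistd.h>

// Both processes run on one host, so CLOCK_MONOTONIC is shared.
static long nowNs() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000L + ts.tv_nsec;
}

int main(int argc, char** argv) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    if (argc > 1 && !strcmp(argv[1], "send")) { // sender process
        long t = nowNs();
        sendto(sock, &t, sizeof t, 0, (sockaddr*)&addr, sizeof addr);
    } else { // receiver process
        bind(sock, (sockaddr*)&addr, sizeof addr);
        long t;
        recvfrom(sock, &t, sizeof t, 0, nullptr, nullptr);
        printf("one-way delay: %ld ns\n", nowNs() - t);
    }
    close(sock);
    return 0;
}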

How to take latency differences into consideration when verifying location differences with timestamps (anti-cheating)?

When you have a multiplayer game where the server is receiving movement (location) information from the client, you want to verify this information as an anti-cheating measure.
This can be done like this:
maxPlayerSpeed = 300; // = 300 pixels every 1 second
// abs() so that movement in either direction is checked
if ((1000 / (getTime() - oldTimestamp)) * abs(newPosX - oldPosX) > maxPlayerSpeed)
{
    disconnect(player); // this is illegal!
}
This is a simple example, only taking the X coords into consideration. The problem here is that oldTimestamp is stored as soon as the last location update was received by the server. This means that if there was a lag spike at that time, the old update is received relatively much later than the new location update, so the measured time difference will not be accurate.
Example:
Client says: I am now at position 5x10
Lag spike: server receives this message at timestamp 500 (it should normally arrive at like 30)
....1 second movement...
Client says: I am now at position 20x15
No lag spike: server receives message at timestamp 1530
The server will now think that the time difference between these two locations is 1030. However, the real time difference is 1500. This could cause the anti-cheating detection to think that 1030 is not long enough, thus kicking the client.
Possible solution: let the client send a timestamp while sending, so that the server can use these timestamps instead
Problem: the player could manipulate the client to send an illegitimate timestamp, so the anti-cheating system won't kick in. This is not a good solution.
It is also possible to simply allow maxPlayerSpeed * 2 speed (for example), however this basically allows speed hacking up to twice as fast as normal. This is not a good solution either.
So: do you have any suggestions on how to fix this "server timestamp & latency" issue in order to make my anti-cheating measures worthwhile?
No no no... with all due respect, this is all wrong, and how NOT to do it.
The remedy is not trusting your clients. Don't make the clients send their positions, make them send their button states! View the button states as requests where the clients say "I'm moving forwards, unless you object". If the client sends a "moving forward" message and can't move forward, the server can ignore that or do whatever it likes to ensure consistency. In that case, the client only fools itself.
As for speed-hacks made possible by packet flooding, keep a packet counter. Eject clients who send more packets within a certain timeframe than the settings allow. Clients should send one packet per tick/frame/world timestep. It's handy to name the packets based on time, in whole timestep increments. Excessive packets for the same timestep can then be identified and ignored. Note that sending the same packet several times is a good idea when using UDP, to prevent packet loss.
Again, never trust the client. This can't be emphasized enough.
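A sketch of the button-state idea (the bitmask layout and all names are invented for illustration): each input packet names the world timestep it belongs to, which makes duplicates and floods easy to handle on the server:

#include <cstdint>

struct InputPacket {
    uint32_t timestep; // whole world-timestep increments; names the packet
    uint8_t buttons;   // bitmask: 1=forward, 2=back, 4=left, 8=right, ...
};

uint32_t lastAcceptedStep = 0;
int packetsThisWindow = 0; // reset each timeframe (reset code elided)

// Server side: accept at most one input per timestep; everything else is
// either a deliberate UDP duplicate or a flood.
bool acceptInput(const InputPacket& p, int maxPacketsPerWindow)
{
    if (++packetsThisWindow > maxPacketsPerWindow)
        return false; // flooding: candidate for ejection
    if (p.timestep <= lastAcceptedStep)
        return false; // duplicate or stale packet: ignore
    lastAcceptedStep = p.timestep;
    // The server now applies the movement itself ("moving forward, unless
    // you object") and remains authoritative over position.
    return true;
}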
Smooth out lag spikes by filtering. Or to put this another way, instead of always comparing their new position to the previous position, compare it to the position of several updates ago. That way any short-term jitter is averaged out. In your example the server could look at the position before the lag spike and see that overall the player is moving at a reasonable speed.
For each player, you could simply hold the last X positions, or you might hold a lot of recent positions plus some older positions (eg 2, 3, 5, 10 seconds ago).
Generally you'd be performing interpolation/extrapolation on the server anyway within the normal movement speed bounds to hide the jitter from other players - all you're doing is extending this to your cheat checking mechanism as well. All legitimate speed-ups are going to come after an apparent slow-down, and interpolation helps cover that sort of error up.
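A sketch of that filtering idea (the lookback depth and names are illustrative): keep a short history of received positions and compare the newest sample against one from several updates ago, so a single delayed packet cannot trip the detector by itself:

#include <cmath>
#include <cstddef>
#include <deque>

struct Sample { double t, x; }; // receive time (s) and x position (px)

std::deque<Sample> history;       // most recent sample at the back
const std::size_t kLookback = 10; // compare against ~10 updates ago
const double kMaxSpeed = 300.0;   // pixels per second

bool isCheating(const Sample& latest)
{
    history.push_back(latest);
    if (history.size() <= kLookback)
        return false; // not enough history yet
    const Sample& old = history[history.size() - 1 - kLookback];
    double speed = std::fabs(latest.x - old.x) / (latest.t - old.t);
    if (history.size() > 64)
        history.pop_front(); // bound the buffer
    return speed > kMaxSpeed; // speed averaged over kLookback updates
}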
Regardless of opinions on the approach, what you are looking for is the speed threshold that is considered "cheating".
Given a distance and a time increment, you can trivially see if they moved "too far" based on your cheat threshold.
time = thisTime - lastTime;
speed = distance / time;
if (speed > threshold) dudeIsCheating();
The times used for measurement are the server's packet receive times. While it seems trivial, this calculates a distance for every character movement, which can end up very expensive. The best route is for the server to calculate position based on velocity, and that becomes the character's position. The client never communicates a position or absolute velocity; instead, the client sends a "percent of max" velocity.
To clarify:
This was just for the cheating check. Your code has the possibility of lag or long processing on the server affecting your outcome. The formula should be:
maxPlayerSpeed = 300; // = 300 pixels every 1 second
if (maxPlayerSpeed <
    (distanceTraveled(oldPos, newPos) / (receiveNewest() - receiveLast())))
{
    disconnect(player); // this is illegal!
}
This compares the player's rate of travel against the maximum rate of travel. The timestamps are determined by when you receive the packet, not when you process the data. You can use whichever method you like to determine the updates to send to the clients, but for the threshold check used to detect cheating, the above will not be impacted by lag.
Receive packet 1 at second 1: Character at position 1
Receive packet 2 at second 100: Character at position 3000
distance traveled = 2999
time = 99
rate ≈ 30
No cheating occurred.
Receive packet 3 at second 101: Character at position 3301
distance traveled = 301
time = 1
rate = 301
Cheating detected.
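The same check and the worked example above, as a small self-contained sketch (positions in pixels, receive times in seconds, threshold 300 px/s):

#include <cmath>
#include <cstdio>

bool dudeIsCheating(double lastTime, double lastPos,
                    double thisTime, double thisPos,
                    double threshold = 300.0)
{
    double time = thisTime - lastTime; // server receive-time delta
    double speed = std::fabs(thisPos - lastPos) / time;
    return speed > threshold;
}

int main()
{
    // packet 1 at t=1 s, pos 1; packet 2 at t=100 s, pos 3000
    printf("%d\n", dudeIsCheating(1, 1, 100, 3000));      // 0: rate ~30, fine
    // packet 3 at t=101 s, pos 3301
    printf("%d\n", dudeIsCheating(100, 3000, 101, 3301)); // 1: rate 301, cheating
    return 0;
}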
What you are calling a "lag spike" is really high latency in packet delivery. But it doesn't matter, since you aren't going by when the data is processed; you go by when each packet was received. If you keep the time calculations independent of your game-tick processing (as they should be, since the movement happened during that "tick"), high and low latency only affect how sure the server is of the character's position, which you resolve with interpolation + extrapolation.
If the client hasn't received any corrections to its position and is wildly out of sync with the server, there is significant packet loss and high latency, which your cheating check will not be able to account for. You need to handle that at a lower layer, in the actual network communication.
For any game data, the ideal method is for all systems except the server to run 100-200 ms behind. Say you have an intended update every 50 ms. The client receives the first and second updates; it doesn't have any data to display until it receives the second. Over the next 50 ms, it shows the progression of changes as they have already occurred (i.e., it plays back on a slight delay). The client sends its button states to the server. The local client also predicts the movement, effects, etc. based on those button presses, but only sends the server the button state (since there is a finite number of buttons, a finite number of bits is needed to represent each state, which allows for a more compact packet format).
The server is the authoritative simulation, determining the actual outcomes. The server sends updates every, say, 50ms to the clients. Rather than interpolating between two known frames, the server instead extrapolates positions, etc. for any missing data. The server knows what the last real position was. When it receives an update, the next packet sent to each of the clients includes the updated information. The client should then receive this information prior to reaching that point in time and the players react to it as it occurs, not seeing any odd jumping around because it never displayed an incorrect position.
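A sketch of that delayed-playback buffer (one axis only, all names invented): the client renders 100-200 ms in the past, interpolating between the two server snapshots that bracket the render time and extrapolating only when it runs out of data:

#include <cstddef>
#include <deque>

struct Snapshot { double serverTime; double x; }; // one server update

std::deque<Snapshot> snapshots;     // ordered by serverTime
const double kPlaybackDelay = 0.15; // render 150 ms behind the newest data

double positionAt(double clientTime)
{
    double renderTime = clientTime - kPlaybackDelay;
    // Interpolate between the pair of snapshots bracketing renderTime.
    for (std::size_t i = 1; i < snapshots.size(); ++i) {
        const Snapshot& a = snapshots[i - 1];
        const Snapshot& b = snapshots[i];
        if (a.serverTime <= renderTime && renderTime <= b.serverTime) {
            double f = (renderTime - a.serverTime) / (b.serverTime - a.serverTime);
            return a.x + f * (b.x - a.x);
        }
    }
    // Past the newest snapshot: extrapolate from the last two, as the
    // server does for missing data.
    if (snapshots.size() >= 2) {
        const Snapshot& a = snapshots[snapshots.size() - 2];
        const Snapshot& b = snapshots.back();
        double v = (b.x - a.x) / (b.serverTime - a.serverTime);
        return b.x + v * (renderTime - b.serverTime);
    }
    return snapshots.empty() ? 0.0 : snapshots.back().x;
}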
It's possible to have the client be authoritative for some things, or to have a client act as the authoritative server. The key is determining how much trust to place in the client.
The client should be sending updates regularly, say every 50 ms. That means that with a 500 ms "lag spike" (a delay in packet reception), either all packets sent within the delay period will be delayed by a similar amount, or the packets will be received out of order. The underlying networking should handle these delays gracefully (by discarding packets that have an overly large delay, enforcing in-order packet delivery, etc.). The end result is that with proper packet handling, the anticipated issues should not occur. Additionally, not receiving explicit character locations from the client, and instead having the server explicitly correct the client and only receive control states from it, prevents this issue.
