Vector Recording Intervals in OMNeT++

Hope you are doing well.
I would like to enable vector recording in my OMNeT++ simulation so that values are recorded once per second instead of every millisecond.
Could you please advise which syntax I should use to record once per second?
**.vector-recording-intervals = <syntax>
Awaiting your response.
Regards,
Azhar

The meaning of vector-recording-intervals is slightly different: that parameter lets us decide when vector recording is enabled. Note that, by default, vector recording is turned on for the whole duration of the simulation.
Let's consider an example:
**.vector-recording-intervals = 10s..30s, 100s..150s
This means that vectors will be recorded only from t=10s to t=30s and from t=100s to t=150s.
The frequency at which values are written into a vector depends on how often emit() or record() is called in your code.
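As a standalone illustration (plain Python, not the OMNeT++ API; the sample times are hypothetical), this is how the recording-interval filter behaves: a value is kept only if its timestamp falls inside one of the configured intervals, while the sampling frequency itself is determined by how often the model calls emit():

```python
# Recording intervals as configured by
#   **.vector-recording-intervals = 10s..30s, 100s..150s
intervals = [(10.0, 30.0), (100.0, 150.0)]

def is_recorded(t, intervals):
    """Return True if a value emitted at simulation time t is recorded."""
    return any(start <= t <= end for start, end in intervals)

# Suppose the model calls emit() every 20 s of simulation time.
sample_times = [0, 20, 40, 60, 80, 100, 120, 140, 160]
recorded = [t for t in sample_times if is_recorded(t, intervals)]
print(recorded)  # only samples inside the intervals survive: [20, 100, 120, 140]
```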


Which timer to use?

In JMeter, we have multiple timers, for instance the Uniform Random Timer, the Gaussian Random Timer, etc.
While researching my query I found various books and blogs that tell me:
How to add the timers in JMeter
What the internal formula/logic behind the timers is
I am still confused about which one to use, and when.
For instance, if I am trying to simulate the time a user takes to log in, which timer is more appropriate: Uniform or Gaussian?
As per A Comprehensive Guide to Using JMeter Timers
Uniform Random Timer
The Uniform Random Timer pauses the thread by a factor of:
The next pseudorandom uniformly-distributed value in range between 0.0 (inclusive) and 1.0 (exclusive)
Multiplied by “Random Delay Maximum”
Plus “Constant Delay Offset”
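The formula above can be sketched in plain Python (this is an illustration, not the JMeter source; the offset and maximum values are hypothetical dialog settings):

```python
import random

def uniform_timer_delay(constant_offset_ms, random_max_ms):
    """Delay = offset + U * max, where U is uniform in [0.0, 1.0)."""
    return constant_offset_ms + random.random() * random_max_ms

# With a 100 ms offset and a 300 ms random maximum, the delay
# always falls in the range [100, 400) milliseconds.
d = uniform_timer_delay(100, 300)
assert 100 <= d < 400
```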
Gaussian Random Timer
A Gaussian Random Timer calculates the thread delay time using an approach like the Uniform Random Timer does, but instead of a uniformly-distributed pseudorandom value in the 0.0 - 1.0 range, the normal (a.k.a. Gaussian) distribution is used as the first argument of the formula.
There are several algorithms for generating normally distributed values; JMeter uses the Marsaglia polar method, which takes pairs of random values U and V in the -1 to 1 range until the condition S = U² + V² < 1 is met. Once S is defined, it is used in the formula
X = U·√(-2·ln(S)/S), Y = V·√(-2·ln(S)/S)
to return the next pseudorandom Gaussian ("normally") distributed value. The first time the method is called it returns X, the second time it returns Y, the third time it starts over and returns a new X, and so on.
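A minimal sketch of the Marsaglia polar method in plain Python (for illustration only; JMeter's actual implementation lives in java.util.Random):

```python
import math
import random

def gaussian_pair():
    """Marsaglia polar method: one call yields two independent N(0, 1) values."""
    while True:
        u = random.uniform(-1.0, 1.0)
        v = random.uniform(-1.0, 1.0)
        s = u * u + v * v
        if 0.0 < s < 1.0:          # reject points outside the unit circle
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor

random.seed(42)
samples = [x for _ in range(5000) for x in gaussian_pair()]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# mean should be close to 0 and variance close to 1
```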
With regards to "which timer to use and when" - there is no "good" answer that fits all cases. The timers you've mentioned are used to simulate think time, as real users don't hammer the application non-stop; they need some time to "think" between operations. So the decision is up to you:
on one hand, real users pause for different amounts of time between operations, so it is good to randomize this delay a little if you want the test to be as realistic as possible
on the other hand, a load test needs to be repeatable, so it makes sense to go for Constant Timers to avoid any random factor impacting the test results
a good compromise is to use a Uniform or Gaussian Random Timer initially in order to mimic real users, and then switch to the Constant Timer for regression testing, where repeatability of results matters

SpringBoot - observability on *_max *_count *_sum metrics

Small question regarding Spring Boot, some of the useful default metrics, and how to properly use them in Grafana please.
Currently with a Spring Boot 2.5.1+ (question applicable to 2.x.x.) with Actuator + Micrometer + Prometheus dependencies, there are lots of very handy default metrics that come out of the box.
I am seeing many of them following the pattern _max / _count / _sum.
Example, just to take a few:
spring_data_repository_invocations_seconds_max
spring_data_repository_invocations_seconds_count
spring_data_repository_invocations_seconds_sum
reactor_netty_http_client_data_received_bytes_max
reactor_netty_http_client_data_received_bytes_count
reactor_netty_http_client_data_received_bytes_sum
http_server_requests_seconds_max
http_server_requests_seconds_count
http_server_requests_seconds_sum
Unfortunately, I am not sure what to do with them, how to correctly use them, and feel like my ignorance makes me miss on some great application insights.
Searching on the web, I see some people using them like this to compute what seems to be an average in Grafana:
irate(http_server_requests_seconds_sum{exception="None", uri!~".*actuator.*"}[5m]) / irate(http_server_requests_seconds_count{exception="None", uri!~".*actuator.*"}[5m])
But I am not sure if this is the correct way to use them.
May I ask what sorts of queries are possible, and commonly used, when dealing with metrics of the _max / _count / _sum kind?
Thank you
UPD 2022/11: Recently I've had a chance to work with these metrics myself and I made a dashboard with everything I say in this answer and more. It's available on Github or Grafana.com. I hope this will be a good example of how you can use these metrics.
Original answer:
count and sum are generally used to calculate an average. count accumulates the number of times sum was increased, while sum holds the total value of something. Let's take http_server_requests_seconds for example:
http_server_requests_seconds_sum 10
http_server_requests_seconds_count 5
With the example above one can say that there were 5 HTTP requests and their combined duration was 10 seconds. If you divide sum by count you'll get the average request duration of 2 seconds.
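The arithmetic from this example can be checked with a two-line sketch (plain Python; the metric values are the ones shown above):

```python
http_server_requests_seconds_sum = 10.0   # combined duration of all requests
http_server_requests_seconds_count = 5    # number of requests

average = http_server_requests_seconds_sum / http_server_requests_seconds_count
print(average)  # → 2.0 seconds per request
```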
Having these you can create at least two useful panels: average request duration (=average latency) and request rate.
Request rate
Using rate() or irate() function you can get how many there were requests per second:
rate(http_server_requests_seconds_count[5m])
rate() works in the following way:
Prometheus takes the samples in the given interval ([5m] in this example) and calculates the difference between the current time point (not necessarily now) and the one [5m] ago.
The obtained value is then divided by the number of seconds in the interval.
A short interval will make the graph look like a saw (every fluctuation will be noticeable); a long interval will make the line smoother and slower to display changes.
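A rough sketch of that computation in plain Python (not Prometheus internals; the sample values are hypothetical counter readings inside one 5-minute window):

```python
# (timestamp_seconds, counter_value) samples inside a 5-minute window
samples = [(0, 100), (60, 130), (120, 160), (180, 190), (240, 220), (300, 250)]

def rate(samples):
    """Per-second increase of a counter over the sampled window."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

r = rate(samples)
print(r)  # (250 - 100) / 300 = 0.5 requests per second
```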
Average Request Duration
You can proceed with
http_server_requests_seconds_sum / http_server_requests_seconds_count
but it is highly likely that you will only see a straight line on the graph. This is because values of those metrics grow too big with time and a really drastic change must occur for this query to show any difference. Because of this nature, it will be better to calculate average on interval samples of the data. Using increase() function you can get an approximate value of how the metric changed during the interval. Thus:
increase(http_server_requests_seconds_sum[5m]) / increase(http_server_requests_seconds_count[5m])
The value is approximate because under the hood increase() is rate() multiplied by [interval]. The error is insignificant for fast-moving counters (such as the request rate), just be prepared to see non-integer values, such as an increase of 2.5 requests.
Aggregation and filtering
If you already ran one of the queries above, you may have noticed that there is not one line, but many. This is due to labels: each unique set of labels on the metric is considered a separate time series. This can be fixed by using an aggregation function (like sum()). For example, you can aggregate the request rate by instance:
sum by(instance) (rate(http_server_requests_seconds_count[5m]))
This will show you a line for each unique instance label. Now if you want to see only some and not all instances, you can do that with a filter. For example, to calculate a value just for nodeA instance:
sum by(instance) (rate(http_server_requests_seconds_count{instance="nodeA"}[5m]))
Read more about selectors here. With labels you can create any number of useful panels. Perhaps you'd like to calculate the percentage of exceptions, or their rate of occurrence, or perhaps a request rate by status code, you name it.
Note on max
From what I found on the web, max shows the maximum value recorded during some interval configured in settings (the default is 2 minutes, if the source is to be trusted). This is a somewhat uncommon metric, and whether it is useful is up to you. Since it is a Gauge (unlike sum and count, it can go both up and down), you don't need extra functions (such as rate()) to see its dynamics. Thus
http_server_requests_seconds_max
... will show you the maximum request duration. You can augment this with aggregation functions (avg(), sum(), etc) and label filters to make it more useful.

how to record rssi in veins (omnet++)

How can I record a statistic of the RSSI value for a communication in Veins? I'm using version 5.1. In a previous version there was a function that calculated the RSSI in the Phy802.11 layer, but it doesn't exist anymore.
Thank you.
This is addressed in an answer to "How does veins calculate RSSI in a Simple Path Loss Model?":
Taking Veins version 5 alpha 1 as an example, your application layer can access the ControlInfo of a frame and, from there, its RSS, e.g., as follows:
check_and_cast<DeciderResult80211*>(
    check_and_cast<PhyToMacControlInfo*>(wsm->getControlInfo())
        ->getDeciderResult()
)->getRecvPower_dBm()
The above code returns the absolute receive power (in dBm) measured at the center frequency of the corresponding frame.
Note that, while this gives you "some" indication of received signal strength, it is far from the only way to do that. In fact, vendors are free to implement whatever mechanism they deem fit to derive a number that indicates how strongly a signal was received.

JMeter: Gaussian random timer vs Poisson random timer

I am trying to figure out which timer to use for my loadtests in order to simulate a gradual growth in traffic towards the website.
I had a look at the Gaussian Random Timer:
To delay every user request for random amount of time use Gaussian
Random Timer with most of the time intervals happening near a specific
value.
and the Poisson random timer:
To pause each and every thread request for random amount of time use
Poisson Random Timer with most of the time intervals occurring close a
specific value.
taken from this source.
Now I don't really understand the difference between the two. They both apply a random delay that is more likely to be close to a specific value. So what am I missing? How do they differ in practice?
The difference is in the algorithm used to generate random values:
Poisson is based on this:
http://en.wikipedia.org/wiki/Poisson_distribution
http://www.johndcook.com/blog/2010/06/14/generating-poisson-random-values/
Gaussian uses :
java.util.Random#nextGaussian()
Both add to Constant Delay Offset the value of a random number generated based on either Poisson or Gaussian.
The difference is in the underlying algorithm; check the following links for details:
Normal (Gaussian) distribution
Poisson Distribution
I would also recommend reading A Comprehensive Guide to Using JMeter Timers article for exhaustive information on JMeter timers.
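To make the difference concrete, here is a sketch of both generators in plain Python (for illustration only, not the JMeter source; lambda, deviation, and offset values are hypothetical; the Poisson variate uses Knuth's algorithm from the link above):

```python
import math
import random

def poisson_delay(lambda_, offset_ms):
    """Poisson Random Timer sketch: offset plus a Poisson(lambda) variate
    generated with Knuth's algorithm."""
    limit = math.exp(-lambda_)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return offset_ms + k
        k += 1

def gaussian_delay(deviation_ms, offset_ms):
    """Gaussian Random Timer sketch: offset plus a normally distributed
    deviation."""
    return offset_ms + random.gauss(0.0, 1.0) * deviation_ms

random.seed(1)
p_delays = [poisson_delay(100, 300) for _ in range(1000)]
g_delays = [gaussian_delay(100, 300) for _ in range(1000)]
# The Poisson delays are discrete (offset + integer) and cluster around
# offset + lambda; the Gaussian delays are continuous and cluster around
# the offset, spread by the deviation.
```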

Varying parameters during a simulation in Simulink

I want to change the parameters of a block while the model is running and simultaneously see the changes in the output.
For example, I have a Sine block connected to a Scope. When I start the simulation, I want to change the frequency of the sine wave and see the correspondingly changed wave on the Scope output. I want to do this to see how my model behaves at different frequencies.
Could somebody please help me in this regard? I would be grateful for any answer.
Whether you can vary a parameter during runtime depends on whether that parameter is tunable. Tunable parameters are those that can be changed after the simulation has started; however, you must pause the simulation to be able to do so.
Run your model simulation, then hit the pause button and open up the Sine block dialog. If the frequency edit box is not disabled you can change the frequency and resume the simulation.
If the edit box is disabled, the frequency is not a tunable parameter. In this case, you can create your own Sine block using a MATLAB function callback and the sin function, feeding the block the desired frequency as an input.
In this particular case, you could use a chirp signal.
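A minimal sketch of the idea behind such a custom sine block, in plain Python rather than Simulink/MATLAB (step size and frequency values are hypothetical): accumulate the phase each step, so the frequency input can change mid-run without a jump in the output.

```python
import math

def variable_freq_sine(freqs_hz, dt):
    """Generate sine samples whose frequency can change at every step.

    The phase is accumulated incrementally, so a change in frequency
    alters the slope of the phase but never makes the output jump.
    """
    phase = 0.0
    out = []
    for f in freqs_hz:
        out.append(math.sin(phase))
        phase += 2.0 * math.pi * f * dt
    return out

dt = 0.001
# 1 Hz for the first half of the run, then 5 Hz
freqs = [1.0] * 500 + [5.0] * 500
samples = variable_freq_sine(freqs, dt)
# samples[0] is sin(0) = 0; the waveform stays continuous at the switch
```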
