Calculate a drift value from NTP time

I have a simple NTP client that gets the value from an NTP server every 15 minutes. All of the values are calculated as per the NTP spec to account for the delay in the request and response.
I am not correcting the system clock at this time; on each poll I compute the offset of the uncorrected system time from the NTP time. On subsequent polls I am getting offset values (in seconds) like:
0.0017
0.0006
Is there a formula to calculate the drift of this particular clock? In these examples the offsets are positive, so the clock is running fast by a small amount, but by how much?
For example, what would the numbers look like if the offset was a consistent 0.9 seconds with a 15-minute NTP poll time?
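For reference, drift is conventionally the rate of change of the offset between polls, usually quoted in parts per million (ppm). A minimal Python sketch of that formula (the function name and constants are illustrative, not from any NTP library):

```python
POLL_INTERVAL = 15 * 60  # seconds between NTP polls

def drift_ppm(offset_prev, offset_curr, interval=POLL_INTERVAL):
    """Clock drift in parts per million between two uncorrected offsets."""
    return (offset_curr - offset_prev) / interval * 1e6

# The offsets from the question, taken 15 minutes apart:
print(drift_ppm(0.0017, 0.0006))  # ~ -1.2 ppm
# A *constant* 0.9 s offset is a fixed error but zero drift:
print(drift_ppm(0.9, 0.9))        # 0.0 ppm
# An offset *growing* by 0.9 s per poll would be a very large drift:
print(drift_ppm(0.0, 0.9))        # 1000.0 ppm, i.e. 0.9 s gained per 900 s
```

Note the distinction: a constant offset means the clock was set wrong but ticks at the right rate, while a changing offset means the clock is actually running fast or slow.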


Unable to analyse throughput performance of an API which has been migrated to a new platform

I am checking the performance of one API that runs on two systems. The API has been migrated to a new system, so I am comparing its performance on the new system against the old one.
Statistics are shown below:

New System:
  Threads: 25
  Ramp-up: ~25
  Avg: 8 sec
  Median: 7.8 sec
  95th percentile: 8.8 sec
  Throughput: 0.39

Old System:
  Threads: 25
  Ramp-up: ~25
  Avg: 10 sec
  Median: 10 sec
  95th percentile: 10 sec
  Throughput: 0.74
Here we can observe that the new system takes less time per request for 25 threads than the old system, yet the old system shows the higher throughput.
I am confused about the throughput: which system is more efficient?
The system that takes less time per request should have the higher throughput, but here the faster one has the lower throughput. Can anyone help me understand these results?
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time)
So double-check the total test duration for both executions; my expectation is that the "new system" test took longer.
As for the reason, I cannot state anything meaningful without seeing the full .jtl results files for both executions. I can only guess that there was one very long request in the "new system" test, or that you have a Timer with random think time somewhere in your test plan.
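To make the glossary formula concrete, here is a toy Python calculation. The sample count and durations are invented purely to show how a longer total duration drives throughput down even when per-request times improve:

```python
def throughput(num_requests, total_seconds):
    # JMeter's definition: requests / (end of last sample - start of first)
    return num_requests / total_seconds

# Hypothetical runs of 500 requests each, matching the reported figures:
print(round(throughput(500, 675), 2))   # 0.74 - the "old system" test
print(round(throughput(500, 1282), 2))  # 0.39 - a "new system" test that ran ~2x longer
```

Under these assumed numbers, the new system's lower throughput simply means its test run spanned roughly twice the wall-clock time, which is exactly what the answer above suggests checking.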

Why is the average response time reducing when we increase the number of users?

I am using JMeter to run a performance test with different numbers of users. With 1 user, the average response time is 1.4 seconds; with more users it seems logical that the average response time would go up, but instead it is reducing. Can anyone explain why? The test scenario is that I interact a few times (2-3 interactions) with a chat bot.
Please help me understand these confusing results below:
1 user, 30-second test: 1.3 seconds average response time
5 users, 60-second test: 0.92 seconds average response time
10 users, 60-second test: 0.93 seconds average response time
20 users, 120-second test: 0.92 seconds average response time
The first iteration of the first user often involves some overhead on the client side (most commonly DNS resolution), and can have some overhead on the server side (server "warmup"). That overhead is not incurred by the following iterations or users.
Thus what you see as a reduction in average time is actually a reduction of the impact of the slower "first user, first iteration" execution on the overall outcome. This is why it's important to use a sufficient sample, so that such a local spike no longer matters much. My rule of thumb is at least 10000 iterations before looking at any averages, although the comfort level is up to every tester to set.
Also, when increasing the number of users you should not expect the average to get worse unless you have reached a saturation point; it should stay stable. So if you expect your app to support no more than 20 users, then your result is surprising, but if you expect the application to support 20000 users, you should not see any degradation of the average at 20 users.
To test whether this is what is happening, try running 1 user for much longer, so that the total number of iterations is similar to running 20 users. Roughly, you need to increase the duration of the 1-user test to 20 minutes to get a similar number of iterations (i.e. the same test length would be 120 sec, but with 20 users you also get 20x the iterations, giving a rough total of 20 minutes for 1 user).
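As a toy illustration of the warm-up effect described above (all numbers invented):

```python
# One slow "first user, first iteration" sample dominates a small
# average but washes out in a large one.
warmup = [5.0]            # first iteration pays DNS lookup / server warmup
small_run = warmup + [0.9] * 19     # short test: 20 iterations total
big_run = warmup + [0.9] * 999      # long test: 1000 iterations total

print(sum(small_run) / len(small_run))  # ~1.11 s: warmup visibly inflates the avg
print(sum(big_run) / len(big_run))      # ~0.90 s: warmup impact is negligible
```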

Unable to increase average throughput in JMeter

I have set the number of threads and the ramp-up time to 1/1, and I am iterating over my 1000 records from data.csv for 1800 seconds.
Given those numbers, I set the Constant Throughput Timer (CTT) to 2000 per minute and expected the average throughput to be 2000/60 = 33.3/sec, but I get 18.7/sec. When I increased the target to 4000/60, I still get 18 or 19/sec.
Constant Throughput Timer cannot force threads to execute faster, it can only pause threads to limit JMeter's throughput to the defined value.
Each JMeter thread executes samplers as fast as it can, but the next iteration won't start until the previous one has finished, so with 1 thread the throughput cannot exceed 1 / (average response time).
Also be aware that the Constant Throughput Timer is only accurate at the minute level, so you can really only control "requests per minute" rather than "requests per second". If your test is shorter than 1 minute, consider using the Throughput Shaping Timer instead.
So I would recommend increasing the number of virtual users to e.g. 50.
See How to use JMeter's Constant Throughput Timer for more details.
I guess your application's average response time is around 50 ms, which means a single thread can only perform about 20 hits/sec (1 sec / 0.05 sec per hit = 20 hits/sec).
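A quick sketch of that ceiling, assuming the 50 ms guess holds and that threads run fully in parallel:

```python
def max_throughput(threads, avg_response_s):
    """Upper bound on requests/sec when each thread waits for its response."""
    return threads / avg_response_s

print(max_throughput(1, 0.05))   # 20.0 req/s cap - consistent with the ~18.7/sec seen
print(max_throughput(50, 0.05))  # 1000.0 req/s - ample headroom for a 33.3/sec target
```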
You have 2 solutions:
increase the number of threads to parallelize the requests sent,
or make your app respond faster (obviously harder).
At some point, when you application can't handle more load, you should see the hits/sec dropping and the average response time increase.
An example graph (not reproduced here) showed an application with a steady response time for up to 20 concurrent threads.

JMeter throughput results differ although average is similar

Ok so I ran some stress tests on an application of mine and I came across some weird results compared to last time.
The Throughput was way off although the averages are similar.
The number of samples did vary; however, as I understand it, throughput is calculated by dividing the number of samples by the time the test took.
In my understanding, if the average time was similar, the throughput should be similar even though the number of samples varied...
This is what I have (result screenshots labelled PREVIOUS and RECENT, not reproduced here).
As you can see the throughput difference is pretty substantial...
Can somebody please explain whether my logic is correct, or point out why that is not the case?
Throughput is the number of requests per unit of time (seconds, minutes, hours) that are sent to your server during the test.
The throughput is the real load processed by your server during a run but it does not tell you anything about the performance of your server during this same run. This is the reason why you need both measures in order to get a real idea about your server’s performance during a run. The response time tells you how fast your server is handling a given load.
The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
Throughput = (number of requests) / (total time)
Average: the arithmetic mean of the response times of all samples, μ = (1/n) · Σᵢ₌₁…ₙ xᵢ.
Response time is the elapsed time from the moment when a given request is sent to the server until the moment when the last bit of information has returned to the client.
So these are two different things.
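A toy Python illustration of why the two measures are independent (the numbers are invented; the point is that idle gaps between samples count toward JMeter's total time but not toward the average):

```python
samples = [1.0] * 100  # 100 requests, each taking exactly 1.0 s

# Run A: 10 threads firing back-to-back -> the whole test lasts ~10 s
print(len(samples) / 10.0)          # throughput: 10.0 req/s

# Run B: same samples, but think-time gaps stretch the test to 100 s
print(len(samples) / 100.0)         # throughput: 1.0 req/s

# Yet the average response time is identical in both runs:
print(sum(samples) / len(samples))  # 1.0 s
```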
Think of a trip to Disney or your favorite amusement park. Define the capacity of the ride to be the number of people that can sit on the ride per turn (think roller coaster). Throughput will be the number of people that exit the ride per unit of time. Define service time as the amount of time you get to sit on the ride, and response time as your time queuing for the ride plus the service time.

How Does the Network Time Protocol Work?

The Wikipedia entry doesn't give details and the RFC is way too dense. Does anyone around here know, in a very general way, how NTP works?
I'm looking for an overview that explains how Marzullo's algorithm (or a modification of it) is employed to translate a timestamp on a server into a timestamp on a client. Specifically, what mechanism produces accuracy that is, on average, within 10 ms, when the communication takes place over a network whose latency is highly variable and frequently several times that?
(This isn't Marzullo's algorithm. That's only used by the high-stratum servers to get really accurate time using several sources. This is how an ordinary client gets the time, using only one server)
First of all, NTP timestamps are stored as seconds since January 1, 1900. 32 bits for the number of seconds, and 32 bits for the fractions of a second.
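As an aside, converting between Unix and NTP timestamps is just a matter of shifting the epoch and scaling the fraction; a small Python sketch (the helper name is illustrative):

```python
import time

NTP_EPOCH_DELTA = 2208988800  # seconds from 1900-01-01 to 1970-01-01

def unix_to_ntp(unix_seconds):
    """Split a Unix timestamp into NTP's 32.32 fixed-point fields."""
    ntp = unix_seconds + NTP_EPOCH_DELTA
    seconds = int(ntp)
    fraction = int((ntp - seconds) * 2**32)  # units of 1/2^32 second
    return seconds & 0xFFFFFFFF, fraction

print(unix_to_ntp(time.time()))
```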
The synchronization is tricky. The client stores the timestamp (say A) (all these values are in seconds) when it sends the request. The server sends a reply consisting of the "true" time when it received the packet (call that X) and the "true" time it will transmit the packet (Y). The client will receive that packet and log the time when it received it (B).
NTP assumes that the time spent on the network is the same for sending and receiving. Over enough intervals over sane networks, it should average out to be so. We know that the total transit time from sending the request to receiving the response was B-A seconds. We want to remove the time that the server spent processing the request (Y-X), leaving only the network traversal time, so that's B-A-(Y-X). Since we're assuming the network traversal time is symmetric, the amount of time it took the response to get from the server to the client is [B-A-(Y-X)]/2. So we know that the server sent its response at time Y, and it took us [B-A-(Y-X)]/2 seconds for that response to get to us.
So the true time when we received the response is Y+[B-A-(Y-X)]/2 seconds. And that's how NTP works.
Example (in whole seconds to make the math easy):
Client sends request at "wrong" time 100. A=100.
Server receives request at "true" time 150. X=150.
The server is slow, so it doesn't send out the response until "true" time 160. Y=160.
The client receives the response at "wrong" time 120. B=120.
Client determines the time spent on the network is B-A-(Y-X) = 120-100-(160-150) = 10 seconds.
Client assumes the amount of time it took for the response to get from the server to the client is 10/2=5 seconds.
Client adds that time to the "true" time when the server sent the response to estimate that it received the response at "true" time 165 seconds.
Client now knows that it needs to add 45 seconds to its clock.
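The same arithmetic as a runnable Python sketch, using the example's four timestamps:

```python
# The four NTP timestamps from the example above:
A = 100  # client's (wrong) clock when the request was sent
X = 150  # server's (true) clock when the request arrived
Y = 160  # server's (true) clock when the response left
B = 120  # client's (wrong) clock when the response arrived

round_trip = (B - A) - (Y - X)   # network time only: 10 s
one_way = round_trip / 2         # assume symmetric paths: 5 s
true_receive_time = Y + one_way  # 165 s
offset = true_receive_time - B   # clock correction: +45 s

print(round_trip, one_way, true_receive_time, offset)
```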
In a proper implementation, the client runs as a daemon, all the time. Over a long period of time with many samples, NTP can actually determine if the computer's clock is slow or fast, and automatically adjust it accordingly, allowing it to keep reasonably good time even if it is later disconnected from the network. Together with averaging the responses from the server, and application of more complicated thinking, you can get incredibly accurate times.
There's more, of course, to a proper implementation than that, but that's the gist of it.
The NTP client asks all of its NTP servers what time it is. The different servers will give different answers, with different confidence levels, because the requests will take different amounts of time to travel from the client to the server and back. Marzullo's algorithm will find the smallest range of time values consistent with all of the answers provided.
You can be more confident of the accuracy of the answer from this algorithm than of that from any single time servers because the intersection of several sets will likely contain fewer elements than any individual set.
The more servers you query, the more constraints you'll have on the possible answer, and the more accurate your clock will be.
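A minimal Python sketch of the interval-intersection idea (a simplified endpoint sweep; a real NTP daemon layers selection and clustering logic on top of this):

```python
def marzullo(intervals):
    """Smallest range consistent with the largest number of sources.

    intervals: list of (low, high) time estimates, one per server.
    """
    # Each interval contributes an "opening" (-1) and a "closing" (+1) edge.
    events = sorted([(lo, -1) for lo, hi in intervals] +
                    [(hi, +1) for lo, hi in intervals])
    best = count = 0
    best_range = None
    for i, (edge, kind) in enumerate(events):
        count -= kind  # an opening edge raises the overlap count
        if count > best and i + 1 < len(events):
            best = count
            best_range = (edge, events[i + 1][0])
    return best_range, best

# Three servers' estimated true-time intervals; all three overlap in [11.5, 12.0].
print(marzullo([(10.0, 12.0), (11.0, 13.0), (11.5, 12.5)]))
# -> ((11.5, 12.0), 3)
```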
If you are using timestamps to decide ordering, specific times may not be necessary. You could use Lamport clocks instead, which are less of a pain than network synchronization. They can tell you what came "first", but not the exact difference in times, and they don't care what the computer's clock actually says.
The trick is that some packets are fast, and the fast packets give you tight constraints on the time.
