Response time in Apache Bench

Which fields in the Apache Bench result give the maximum, minimum, and average response times?
Here is an example. In the Apache Bench result, "Time per request" is 689.946:
Time per request: 689.946 [ms] (mean)
I think that:
"Time per request" = average response time
min "Total" = smallest response time
max "Total" = largest response time
However, why is "Time per request" (689.946) larger than the max Total (longest request, 680)?
Concurrency Level: 1000
Time taken for tests: 0.690 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 355000 bytes
HTML transferred: 7000 bytes
Requests per second: 1449.39 [#/sec] (mean)
Time per request: 689.946 [ms] (mean)
Time per request: 0.690 [ms] (mean, across all concurrent requests)
Transfer rate: 502.47 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 10 11.8 0 30
Processing: 7 110 186.3 19 658
Waiting: 6 110 186.4 19 658
Total: 9 120 194.1 20 680
Percentage of the requests served within a certain time (ms)
50% 20
66% 54
75% 243
80% 248
90% 254
95% 675
98% 678
99% 679
100% 680 (longest request)
The "Time per request" and the "Total mean", which one should be the average response time?
Thanks.
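
For reference, ab derives the two "Time per request" lines from the wall-clock test time and the concurrency level (concurrency * timetaken * 1000 / done and timetaken * 1000 / done, in ab's terms), not from the per-request Connection Times, which is why the first one can exceed the longest single request. A quick check in Python with the numbers above:

    # Values from the report above.
    concurrency = 1000      # Concurrency Level
    time_taken_s = 0.690    # Time taken for tests [s]
    done = 1000             # Complete requests

    # "Time per request (mean)": wall-clock time scaled by concurrency.
    print(concurrency * time_taken_s * 1000 / done)   # 690.0 ms, ~ the reported 689.946

    # "Time per request (mean, across all concurrent requests)".
    print(time_taken_s * 1000 / done)                 # 0.69 ms, as reported

The average response time of an individual request is the mean of the "Total" row in Connection Times (120 ms here); that row's min and max are the smallest and largest response times, as guessed above.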

Related

Cumulative sum by category with DAX (Without Date dimension)

This is the input data (let's suppose I have 14 different products). I need to calculate, with DAX, a cumulative total of products by Status:
ProductID    Days Since Last Purchase    Status
307255900     76    60 - 180 days
525220000     59    30 - 60 days
209500000     20    < 30 days
312969600    151    60 - 180 days
249300000     52    30 - 60 days
210100000     52    30 - 60 days
304851400    150    60 - 180 days
304851600    150    60 - 180 days
314152700    367    > 180 days
405300000     90    60 - 180 days
314692300     90    60 - 180 days
314692400     53    30 - 60 days
524270000    213    > 180 days
524280000    213    > 180 days
Desired output:
Status           Cumulative Count
< 30 days         1
> 180 days        4
30 - 60 days      8
60 - 180 days    14
That's straightforward: just use the built-in Quick measure "Running total". The resulting table then matches the desired output above.
However, when you think about it, from a data point of view a sort order based on the day thresholds (< 30 days, 30 - 60 days, 60 - 180 days, > 180 days) makes more sense than ordering "Status" alphabetically. And if you run the total over "Days Since Last Purchase" directly, you can take it straight away without any crude categorization.
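
For checking the numbers outside Power BI, here is a minimal Python/pandas sketch of the same running total. The per-status counts are read off the 14 input rows above, and the sort order of the desired output (symbol-prefixed labels first) is reproduced explicitly:

    import pandas as pd

    # Per-status counts taken from the 14 input rows above.
    counts = pd.Series(
        {"< 30 days": 1, "30 - 60 days": 4, "60 - 180 days": 6, "> 180 days": 3}
    )

    # Reproduce the sort order of the desired output, then accumulate,
    # mirroring the "Running total" quick measure.
    order = ["< 30 days", "> 180 days", "30 - 60 days", "60 - 180 days"]
    print(counts.reindex(order).cumsum())
    # < 30 days         1
    # > 180 days        4
    # 30 - 60 days      8
    # 60 - 180 days    14

Swapping in the threshold order ["< 30 days", "30 - 60 days", "60 - 180 days", "> 180 days"] gives the more natural cumulative view suggested above.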

Which field in Apache Bench is the Response-Time?

I am trying to interpret the fields of Apache Bench output, but I can't work out which field indicates the response time. Can you help me find it?
Document Path: /
Document Length: 45563 bytes
Concurrency Level: 2
Time taken for tests: 3.955 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 4625489 bytes
HTML transferred: 4556300 bytes
Requests per second: 25.29 [#/sec] (mean)
Time per request: 79.094 [ms] (mean)
Time per request: 39.547 [ms] (mean, across all concurrent requests)
Transfer rate: 1142.21 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 40 53 8.1 51 99
Processing: 12 24 9.4 23 98
Waiting: 5 14 10.6 12 95
Total: 57 77 15.0 75 197
Percentage of the requests served within a certain time (ms)
50% 75
66% 77
75% 80
80% 81
90% 85
95% 92
98% 116
99% 197
100% 197 (longest request)
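
The same relationships as in the question above hold for this run; a quick sanity check with its numbers:

    # Values from the report above.
    concurrency, time_taken_s, done = 2, 3.955, 100

    print(done / time_taken_s)                        # ~25.29 requests per second
    print(concurrency * time_taken_s * 1000 / done)   # ~79.1 ms ("Time per request")
    print(time_taken_s * 1000 / done)                 # ~39.5 ms ("across all concurrent requests")

The response time of an individual request is the "Total" row of Connection Times (mean 77 ms, median 75 ms), and the percentile table underneath shows its distribution.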

Why does the execution time drop more sharply than expected as the number of processors increases?

I am executing my program 5000000 times in parallel using Parallel.For from F#.
The average execution time per task is given below.
Number of active cores : Execution Time (microseconds)
2 : 866
4 : 424
8 : 210
12 : 140
16 : 106
24 : 76
32 : 60
Given the fact that,
by doubling the number of cores, the maximum speedup we can get should be at most 2 (ideally exactly 2),
what can be the reason for this sharper-than-expected speedup?
If the scaling were perfectly linear, cores * time would stay constant. Multiplying them out:
2 * 866 = 1732
4 * 424 = 1696
8 * 210 = 1680
12 * 140 = 1680
16 * 106 = 1696
24 * 76 = 1824
32 * 60 = 1920
So as you increase parallelism, relative performance first improves and then begins to fall. The initial improvement is likely due to amortization of one-time overhead costs, such as JIT compilation or setup work in the machinery that manages the parallelization.
The degradation as the degree of parallelism increases further is often due to some sort of resource contention, excessive context switching, or the like.
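
One way to see both effects at once is parallel efficiency, the speedup divided by the increase in core count. Computed here from the question's numbers, with the 2-core run as baseline:

    # Per-task execution times (microseconds) from the question.
    times = {2: 866, 4: 424, 8: 210, 12: 140, 16: 106, 24: 76, 32: 60}

    base_cores, base_time = 2, times[2]
    for cores, t in times.items():
        speedup = base_time / t
        efficiency = speedup / (cores / base_cores)
        print(f"{cores:2d} cores: speedup {speedup:5.2f}, efficiency {efficiency:.2f}")
    # Efficiency sits slightly above 1.00 up to 16 cores (superlinear, e.g.
    # 866/424 ~ 2.04x when doubling from 2 to 4 cores), then drops to 0.95
    # at 24 and 0.90 at 32 cores as contention sets in.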

How to calculate Total average response time

Below are the results:
sampler_label count average median 90%_line min max
Transaction1 2 61774 61627 61921 61627 61921
Transaction2 4 82 61 190 15 190
Transaction3 4 1862 1317 3612 1141 3612
Transaction4 4 1242 915 1602 911 1602
Transaction5 4 692 608 906 423 906
Transaction6 4 2764 2122 4748 1182 4748
Transaction7 4 9369 9029 11337 7198 11337
Transaction8 4 1245 890 2168 834 2168
Transaction9 4 3475 2678 4586 2520 4586
TOTAL 34 6073 1381 9913 15 61921
My question here is: how is the total average response time (6073) calculated?
In my results, I want to exclude Transaction1's response time and then calculate the total average response time.
How can I do that?
Total Avg Response time = ((s1*t1) + (s2*t2) + ... ) / s
s1 = No of times transaction 1 was executed
t1 = Avg response time for transaction 1
s2 = No of times transaction 2 was executed
t2 = Avg response time for transaction 2
s = Total no of samples (s1+s2..)
In your case, all transactions except Transaction1 were executed 4 times each, so a simple average of (82, 1862, 1242, ...) gives the result you want.
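
A short Python sketch of that weighted average, using the counts and averages from the table above, first reproducing the reported total and then excluding Transaction1:

    # (sample count, average response time) per transaction, from the table.
    samples = {
        "Transaction1": (2, 61774),
        "Transaction2": (4, 82),
        "Transaction3": (4, 1862),
        "Transaction4": (4, 1242),
        "Transaction5": (4, 692),
        "Transaction6": (4, 2764),
        "Transaction7": (4, 9369),
        "Transaction8": (4, 1245),
        "Transaction9": (4, 3475),
    }

    def total_average(data):
        # Sample-count-weighted mean: ((s1*t1) + (s2*t2) + ...) / (s1 + s2 + ...)
        return sum(s * t for s, t in data.values()) / sum(s for s, _ in data.values())

    print(total_average(samples))      # ~6072.7, i.e. the reported 6073

    without_t1 = {k: v for k, v in samples.items() if k != "Transaction1"}
    print(total_average(without_t1))   # ~2591.4, total average excluding Transaction1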

What does OpenJDK JMH "score error" exactly mean?

I am using http://openjdk.java.net/projects/code-tools/jmh/ for benchmarking and I get a result like:
Benchmark Mode Samples Score Score error Units
o.a.f.c.j.b.TestClass.test1 avgt 5 2372870,600 210897,743 us/op
o.a.f.c.j.b.TestClass.test2 avgt 5 2079931,850 394727,671 us/op
o.a.f.c.j.b.TestClass.test3 avgt 5 26585,818 21105,739 us/op
o.a.f.c.j.b.TestClass.test4 avgt 5 19113,230 8012,852 us/op
o.a.f.c.j.b.TestClass.test5 avgt 5 2586,413 1949,487 us/op
o.a.f.c.j.b.TestClass.test6 avgt 5 1942,963 1619,967 us/op
o.a.f.c.j.b.TestClass.test7 avgt 5 233,902 73,861 us/op
o.a.f.c.j.b.TestClass.test8 avgt 5 191,970 126,682 us/op
What exactly does the column "Score error" mean, and how do I interpret it?
This is the margin of error for the score; in most cases it is half the width of the confidence interval. Think of it as an implicit "±" sign between "Score" and "Score error". In fact, the human-readable log shows exactly that:
Result: 1.986 ±(99.9%) 0.009 ops/ns [Average]
Statistics: (min, avg, max) = (1.984, 1.986, 1.990), stdev = 0.002
Confidence interval (99.9%): [1.977, 1.995]
# Run complete. Total time: 00:00:12
Benchmark Mode Samples Score Score error Units
o.o.j.s.HelloWorld.hello thrpt 5 1.986 0.009 ops/ns
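
For intuition, the margin of error in such a report is typically computed as a Student's-t confidence half-interval over the benchmark's measurement samples. A minimal sketch that reconstructs the figure from the statistics printed in the log above (SciPy is assumed for the t quantile; the small gap to the printed 0.009 comes from the stdev being rounded to 0.002):

    from math import sqrt
    from scipy.stats import t

    n, avg, sd = 5, 1.986, 0.002    # Samples, mean, stdev from the log.
    level = 0.999                   # The 99.9% confidence level JMH reports.

    t_crit = t.ppf(1 - (1 - level) / 2, df=n - 1)   # Two-sided t quantile.
    margin = t_crit * sd / sqrt(n)

    print(f"{avg} +/-(99.9%) {margin:.3f}")                  # 1.986 +/-(99.9%) 0.008
    print(f"CI: [{avg - margin:.3f}, {avg + margin:.3f}]")   # ~[1.978, 1.994]

A score error close to the score itself (as in test5 and test6 above) means the run-to-run variance is too high to trust the mean; more samples or longer warmup usually tightens it.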
