Which field in Apache Bench is the Response-Time?

I am trying to interpret the fields of the Apache Bench output, but I can't work out which field indicates the response time. Can you help me find it?
Document Path: /
Document Length: 45563 bytes
Concurrency Level: 2
Time taken for tests: 3.955 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 4625489 bytes
HTML transferred: 4556300 bytes
Requests per second: 25.29 [#/sec] (mean)
Time per request: 79.094 [ms] (mean)
Time per request: 39.547 [ms] (mean, across all concurrent requests)
Transfer rate: 1142.21 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 40 53 8.1 51 99
Processing: 12 24 9.4 23 98
Waiting: 5 14 10.6 12 95
Total: 57 77 15.0 75 197
Percentage of the requests served within a certain time (ms)
50% 75
66% 77
75% 80
80% 81
90% 85
95% 92
98% 116
99% 197
100% 197 (longest request)
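For what it's worth, both "Time per request" lines can be derived from the totals above; a small Python sketch of the arithmetic, using the definitions from ab's documentation:

# Values taken from the ab output above
concurrency = 2
time_taken = 3.955   # seconds, "Time taken for tests"
requests = 100       # "Complete requests"

# "Time per request (mean)": wall-clock time scaled by the concurrency level
print(concurrency * time_taken / requests * 1000)   # ~79.1 ms, matches 79.094

# "Time per request (mean, across all concurrent requests)": inverse throughput
print(time_taken / requests * 1000)                 # ~39.55 ms, matches 39.547

The per-request latency itself is summarized in the Connection Times table; its "Total" row (mean 77 ms here) is what usually corresponds to response time.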

Related

Description ENGINE LOG in IBM ILOG CPLEX

I want to understand the engine log of IBM ILOG CPLEX Studio for an ILP model. I have checked their documentation as well, but could not get a clear idea.
Example of an engine log:
Version identifier: 22.1.0.0 | 2022-03-09 | 1a383f8ce
Legacy callback pi
Tried aggregator 2 times.
MIP Presolve eliminated 139 rows and 37 columns.
MIP Presolve modified 156 coefficients.
Aggregator did 11 substitutions.
Reduced MIP has 286 rows, 533 columns, and 3479 nonzeros.
Reduced MIP has 403 binaries, 0 generals, 0 SOSs, and 129 indicators.
Presolve time = 0.05 sec. (6.16 ticks)
Found incumbent of value 233.000000 after 0.07 sec. (9.40 ticks)
Probing time = 0.00 sec. (1.47 ticks)
Tried aggregator 2 times.
Detecting symmetries...
Aggregator did 2 substitutions.
Reduced MIP has 284 rows, 531 columns, and 3473 nonzeros.
Reduced MIP has 402 binaries, 129 generals, 0 SOSs, and 129 indicators.
Presolve time = 0.01 sec. (2.87 ticks)
Probing time = 0.00 sec. (1.45 ticks)
Clique table members: 69.
MIP emphasis: balance optimality and feasibility.
MIP search method: dynamic search.
Parallel mode: deterministic, using up to 8 threads.
Root relaxation solution time = 0.00 sec. (0.50 ticks)
Node  Left  Objective  IInf  Best Integer  Cuts/Best Bound  ItCnt  Gap
* 0+ 0 233.0000 18.0000 92.27%
* 0+ 0 178.0000 18.0000 89.89%
* 0+ 0 39.0000 18.0000 53.85%
0 0 22.3333 117 39.0000 22.3333 4 42.74%
0 0 28.6956 222 39.0000 Cuts: 171 153 26.42%
0 0 31.1543 218 39.0000 Cuts: 123 251 20.12%
0 0 32.1544 226 39.0000 Cuts: 104 360 17.55%
0 0 32.6832 212 39.0000 Cuts: 102 456 16.20%
0 0 33.1524 190 39.0000 Cuts: 65 521 14.99%
Detecting symmetries...
0 0 33.3350 188 39.0000 Cuts: 66 566 14.53%
0 0 33.4914 200 39.0000 Cuts: 55 614 14.12%
0 0 33.6315 197 39.0000 Cuts: 47 673 13.77%
0 0 33.6500 207 39.0000 Cuts: 61 787 13.72%
0 0 33.7989 206 39.0000 Cuts: 91 882 13.34%
* 0+ 0 38.0000 33.7989 11.06%
0 0 33.9781 209 38.0000 Cuts: 74 989 10.58%
0 0 34.0074 209 38.0000 Cuts: 65 1043 10.51%
0 0 34.2041 220 38.0000 Cuts: 63 1124 9.99%
0 0 34.2594 211 38.0000 Cuts: 96 1210 9.84%
0 0 34.3032 216 38.0000 Cuts: 86 1274 9.73%
0 0 34.3411 211 38.0000 Cuts: 114 1353 9.63%
0 0 34.3420 220 38.0000 Cuts: 82 1402 9.63%
0 0 34.3709 218 38.0000 Cuts: 80 1462 9.55%
0 0 34.4494 228 38.0000 Cuts: 87 1530 9.34%
0 0 34.4882 229 38.0000 Cuts: 97 1616 9.24%
0 0 34.5173 217 38.0000 Cuts: 72 1663 9.16%
0 0 34.5545 194 38.0000 Cuts: 67 1731 9.07%
0 0 34.5918 194 38.0000 Cuts: 76 1786 8.97%
0 0 34.6094 199 38.0000 Cuts: 73 1840 8.92%
0 0 34.6226 206 38.0000 Cuts: 77 1883 8.89%
0 0 34.6421 206 38.0000 Cuts: 53 1928 8.84%
0 0 34.6427 213 38.0000 Cuts: 84 1982 8.83%
Detecting symmetries...
0 2 34.6427 213 38.0000 34.6478 1982 8.82%
Elapsed time = 0.44 sec. (235.86 ticks, tree = 0.02 MB, solutions = 4)
GUB cover cuts applied: 32
Cover cuts applied: 328
Implied bound cuts applied: 205
Flow cuts applied: 11
Mixed integer rounding cuts applied: 17
Zero-half cuts applied: 35
Gomory fractional cuts applied: 1
Root node processing (before b&c):
Real time = 0.43 sec. (235.61 ticks)
Parallel b&c, 8 threads:
Real time = 0.27 sec. (234.23 ticks)
Sync time (average) = 0.11 sec.
Wait time (average) = 0.00 sec.
------------
Total (root+branch&cut) = 0.71 sec. (469.84 ticks)
Mainly I want to understand what Nodes, Left, Gap, root node processing, and parallel b&c are.
I hope someone will point to a resource or explain these clearly, so that this can help anyone starting out with IBM ILOG CPLEX Studio in the future.
Thanks a lot in advance.
I am expecting someone to fill the knowledge gaps regarding the engine log of IBM's ILOG CPLEX Studio.
I recommend
Progress reports: interpreting the node log
https://www.ibm.com/docs/en/icos/12.8.0.0?topic=mip-progress-reports-interpreting-node-log
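To make one of those columns concrete: Nodes and Left count the branch-and-bound nodes processed and still open, "Root node processing" is the time spent before branching starts, and "Parallel b&c" is the branch-and-cut phase run across the 8 threads. The Gap column is the relative difference between the best known integer solution (Best Integer) and the best proven bound (Best Bound). A minimal Python sketch, assuming the usual relative-gap formula |incumbent - bound| / |incumbent| (CPLEX's actual implementation adds a tiny epsilon to the denominator):

def mip_gap(best_integer, best_bound):
    # Relative MIP gap as shown in the node log (assumed formula)
    return abs(best_integer - best_bound) / abs(best_integer)

print(f"{mip_gap(233.0, 18.0):.2%}")     # 92.27%, first starred line of the log
print(f"{mip_gap(39.0, 22.3333):.2%}")   # 42.74%, first node-0 line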

Cumulative sum by category with DAX (Without Date dimension)

This is the input data (let's suppose I have 14 different products). I need to calculate the cumulative total of products by Status with DAX.
ProductID    Days Since LastPurchase    Status
307255900    76                         60 - 180 days
525220000    59                         30 - 60 days
209500000    20                         < 30 days
312969600    151                        60 - 180 days
249300000    52                         30 - 60 days
210100000    52                         30 - 60 days
304851400    150                        60 - 180 days
304851600    150                        60 - 180 days
314152700    367                        > 180 days
405300000    90                         60 - 180 days
314692300    90                         60 - 180 days
314692400    53                         30 - 60 days
524270000    213                        > 180 days
524280000    213                        > 180 days
Desired output:

Status           Cumulative Count
< 30 days        1
> 180 days       4
30 - 60 days     8
60 - 180 days    14
That's trivial: just take the built-in Quick measure "Running total". The resulting table matches the desired output above.
However, when you think about it, from a data point of view a sort order that follows the day ranges (< 30, 30 - 60, 60 - 180, > 180 days) makes more sense than ordering "Status" alphabetically,
and ultimately you can run the cumulative count straight over Days Since LastPurchase without any crude categorization.
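If you want to sanity-check the expected numbers outside Power BI, here is a small pandas sketch (Python rather than DAX; the status order is taken from the desired output above):

import pandas as pd

# One Status entry per product, copied from the input table
statuses = [
    "60 - 180 days", "30 - 60 days", "< 30 days", "60 - 180 days",
    "30 - 60 days", "30 - 60 days", "60 - 180 days", "60 - 180 days",
    "> 180 days", "60 - 180 days", "60 - 180 days", "30 - 60 days",
    "> 180 days", "> 180 days",
]
order = ["< 30 days", "> 180 days", "30 - 60 days", "60 - 180 days"]

counts = pd.Series(statuses).value_counts().reindex(order)
print(counts.cumsum())   # 1, 4, 8, 14 -- the desired cumulative counts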

How does the execution time drop sharply (more than expected) as the number of processors increases?

I am executing my programme 5,000,000 times in parallel using "Parallel.For" from F#.
The average execution time per task is given below.
Number of active cores : Execution Time (microseconds)
2 : 866
4 : 424
8 : 210
12 : 140
16 : 106
24 : 76
32 : 60
Given that, by doubling the number of cores, the maximum speedup we can get should be at most 2 (ideally exactly 2),
what can be the reason for this sharp speedup?
Multiply each core count by its per-task time to see the total core-microseconds consumed per task:
2 * 866 = 1732
4 * 424 = 1696
8 * 210 = 1680
12 * 140 = 1680
16 * 106 = 1696
24 * 76 = 1824
32 * 60 = 1920
So as you increase parallelism, relative performance improves at first and then begins to fall. The early improvement is possibly due to amortization of one-time overhead costs such as JIT compilation or the machinery that manages the parallelization.
The degradation as the degree of parallelism increases is often due to some sort of resource contention, excessive context switching, or the like.
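To see where the better-than-ideal region ends, compare each measured speedup against the ideal implied by the core ratio; a quick Python sketch over the numbers above:

# Per-task times (microseconds) by active core count, from the question
times = {2: 866, 4: 424, 8: 210, 12: 140, 16: 106, 24: 76, 32: 60}

cores = sorted(times)
for a, b in zip(cores, cores[1:]):
    measured = times[a] / times[b]   # speedup actually observed going a -> b cores
    ideal = b / a                    # best possible from the extra cores alone
    print(f"{a:2} -> {b:2} cores: measured {measured:.2f}x vs ideal {ideal:.2f}x")

The first two doublings come out just above their ideal 2x (the superlinear region the question asks about); beyond 8 cores the measured speedup meets and then falls below the ideal.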

How to calculate Total average response time

Below are the results
sampler_label   count   average   median   90%_line   min     max
Transaction1    2       61774     61627    61921      61627   61921
Transaction2    4       82        61       190        15      190
Transaction3    4       1862      1317     3612       1141    3612
Transaction4    4       1242      915      1602       911     1602
Transaction5    4       692       608      906        423     906
Transaction6    4       2764      2122     4748       1182    4748
Transaction7    4       9369      9029     11337      7198    11337
Transaction8    4       1245      890      2168       834     2168
Transaction9    4       3475      2678     4586       2520    4586
TOTAL           34      6073      1381     9913       15      61921
My question here is: how is the total average response time (6073) calculated?
In my results I want to exclude Transaction1's response time and then calculate the total average response time.
How can I do that?
Total avg response time = (s1*t1 + s2*t2 + ...) / s
s1 = number of times Transaction1 was executed
t1 = average response time for Transaction1
s2 = number of times Transaction2 was executed
t2 = average response time for Transaction2
s = total number of samples (s1 + s2 + ...)
In your case, all transactions except Transaction1 were executed 4 times, so the simple average of (82, 1862, 1242, ...) gives the result you want.
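The same computation as a short Python sketch, with the (count, average) pairs taken from the table above:

# (count, average response time) per sampler, from the results table
samples = {
    "Transaction1": (2, 61774), "Transaction2": (4, 82),
    "Transaction3": (4, 1862),  "Transaction4": (4, 1242),
    "Transaction5": (4, 692),   "Transaction6": (4, 2764),
    "Transaction7": (4, 9369),  "Transaction8": (4, 1245),
    "Transaction9": (4, 3475),
}

def weighted_avg(rows):
    # Sum of count*average divided by total count
    return sum(c * t for c, t in rows) / sum(c for c, _ in rows)

print(weighted_avg(samples.values()))   # ~6073, the reported TOTAL
rest = [v for k, v in samples.items() if k != "Transaction1"]
print(weighted_avg(rest))               # ~2591, total average without Transaction1

Since every remaining sampler ran exactly 4 times, this weighted average equals the simple average of the eight remaining per-transaction averages.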

response time in apache bench

Which fields are the max response time, the min response time, and the average response time in the Apache Bench result?
This is an example. In the Apache Bench result, "Time per request" is 689.946:
Time per request: 689.946 [ms] (mean)
I think that:
"Time per request" = average response time
min "Total" = smallest response time
max "Total" = largest response time
However, why is "Time per request" (689.946) larger than the max Total (680, the longest request)?
Concurrency Level: 1000
Time taken for tests: 0.690 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 355000 bytes
HTML transferred: 7000 bytes
Requests per second: 1449.39 [#/sec] (mean)
Time per request: 689.946 [ms] (mean)
Time per request: 0.690 [ms] (mean, across all concurrent requests)
Transfer rate: 502.47 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 10 11.8 0 30
Processing: 7 110 186.3 19 658
Waiting: 6 110 186.4 19 658
Total: 9 120 194.1 20 680
Percentage of the requests served within a certain time (ms)
50% 20
66% 54
75% 243
80% 248
90% 254
95% 675
98% 678
99% 679
100% 680 (longest request)
The "Time per request" and the "Total mean", which one should be the average response time?
Thanks.
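For what it's worth, the 689.946 figure follows from ab's arithmetic: "Time per request (mean)" is the concurrency level times the wall-clock test time divided by the number of completed requests, so it is a derived figure and can exceed the latency of any single request. A minimal Python check against this output:

# Values from the ab output above
concurrency, time_taken, requests = 1000, 0.690, 1000

# "Time per request (mean)": wall-clock time scaled by concurrency --
# not the latency of any individual request
print(concurrency * time_taken / requests * 1000)   # 690.0 ms, matches 689.946

# "Time per request (mean, across all concurrent requests)": inverse throughput
print(time_taken / requests * 1000)                 # 0.69 ms, matches 0.690

So between the two, the Connection Times "Total" mean (120 ms here) is the better reading of average response time; "Time per request" is a throughput-derived figure.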
