What does KB/sec mean? - jmeter

We have got the following TOTAL row:
Label: 10
Average: 1288
Median: 1278
90%: 1525
95%: 1525
99%: 1546
Min: 887
Max: 1546
Throughput: 6.406149903907751
KB/sec: 39.21264413837284
What does KB/sec mean? Please help me understand it.

According to the Glossary
KB/s (Aggregate Report)
Throughput is measured in bytes and represents the amount of data that the virtual users received from the server. The Throughput KPI is measured in kilobytes (KB) per second.
So basically it is the average amount of data received by JMeter from the application under test per second.
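A quick back-of-the-envelope check, assuming JMeter's usual definition of KB/sec as total bytes received divided by elapsed seconds, divided by 1024 (the numbers below are copied from the report above):

# Hedged sketch: back out the average response size from the summary above,
# assuming KB/sec = (total bytes received / elapsed seconds) / 1024.
throughput_rps = 6.406149903907751   # requests per second (from the report)
kb_per_sec     = 39.21264413837284   # kilobytes per second (from the report)
avg_response_bytes = kb_per_sec * 1024 / throughput_rps
print(round(avg_response_bytes))     # ~6268 bytes received per request, on average

In other words, KB/sec ties the request throughput to the average size of the responses.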

KB/sec is the speed of a connection.
KB means kilobyte, and sec means per second.
You get faster speeds in MB/sec, which is megabytes per second, and even faster speeds in GB/sec, which is gigabytes per second.
1000 KB = 1 MB
1000 MB = 1 GB
Hope this helps :)

Related

Children vs parent output in iozone

I'm failing to understand what the iozone benchmark outputs.
Here I'm launching a basic read with 16 processes, each of them reading a 2048 KiB file, all at once.
I've aggressively disabled caching with echo 3 > /proc/sys/vm/drop_caches.
Results are the following:
Run began: Thu Apr 21 22:12:42 2022
File size set to 2048 kB
Record Size 2048 kB
Include close in write timing
Include fsync in write timing
Command line used: iozone -t 16 -s 2048 -r 2048 -ce -i 1
Output is in kBytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 kBytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 16 processes
Each process writes a 2048 kByte file in 2048 kByte records
Children see throughput for 16 readers = 1057899.00 kB/sec
Parent sees throughput for 16 readers = 559102.01 kB/sec
Min throughput per process = 0.00 kB/sec
Max throughput per process = 1057899.00 kB/sec
Avg throughput per process = 66118.69 kB/sec
Min xfer = 0.00 kB
Children see throughput for 16 re-readers = 948555.56 kB/sec
Parent sees throughput for 16 re-readers = 584476.30 kB/sec
Min throughput per process = 0.00 kB/sec
Max throughput per process = 948555.56 kB/sec
Avg throughput per process = 59284.72 kB/sec
Min xfer = 0.00 kB
I don't get why the 'children' bandwidth differs so much from the 'parent' bandwidth, nor why it seems that only one process has been used (Min throughput per process is 0.00 kB/sec, and Avg throughput per process is exactly Children see throughput for 16 readers / 16).
This SO question is roughly the same but the only answer is a bit vague.
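A quick check of the arithmetic noted above, using only the numbers from the report (this says nothing about iozone's internals, it just confirms the observation):

# The 'Avg throughput per process' line is exactly the children's aggregate figure
# divided by the number of processes, which is what the question observes.
children_total_kb_s = 1057899.00   # "Children see throughput for 16 readers"
processes = 16
print(children_total_kb_s / processes)   # 66118.6875 ~= 66118.69 kB/sec, the reported average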

Latency of accessing main memory is almost the same order as sending a packet

Looking at Jeff Dean's famous latency guides
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference 0.5 ns
Branch mispredict 5 ns
L2 cache reference 7 ns 14x L1 cache
Mutex lock/unlock 25 ns
Main memory reference 100 ns 20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy 3,000 ns 3 us
Send 1K bytes over 1 Gbps network 10,000 ns 10 us
Read 4K randomly from SSD* 150,000 ns 150 us ~1GB/sec SSD
Read 1 MB sequentially from memory 250,000 ns 250 us
Round trip within same datacenter 500,000 ns 500 us
Read 1 MB sequentially from SSD* 1,000,000 ns 1,000 us 1 ms ~1GB/sec SSD, 4X memory
Disk seek 10,000,000 ns 10,000 us 10 ms 20x datacenter roundtrip
Read 1 MB sequentially from disk 20,000,000 ns 20,000 us 20 ms 80x memory, 20X SSD
Send packet CA->Netherlands->CA 150,000,000 ns 150,000 us 150 ms
One thing which looks somewhat uncanny to me is that reading 1 MB sequentially from disk is only about 10 times faster than sending a round-trip packet across the Atlantic. Can anyone give me more intuition for why this feels right?
Q : 1MB SEQ-HDD-READ ~ 10x faster than a CA/NL trans-atlantic RTT - why this feels right?
Some "old" values ( with a few cross-QPI/NUMA updates from 2017 ) to start from:
0.5 ns - CPU L1 dCACHE reference
1 ns - speed-of-light (a photon) travel a 1 ft (30.5cm) distance
5 ns - CPU L1 iCACHE Branch mispredict
7 ns - CPU L2 CACHE reference
71 ns - CPU cross-QPI/NUMA best case on XEON E5-46*
100 ns - MUTEX lock/unlock
100 ns - CPU own DDR MEMORY reference
135 ns - CPU cross-QPI/NUMA best case on XEON E7-*
202 ns - CPU cross-QPI/NUMA worst case on XEON E7-*
325 ns - CPU cross-QPI/NUMA worst case on XEON E5-46*
10,000 ns - Compress 1 KB with Zippy PROCESS (+GHz,+SIMD,+multicore tricks)
20,000 ns - Send 2 KB over 1 Gbps NETWORK
250,000 ns - Read 1 MB sequentially from MEMORY
500,000 ns - Round trip within a same DataCenter
10,000,000 ns - DISK seek
10,000,000 ns - Read 1 MB sequentially from NETWORK
30,000,000 ns - Read 1 MB sequentially from DISK
150,000,000 ns - Send a NETWORK packet CA -> Netherlands
( digit columns above, read right to left: ns | us | ms )
Trans-Atlantic Network RTT :
Global optical networks work roughly at the speed of light (300,000,000 m/s).
An LA(CA)-AMS(NL) packet does not travel the geodesic "distance", but over a set of continental and trans-Atlantic "submarine" cables, the length of which is way longer (see the map).
These factors do not "improve": only the transport capacity grows, while the add-on latencies introduced by light amplifiers, retiming units and other L1-PHY / L2-/L3-networking technologies are kept under control, as small as possible.
So the LA(CA)-AMS(NL) RTT will remain, using this technology, the same ~ 150 ms.
Using another technology, LEO-sat constellations for example, the "distance" will only grow from ~9,000 km point-to-point: a pair of additional GND/LEO segments plus a few additional LEO/LEO hops introduce a longer path and add per-hop re-processing latencies, while the capacity will not get anywhere close to the optical transports available today, so no magic jump "back to the future" is to be expected (we still miss the DeLorean).
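As a back-of-the-envelope check (a sketch, assuming the ~9,000 km one-way path mentioned above and light travelling in glass at roughly 2/3 of its vacuum speed), propagation delay alone already accounts for a large share of the ~150 ms RTT:

# Rough lower bound on the LA(CA)-AMS(NL) round trip from propagation delay alone.
# Assumptions (not measurements): ~9,000 km one-way cable path, ~2e8 m/s in fibre.
path_m  = 9_000_000          # one-way path length in metres (assumed)
v_fibre = 2.0e8              # propagation speed in optical fibre, m/s (approximate)
rtt_ms  = 2 * path_m / v_fibre * 1000
print(rtt_ms)                # ~90 ms before any routing, amplification or queueing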
The HDD Disk :
HDDs can have a very fast and very short transport path for moving the data, but the READ ops have to wait for the physical / mechanical operations of the media-reading heads (that is what takes most of the time here, not the actual data transfer to the host RAM).
HDDs are rotational devices; the disk has to "align" to where the read starts, which costs about the first 10 [ms].
HDD devices store data in a static structure of heads (2+, reading physical signals from the magnetic platters' surfaces) : cylinders (concentric circular zones on the platter, onto which a cyl-aligned reading head gets settled by the disk's head micro-controller) : sectors (angular sections of a cylinder, each carrying a block of data of the same size, ~4 KB, 8 KB, ...).
These factors do not "improve" either: all commodity drives remain at industry-selected angular speeds of about { 5k4 | 7k2 | 10k | 15k | 18k } spins/min (RPM). This means that, if a well-compacted data layout is maintained on such a disk, one continuous head:cylinder-aligned read round the whole cylinder will take:
>>> [ 1E3 / ( RPM / 60. ) for RPM in ( 5400, 7200, 10000, 15000, 18000 ) ]
11.1 ms per CYL # 5k4 RPM disk,
8.3 ms per CYL # 7k2 RPM disk,
6.0 ms per CYL # 10k RPM disk,
4.0 ms per CYL # 15k RPM disk,
3.3 ms per CYL # 18k RPM disk.
Data density is also limited by the magnetic media's properties. Spintronics R&D may bring more densely stored data, yet the last 30 years have stayed well inside the limits of reliable magnetic storage.
More could be expected from a trick of co-parallel reads from several heads at once, yet this goes against the design of the embedded micro-controllers, so most of the reading goes sequentially, from one head after another, into the HDD controller's onboard buffers, best if no cyl-to-cyl mechanical head re-alignment has to take place. (Technically this depends on the prior data-to-disk layout, maintained by the O/S and possibly by disk optimisers, originally called disk "compression" tools, which just tried to re-align the known sequences of FAT-described data blocks so as to follow the most optimal trajectory of head:cyl:sector transitions, depending mostly on the actual device's head:head and cyl:cyl latencies.) So even the most optimistic data layout takes ~13..21 [ms] to seek-and-read just one head:cyl path, as the sketch below shows.
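Putting the pieces together (a sketch that only combines the ~10 ms seek figure and the per-cylinder rotation times computed above):

# Seek (~10 ms) plus one full revolution to read a whole cylinder's worth of data,
# for the common spindle speeds listed above.
seek_ms = 10.0
for rpm in (5400, 7200, 10000, 15000, 18000):
    rev_ms = 1e3 / (rpm / 60.)               # time for one full revolution
    print(rpm, round(seek_ms + rev_ms, 1))   # ~21.1, 18.3, 16.0, 14.0, 13.3 [ms]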
Laws of Physics decide
Some numbers from 2020.
Load from L1 is 4 cycles on Intel Coffee Lake and Ryzen (0.8 ns on a 5 GHz CPU).
Load from memory is ~215 cycles on Intel Coffee Lake (43 ns on a 5 GHz CPU); ~280 cycles on Ryzen.

Increase apache requests per second

I want to increase Apache's requests-per-second figure.
I'm using Apache Bench (ab) to measure it, and it's not going over 500.
ab -n 100 -c 100 http://localhost/
This is the command I'm using; it gives me about 500 RPS:
Concurrency Level: 100
Time taken for tests: 0.212 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 17925 bytes
HTML transferred: 900 bytes
Requests per second: 472.05 [#/sec] (mean)
Time per request: 211.843 [ms] (mean)
Time per request: 2.118 [ms] (mean, across all concurrent requests)
Transfer rate: 82.63 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 9 9 0.2 9 9
Processing: 20 150 36.8 160 200
Waiting: 19 148 36.6 159 200
Total: 30 159 36.8 169 209
Percentage of the requests served within a certain time (ms)
50% 169
66% 176
75% 182
80% 187
90% 200
95% 206
98% 209
99% 209
100% 209 (longest request)
This is the whole output.
I'm using the worker MPM for this, with the config:
<IfModule mpm_worker_module>
ServerLimit 200
StartServers 200
MaxClients 5000
MinSpareThreads 1500
MaxSpareThreads 2000
ThreadsPerChild 64
MaxRequestsPerChild 0
</IfModule>
I suppose these are pretty high figures; nevertheless, I keep increasing them and nothing seems to change.
The application itself doesn't contain anything; it only prints 'Hello World' with CherryPy.
I want to increase it to something like 2000 RPS; my RAM is 5 GB (using a VM).
The numbers you've set in your configuration look wrong - but the only way to get the right numbers is by measuring how your system behaves with real traffic.
Measuring response time across the loopback interface is not very meaningful. Measuring response time for a single URL is not very meaningful. Measuring response time with a load generator running on the same machine as the webserver is not very meaningful.
Making your site go faster / increasing the capacity is very difficult and needs much more testing, data and analysis than is appropriate for this forum.
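For what it's worth, the figures ab printed above are internally consistent; a quick check using only the numbers from that run:

# ab's requests-per-second is roughly concurrency / mean time per request,
# and the transfer rate is total bytes / elapsed time (numbers copied from the run above).
concurrency = 100
mean_time_per_request_s = 0.211843      # "Time per request (mean)"
total_transferred_bytes = 17925
elapsed_s = 0.212                       # "Time taken for tests"
print(concurrency / mean_time_per_request_s)        # ~472 requests/sec
print(total_transferred_bytes / elapsed_s / 1024)   # ~82.6 KB/sec received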

Calculating CPU Performance in MIPS

I was taking an exam earlier and memorized the questions that I didn't know how to answer but somehow got correct (the online exam, done through an electronic classroom (eclass), was multiple choice; it was coded so each of us was given random questions at random numbers, with random answers on random choices).
Anyway, back to my questions.
1.)
There is a CPU with a clock frequency of 1 GHz. When the instructions consist of two
types as shown in the table below, what is the performance in MIPS of the CPU?
              Execution time (clocks)   Frequency of appearance (%)
Instruction 1            10                         60
Instruction 2            15                         40
Answer: 125
2.)
There is a hard disk drive with specifications shown below. When a record of 15
Kbytes is processed, which of the following is the average access time in milliseconds?
Here, the record is stored in one track.
[Specifications]
Capacity: 25 Kbytes/track
Rotation speed: 2,400 revolutions/minute
Average seek time: 10 milliseconds
Answer: 37.5
3.)
Assume a magnetic disk has a rotational speed of 5,000 rpm, and an average seek time of 20 ms. The recording capacity of one track on this disk is 15,000 bytes. What is the average access time (in milliseconds) required in order to transfer one 4,000-byte block of data?
Answer: 29.2
4.)
When a color image is stored in video memory at a tonal resolution of 24 bits per pixel,
approximately how many megabytes (MB) are required to display the image on the
screen with a resolution of 1024 x 768 pixels? Here, 1 MB is 10^6 bytes.
Answer: 18.9
5.)
When a microprocessor works at a clock speed of 200 MHz and the average CPI
(“cycles per instruction” or “clocks per instruction”) is 4, how long does it take to
execute one instruction on average?
Answer: 20 nanoseconds
I don't expect someone to answer everything, and they are indeed already answered, but I am wondering and wanting to know how those answers were arrived at. Knowing the answer is not enough for me; I've tried solving them myself, trial-and-error style, but it takes minutes to hours, so I need some professional help.
1.)
n = 1/f = 1 / 1 GHz = 1 ns.
n*10 * 0.6 + n*15 * 0.4 = 12 ns (= average instruction time), so 1 / 12 ns ≈ 83.3 MIPS.
2.)3.)
I don't get these, honestly.
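One way to reach the stated answers for 2.) and 3.) is the usual access-time model, average access time = average seek + average rotational latency (half a revolution) + transfer time for the record; a sketch under that assumption:

# Problem 2: 25 KB/track, 2,400 rpm, 10 ms average seek, 15 KB record.
rev_ms = 60_000 / 2400                                       # 25 ms per revolution
print(10 + rev_ms / 2 + (15 / 25) * rev_ms)                  # 10 + 12.5 + 15 = 37.5 ms

# Problem 3: 15,000 bytes/track, 5,000 rpm, 20 ms average seek, 4,000-byte block.
rev_ms = 60_000 / 5000                                       # 12 ms per revolution
print(round(20 + rev_ms / 2 + (4000 / 15000) * rev_ms, 1))   # 20 + 6 + 3.2 = 29.2 ms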
4.)
Here, 1 MB is 10^6 bytes.
3 Bytes * 1024 * 768 = 2359296 Bytes = 2.36 MB
But often these 24 bits are packed into 32 bits b/c of the memory layout (word width), so often it will be 4 Bytes*1024*768 = 3145728 Bytes = 3.15 MB.
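A hedged side note on why the stated answer of 18.9 differs from the 2.36 MB above: 1024 * 768 * 24 bits = 18,874,368 bits ≈ 18.9 * 10^6 bits, so 18.9 looks like the same calculation expressed in megabits rather than megabytes.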
5)
CPI / f = 4 / 200 MHz = 20 ns.

Cassandra Amazon EC2 , lots of IOWait

We have the following stats for a single-node Cassandra on an Amazon EC2 / RightScale m1.large instance with 2 ephemeral disks in RAID 0 (7.6 GB total memory).
4 GB of RAM is allocated to the Cassandra heap; 800 MB is the heap NEW size.
The following stats are from OpsCenter Community 2.0:
Read Requests 285 to 340 per second
Write Requests 257 to 720 per second
OS Load 15.15 to 17.15
Write Request Latency 293 to 685 micros
OS Sent Network Traffic 18 MB to 30 MB per second
OS Received Network Traffic 22 MB to 34 MB per second
OS Disk Queue Size 23 to 26 requests
Read Requests Pending 8 to 20
Read Request Latency 69140 to 92885 micros
OS Disk latency 37 to 42 ms
OS Disk Throughput 12 to 14 Mb per second
Disk IOPs Reads 600 to 740 per second
Disk IOPs Writes 2 to 7 per second
IOWait 60 to 70 % CPU avg
Idle 24 to 30 % CPU avg
Rowcache is disabled.
Are the above stats satisfactory for the provided configuration, or how could we tweak it further to get less IOWait? We think we are experiencing a lot of IOWait; how could we tune it to get the best out of this setup?
Read requests are mixed: some hit a super column family and some a standard one, each having more than a million keys, with a varying number of super columns (max 14), a varying number of subcolumns (1 to 10,000), and a varying number of columns (max 14) in the standard column family. Subcolumns are very thin in nature, with 0-byte values and 8-byte names.
The process removes the data from the super column family and writes the processed data to the standard one.
Would EBS disks work better on Amazon EC2?
I'm not positive whether you can tweak your config easily to get more disk performance, but using Snappy compression could help a good deal in making your app need to read less overall. It may also help to use the new composite key layout instead of supercolumns.
One thing I can say for sure: EBS will NOT work better. Stay away from that at all costs if you care about latency.
