I'm failing to understand the output of the iozone benchmark.
Here I'm launching a basic read test with 16 processes, each of them reading a 2048 KiB file, all at once.
I've dropped the page cache beforehand with echo 3 > /proc/sys/vm/drop_caches.
Results are the following:
Run began: Thu Apr 21 22:12:42 2022
File size set to 2048 kB
Record Size 2048 kB
Include close in write timing
Include fsync in write timing
Command line used: iozone -t 16 -s 2048 -r 2048 -ce -i 1
Output is in kBytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 kBytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 16 processes
Each process writes a 2048 kByte file in 2048 kByte records
Children see throughput for 16 readers = 1057899.00 kB/sec
Parent sees throughput for 16 readers = 559102.01 kB/sec
Min throughput per process = 0.00 kB/sec
Max throughput per process = 1057899.00 kB/sec
Avg throughput per process = 66118.69 kB/sec
Min xfer = 0.00 kB
Children see throughput for 16 re-readers = 948555.56 kB/sec
Parent sees throughput for 16 re-readers = 584476.30 kB/sec
Min throughput per process = 0.00 kB/sec
Max throughput per process = 948555.56 kB/sec
Avg throughput per process = 59284.72 kB/sec
Min xfer = 0.00 kB
I don't get why the 'children' bandwidth differs so much from the 'parent' bandwidth, nor why it seems that only one process was actually used (Min throughput per process is 0.00 kB/sec, and Avg throughput per process equals 'Children see throughput for 16 readers' divided by 16).
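As a quick sanity check on that suspicion, here is a small Python sketch using only the figures from the report above:

# Figures copied from the iozone report above.
processes = 16
children_kb_s = 1057899.00   # "Children see throughput for 16 readers"
avg_kb_s = 66118.69          # "Avg throughput per process"
max_kb_s = 1057899.00        # "Max throughput per process"

# Avg per process is exactly the children figure divided by 16,
# and Max per process equals the aggregate children figure.
print(children_kb_s / processes)   # 66118.6875 -> matches the reported Avg
print(max_kb_s == children_kb_s)   # True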
This SO question is roughly the same but the only answer is a bit vague.
I have a Redis standalone instance in production. Earlier, 8 instances of my application, each having 64 Redis connections (total 12*64), at a rate of 2000 QPS per instance, would give me a latency of < 10 ms (which I am fine with). Due to an increase in traffic, I had to increase the number of application instances to 16, while also decreasing the connection count per instance from 128 to 16 (total 16*16 = 256). This was done after benchmarking with the memtier benchmark, as below:
12 Threads
64 Connections per thread
2000 Requests per thread
ALL STATS
========================================================================
Type Ops/sec Hits/sec Misses/sec Latency KB/sec
------------------------------------------------------------------------
Sets 0.00 --- --- 0.00000 0.00
Gets 79424.54 516.26 78908.28 9.90400 2725.45
Waits 0.00 --- --- 0.00000 ---
Totals 79424.54 516.26 78908.28 9.90400 2725.45
16 Threads
16 Connections per thread
2000 Requests per thread
ALL STATS
========================================================================
Type Ops/sec Hits/sec Misses/sec Latency KB/sec
------------------------------------------------------------------------
Sets 0.00 --- --- 0.00000 0.00
Gets 66631.87 433.11 66198.76 3.32800 2286.47
Waits 0.00 --- --- 0.00000 ---
Totals 66631.87 433.11 66198.76 3.32800 2286.47
Redis benchmark gave similar results.
However, when I made this change in production (16*16), the latency shot back up to 60-70 ms. I thought the connection count provisioned was too low (which seemed unlikely), so I went back to 64 connections (64*16), which, as expected, increased the latency further. For now, I have half of my applications hitting the master Redis and the other half connected to the slave, each with 64 connections (8*64 to master, 8*64 to slave), and this works for me (8-10 ms latency).
What could have gone wrong that the latency increased with 256 (16*16) connections but was reduced with 512 (64*8) connections, even though the benchmark says otherwise? I agree the benchmark shouldn't be fully trusted, but even as a guideline, these are polar-opposite results.
Note: 1. The application and Redis are colocated, so there is no network latency; Redis memory usage is about 40% and the fragmentation ratio is about 1.4. The application uses Jedis for connection pooling. 2. The latency does not include the overhead of a Redis miss; only the Redis round trip is considered.
I've recently studied in my syllabus that Kb refers to kilobits whereas KB refers to kilobytes. I've also studied that Kb is typically used for speeds while KB is used for file sizes. So, according to what I've studied, I should be able to download a 1 MB file in 8 seconds at a speed of 1 Mbps, as 1 MB equals 8 Mb. But I can download that file in just 1 second at a speed of 1 Mbps. How is that possible?
You are correct until the last statement.
You can download a 1 MB file in 8 sec at 1 Mb/s or 1 sec at 1 MB/s.
8 Mb = 1 MB.
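A minimal Python sketch of that conversion, just restating the arithmetic above:

FILE_MB = 1               # file size in megabytes
BITS_PER_BYTE = 8

file_megabits = FILE_MB * BITS_PER_BYTE   # 1 MB = 8 Mb

# Download time = size / speed, keeping the units consistent.
print(file_megabits / 1)  # at 1 Mb/s -> 8.0 seconds
print(FILE_MB / 1)        # at 1 MB/s -> 1.0 seconds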
Can someone let me know what the unit of maximum resident set size is in the output below?
/usr/bin/time -l mvn clean package -T 7 -DskipTests
...
real 530.51
user 837.49
sys 64.28
3671834624 maximum resident set size
0 average shared memory size
0 average unshared data size
0 average unshared stack size
2113909 page reclaims
26733 page faults
0 swaps
5647 block input operations
26980 block output operations
15 messages sent
25 messages received
687 signals received
406533 voluntary context switches
1319461 involuntary context switches
I am trying to measure peak memory usage of a process as mentioned here.
Environment: Mac OS X Sierra (10.12.5)
The unit of maximum resident set size is bytes: on macOS, /usr/bin/time -l reports it in bytes (so 3671834624 bytes is roughly 3.4 GiB), whereas on Linux the same getrusage field is reported in kilobytes.
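If you want to read the same figure programmatically, a minimal sketch using Python's resource module (the platform check reflects the bytes-vs-kilobytes difference mentioned above):

import resource
import sys

# ru_maxrss is the peak resident set size of this process:
# reported in bytes on macOS, in kilobytes on Linux.
usage = resource.getrusage(resource.RUSAGE_SELF)
to_bytes = 1 if sys.platform == "darwin" else 1024
print(f"peak RSS: {usage.ru_maxrss * to_bytes / 2**20:.1f} MiB")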
We have got this TOTAL:
Label: 10
Average: 1288
Median: 1278
90%: 1525
95%: 1525
99%: 1546
Min: 887
Max: 1546
Throughput: 6.406149903907751
KB/sec: 39.21264413837284
What does KB/sec mean? Please help me understand it.
According to the Glossary
KB/s (Aggregate Report)
Throughput is measured in bytes and represents the amount of data that the Virtual users received from the server. The Throughput KPI is measured in kilobytes (KB) per second.
So basically it is the average amount of data received by JMeter from the application under test per second.
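As a rough cross-check against the TOTAL row above (a sketch; it assumes the KB/sec figure is simply the bytes received per second divided by 1024):

throughput_rps = 6.406149903907751   # requests per second, from the report
kb_per_sec = 39.21264413837284       # from the report

# Average payload per response implied by the two figures.
avg_kb_per_response = kb_per_sec / throughput_rps
print(f"{avg_kb_per_response:.2f} KB per response")   # ~6.12 KB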
KB/sec is the speed of a connection.
KB means kilobyte and sec means second.
You get faster speeds in MB/sec, which is megabytes per second, and even faster speeds in GB/sec, which is gigabytes per second.
1000 KB = 1 MB
1000 MB = 1 GB
Hope this helps :)
I was taking an exam earlier and memorized the questions that I didn't know how to answer but somehow got correct (the online exam, via an electronic classroom (eclass), was multiple choice; it was coded so each of us was given random questions at random numbers, with the answers in random order, so yeah).
Anyway, back to my questions:
1.)
There is a CPU with a clock frequency of 1 GHz. When the instructions consist of two
types as shown in the table below, what is the performance in MIPS of the CPU?
                Execution time (clocks)   Frequency of appearance (%)
Instruction 1   10                        60
Instruction 2   15                        40
Answer: 125
2.)
There is a hard disk drive with specifications shown below. When a record of 15
Kbytes is processed, which of the following is the average access time in milliseconds?
Here, the record is stored in one track.
[Specifications]
Capacity: 25 Kbytes/track
Rotation speed: 2,400 revolutions/minute
Average seek time: 10 milliseconds
Answer: 37.5
3.)
Assume a magnetic disk has a rotational speed of 5,000 rpm, and an average seek time of 20 ms. The recording capacity of one track on this disk is 15,000 bytes. What is the average access time (in milliseconds) required in order to transfer one 4,000-byte block of data?
Answer: 29.2
4.)
When a color image is stored in video memory at a tonal resolution of 24 bits per pixel,
approximately how many megabytes (MB) are required to display the image on the
screen with a resolution of 1024 x 768 pixels? Here, 1 MB is 10^6 bytes.
Answer: 18.9
5.)
When a microprocessor works at a clock speed of 200 MHz and the average CPI
(“cycles per instruction” or “clocks per instruction”) is 4, how long does it take to
execute one instruction on average?
Answer: 20 nanoseconds
I don't expect someone to answer everything, and they are indeed already answered, but I am just wondering and wanting to know how to arrive at those answers. It's not enough for me to know the answer; I've tried solving them myself, trial-and-error style, to arrive at those numbers, but it seems to take minutes to hours, so I need some professional help.
1.)
n = 1/f = 1 / 1 GHz = 1 ns.
n*10 * 0.6 + n*15 * 0.4 = 12 ns (= average instruction time), so 1 / 12 ns ≈ 83.3 MIPS.
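The same computation as a short Python script, just reproducing the numbers above:

clock_hz = 1e9                       # 1 GHz clock -> 1 ns per cycle
cycle_s = 1 / clock_hz

# Weighted average cycles per instruction from the table.
avg_cycles = 10 * 0.6 + 15 * 0.4     # = 12 cycles
avg_instr_s = avg_cycles * cycle_s   # = 12 ns per instruction

mips = 1 / avg_instr_s / 1e6         # million instructions per second
print(f"{mips:.1f} MIPS")            # 83.3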
2.)3.)
I don't get these, honestly.
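That said, if the usual textbook model applies (average access time = average seek time + average rotational latency, i.e. half a revolution, + transfer time for the record), the given figures do reproduce the stated answers; a Python sketch under that assumption:

def avg_access_ms(seek_ms, rpm, track_kb, record_kb):
    rotation_ms = 60_000 / rpm    # time for one full revolution
    latency_ms = rotation_ms / 2  # average rotational latency
    transfer_ms = rotation_ms * record_kb / track_kb
    return seek_ms + latency_ms + transfer_ms

# 2.) 25 Kbytes/track, 2400 rpm, 10 ms seek, 15 Kbyte record
print(avg_access_ms(10, 2400, 25, 15))       # 37.5

# 3.) 15,000 bytes/track, 5000 rpm, 20 ms seek, 4,000-byte block
print(avg_access_ms(20, 5000, 15_000, 4_000))  # 29.2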
4.)
Here, 1 MB is 10^6 bytes.
3 Bytes * 1024 * 768 = 2359296 Bytes = 2.36 MB.
But often these 24 bits are packed into 32 bits because of the memory layout (word width), so it will often be 4 Bytes * 1024 * 768 = 3145728 Bytes = 3.15 MB.
(The stated answer of 18.9 matches the bit count rather than the byte count: 24 * 1024 * 768 = 18874368 ≈ 18.9 * 10^6.)
5.)
CPI / f = 4 / 200 MHz = 20 ns.
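And the arithmetic for 4.) and 5.) as a quick check, again just reproducing the numbers above:

# 4.) 24-bit color, 1024 x 768 pixels, 1 MB = 10^6 bytes
image_bytes = 3 * 1024 * 768
print(image_bytes / 1e6)       # ~2.36 MB (3.15 MB if pixels are padded to 32 bits)

# 5.) 200 MHz clock, CPI = 4
instr_time_s = 4 / 200e6
print(instr_time_s * 1e9)      # 20.0 ns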