We have the following stats for a single-node Cassandra setup on an Amazon EC2/RightScale m1.large instance (7.6 GB total memory) with 2 ephemeral disks in RAID 0.
4 GB of RAM is allocated to the Cassandra heap; the heap NEW size is 800 MB.
The following stats are from OpsCenter Community 2.0:
Read Requests 285 to 340 per second
Write Requests 257 to 720 per second
OS Load 15.15 to 17.15
Write Request Latency 293 to 685 micros
OS Sent Network Traffic 18 MB to 30 MB per second
OS Received Network Traffic 22 MB to 34 MB per second
OS Disk Queue Size 23 to 26 requests
Read Requests Pending 8 to 20
Read Request Latency 69140 to 92885 micros
OS Disk latency 37 to 42 ms
OS Disk Throughput 12 to 14 MB per second
Disk IOPs Reads 600 to 740 per second
Disk IOPs Writes 2 to 7 per second
IOWait 60 to 70 % CPU avg
Idle 24 to 30 % CPU avg
Rowcache is disabled.
Are the above stats satisfactory for the provided configuration, or how could we tweak it further to get less IOWait? We think we are experiencing a lot of IOWait; how could we tune this for the best results?
Read requests are mixed: some hit a super column family and some a standard one, each with more than a million keys, a varying number of super columns (max 14), a varying number of subcolumns (1 to 10,000), and a varying number of columns (max 14) in the standard column family. The subcolumns are very thin: 0-byte values and 8-byte names.
A process removes the data from the super column family and writes the processed data to the standard one.
Would EBS disks work better on Amazon EC2?
I'm not positive whether you can easily tweak your config to get more disk performance, but using Snappy compression could help a good deal by making your app read less overall. It may also help to use the new composite key layout instead of supercolumns.
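For reference, here is a minimal sketch of turning Snappy SSTable compression on with the DataStax Python driver; the node address, keyspace and table name are placeholders, and on a 1.x cluster with supercolumn families you would set the equivalent compression_options from cassandra-cli instead:

from cassandra.cluster import Cluster   # DataStax Python driver

cluster = Cluster(['10.0.0.1'])           # placeholder node address
session = cluster.connect('my_keyspace')  # placeholder keyspace
session.execute(
    "ALTER TABLE processed_data "         # placeholder table name
    "WITH compression = {'sstable_compression': 'SnappyCompressor', "
    "'chunk_length_kb': 64}"
)
cluster.shutdown()

Compressed SSTables mean fewer bytes read per request, which can directly lower the disk traffic and the IOWait you are seeing.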
One thing I can say for sure: EBS will NOT work better. Stay away from that at all costs if you care about latency.
Related
We have been running Proxmox VE since 5.0 (now on 6.4-15) and we notice a decay in performance whenever there is some heavy reading/writing.
We have 9 nodes, 7 with Ceph and 56 OSDs (8 on each node). The OSDs are hard drives (HDD), WD Gold or better (4-12 TB). The nodes have 64/128 GB of RAM and dual-Xeon mainboards (various models).
We already tried simple tests like "ceph tell osd.* bench", getting a stable 110 MB/s transfer to each of them, with a +-10 MB/s spread during normal operations. Apply/Commit latency is normally below 55 ms, with a couple of OSDs reaching 100 ms and one third below 20 ms.
The front network and back network are both 1 Gbps (separated into VLANs); we are trying to move to 10 Gbps, but we ran into some trouble we are still trying to figure out how to solve (unstable OSD disconnections).
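For scale, a single 1 Gbps link tops out (before protocol overhead) at:

>>> 1e9 / 8 / 1e6      # 1 Gbps expressed in MB/s
125.0

so one OSD benching at ~110 MB/s can already come close to saturating the replication network on its own, which is part of why we want the 10 Gbps upgrade.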
The pool is defined as "replicated" with 3 copies (2 needed to keep running). The total amount of disk space is currently 305 TB (72% used); reweight is in use because some OSDs were getting much more data than others.
Virtual machines run on the same 9 nodes; most are not CPU intensive:
Avg. VM CPU Usage < 6%
Avg. Node CPU Usage < 4.5%
Peak VM CPU Usage 40%
Peak Node CPU Usage 30%
But I/O Wait is a different story:
Avg. Node IO Delay 11%
Max. Node IO Delay 38%
Disk writing load is around 4 Mbytes/sec average, with peaks up to 20 Mbytes/sec.
Anyone with experience in getting better Proxmox+CEPH performance?
Thank you all in advance for taking the time to read,
Ruben.
Here are some Ceph pointers you could follow...
Get some good NVMe drives (one or two per server; if you have 8 HDDs per server, one should be enough) and use them as DB/WAL devices (make sure they have power-loss protection).
The "ceph tell osd.* bench" test is not that relevant for the real world; I suggest trying some fio tests instead, see here.
Set the OSD osd_memory_target to 8 GB of RAM minimum.
To save some writes on your HDDs (data is not replicated X times), create your RBD pool as EC (erasure-coded); but please do some research on that first, because there are trade-offs - recovery takes some extra CPU calculations. See the quick comparison right below.
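As a quick illustration of the write overhead (k=4, m=2 is only a hypothetical EC profile, pick yours after reading up on it):

>>> replica_overhead = 3.0            # current pool: 3 full copies of every byte
>>> k, m = 4, 2                       # hypothetical EC profile: k data + m parity chunks
>>> ec_overhead = (k + m) / float(k)  # raw bytes written per byte of user data
>>> replica_overhead, ec_overhead
(3.0, 1.5)

So EC roughly halves the write amplification on the HDDs, at the price of the extra CPU during recovery mentioned above.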
All in all, hyper-converged clusters are good for training, small projects and medium projects without such a big workload on them... Keep in mind that planning is gold.
Just my 2 cents,
B.
Looking at Jeff Dean's famous latency guides
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference 0.5 ns
Branch mispredict 5 ns
L2 cache reference 7 ns 14x L1 cache
Mutex lock/unlock 25 ns
Main memory reference 100 ns 20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy 3,000 ns 3 us
Send 1K bytes over 1 Gbps network 10,000 ns 10 us
Read 4K randomly from SSD* 150,000 ns 150 us ~1GB/sec SSD
Read 1 MB sequentially from memory 250,000 ns 250 us
Round trip within same datacenter 500,000 ns 500 us
Read 1 MB sequentially from SSD* 1,000,000 ns 1,000 us 1 ms ~1GB/sec SSD, 4X memory
Disk seek 10,000,000 ns 10,000 us 10 ms 20x datacenter roundtrip
Read 1 MB sequentially from disk 20,000,000 ns 20,000 us 20 ms 80x memory, 20X SSD
Send packet CA->Netherlands->CA 150,000,000 ns 150,000 us 150 ms
One thing that looks somewhat uncanny to me is that reading 1 MB sequentially from disk is only about 10 times faster than sending a round-trip packet across the Atlantic. Can anyone give me more intuition for why this is right?
Q : 1 MB SEQ-HDD-READ ~10x faster than a CA/NL trans-Atlantic RTT - why does this feel right?
Some "old" values ( with a few cross-QPI/NUMA updates from 2017 ) to start from:
0.5 ns - CPU L1 dCACHE reference
1 ns - speed-of-light (a photon) travel a 1 ft (30.5cm) distance
5 ns - CPU L1 iCACHE Branch mispredict
7 ns - CPU L2 CACHE reference
71 ns - CPU cross-QPI/NUMA best case on XEON E5-46*
100 ns - MUTEX lock/unlock
100 ns - CPU own DDR MEMORY reference
135 ns - CPU cross-QPI/NUMA best case on XEON E7-*
202 ns - CPU cross-QPI/NUMA worst case on XEON E7-*
325 ns - CPU cross-QPI/NUMA worst case on XEON E5-46*
10,000 ns - Compress 1 KB with Zippy PROCESS (+GHz,+SIMD,+multicore tricks)
20,000 ns - Send 2 KB over 1 Gbps NETWORK
250,000 ns - Read 1 MB sequentially from MEMORY
500,000 ns - Round trip within a same DataCenter
10,000,000 ns - DISK seek
10,000,000 ns - Read 1 MB sequentially from NETWORK
30,000,000 ns - Read 1 MB sequentially from DISK
150,000,000 ns - Send a NETWORK packet CA -> Netherlands
( in the figures above, read the digits in groups of three from the right: nanoseconds, then microseconds, then milliseconds )
Trans-Atlantic Network RTT :
Global optical networks work at roughly the speed of light (300,000,000 m/s in vacuum, about 200,000,000 m/s inside fibre).
An LA(CA)-AMS(NL) packet has to travel not the geodesic "distance", but over a set of continental and trans-Atlantic "submarine" cables, the length of which is way longer (see the map).
These factors do not "improve" - only the transport capacity grows, while the add-on latencies introduced by light amplifiers, retiming units and other L1-PHY / L2-/L3-networking equipment are kept under control and as small as possible.
So the LA(CA)-AMS(NL) RTT will remain, using this technology, about the same ~ 150 ms.
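A quick lower-bound check, taking ~200,000,000 m/s for light in fibre and the ~9,000 km point-to-point distance mentioned below (real cable routes are longer):

>>> c_fibre  = 2.0e8        # approx. speed of light inside optical fibre [m/s]
>>> distance = 9.0e6        # ~ 9,000 km LA(CA) -> AMS(NL), point-to-point
>>> 2 * distance / c_fibre * 1e3      # ideal straight-fibre round-trip time [ms]
90.0

Even a perfectly straight fibre could not beat ~90 ms; the longer real cable paths plus amplifier, retiming and routing delays bring that to the observed ~150 ms.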
Using other technology - LEO-sat constellations, for example - the "distance" only grows: from ~ 9,000 km point-to-point it gains a pair of additional GND/LEO segments plus a few LEO/LEO hops, which add a "longer" path and per-hop re-processing latencies, and the capacity will not get anywhere close to the optical transports available today, so no magic jump "back to the future" is to be expected (we still miss the DeLorean).
The HDD Disk :
HDDs can have a very fast and very short transport path for moving the data, but the READ ops have to wait for the physical / mechanical operations of the media-reading heads (that is what takes most of the time here, not the actual data transfer to the host RAM).
HDDs are rotational devices: the disk has to "align" to where the read starts, which costs roughly the first 10 [ms].
HDDs store data in a static structure of heads (2+, reading physical signals from the magnetic platters' surfaces) : cylinders (concentric circular zones on the platter, onto which a cylinder-aligned reading head is settled by the disk-head micro-controller) : sectors (angular sections of a cylinder, each carrying a block of the same-sized data, ~4 KB, 8 KB, ...).
These factors do not "improve" - all commodity drives remain at the industry-selected angular speeds of about { 5k4 | 7k2 | 10k | 15k | 18k } spins/min (RPM). This means that, if a well-compacted data layout is maintained on such a disk, one continuous head:cylinder-aligned read around a whole cylinder will take:
>>> [ 1E3 / ( RPM / 60. ) for RPM in ( 5400, 7200, 10000, 15000, 18000 ) ]
11.1 ms per CYL # 5k4 RPM disk,
8.3 ms per CYL # 7k2 RPM disk,
6.0 ms per CYL # 10k RPM disk,
4.0 ms per CYL # 15k RPM disk,
3.3 ms per CYL # 18k RPM disk.
Data-density is also limited by the magnetic media properties. Spintronics R&D will bring some more densely stored data, yet the last 30 years have been well inside the limits of the reliable magnetic storage.
More is to be expected from a trick to read from several heads in parallel at once, yet this goes against the design of the embedded micro-controllers, so most of the reading happens sequentially, from one head after another, into the HDD controller's onboard buffers - best if no cyl-to-cyl mechanical re-alignment of the heads has to take place. (Technically this depends on the prior data-to-disk layout, maintained by the O/S and possibly by disk optimisers - originally the disk "defragmenters" - which just tried to re-align the known sequences of FAT-described data blocks so as to follow the most optimal trajectory of head:cyl:sector transitions, depending mostly on the actual device's head:head and cyl:cyl latencies.) So even the most optimistic data layout takes ~ 13..21 [ms] to seek-and-read just one head:cyl path.
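Putting the pieces together for a single 1 MB read (the ~100 MB/s media transfer rate is an assumed, typical figure for a commodity 7k2 drive):

>>> seek_ms     = 10.0                         # average seek, as listed above
>>> rotation_ms = 0.5 * 1E3 / ( 7200 / 60. )   # half a revolution at 7k2 RPM
>>> transfer_ms = 1E3 * 1.0 / 100.0            # 1 MB at an assumed ~100 MB/s off the platters
>>> round( seek_ms + rotation_ms + transfer_ms, 1 )
24.2

That lands right in the 20-30 ms band of the tables above, while the trans-Atlantic RTT is pinned near 150 ms by distance, hence the mere ~5-10x gap.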
Laws of Physics decide
Some numbers from 2020.
Load from L1 is 4 cycles on Intel Coffee Lake and Ryzen (0.8 ns on a 5 GHz CPU).
Load from memory is ~215 cycles on Intel Coffee Lake (43 ns on a 5 GHz CPU), and ~280 cycles on Ryzen.
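Converting the cycle counts to wall-clock time at the assumed 5 GHz clock:

>>> [ cycles / 5.0 for cycles in ( 4, 215, 280 ) ]    # ns per access at 5 GHz
[0.8, 43.0, 56.0]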
I am trying to benchmark an RDS instance (postgres) on AWS.
I created the instance with a 30 GB "general purpose" SSD volume ("gp2"). According to the AWS docs, this should provide a baseline performance of 100 IOPS:
Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 10,000 IOPS (at 3,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size.
But in addition to that, there is burst performance:
When using General Purpose (SSD) storage, your DB instance receives an initial I/O credit balance of 5.4 million I/O credits, which is enough to sustain a burst performance of 3,000 IOPS for 30 minutes.
As I'm interested in sustained database performance (= the baseline case), I have to get rid of all I/O credits before starting my tests. I did this by running pgbench.
In the following screenshot, you can see that I start pgbench at 11:00, and around 3 hours later the burst balance is finally used up, and write IOPS drops off:
So far, so good. The timing makes sense: 3 * 60 * 60 * 600 = 6.48 million (I/O credits are also refilled during the burst).
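Working backwards (the ~600 IOPS figure is roughly what pgbench was pushing, read off the graph):

>>> credits  = 5.4e6      # initial I/O credit balance
>>> burst    = 600        # approx. IOPS sustained by pgbench (from the graph)
>>> baseline = 100        # 30 GiB gp2 floor: max(100, 3 IOPS/GiB * 30 GiB)
>>> credits / ( burst - baseline ) / 3600     # hours until the credit bucket runs dry
3.0

which matches the roughly three hours I observed.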
What I don't understand: why doesn't IOPS drop down to the baseline rate (100), but instead stays at 380? Is the documented formula for baseline performance no longer valid?
UPDATE: I've shut down this test instance now, but here are the details:
Sorry for the delay in my response.
Why the extra performance?
With the db.m3.xlarge (which falls under the "Standard - Previous Generation" header) you have 500 Mbps of additional, dedicated capacity for Amazon Elastic Block Store, per the chart and details at this link.
The first section of Amazon EBS Performance Tips says to use EBS-optimized instances for increased performance. So I'd say this is the main reason you were getting the extra IOPS over the 100 after you exhausted your burst credits.
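For a sense of scale (taking gp2's 16 KiB I/O size):

>>> mbps = 500
>>> bytes_per_s = mbps * 1e6 / 8        # dedicated EBS bandwidth in bytes/s
>>> bytes_per_s / ( 16 * 1024 )         # 16 KiB operations per second that pipe could carry
3814.697265625

so the dedicated pipe itself is nowhere near a limit at a few hundred IOPS; the benefit is that EBS traffic no longer competes with the instance's ordinary network traffic.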
Cost Considerations:
According to the end of that paragraph, with your M3 you will incur extra cost for the extra performance. However, if you were to select an M4, the extra performance incurs no extra cost.
So for a sustained-database-performance cost analysis, I would compare just the base price of the M4 against the base price of the M3 plus the performance surcharge the M3 will bring you.
Good luck.
I'm currently facing an annoying problem: I have to read a big data file (500 GB) which is stored on an SSD, a RevoDrive 350.
I read the file using the fread function, in big memory chunks (roughly 17 MB per chunk).
At the beginning of my program everything goes smoothly: it takes 10 ms to read 3 chunks. Then, after about 10 seconds, read performance collapses and varies between 60 and 90 ms.
I don't know why this is happening. Is it possible to keep the read time stable?
Thank you in advance
Rob
17 MB per chunk, 10 ms for 3 chunks -> 51 MB / 10 ms.
10 sec = 1000 x 10 ms -> 51 GB read after 10 seconds!
How much memory do you have? Is your pagefile on the same disk?
The system may be swapping!
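Spelled out, with the chunk size and timings from your question:

>>> chunk_mb  = 17
>>> rate_mb_s = 3 * chunk_mb / 0.010      # 3 chunks per 10 ms
>>> rate_mb_s
5100.0
>>> rate_mb_s * 10 / 1000.0               # GB consumed in the first 10 seconds
51.0

Roughly 5 GB/s is more than a RevoDrive 350 is rated to sustain (around 1.8 GB/s sequential read), so part of those early reads were most likely served from the OS file cache; and if the program keeps the chunks around, tens of GB pile up in memory within seconds, which is exactly when swapping would start.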
I have the following scenario:
machine 1: receives messages from outside and processes them (via a Java application). For processing it relies on a database (on machine 2).
machine 2: an Oracle DB
As a performance metric I usually look at the number of processed messages per unit of time.
Now, what puzzles me: neither of the two machines is working at "full speed". If I look at typical parameters (CPU utilization, CPU load, I/O bandwidth, etc.), both machines look as if they do not have enough to do.
What I would expect is that one machine, or one of the performance-related parameters, limits the overall processing speed. Since I cannot observe this, I would expect a higher message-processing rate.
Any ideas what might limit the overall performance? What is the bottleneck?
Here are some key values during workload:
Machine 1:
CPU load average: 0.75
CPU Utilization: System 12%, User 13%, Wait 5%
Disk throughput: 1 MB/s (write), almost no reads
average tps (as reported by iostat): 200
network: 500 kB/s in, 300 kB/s out, 1600 packets/s in, 1600 packets/s out
Machine 2:
CPU load average: 0.25
CPU Utilization: System 3%, User 15%, Wait 17%
Disk throughput: 4.5 MB/s (write), 3.5 MB/s (read)
average tps (as reported by iostat): 190 (very short peaks to 1000-1500)
network: 250 kB/s in, 800 kB/s out, 1100 packets/s in, 1100 packets/s out
So to me, none of the values seems to be at any limit.
PS: for the tests, the message queue is of course always full, so both machines have enough work to do.
To find bottlenecks you typically also need to measure INSIDE the application. That means profiling the Java application code and possibly what happens inside Oracle.
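As an illustration, here is a generic sketch (in Python for brevity; your application is Java, and the three stage functions below are hypothetical) of accumulating wall-clock time per stage of the message loop, which quickly shows whether the time goes into receiving, processing, the database call, or none of them, i.e. into waiting:

import time
from collections import defaultdict

stage_totals = defaultdict(float)          # seconds spent per stage

def timed(stage, fn, *args):
    """Run fn(*args) and charge its wall-clock time to the given stage."""
    start = time.perf_counter()
    result = fn(*args)
    stage_totals[stage] += time.perf_counter() - start
    return result

# Inside the message loop (receive_message, process, write_to_db are hypothetical):
#     msg  = timed("receive", receive_message)
#     data = timed("process", process, msg)
#     timed("db_write", write_to_db, data)
# Print stage_totals after a test run: if no single stage dominates, the time is
# being lost between stages, e.g. in per-message network round-trips to the DB.

If most of the per-message time turns out to be spent waiting on individual round-trips to Oracle, batching statements or adding concurrent workers would raise throughput even though no single resource looks busy.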
The good news is that you have excluded at least some possible hardware bottlenecks.