Why do we need to increase disk size in Google Cloud to increase performance?

My question is about disks in Google Cloud.
I can't understand why we need to increase the disk capacity in order to increase performance (transfer/read/write). I do understand that the disks are not local, so data is transferred over the network between the VM and the disk.
Can someone explain in simple, clear words why we need to increase the disk capacity from, say, 500 GB to 1 TB? How does this affect the transfer / read / write speed?
If it is not too difficult, could you give a simple example?
Thank you very much.

This is how GCP persistent disks are designed.
You can have a look at how IOPS changes with disk capacity and machine type (for example N1 vs. N2, and the number of vCPUs).
Example:
For example, consider a 1,000 GB SSD persistent disk attached to an
instance with an N2 machine type and 4 vCPUs. The read limit based
solely on the size of the disk is 30,000 IOPS. However, because the
instance has 4 vCPUs, the read limit is restricted to 15,000 IOPS.
Also keep in mind that:
Performance also depends on the number of vCPUs on your VM instance due to network egress caps on write throughput.
Example:
In a situation where persistent disk is competing with IP traffic for
network egress bandwidth, 60% of the maximum write bandwidth goes to
persistent disk traffic, leaving 40% for IP traffic.
To optimize your disk performance you can do the following:
- change disk size (thus increasing IOPS)
- change machine type (to the one with higher network cap limit)
Here you can read how VM type affects GCP network caps.
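As a rough illustration of how the two limits interact, here is a minimal Python sketch. It assumes 30 read IOPS per GB for an SSD persistent disk and a 15,000 IOPS cap for a 4-vCPU N2 instance, both taken from the quoted example above; treat them as illustrative rather than as the authoritative limits for every machine series.

# Minimal sketch: the effective read limit is the lower of the size-based
# and vCPU-based limits (numbers taken from the example quoted above).
SSD_PD_READ_IOPS_PER_GB = 30      # 1,000 GB -> 30,000 IOPS in the example
VCPU_READ_IOPS_CAP = 15_000       # cap quoted for a 4-vCPU N2 instance

def effective_read_iops(disk_size_gb, vcpu_cap=VCPU_READ_IOPS_CAP):
    size_based = disk_size_gb * SSD_PD_READ_IOPS_PER_GB
    return min(size_based, vcpu_cap)

print(effective_read_iops(250))    # 7,500  -> limited by disk size
print(effective_read_iops(500))    # 15,000 -> size-based limit meets the vCPU cap
print(effective_read_iops(1000))   # 15,000 -> further growth is capped by the machine type

This is why the two optimization levers above go together: growing the disk raises the size-based limit, but the vCPU/network cap of the machine type can still be the binding constraint.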

Related

Performance of instance store vs EBS-optimized EC2 or RAID volumes

As far as I can tell from my own experience and from what I have read, there are very few situations in which one wouldn't want to use EBS over instance store. However, instance store is generally faster for disk reads/writes because it is physically attached to the EC2 instance. How much faster, and whether it is faster in all cases, I don't know.
So, what I am curious about, if anyone out there has had some experience with any of these, is the relative speed/performance of:
An EC2 using instance store vs a non-storage-optimized EC2 using EBS (of any storage type)
An EC2 using instance store vs a storage-optimized (I3) EC2 using EBS
An EC2 using instance store vs a non-storage-optimized EC2 using some kind of EBS RAIDing
A non-storage-optimized EBS-backed EC2 vs a storage-optimized EC2 vs an EC2 with an EBS RAID configuration
All of the above vs EBS-optimized instances of any type.
The more specific and quantifiable the answers the better -- thanks!
Now Available – I3 Instances for Demanding, I/O Intensive Applications claims that Instance Store on i3 instances:
can deliver up to 3.3 million IOPS at a 4 KB block size and up to 16 GB/second of sequential disk throughput.
Coming Soon – The I2 Instance Type – High I/O Performance Via SSD claims that Instance Store on i2 instances:
deliver 350,000 random read IOPS and 320,000 random write IOPS.
Amazon EBS Volume Types lists:
General Purpose SSD: Maximum 10,000 IOPS/Volume
Provisioned IOPS SSD: Maximum 20,000 IOPS/Volume
Throughput Optimized HDD: Maximum throughput 500 MiB/s (Optimized for throughput rather than IOPS, good for large, contiguous reads)
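For context on the General Purpose SSD (gp2) figure above and the "about 3k for a 1TB volume" number in the next answer, here is a rough Python sketch of how the gp2 baseline is commonly derived: 3 IOPS per GB, with a floor of 100 IOPS and the per-volume cap listed above. Treat the exact cap as illustrative, since it has changed over time.

# Hypothetical helper: gp2 baseline IOPS scale at 3 IOPS/GB, with a floor of
# 100 IOPS and the 10,000 IOPS/volume cap listed above (cap is illustrative).
def gp2_baseline_iops(size_gb, per_volume_cap=10_000):
    return min(max(100, 3 * size_gb), per_volume_cap)

for size_gb in (100, 334, 1000, 4000):
    print(size_gb, "GB ->", gp2_baseline_iops(size_gb), "IOPS baseline")
# 1000 GB -> 3000 IOPS baseline, matching the "about 3k for a 1TB volume" figure below.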
We did a thorough set of benchmarks for some of those situations comparing EBS vs. instance store, namely,
EBS SSD, General Purpose IOPS (about 3K for a 1 TB volume)
EBS SSD, Provisioned IOPS (about 50K for a 1 TB volume)
instance store local disk (one disk, RAID 0)
instance store local disk (two disks mirrored, RAID 1)
We had the following takeaways,
The local disk options are vastly better at random access than EBS, due to lower latency and because IOPS is the limiting factor on EBS. Although most applications try to avoid random reads, they are hard to avoid completely, so good performance in this area is a big plus.
Sequential reads are also vastly better than on EBS, mainly because of EBS rate limiting, specifically on throughput. In general you get full, unrestricted access to a local disk with much lower latency than network storage (EBS).
RAID 1 is (not surprisingly) up to 2x better for reads than a single disk. Writes are the same, since every write has to go to both disks. However, on larger systems you can have 4+ disks and use RAID 10 (striped mirrors), which improves writes as well.
Unfortunately, as mentioned at the start, the local disk options are ephemeral and will lose your data when the instance is stopped or terminated. Even so, it might be worth considering a high-availability architecture that lets you use them.
EBS at 50K provisioned IOPS is certainly more performant than at 3K, although you generally need 4+ threads to see a real difference (e.g. a database). Single-threaded workloads (e.g. a file copy, zip, etc.) are not going to be much faster. Our 50K volume was actually limited by the instance's maximum IOPS (30K), so be aware that instance size can also be a limiting factor on EBS performance.
It's possible to RAID EBS volumes as well, but keep in mind it is networked storage, so the network will likely be a real bottleneck on any performance gains. Worth a separate test to compare (see the sketch after the link below).
Full details on the benchmarks can be found at:
https://www.scalebench.com/blog/index.php/2020/06/03/aws-ebs-vs-aws-instance-storelocal-disk-storage/
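The linked write-up doesn't spell out its exact tooling, so as a hedged starting point for the separate RAID-on-EBS test mentioned above, here is a small Python wrapper around fio that measures 4 KB random-read IOPS on a given device or file. The device paths are placeholders; running fio against a raw device is destructive, so point it at scratch volumes or test files.

# Hedged sketch: drive fio from Python and parse its JSON output.
import json
import subprocess

def random_read_iops(target_path, runtime_s=60):
    # 4K random reads, direct I/O, 4 jobs at queue depth 32, aggregated report.
    cmd = [
        "fio", "--name=randread", "--filename=" + target_path,
        "--rw=randread", "--bs=4k", "--direct=1", "--ioengine=libaio",
        "--iodepth=32", "--numjobs=4", "--runtime=" + str(runtime_s),
        "--time_based", "--group_reporting", "--output-format=json",
    ]
    result = json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)
    return result["jobs"][0]["read"]["iops"]

# Placeholder device names -- substitute your own EBS volume and instance-store device:
# print("EBS:", random_read_iops("/dev/xvdf"))
# print("instance store:", random_read_iops("/dev/nvme1n1"))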

Scrapy spiders drastically slows down while running on AWS EC2

I am using scrapy to scrape multiple sites and Scrapyd to run spiders.
I have written 7 spiders, and each spider processes at least 50 start URLs. I have around 7,000 URLs in total, about 1,000 per spider.
I start placing jobs in Scrapyd with 50 start URLs per job. Initially all spiders respond fine, but suddenly they start working really slowly.
When I run Scrapyd on localhost it gives me very high performance, but as soon as I publish jobs to the Scrapyd server, the request/response time drops drastically.
The response time for each start URL becomes really slow after some time on the server.
The settings look like this:
BOT_NAME = 'service_scraper'
SPIDER_MODULES = ['service_scraper.spiders']
NEWSPIDER_MODULE = 'service_scraper.spiders'
CONCURRENT_REQUESTS = 30
# DOWNLOAD_DELAY = 0
CONCURRENT_REQUESTS_PER_DOMAIN = 1000
ITEM_PIPELINES = {
    'service_scraper.pipelines.MongoInsert': 300,
}
MONGO_URL="mongodb://xxxxx:yyyy"
EXTENSIONS = {'scrapy.contrib.feedexport.FeedExporter': None}
HTTPCACHE_ENABLED = True
We tried changing CONCURRENT_REQUESTS and CONCURRENT_REQUESTS_PER_DOMAIN, but nothing works. We have hosted Scrapyd on AWS EC2.
As with all performance testing, the goal is to find the performance bottleneck. This typically falls to one (or more) of:
Memory: Use top to measure memory consumption. If too much memory is consumed, it might swap to disk, which is slower than RAM. Try adding memory.
CPU: Use Amazon CloudWatch to track CPU. Be very careful with t2 instances (see below).
Disk speed: If the job is disk-intensive, or if memory is swapping to disk, this can impact performance -- especially for databases. Amazon EBS is network-attached disk, so network speed can actually throttle disk speed.
Network speed: Due to the multi-tenant design of Amazon EC2, network bandwidth is intentionally throttled. The amount of network bandwidth available depends upon the instance type used.
You are using a t2.small instance. It has:
Memory: 2GB (This is less than the 4GB on your own laptop)
CPU: The t2 family is extremely powerful, but the t2.small only receives an average 20% of CPU (see below).
Network: The t2.small is rated as Low to Moderate network bandwidth.
The fact that your CPU is recording 60%, while the t2.small is limited to an average 20% of CPU indicates that the instance is consuming CPU credits faster than they are being earned. This leads to an eventual exhaustion of CPU credits, thereby limiting the machine to 20% of CPU. This is highly likely to be impacting your performance. You can view CPU Credit balances in Amazon CloudWatch.
See: T2 Instances documentation for an understanding of CPU Credits.
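As a minimal sketch of the CloudWatch check suggested above (the instance ID and region below are placeholders), you could pull the recent CPUCreditBalance values with boto3; a balance trending toward zero while the scrape runs would confirm the credit-exhaustion theory.

# Placeholder instance ID/region; prints the recent CPU credit balance.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=6),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))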
Network bandwidth is relatively low for the t2.small. This impacts Internet access and communication with the Amazon EBS storage volume. Given that your application is downloading lots of web pages in parallel, and then writing them to disk, this is also a potential bottleneck for your system.
Bottom line: When comparing to the performance on your laptop, the instance in use has less memory, potentially less CPU due to exhaustion of CPU credits and potentially slower disk access due to high network traffic.
I recommend you use a larger instance type to confirm that performance is improved, then experiment with different instance types (both in the t2 family and outside of it) to determine what size machine gives you the best price/performance trade-off.
Continue to monitor the CPU, Memory and Network performance to identify the leading bottleneck, then aim to fix that bottleneck.

statement_mem seems to limit the node memory instead of the segment memory

According to the Greenplum documentation, GUCs such as statement_mem and gp_vmem_protect_limit should work at the segment level. The same should apply to a resource queue's memory allowance.
On our system we have 8 primary segments per node. So if I set statement_mem for a query to 2GB, I would expect the query to consume (if needed) up to 2GB x 8 = 16GB of RAM per node. But it seems to use only 2GB in total per node before starting to spill to disk (that is, 2GB/8 per segment). I tried different statement_mem values and saw the same thing.
The max_statement_mem and gp_vmem_protect_limit limits are never reached. RAM usage on the nodes has been monitored using various tools (from GP Command Center to top and free, as well as Pivotal's suggested session_level_memory_consumption view).
EDITED FROM HERE
Added two documentation sources where statement_mem is defined per segment and not per host (@Jon Roberts):
In the GP best practices guide, at the beginning of page 32, it clearly says that if statement_mem is 125MB and we have 8 segments on the server, each query will get 1GB allocated per server.
http://gpdb.docs.pivotal.io/4300/pdf/GPDB43BestPractices.pdf
The article at https://support.pivotal.io/hc/en-us/articles/201947018-Pivotal-Greenplum-GPDB-Memory-Configuration also seems to treat statement_mem as segment memory and not host memory. It keeps relating statement_mem to the memory limit of the resource queues as well as to gp_vmem_protect_limit (both parameters are defined on a per-segment basis).
This is why I'm getting confused about how to properly manage the memory resources.
Thanks
I incorrectly stated that statement_mem works on a per-host basis, and that is not the case. This link talks about memory at the segment level:
http://gpdb.docs.pivotal.io/4370/guc_config-statement_mem.html#statement_mem
With the default gp_resqueue_memory_policy of "eager_free", memory gets re-used, so the aggregate amount of memory used may look low for a particular query execution. If you change it to "auto", where memory isn't re-used, the memory usage is more noticeable.
Run an EXPLAIN ANALYZE on your query and look at the slices that are used. With eager_free, memory gets re-used, so you may have only a single slice wanting more memory than is available, such as this one:
(slice18) * Executor memory: 10399K bytes avg x 2 workers, 10399K bytes max (seg0). Work_mem: 8192K bytes max, 13088K bytes wanted.
And for your question on how to manage the resources, most people don't change the default values. A query that spills to disk is usually an indication that the query needs to be revised or the data model needs some work.
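As a quick arithmetic sketch of the per-segment accounting discussed above (using the 8 primary segments per host from the question):

# statement_mem applies per segment, so one query's per-host footprint scales
# with the number of primary segments on that host.
def per_host_query_memory_mb(statement_mem_mb, primary_segments_per_host=8):
    return statement_mem_mb * primary_segments_per_host

print(per_host_query_memory_mb(125))   # 1000 MB  (~1 GB per server, the best-practices example)
print(per_host_query_memory_mb(2048))  # 16384 MB (~16 GB per node, as expected in the question)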

How much memory is available for database use in memsql

I have created a MemSQL cluster on 7 machines. One of the machines shows that, out of 62.86 GB, only 2.83 GB is used. So I am assuming that around 60 GB of memory is available to store data.
But my top command tells another story.
Here we can see that about 21.84 GB of memory is being used and free memory is 41 GB.
So:
1> How much memory exactly is available for the database? Is it 60 GB as per the cluster URL, or 42 GB as per the top command?
Note that:
1> memsql-ops is consuming around 13.5 GB of virtual memory.
2> As per top, if we subtract the total of buffered and cached memory from the used memory, it comes to 2.83 GB, which is the used memory reported by the cluster URL.
To answer your question, you currently have about 60GB of memory free to be used by any process on your machine including the MemSQL database. Note that MemSQL has some overhead and by default reserves a small percentage of the total memory for overhead. If you visit the status page in the MemSQL Ops UI and view the "Leaf Table Memory" card, you will discover the amount of memory that can be used for data storage within the leaf nodes of your MemSQL cluster.
MemSQL Ops is written in Python which is then embedded into a "single binary" via a packaging tool. Because of this it exhibits a couple of oddities including high VM use. Note that this should not affect the amount of data you can store, as Ops is only consuming 308MB of resident memory on your machine. It should stay relatively constant based on the size of your cluster.

Is there any limitation on EC2 machine or network?

I have 2 instances on Amazon EC2. One is a t2.micro machine acting as a web cache server; the other runs a performance test tool.
When I started a test, TPS (transactions per second) was about 3,000, but a few minutes later TPS dropped to 300.
At first I thought the CPU credit balance was exhausted, but there was enough credit to process requests. During the test, the maximum outgoing traffic of the web cache was 500 Mbit/s, CPU usage was 60%, and free memory was more than enough.
I couldn't find any cause for the TPS decrease. Is there any limitation on the EC2 machine or network?
There are several factors that could be constraining your processes.
CPU credits on T2 instances
As you referenced, T2 instances use credits for bursting CPU. They are very powerful machines, but each instance is limited to a certain amount of CPU. t2.micro instances are given 10% of CPU, meaning they actually get 100% of the CPU only 10% of the time (at low millisecond resolution).
Instances start with CPU credits for a fast start, and these credits are consumed when the CPU is used faster than the credits are earned. However, you say that the credit balance was sufficient, so this appears not to be the cause.
Network Bandwidth
Each Amazon EC2 instance can use a certain throughput of network bandwidth. Smaller instances have 'low' bandwidth, bigger instances have more. There is no official statement of bandwidth size, but this is an interesting reference from Serverfault: Bandwidth limits for Amazon EC2
Disk IOPS
If your application performs disk access for each transaction, and your volume is a General Purpose (SSD) volume type, then the disk may have consumed all of its available burst credits. If the disk is small, it will run slowly (baseline speed is 3 IOPS per GB, so a 20GB disk would run at 60 IOPS). Check the Amazon CloudWatch VolumeQueueLength metric to see if I/O is queuing excessively (a sketch of that check follows below).
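A minimal boto3 sketch of that CloudWatch check (the volume ID and region are placeholders): it pulls the recent VolumeQueueLength, and also BurstBalance, which shows whether the gp2 burst bucket is being drained.

# Placeholder volume ID/region; checks whether I/O is queuing and whether the
# gp2 burst bucket is being depleted.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

def ebs_metric(metric_name, volume_id):
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric_name,
        Dimensions=[{"Name": "VolumeId", "Value": volume_id}],  # placeholder volume
        StartTime=datetime.utcnow() - timedelta(hours=3),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    return sorted(response["Datapoints"], key=lambda p: p["Timestamp"])

for name in ("VolumeQueueLength", "BurstBalance"):
    points = ebs_metric(name, "vol-0123456789abcdef0")
    print(name, [round(p["Average"], 2) for p in points])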
Something else
The slowdown could also be due to your application or cache system (e.g. running out of free memory for storing data).
