I would like to tweak my PostgreSQL server but even after reading a few tutorials online I am not getting good performance out of the database.
I've got a server with the following specs:
Windows Server 2012 R2 Datacenter
Intel CPU E5-2670 v2 @ 2.50 GHz
64-bit Operating System
512 GB RAM
PostgreSQL 9.3
I would like to use postgres as a data storage / aggregation system for the following tasks:
Read data from various data sources (mostly flat files) (volumes between 100GB and 1TB)
Pre-process / clean data
Aggregate data
Feed aggregated or sampled data into R or python for modelling
Up to 10 concurrent users only
This means I do not really care about the following:
Update speeds (I only bulk-load data)
Failure resistance (in the unlikely event that things break, I can always reload everything from my input files)
Currently, load speeds are fine, but creating indexes and aggregating data takes very long and barely uses any memory.
Here is my current postgresql.conf: http://pastebin.com/KpSi2zSd
I think the obvious step here is to increase work_mem and maintenance_work_mem considerably; the tricky detail is how much.
If you have control over how many aggregation queries and/or index creations run at a time, you can be pretty aggressive with these settings. The risk is that work_mem applies per sort or hash operation, per session, so with 10 concurrent users and a 30GB setting you could put the server under memory pressure.
It would really help to get execution plans for the slow-running queries: a line like "Sort Method: external merge Disk: ..." tells you that a sort spilled to disk and roughly how much memory it needed, and you can then adjust your settings while keeping an eye on total memory usage on the server.
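For example (a sketch only - the table name is hypothetical and the values are starting points for a 512 GB box with at most 10 sessions, not recommendations):
-- Per-session settings; adjust after checking the plans.
SET work_mem = '2GB';                 -- applies per sort/hash operation, per session
SET maintenance_work_mem = '16GB';    -- used by CREATE INDEX and VACUUM
-- If "Sort Method: external merge  Disk: ..." still shows up, raise work_mem further.
EXPLAIN (ANALYZE, BUFFERS)
SELECT source_id, count(*), sum(amount)
FROM staging_events                   -- hypothetical staging table
GROUP BY source_id;
CREATE INDEX ON staging_events (source_id);   -- this is where maintenance_work_mem matters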
I wouldn't rule out having to re-jig your loads so that the most resource-intensive operations run on their own, while the less demanding ones run concurrently.
However, I think at the moment you are lacking some of the hard metrics that will let you make a good choice on memory allocation.
I have a scenario here.
The Elasticsearch DB has about 1.4 TB of data, reporting:
_shards": {
"total": 202,
"successful": 101,
"failed": 0
}
Each index is approximately 3 GB to 30 GB in size, and in the near future we expect to add around 30 GB per day.
OS information:
NAME="Red Hat Enterprise Linux Server"
VERSION="7.2 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.2 (Maipo)"
The system has 32 GB of RAM and the filesystem is 2 TB (1.4 TB utilised). I have configured a maximum of 15 GB for the Elasticsearch server.
But this is not enough for me to query this DB; the server hangs on a single query.
I will be adding another 1 TB to the filesystem on this server, so the total available filesystem size will be 3 TB.
I am also planning to increase the memory to 128 GB, which is a rough estimate.
Could someone help me work out the minimum RAM required for the server to respond to at least 50 simultaneous requests?
I would appreciate any tool or formula for analysing this requirement. It would also help to have another scenario with concrete numbers that I can use to estimate my resource needs.
You will need to scale using several nodes to stay efficient.
Elasticsearch has its per-node memory sweet spot at 64GB with 32GB reserved for ES.
See https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html#_memory for more details. The book is a very good read if you are using Elasticsearch for serious work.
If you're here for a rule of thumb, I'd say that on modern ES and Java, 10-20GB of heap per TB of data (I'm thinking of the typical ELK use-case) should be enough. Multiplying by 2, that's 20-40GB of total RAM per TB.
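For a hypothetical 64 GB node, a minimal sketch of the heap setting (the file path and exact values are assumptions; adjust for your version and hardware):
# config/jvm.options (or ES_JAVA_OPTS)
# Give roughly half the RAM to the heap and leave the rest to the OS page cache.
-Xms31g
-Xmx31g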
Now for the detailed answer :) There are two types of memory that are relevant here:
JVM heap
OS cache (the OS will use free memory to cache index files)
OS cache is down to your IO requirements (queries do lots of small random IO). If you have a query-intensive use-case (e.g. E-commerce), you'll want to fit your whole index in the OS cache (or at least most of it). For logs and other time-series data, you typically have more expensive, rarer queries. There, if you have a local SSD you can make do with only a fraction of your data in the cache. I've seen servers with 4TB of disk space on 32GB of OS cache.
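A quick way to see how much data would need to fit in the page cache is to compare on-disk index sizes against available RAM (a sketch; host and port are assumptions):
curl -s 'localhost:9200/_cat/indices?v&h=index,pri,store.size'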
JVM heap can also be divided in two:
static memory, required even when the server is idle
transient memory, required by ongoing indexing/search operations
You'd see most of the static memory if you hit the _nodes/stats endpoint. It's best if you have these plotted in your Elasticsearch monitoring tool. You'll see it as segments_memory and various caches. For recent versions of Elasticsearch (e.g. 7.7 or higher), there's not a lot of memory like this - at least for most use-cases. I've seen ELK deployments with multiple TB of data definitely using less than 10GB of RAM for static memory. That said, you may reduce it by not storing info that you don't need. For example by not indexing fields you don't search on.
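If you don't have monitoring wired up yet, the raw numbers are available straight from the API (a sketch, assuming a node on localhost):
curl -s 'localhost:9200/_nodes/stats/jvm,indices?human&pretty'
# look at jvm.mem.heap_used_percent and the indices.segments / cache sections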
Transient memory will mainly depend on your queries: how often they run and how expensive they are. One-off expensive queries tend to be more dangerous, so avoid using too many levels of aggregations, massive size values, or queries that expand to too many terms (wildcards, fuzzy...). To accommodate those, you simply need heap. How much? It's really a matter of monitor-and-adjust.
Side-note: I don't agree with the general suggestion that you should stay under 32GB at all costs. With Java 11+ and G1GC, I've seen deployments with over 100GB of heap that run just fine. The overhead of uncompressed oops is not 10-20GB for every 30GB, like the docs suggest - that's an extrapolation of a worst-case scenario. In my experience, it's more like a few GB every 30GB - something like 10% for many deployments. This doesn't mean you have to use 100GB of heap, it's just that if you need a lot of heap in your cluster, you don't have to have hundreds of nodes (you can have fewer, bigger ones).
Speaking of GC, it may fall behind if you run many queries that aren't terribly expensive. And then you'd run out of heap, even if you have plenty. Monitoring should tell you this, as a full GC will eventually clean up the heap with a big pause (read: cluster instability). Here, Java 11 with G1GC and a low -XX:GCTimeRatio (e.g. 3) should fix the issue.
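A sketch of the relevant jvm.options lines (the values are examples to tune against your own monitoring, and they assume a JVM that ships G1GC, i.e. Java 11+):
-XX:+UseG1GC
-XX:GCTimeRatio=3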
These give a good overview of heap sizing and memory management, and should let you answer this yourself:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
https://www.elastic.co/guide/en/elasticsearch/guide/master/_limiting_memory_usage.html
We're just starting to investigate using Postgres as the backend for our system which will be used with an OLTP-type workload: > 95% (possibly >99%) of the transactions will be inserting 1 row into 4 separate tables, or updating 1 row. Our test machine is running 9.5.6 (using out-of-the-box config options) on a modest cloud-hosted Windows VM with a 4-core i7 processor, with a conventional 7200 RPM disk. This is much, much slower than our targeted production hardware, but useful right now for finding bottlenecks in our basic design.
Our initial tests have been pretty discouraging. Although the insert statements themselves run fairly quickly (combined execution time is around 2 ms), the overall transaction time is around 40 ms, due to the commit statement taking 38 ms. Furthermore, during a simple 3-minute load test (5000 transactions), we're only seeing about 30 transactions per second, with pgbadger reporting 3 minutes spent in "commit" (38 ms avg.), and the next highest statements being the inserts at 10 and 3 (2 ms and 0.6 ms avg., respectively). During this test, the CPU on the postgres instance is pegged at 100%.
The fact that the time spent in commit is equal to the elapsed time of the test tells me that not only is commit serialized (unsurprising, given the relatively slow disk on this system), but that it is consuming a CPU for that duration, which surprises me. I would have assumed beforehand that if we were I/O bound, we would be seeing very low CPU usage, not high usage.
In doing a bit of reading, it would appear that using Asynchronous Commits would solve a lot of these issues, but with the caveat of data loss on crashes/immediate shutdown. Similarly, grouping transactions together into a single begin/commit block, or using multi-row insert syntax improves throughput as well.
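Both are easy to try per session before changing anything server-wide; a minimal sketch with a hypothetical table:
-- Asynchronous commit: a crash can lose the last few transactions, but never corrupts data.
SET synchronous_commit = off;
-- Grouping rows per transaction with multi-row insert syntax:
BEGIN;
INSERT INTO orders (id, customer_id, amount)   -- hypothetical table
VALUES (1, 101, 9.99),
       (2, 102, 19.99),
       (3, 103, 4.50);
COMMIT;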
All of these options are possible for us to employ, but in a traditional OLTP application none of them would be (you need fast, atomic, synchronous transactions). 35 transactions per second on a 4-core box would have been unacceptable 20 years ago on other RDBMSs running on much slower hardware than this test machine, which makes me think we're doing this wrong, as I'm sure Postgres is capable of handling much higher workloads.
I've looked around but can't find some common-sense config options that would serve as starting points for tuning a Postgres instance. Any suggestions?
If COMMIT is your time hog, that probably means:
Your system honors the FlushFileBuffers system call, which is as it should be.
Your I/O is miserably slow.
You can test this by setting fsync = off in postgresql.conf – but don't ever do this on a production system. If that improves performance a lot, you know that your I/O system is very slow when it actually has to write data to disk.
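Since you are on 9.5 you can toggle it without editing the file by hand; a sketch of the test (again, never on a production system):
ALTER SYSTEM SET fsync = off;
SELECT pg_reload_conf();    -- fsync can be changed with a reload, no restart needed
-- ... rerun the load test, then put it back:
ALTER SYSTEM RESET fsync;
SELECT pg_reload_conf();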
There is nothing that PostgreSQL (or any other reliable database) can improve here without sacrificing data durability.
Although it would be interesting to see some good starting configs for OLTP workloads, we've solved our mystery of the unreasonably high CPU during the commits. It turns out it wasn't Postgres at all; it was Windows Defender constantly scanning the Postgres data files. The team that set up the VM hosting the test server didn't understand that we needed a backend configuration as opposed to a user configuration.
Do you think using an EC2 instance (Micro, 64bit) would be good for MongoDB replica sets?
Seems like if that is all they ran, and with 600+ MB of RAM, one could use them for a nice set.
Also, would they make good primary (write) servers too?
My database is only 1-2 gigs now but I see it growing to 20-40 gigs this year (hopefully).
Thanks
They COULD be good - depending on your data set, but likely they will not be very good.
For starters, you don't get much RAM with those instances. Consider that you will be running an entire operating system and all related services - 613 MB of RAM can get filled up very quickly.
MongoDB tries to keep as much data in RAM as possible, and that won't be possible if your data set is 1-2 gigs; it becomes even more of a problem if your data set grows to 20-40 gigs.
Secondly, they are labeled as "Low I/O performance", so when your data spills to disk (and it will, given the size of that data set), you are going to suffer on disk reads due to low I/O throughput.
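A quick sanity check in the mongo shell is to compare data plus index size against the instance's RAM (a rough sketch):
// Sizes scaled to MB
var s = db.stats(1024 * 1024);
print("data MB:  " + s.dataSize);
print("index MB: " + s.indexSize);
// If dataSize + indexSize is far beyond the ~613 MB of a micro, expect constant paging.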
Be aware that micro instances are designed for spiky CPU usage, and you will be throttled to the "low background level" if you exceed the allotment.
The AWS Micro Documentation has good information of what they are intended for.
Between the throttled CPU and the poor I/O performance, my experience using micros for development/testing has not been very good (larger instance types have been fine, though), but a micro may work for your use case.
However, there are exceptions: for config or arbiter nodes, I believe a micro should be good enough.
There is also some mongodb documentation specific to EC2 which might help.
I have a 64-bit server, 8 GB RAM, dual quad-core CPUs. No resources are ever hitting 100% (except, I guess, the JVM -- right?).
I need to index several million records for Solr, but the machine is in production. I recognize having a second machine for indexing would be helpful.
Should I run a second JVM instance dedicated to Solr?
Right now, when I run an index, pages that are normally served in 200 milliseconds take about 1.5 seconds, sometimes more... even hitting the dreaded "Service is Unavailable" error.
I adjusted my JVM Heap as follows:
-Xmx1024m
-XX:MaxPermSize=256m
In case I'm chasing the wrong solution, allow me to broaden the landscape a bit. It seems that I can't affect the indexing speed of Solr. I had previously been indexing about 150,000 records per hour on a dev server virtualized on a workstation. In a production environment with much more hardware available, I'm indexing at the exact same speed.
Without data to prove it, I think that my JVM adjustments did not speed up the indexing, although it may have allowed the CF server to continue serving pages. I must say, the indexing speed is terribly slow, but I do know that it's not a function of the data access layer. I rewrote it from pure ORM to objects backed by SQL Stored Procedures thinking that was the slowdown (no effect).
Use a separate instance for indexing. The only trick is getting the running search instance to re-read the updated index; for that, you set up a master (the indexer) and a slave (the searcher) and use replication. This keeps the searcher from being interrupted, and the indexer gets its own JVM with its own share of the resources.
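A minimal sketch of that setup in each core's solrconfig.xml (host name, core name, and poll interval are placeholders):
<!-- on the master (indexer) -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
  </lst>
</requestHandler>
<!-- on the slave (searcher) -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://indexer-host:8983/solr/core0/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>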
Have you tried these optimization tips?
http://bloggeraroundthecorner.blogspot.com/2009/08/tuning-coldfusion-solr-part-1.html
http://bloggeraroundthecorner.blogspot.com/2009/08/tuning-coldfusion-solr-part-2.html
http://bytestopshere.com/post.cfm/lessons-learned-moving-from-verity-to-solr-part-1
I'm running a fairly substantial SSIS package against SQL 2008 - and I'm getting the same results both in my dev environment (Win7-x64 + SQL-x64-Developer) and the production environment (Server 2008 x64 + SQL Std x64).
The symptom is that initial data loading screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.).
The problem has the following characteristics:
Problem persists no matter what the target table is.
RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
Perfmon shows buffers are not spooling, disk response times are normal, disk availability is high.
CPU usage is low (hovers around 25% shared between sqlserver.exe and DtsDebugHost.exe)
Disk activity primarily on TempDB.mdf, but I/O is very low (< 600 Kb/s)
OLE DB destination and SQL Server Destination both exhibit this problem.
To sum it up, I expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem.
I'll gratefully reward any reasonable answers / suggestions.
We finally found a solution... the problem lay in the fact that my client was using VMware ESX, and despite the VM reporting plenty of free CPU and RAM, the VMware gurus had to pre-allocate (i.e. guarantee) the CPU for the SSIS guest VM before it really started to fly. Without this, SSIS would be running but VMware would scale back the resources - an odd quirk, because other processes and software kept the VM happily awake. Not sure why SSIS was different, but as I said, the VMware gurus fixed the problem by reserving RAM and CPU.
I have some other feedback by way of a checklist of things to do for great performance in SSIS:
Ensure the SQL login has bulk-load permission, else the data load will be very slow. Also check that the target database uses the Simple or Bulk Logged recovery model (see the T-SQL sketch after this list).
Avoid sort and merge components on large data - once they start swapping to disk, performance tanks.
Source the input data pre-sorted (on the target table's primary key), disable non-clustered indexes on the target table, and set MaximumInsertCommitSize to 0 on the destination component. This bypasses TempDB and the log altogether.
If you cannot meet the requirements of the previous point, then simply set MaximumInsertCommitSize to the same size as the data flow's DefaultMaxBufferRows property.
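A sketch of the server-side pieces of that checklist in T-SQL (login, database, table, and index names are placeholders):
-- Bulk-load rights (server-level permission, run in master) and a minimally logged recovery model
GRANT ADMINISTER BULK OPERATIONS TO [etl_login];
ALTER DATABASE [StagingDB] SET RECOVERY BULK_LOGGED;
-- Disable non-clustered indexes before the load, rebuild them afterwards
ALTER INDEX IX_TargetTable_SomeColumn ON dbo.TargetTable DISABLE;
-- ... run the SSIS load ...
ALTER INDEX IX_TargetTable_SomeColumn ON dbo.TargetTable REBUILD;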
The best way to diagnose performance issues with SSIS Data Flows is with decomposition.
Step 1 - measure your current package performance. You need a baseline.
Step 2 - Backup your package, then edit it. Remove the Destination and replace it with a Row Count (or other end-of-flow-friendly transform). Run the package again to measure performance. Now you know the performance penalty incurred by your Destination.
Step 3 - Edit the package again, removing the next transform "up" from the bottom in the data flow. Run and measure. Now you know the performance penalty of that transform.
Step 4...n - Rinse and repeat.
You probably won't have to climb all the way up your flow to get an idea as to what your limiting factor is. When you do find it, then you can ask a more targeted performance question, like "the X transform/destination in my data flow is slow, here's how it's configured, this is my data volume and hardware, what options do I have?" At the very least, you'll know exactly where your problem is, which stops a lot of wild goose chases.
Are you issuing any COMMITs? I've seen this kind of thing slow down when the working set gets too large (a relative measure, to be sure). A periodic COMMIT should keep that from happening.
First thoughts:
Are the database files growing (without instant file initialization for MDFs)?
Is the upload batched/transactioned? (i.e., is it one big transaction?)