We're just starting to investigate using Postgres as the backend for our system, which will serve an OLTP-type workload: more than 95% (possibly more than 99%) of the transactions will insert 1 row into each of 4 separate tables, or update 1 row. Our test machine is running PostgreSQL 9.5.6 (with out-of-the-box config options) on a modest cloud-hosted Windows VM with a 4-core i7 processor and a conventional 7200 RPM disk. This is much, much slower than our targeted production hardware, but useful right now for finding bottlenecks in our basic design.
Our initial tests have been pretty discouraging. Although the insert statements themselves run fairly quickly (combined execution time around 2 ms), the overall transaction time is around 40 ms, because the commit statement takes 38 ms. Furthermore, during a simple 3-minute load test (5,000 transactions), we're only seeing about 30 transactions per second, with pgBadger reporting 3 minutes spent in "commit" (38 ms avg.), and the next highest statements being the inserts at 10 s total (2 ms avg.) and 3 s total (0.6 ms avg.) respectively. During this test, the CPU on the Postgres instance is pegged at 100%.
The fact that the time spent in commit equals the elapsed time of the test tells me that not only is commit serialized (unsurprising, given the relatively slow disk on this system), but that it is consuming a CPU for that entire duration, which surprises me. I would have assumed beforehand that if we were I/O bound, we would be seeing very low CPU usage, not high usage.
From a bit of reading, it appears that using asynchronous commit would solve a lot of these issues, but with the caveat of possible data loss on a crash or immediate shutdown. Similarly, grouping transactions together into a single begin/commit block, or using multi-row insert syntax, improves throughput as well (a sketch follows).
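As an illustration of the grouping idea, here is a minimal sketch assuming a Java client over JDBC; the table, connection string, and batch size are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class GroupedInserts {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details and table; adjust for your schema.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/testdb", "postgres", "secret")) {
                conn.setAutoCommit(false);  // commit once per batch, not once per row
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO events (id, payload) VALUES (?, ?)")) {
                    for (int i = 0; i < 1000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "payload-" + i);
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
                conn.commit();  // a single WAL flush covers all 1,000 rows
            }
        }
    }

With synchronous commit, every COMMIT waits for a WAL flush to disk, so on a 7200 RPM disk each transaction pays roughly a disk rotation's worth of latency; grouping amortizes that cost across the whole batch.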
All of these options are possible for us to employ, but in a traditional OLTP application none of them would be (you need fast, atomic, synchronous transactions). Thirty-odd transactions per second on a 4-core box would have been unacceptable 20 years ago on other RDBMSs running on much slower hardware than this test machine, which makes me think that we're doing this wrong, as I'm sure Postgres is capable of handling much higher workloads.
I've looked around but can't find a common-sense set of config options to serve as a starting point for tuning a Postgres instance. Any suggestions?
If COMMIT is your time hog, that probably means:
Your system honors the FlushFileBuffers system call, which is as it should be.
Your I/O is miserably slow.
You can test this by setting fsync = off in postgresql.conf, as sketched below - but don't ever do this on a production system. If that improves performance a lot, you know that your I/O system is very slow when it actually has to write data to disk.
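For a throwaway test instance, the change is one line; the safer synchronous_commit knob is shown alongside for contrast:

    # postgresql.conf - TEST ONLY: with fsync = off, an OS crash or power loss
    # can corrupt the whole cluster. Use it solely to confirm that flushing to
    # disk is the bottleneck, then turn it back on.
    fsync = off

    # Safer alternative for workloads that can tolerate losing the last few
    # transactions (but never corruption) on a crash:
    # synchronous_commit = off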
There is nothing that PostgreSQL (or any other reliable database) can improve here without sacrificing data durability.
Although it would still be interesting to see some good starting configs for OLTP workloads, we've solved our mystery of the unreasonably high CPU during commits. It turned out not to be Postgres at all: it was Windows Defender constantly scanning the Postgres data files. The team that set up the VM hosting our test server didn't understand that we needed a server-style configuration (with the database files excluded from real-time scanning) as opposed to an end-user configuration.
Related
I would like to tune my PostgreSQL server, but even after reading a few tutorials online I am not getting good performance out of the database.
I've got a server with the following specs:
Windows Server 2012 R2 Datacenter
Intel CPU E5-2670 v2 @ 2.50 GHz
64-bit Operating System
512 GB RAM
PostgreSQL 9.3
I would like to use postgres as a data storage / aggregation system for the following tasks:
Read data from various data sources (mostly flat files) (volumes between 100GB and 1TB)
Pre-process / clean data
Aggregate data
Feed aggregated or sampled data into R or python for modelling
Up to 10 concurrent users only
This means, I do not really care about the following:
Update speeds (I only bulk-load data)
Failure resistance (in the unlikely event that things break, I can always reload everything from my input files)
Currently, load speeds are fine, but creating indexes and aggregating data take a very long time and barely use any memory.
Here is my current postgres.config: http://pastebin.com/KpSi2zSd
I think the obvious step here is to increase work_mem and maintenance_work_mem considerably; the fine detail is how much.
If you have control over how many aggregation queries and/or index creations run at a time, then you can be pretty aggressive with these settings. But bear in mind that work_mem is allocated per sort or hash operation, not per connection, so with 10 concurrent users and a 30 GB setting you could be putting your server under memory pressure.
It would really benefit you to get execution plans for the slow-running queries: a line like "Sort Method: external merge Disk" tells you that a sort spilled to disk and roughly how much memory it would have needed, and you can then adjust your settings while keeping an eye on the total memory usage on the server; a sketch follows.
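As a minimal sketch of that workflow, assuming a Java client over JDBC (the connection string, table, and memory values are hypothetical; per-session SETs let you experiment without touching postgresql.conf):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class PlanCheck {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/warehouse", "postgres", "secret");
                 Statement st = conn.createStatement()) {
                // These apply only to this session, not the whole server.
                st.execute("SET work_mem = '1GB'");              // per sort/hash operation
                st.execute("SET maintenance_work_mem = '8GB'");  // used by CREATE INDEX
                try (ResultSet rs = st.executeQuery(
                        "EXPLAIN (ANALYZE, BUFFERS) "
                        + "SELECT region, sum(amount) FROM sales GROUP BY region")) {
                    while (rs.next()) {
                        // Watch for "Sort Method: external merge  Disk: ..." lines:
                        // they mean the sort spilled and wanted more work_mem.
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }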
I wouldn't rule out having to re-jig your loads so that the most resource-intensive ones run on their own, while the less resource-intensive operations run concurrently with each other.
However, I think at the moment you are lacking some of the hard metrics that will let you make a good choice on memory allocation.
I have a requirement to insert 10,000 docs into MarkLogic in less than 10 seconds.
I tested on a single-node MarkLogic server in the following way:
use xdmp:spawn to pass each doc-insertion task to the Task Server;
use xdmp:document-insert without specifying a forest explicitly;
the Task Server has 8 threads to process tasks;
we have enabled CPF.
The performance is very bad: it took 2 minutes to create the 10,000 docs.
I'm sure the performance would be better if I tested in a cluster environment, but I'm not sure whether it could finish in less than 10 seconds.
Please advise on ways to improve the performance.
I would start by gathering more information. What version of MarkLogic is this? What OS is it running on? What's the CPU? RAM? What's the storage subsystem? How many forests are attached to the database?
Then gather OS-level metrics, to see if one of the subsystems is an obvious bottleneck. For now I won't speculate beyond that.
If you need a fast load, I wouldn't use xdmp:spawn for each individual document, nor use CPF; see the sketch below. That said, 2 minutes for 10k docs doesn't necessarily sound slow. On the other hand, I have reached up to 3k docs/sec, but without range indexes, transforms, or the like, and with a very fast disk (e.g. SSD).
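As a rough sketch of batching from the client instead, assuming the XCC Java API and a hypothetical XDBC app server on port 8010 (credentials, URIs, and batch size are made up):

    import java.net.URI;

    import com.marklogic.xcc.Content;
    import com.marklogic.xcc.ContentCreateOptions;
    import com.marklogic.xcc.ContentFactory;
    import com.marklogic.xcc.ContentSource;
    import com.marklogic.xcc.ContentSourceFactory;
    import com.marklogic.xcc.Session;

    public class BatchLoad {
        public static void main(String[] args) throws Exception {
            ContentSource cs = ContentSourceFactory.newContentSource(
                    new URI("xcc://user:password@localhost:8010"));
            Session session = cs.newSession();
            try {
                Content[] batch = new Content[1000];
                for (int i = 0; i < batch.length; i++) {
                    batch[i] = ContentFactory.newContent(
                            "/load/doc-" + i + ".xml",
                            "<doc><n>" + i + "</n></doc>",
                            ContentCreateOptions.newXmlInstance());
                }
                // Inserting the array in one call avoids the Task Server queue
                // and the per-document commit overhead.
                session.insertContent(batch);
            } finally {
                session.close();
            }
        }
    }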
HTH!
Assuming a 2-socket server, 128-256 GB of RAM, and fast I/O (400-800 MB/sec sustained):
An appropriate number of forests (12 primary, or 6 primary/6 secondary)
More than 8 threads, assuming enough cores
CPF off
Turn on performance history, look in the metrics, and you will see where the bottleneck is.
An SSD is not required, just I/O throughput, which multiple spinning disks provide without issue.
I want to do load testing for 10 million users of my site. The site is a Java-based web app. My approach is to create a JMeter test plan for all the links and then take a report for the 10 million users, then use JVisualVM to do profiling and check for bottlenecks.
Is there any better way to do this? Is there any existing demo for doing this? I am doing this for the first time, so any assistance will be very helpful.
You are on the correct path, but your load estimate is off by a large factor.
The reason I say this is that your site will probably need more machines to handle 10 million concurrent users; a single process alone would struggle to handle even 32K concurrent TCP streams. Also do the math on the bandwidth it would take to actually serve 10 million users.
Now, I do not know what kind of service you are thinking of providing on your site, but considering that JVisualVM slows processing down by a factor of 10 (or more with method tracing), you would not actually be measuring the "real world" if you ran JMeter and JVisualVM at the same time.
JVisualVM is more useful when you run on lower loads.
To create a good measurement, first make sure you have a good baseline.
Run a test with 10 concurrent users, connect JVisualVM, let it run for a while, and note down all the interesting values.
After you have your baseline, you can start adding more load.
Add 10x the load (e.g. 100 users) and look at the changes in JVisualVM. Continue this until it becomes obvious that JVisualVM is slowing you down; each time you add extra load, make sure you have written down the numbers you are interested in. Plot the numbers in a graph.
Now... Extrapolate the graph (by hand) to the number of users you want. This works for memory usage, disk access, etc., but not for CPU time, because JVisualVM eats CPU and gives you invalid numbers there (especially with method tracing turned on).
If you really want to go as high as 10 million users, I would not trust JMeter either; I would write a little test program of my own that performs the test you want (sketched below). That's reasonable, since setting up the site to handle 10 million users will also take time, so spending a little extra time on the test tools is not a waste.
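A minimal sketch of such a home-grown driver, assuming a plain HTTP endpoint (the URL, thread count, and request counts are made up, and a real test should also ramp up gradually):

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class MiniLoadTest {
        public static void main(String[] args) throws Exception {
            final String target = "http://localhost:8080/";  // hypothetical endpoint
            final int threads = 100;
            final int requestsPerThread = 1000;
            final AtomicLong totalMillis = new AtomicLong();
            final AtomicLong errors = new AtomicLong();

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    for (int i = 0; i < requestsPerThread; i++) {
                        long start = System.nanoTime();
                        try {
                            HttpURLConnection conn =
                                    (HttpURLConnection) new URL(target).openConnection();
                            conn.setConnectTimeout(5000);
                            conn.setReadTimeout(5000);
                            conn.getResponseCode();         // fire the request
                            conn.getInputStream().close();  // drain so the socket is reusable
                        } catch (Exception e) {
                            errors.incrementAndGet();
                        }
                        totalMillis.addAndGet((System.nanoTime() - start) / 1_000_000);
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            long requests = (long) threads * requestsPerThread;
            System.out.printf("requests=%d errors=%d avg=%.1f ms%n",
                    requests, errors.get(), (double) totalMillis.get() / requests);
        }
    }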
Just because you have 10 million users in the database doesn't mean that you need to load test with that many users. Think about it: is your site really going to have 10 million simultaneous users? For web applications, a ratio of 1:100 concurrent to registered users is common, i.e. you are unlikely to have more than 100K users active at any moment.
Can JMeter handle that kind of load? I doubt it. Please try Faban instead. It is very lightweight and can support thousands of users on a single VM. You also have much better flexibility in creating your workload and can automate monitoring of your entire test infrastructure.
Now to the analysis part. You didn't say which server you are using. Any Java app server will provide sufficient monitoring support. Commercial servers provide nice GUI tools, while Tomcat provides extensive monitoring via JMX. You may want to start there before getting down to the JVM level.
For the JVM, you really don't want to use VisualVM while running such a large performance test. Besides, to support such a load I assume you are using multiple app server/JVM instances. The major performance issue is usually GC, so use JVM options to collect and log GC information (example flags below). You will have to post-process the data.
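For example, a sketch for a HotSpot JVM on Java 8 or earlier (the log path is hypothetical; on Java 9+ the unified -Xlog:gc* option replaces these flags):

    -verbose:gc
    -XX:+PrintGCDetails
    -XX:+PrintGCDateStamps
    -Xloggc:/var/log/myapp/gc.log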
This is a non-trivial exercise - good luck!
There are two types of load testing: bottleneck identification and throughput measurement. The question leads me to believe this is about bottlenecks, so the number of users is something of a red herring; the goal is instead, for a given configuration, to find areas that can be improved to increase concurrency.
Application bottlenecks usually fall into three categories: database, memory leak, or slow algorithm. Finding them involves putting the application in question under stress (i.e. load) for an extended period of time: at least an hour, perhaps up to several days. JMeter is a good tool for this purpose. One thing to consider is running the same test with cookie handling enabled (i.e. JMeter retains cookies and sends them with each subsequent request) and disabled; sometimes you get very different results, and this matters because the latter is effectively a simulation of what some crawlers do to your site. Details for bottleneck detection follow:
Database
Tables without indexes, or SQL statements involving multiple joins, are frequent app bottlenecks. Every database server I've dealt with (MySQL, SQL Server, and Oracle) has some way of logging or identifying slow-running SQL statements: MySQL has the slow query log, whereas SQL Server has dynamic management views that track the slowest-running SQL (a sample query is sketched below). Once you've got your hands on the slow statements, use the explain plan to see what the database engine is trying to do, use any features that suggest indexes, and consider other strategies, such as denormalization, if those two options do not solve the bottleneck.
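As a hedged illustration on the SQL Server side (the DMVs are standard, but the TOP count is arbitrary), something like this surfaces the worst offenders:

    -- Top statements by average elapsed time (total_elapsed_time is in microseconds).
    SELECT TOP 10
           qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
           qs.execution_count,
           st.text AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY avg_elapsed_us DESC;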
Memory Leak
Turn on verbose garbage-collection logging and open a JMX monitoring port. Then use jConsole, which provides much better graphs, to observe trends. In particular, leaks usually show up as the Old Gen or Perm Gen space steadily filling. A leak becomes a bottleneck when the JVM spends increasing amounts of time attempting garbage collection unsuccessfully, until an OutOfMemoryError is thrown.
A filling Perm Gen implies the need to increase its size via a command-line parameter to the JVM, while a filling Old Gen implies a leak: stop the load test, generate a heap dump, and then use the Eclipse Memory Analyzer Tool (MAT) to identify the leak; example commands follow.
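A sketch of those knobs, assuming a pre-Java 8 HotSpot JVM (Perm Gen was removed in Java 8); the sizes, paths, and pid 12345 are hypothetical:

    # Raise Perm Gen and capture a heap dump automatically on OutOfMemoryError:
    -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp

    # Or take a heap dump on demand for Eclipse MAT (12345 is the JVM's pid):
    jmap -dump:live,format=b,file=/tmp/heap.hprof 12345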
Slow Algorithm
This is more difficult to track down. The most frequent offenders are synchronization, inter-process communication (e.g. RMI, web services), and disk I/O. Another common issue is code using nested loops (look mom, O(n^2) performance!).
The best way I've found to locate these issues, absent some deeper knowledge, is generating stack traces, as sketched below. These tell you what all threads are doing at a given point in time. What you're looking for are BLOCKED threads or several threads all accessing the same code, which usually points at some slowness within the codebase.
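A sketch using the JDK's jstack tool (pid 12345 is hypothetical); take several dumps a few seconds apart and compare them:

    # Dump all thread stacks, including lock information, for JVM pid 12345:
    jstack -l 12345 > stacks-1.txt

    # Quick look for threads stuck waiting on monitors:
    grep -c BLOCKED stacks-1.txt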
I blogged about how I proceeded with the performance test:
1. Make sure the server (hardware can match the staging/production requirements) has no other software installed that could affect performance.
2. For setting up the users in the DB, a stored procedure can be used and called as part of the JMeter test plan.
3. Install JMeter on a separate machine, so that JMeter itself won't affect performance.
4. Create a test plan in JMeter (as shown in figure 1 of the post linked below) for all the URIs, with response checking and timer-based requests.
5. Take the initial benchmark using JMeter.
6. Check for the low-performing URIs; these are the places to expect bottlenecks.
7. Try different options for performance improvement, but focus on only one bottleneck at a time.
8. Apply one fix from step 7, then take another benchmark. If there is an improvement, commit the changes and repeat from step 5; otherwise revert and try another option from step 7.
9. The next step would be load balancing, hardware scaling, clustering, etc. This may involve some physical setup and hardware/software cost. Present the results along with the scalability options.
For detailed explanation: http://www.daemonthread.com/2011/06/site-performance-tuning-using-jmeter.html
I started using the JMeter Plugins.
These let me gather application metrics available over JMX and use them in my load test.
I need to load over 1 billion keys into Berkeley DB and therefore want to tune it in advance for better performance. With the standard configuration it now takes about 15 minutes to load 1,000,000 keys, which is too slow.
Is there a proper way to tune, for example, the B+tree of Berkeley DB (node size, etc.)?
(As a comparison: after tuning, Tokyo Cabinet loads 1 billion keys in 25 minutes.)
P.S.
I'm looking for tuning tips in code, not parameters to set for a running system (like JVM size, etc.).
I'm curious: when Tokyo Cabinet loads 1B keys in 25 minutes, what are the sizes of the keys/values being stored? What I/O and storage systems are you using? Are you using the term "load" to mean 1B transactional commits to permanent stable storage? That would be ~666,666 inserts/second, which is physically impossible given any I/O system I'm aware of. Multiply that number by the key and value size and you're hopelessly beyond physical limits.
Please take a look at Gustavo Duarte's blog, read a bit about I/O systems and how things work in hardware, and then review your statement. I'm very interested in finding out what exactly Tokyo Cabinet is doing and what it isn't doing. If I had to guess, I'd say it is committing to the file-system cache in the operating system but not flushing (fsync()-ing) those buffers to disk.
Full disclosure: I'm a product manager at Oracle for Oracle Berkeley DB (a direct competitor of Tokyo Cabinet). I've been playing with these databases and the best hardware around for them for about ten years, so I'm both biased and skeptical.
Berkeley DB has flags you can set on the transaction handle that mimic this and other similar methods of trading off durability (the "D" in ACID) for speed.
As for how to make Berkeley DB Java Edition (BDB-JE) faster, you can try the following:
Deferred writes: this delays writing to the transaction log for as long as possible (when buffers are full, it flushes the data).
Sort your keys in advance: most B-trees (ours included) do much better with in-order insertions for fast load times.
Increase the size of the log files from the default of 10 MiB to something larger, like 100 MiB; this reduces I/O cost.
It's very important to be clear about claims of performance with databases. They seem simple, but it turns out to be very, very tricky to get them right so that they never corrupt data or lose committed transactions.
I hope this helps you a bit.
Bulk inserts in BDB-JE are an order of magnitude faster if you group them into a single transaction, because each individual commit causes (by default) a synchronous write to disk, whereas a transaction is synced only once, at commit. In my application, writing 100,000 small keys as single commits took more than a minute, while inside one transaction it takes just a few seconds; a sketch follows.
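A minimal sketch of that pattern against the BDB-JE API (the environment path, key format, and batch size are hypothetical; the deferred-write alternative mentioned above would instead use DatabaseConfig.setDeferredWrite(true) on a non-transactional database, followed by Database.sync()):

    import java.io.File;

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.Transaction;

    public class BulkLoad {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            Environment env = new Environment(new File("/tmp/bdb-env"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            Database db = env.openDatabase(null, "keys", dbConfig);

            // One transaction for the whole batch: a single log fsync at commit
            // instead of one per record.
            Transaction txn = env.beginTransaction(null, null);
            for (int i = 0; i < 100_000; i++) {
                // Zero-padded keys arrive in sorted order, which B-trees load fastest.
                DatabaseEntry key = new DatabaseEntry(
                        String.format("key-%09d", i).getBytes());
                DatabaseEntry data = new DatabaseEntry(("value-" + i).getBytes());
                db.put(txn, key, data);
            }
            txn.commit();

            db.close();
            env.close();
        }
    }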
I'm running a fairly substantial SSIS package against SQL 2008 - and I'm getting the same results both in my dev environment (Win7-x64 + SQL-x64-Developer) and the production environment (Server 2008 x64 + SQL Std x64).
The symptom is that the initial data loading screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.)
The problem has the following characteristics:
Problem persists no matter what the target table is.
RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
Perfmon shows buffers are not spooling, disk response times are normal, disk availability is high.
CPU usage is low (hovers around 25% shared between sqlserver.exe and DtsDebugHost.exe)
Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 KB/s)
OLE DB destination and SQL Server Destination both exhibit this problem.
To sum it up, I would expect either disk, CPU, or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem.
I'll gratefully reward any reasonable answers / suggestions.
We finally found a solution: the problem lay in the fact that my client was using VMware ESX, and despite the VM reporting plenty of free CPU and RAM, the VMware gurus had to pre-allocate (i.e. guarantee) the CPU for the SSIS guest VM before it really started to fly. Without this, SSIS would be running but VMware would scale back the resources - an odd quirk, because other processes and software kept the VM happily awake. I'm not sure why SSIS was different, but as I said, the VMware gurus fixed the problem by reserving RAM and CPU.
I have some other feedback by way of a checklist of things to do for great performance in SSIS:
Ensure the SQL login has permission to bulk-load (e.g. the bulkadmin server role), else the data load will be very slow. Also check that the target database uses the Simple or Bulk-logged recovery model.
Avoid sort and merge components on large data sets; once they start swapping to disk, performance goes down the drain.
Sort the source input data (according to the target table's primary key), disable non-clustered indexes on the target table, and set MaximumInsertCommitSize to 0 on the destination component. This bypasses TempDB and the log altogether.
If you cannot meet the requirements for the previous point, then simply set MaximumInsertCommitSize to the same value as the data flow's DefaultMaxBufferRows property.
The best way to diagnose performance issues with SSIS Data Flows is with decomposition.
Step 1 - measure your current package performance. You need a baseline.
Step 2 - Back up your package, then edit it. Remove the Destination and replace it with a Row Count (or another end-of-flow-friendly transform). Run the package again to measure performance. Now you know the performance penalty incurred by your Destination.
Step 3 - Edit the package again, removing the next transform "up" from the bottom in the data flow. Run and measure. Now you know the performance penalty of that transform.
Step 4...n - Rinse and repeat.
You probably won't have to climb all the way up your flow to get an idea as to what your limiting factor is. When you do find it, then you can ask a more targeted performance question, like "the X transform/destination in my data flow is slow, here's how it's configured, this is my data volume and hardware, what options do I have?" At the very least, you'll know exactly where your problem is, which stops a lot of wild goose chases.
Are you issuing any COMMITs? I've seen this kind of thing slow down when the working set gets too large (a relative measure, to be sure); a periodic COMMIT should keep that from happening.
First thoughts:
Are the database files growing (without instant file initialization for MDFs)?
Is the upload batched/transactioned? I.e., is it one big transaction?