I am evaluating ClickHouse's performance for potential use in a project. The write performance has been encouraging so far, but as I was running my tests and had to restart the server a few times, I noticed an issue with the potential to be a hard showstopper: the server startup time fluctuates and is, most of the time, extremely high.
My evaluation server contains 26 databases holding about 54 billion records and taking up 697.32 GB on disk.
With this amount of data I have been getting startup times from as low as 7m35s to almost 3h.
Is this normal? Can it be solved with some fancier configuration? Am I doing something really wrong? Because, as it stands, such a long startup time is a showstopper.
The main cause of the slow startup is the huge amount of metadata that has to be loaded, which grows with the number of data files. To speed up startup, you need to either shrink the file count or add more memory so that all dentry and inode caches stay resident.
I'd suggest the following:
Try adjusting the current partitioning scheme to be coarser, so that fewer (and larger) parts are created.
Use OPTIMIZE TABLE <table> FINAL to compact all data files (see the sketch after this list).
Upgrade the data disk to SSD or an efficient RAID, or use a file system like btrfs to store metadata separately on fast storage.
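A minimal sketch of how the second suggestion might look from a client, assuming the ClickHouse JDBC driver is on the classpath; the URL, credentials and my_db.my_table are placeholders, not values from your setup. It first counts active parts per database (the metadata the server has to register at startup) and then forces a merge of one table:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CompactParts {
    public static void main(String[] args) throws Exception {
        // Placeholder URL/credentials; assumes the ClickHouse JDBC driver is on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:clickhouse://localhost:8123/default", "default", "");
             Statement stmt = conn.createStatement()) {

            // How many active parts does the server have to register at startup?
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT database, count() AS parts FROM system.parts " +
                    "WHERE active GROUP BY database ORDER BY parts DESC")) {
                while (rs.next()) {
                    System.out.printf("%s: %d active parts%n",
                            rs.getString("database"), rs.getLong("parts"));
                }
            }

            // Merge all parts of one table into as few as possible (placeholder table name).
            stmt.execute("OPTIMIZE TABLE my_db.my_table FINAL");
        }
    }
}
```

Bear in mind that OPTIMIZE ... FINAL rewrites the whole table, so run it per table and off-peak.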
Related
I have a 33MB collection with around 33k items in it. It has been working perfectly for the past month: the queries were responsive, there were no slow queries, the collection has all the required indexes, and normally the response is almost instant (1-2ms).
Today I spotted that there was a major query queue and the requests were just not getting processed. The Oplog was filling up and just not clearing. After some searching I found the post below, which suggests compacting and repairDatabase. I ran the repair and it fixed the problem. Ridiculously slow mongoDB query on small collection in simple but big database
My question is: what could have gone wrong with the collection, and how did repairDatabase fix the problem? Is there a way for me to ensure this does not happen again?
There are many things that could be an issue here, but ultimately if a repair/compact solved things for you it suggests storage related issues. Here are a few suggestions to follow up on:
Disk performance: Ensure that your disks are performing properly and that you do not have bad sectors. If part of your disk is damaged it could have spiked access times and you may run into this again. You may want to test your RAM modules as well.
Fragmentation: Without knowing your write profile it's hard to say, but your collections and indexes could have fragmented all over your storage system. Running repair will have rebuilt them and brought them back into a more contiguous form, allowing your disk access times to be much faster, especially if you're using mechanical disks and are going to disk for a lot of data.
If this was the issue then you may want to adjust your paddingFactor to reduce the frequency of this in the future, especially if your updates are growing the size of your documents over time. (Assuming you're using MMAPv1 storage; see the sketch at the end of this answer.)
Page faults: I'm assuming you may have brought the system down to do the repair, which may have reset your memory/working set. You might want to monitor for hard page faults that indicate that your queries are being bottlenecked by IO rather than being served by your in-memory working set. If this is consistently the case, your application behavior may change unexpectedly as data gets pushed in and out of memory, and you may need to add more RAM.
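On the fragmentation point above: instead of a full repair, you can compact individual collections (and, on MMAPv1, set the padding at the same time). Here is a hedged sketch with the MongoDB Java driver; the database and collection names and the padding value are placeholders, and compact blocks operations on the database while it runs:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class CompactCollection {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("mydb"); // placeholder database name
            Document result = db.runCommand(
                    new Document("compact", "mycollection")   // placeholder collection name
                            .append("paddingFactor", 1.1));   // MMAPv1 only: leave ~10% room for growth
            System.out.println(result.toJson());
        }
    }
}
```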
I'm trying to insert about 250 million documents that are each roughly 400 bytes into MongoDB 3.0 with WiredTiger. I need to search on only one short string key, _user_lower. Although I'm using WiredTiger now, which is much better than MMAPv1, I did use MMAPv1 first and had similar issues.
My server (a very cheap VPS) has:
250 GB magnetic disk
1 GB RAM
2 GB Swap
2.1 GHz single-core CPU
I know that this machine is really slow, and I'm asking it to do something a bit unrealistic. But I'm confused about how it started so fast with one index, and the second just ruined the performance:
I inserted all the data that I had at the time (about 250M rows) without any index except on _id. This performed very well, considering my awful hardware:
Approximately 5000 inserts per second (totally acceptable)
This rate was nearly constant for the 14 hours it took to complete
The index size on _id once complete was nearly 2.5GB. Note that this is more than double my physical RAM.
The RES of the process didn't exceed 450 MB according to mongostat.
No swapping
top seemed to indicate that CPU time wasn't all being spent waiting for the disk (so a significant amount was spent in userspace, presumably with WiredTiger in the snappy code)
Then I built a (non-unique) index on the only field I need to query by, _user_lower. This took 7.7 hours, which is fine since that's a one-time deal. The index ended up being 1.6 GB, which seems really low to me when compared to the _id index. The RES went up to about 750 MB.
Then, I downloaded a new data set to load. It was only 102 MB (238 K documents). I loaded it in the same way, using mongoimport, but this time:
Only 80 inserts per second (slower at times)
RES stayed at around 750 MB
top says almost 100% of the CPU was spent waiting for IO
Of course, load went through the roof.
I could understand a sizable performance hit, since that index has to be updated. But I didn't expect this much. I've read all over the place that my indexes should fit in RAM, but the performance was great during the initial insert, where the index quickly outgrew my memory.
Can I optimize the _user_index index at all? I don't know what this would even mean, but maybe only index the first few characters? I'm definitely willing to halve the query performance in exchange for tripling the insert performance.
What accounts for the massive performance hit? How do I fix it without new hardware? I'm not really attached to MongoDB, so alternatives that don't have these performance characteristics are fine. I have an idea that just uses flat files which would probably work but I don't want to write all that code.
When adding new items to a collection, the database will have to keep the index up-to-date. Since the index in MongoDB is a B-Tree by default, that means it will have to insert an item in the tree. While that isn't a particularly expensive operation in the best case, it comes with two potential performance problems:
performance jitter: from time to time, the B-Tree bucket might be full, requiring a bucket split and hence a lot more operations than the 'simple' insert
the insert destination must be readily available
In this case, the latter is likely to cause trouble: because the insertion of a name hits a random node in the tree (i.e., the name insertions don't follow a pattern) and your RAM is smaller than the index, chances are high that the destination has to be fetched from disk. Unfortunately, disk seeks are orders of magnitude slower than main memory references. If you're unlucky, resolving the first reference requires yet another disk seek, so a single insert may need multiple disk reads before MongoDB can even begin writing. That can take hundreds of milliseconds, and with spinning disks or some contention on typical IaaS infrastructure, even seconds.
Because ObjectIds are generated monotonically (the timestamp is the most significant part), the insertion always happens at the end and it is possible to keep the destination largely in RAM. Performance jitter, i.e. problem 1 might still be an issue since a bucket split might require a disk seek, but it happens so rarely compared to the first case that it doesn't wreck average performance, which should explain the observed behavior.
Also, when the bucket is filled by a monotonically increasing value, MongoDB will split the bucket when it is 90% filled; with random insertion, splits will happen a lot earlier, at 50%, so the tree is a little more 'dense' in that case.
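One practical consequence of the above, if you control the loader: sort each batch by the indexed key before inserting, so consecutive index insertions land near each other in the B-Tree rather than on random leaves. A hedged sketch with the MongoDB Java driver; the collection name and the batch plumbing are placeholders, and _user_lower is taken from the question:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.InsertManyOptions;
import org.bson.Document;

import java.util.Comparator;
import java.util.List;

public class SortedBatchInsert {
    // Sort the in-memory batch on the indexed field before handing it to the server,
    // so the index is updated in (mostly) ascending key order within each batch.
    public static void insertBatch(MongoCollection<Document> coll, List<Document> batch) {
        batch.sort(Comparator.comparing((Document d) -> d.getString("_user_lower")));
        coll.insertMany(batch, new InsertManyOptions().ordered(false));
    }

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> coll =
                    client.getDatabase("mydb").getCollection("users"); // placeholder names
            // Feed documents in batches of a few thousand, e.g.:
            // insertBatch(coll, nextBatchOfDocuments());  // hypothetical batch source
        }
    }
}
```

This doesn't shrink the index, but it makes the access pattern during the load much closer to the monotonic _id case described above.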
I'll need to migrate our main database to a new server and storage subsystem in a couple of weeks. Oracle 11 is currently running on Windows, and we will install a brand new SuSE for it. There will be no other major changes. Memory will be the same, and the server is just a bit newer.
My main concern is with the time it will take to create indexes. Our last experience recreating some indexes took very long, and since then I'm researching how to optimize it.
The current server has 128GB of memory and we're using Oracle ASSM (51GB for SGA and 44GB for PGA), and Spotlight On Oracle reports no physical read activity on datafiles. Everything is cached on memory, and the performance is great. Spotlight also informs that PGA consumption is only 500 MB.
I know my biggest table has 100 million rows and occupies 15GB. Its indexes, however, occupy 53GB. When I recreate one of these, I can see a lot of write activity in the TEMP tablespace.
So the question is: how can I use all available memory in order to minimize TEMP activity?
I'm not really sure if this is relevant, but we see an average of 300-350 user connections, and I raised the initialization parameters to 700 max sessions.
Best regards,
You should consider setting WORKAREA_SIZE_POLICY to MANUAL for the session that will be doing the index rebuilds, and then setting SORT_AREA_SIZE to a sufficiently large number. (Max is O/S dependent, but 2GB would be a good starting point.)
Also, though you didn't make any mention of it, you should also consider using NOLOGGING to improve performance.
Hope that helps.
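A hedged sketch of what that session could look like over JDBC; the connection details and index name are placeholders, and the SORT_AREA_SIZE value is just a starting point to test on your platform (remember that NOLOGGING operations should be followed by a backup):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RebuildIndex {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details and index name.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "owner", "password");
             Statement stmt = conn.createStatement()) {

            // Manual workarea sizing with a large sort area, so the index sort
            // runs in memory instead of spilling to the TEMP tablespace.
            stmt.execute("ALTER SESSION SET workarea_size_policy = MANUAL");
            stmt.execute("ALTER SESSION SET sort_area_size = 2000000000");

            // NOLOGGING skips most redo for the rebuild; take a backup afterwards
            // and switch the index back to LOGGING once you're done.
            stmt.execute("ALTER INDEX my_big_index REBUILD NOLOGGING");
            stmt.execute("ALTER INDEX my_big_index LOGGING");
        }
    }
}
```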
I need to load over 1 billion keys into Berkeley DB and therefore I want to tune it in advance to get better performance. With the standard configuration it now takes me about 15min to load 1'000'000 keys, which is too slow.
Is there a proper way to tune, for example, the B+Tree of Berkeley DB (node size etc.)?
(As a comparison, after tuning Tokyo Cabinet, it loads 1 billion keys in 25min.)
P.S.
I'm looking for tuning tips in code, not parameters to set for a running system (like JVM size etc.).
I'm curious: when TokyoCabinet loads 1B keys in 25 minutes, what are the sizes of the keys/values being stored? What are the I/O system and the storage system you're using? Are you using the term "load" to mean 1B transactional commits to permanent stable storage? That would be ~666,666 inserts/second, which is physically impossible given any I/O system I'm aware of. Multiply that number by the key and value size and you're hopelessly beyond physical limits.
Please take a look at Gustavo Duarte's blog, read a bit about I/O systems and how things work in hardware, and then review your statement. I'm very interested in finding out what exactly TokyoCabinet is doing and what it isn't doing. If I had to guess, I'd say it's committing to the file-system cache in the operating system but not flushing (fdsync()-ing) those buffers to disk.
Full Disclosure: I'm a product manager at Oracle for Oracle Berkeley DB (a direct competitor of TokyoCabinet), I've been playing with these databases and the best hardware around for them for about ten years so I'm both biased and skeptical.
Berkeley DB has flags you can set on the transaction handle which mimic this and other similar methods of trading off durability (the "D" in ACID) for speed.
As far as how to make Berkeley DB Java Edition (BDB-JE) faster, you can try the following (a sketch applying all three follows at the end of this answer):
Deferred writes: this delays writing to the transaction log for as long as possible (when buffers are full, it flushes the data).
Sort your keys in advance: most B-Trees (ours included) do much better with in-order insertions for fast load times.
Increase the size of the log files from the default of 10MiB to something larger, like 100MiB; this reduces I/O cost.
It's very important to be clear about claims of performance with databases. They seem simple, but it turns out to be very, very tricky to get them right so that they don't ever corrupt data or lose committed transactions.
I hope this helps you a bit.
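Putting the three suggestions above together, here is a hedged sketch of a non-transactional bulk load with BDB-JE; the environment path, database name and key source are placeholders, and the parameter values are starting points rather than recommendations:

```java
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

import java.io.File;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;

public class BulkLoad {
    public static void load(File envHome, List<String> keys) {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        // Larger log files than the 10 MiB default, to cut file-switch I/O.
        envConfig.setConfigParam(EnvironmentConfig.LOG_FILE_MAX, "104857600"); // 100 MiB

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        // Deferred writes: buffer changes and write to the log as late as possible.
        dbConfig.setDeferredWrite(true);

        Environment env = new Environment(envHome, envConfig);
        Database db = env.openDatabase(null, "bulk", dbConfig);
        try {
            // In-order insertion is much friendlier to the B-Tree than random order.
            Collections.sort(keys);
            DatabaseEntry value = new DatabaseEntry("1".getBytes(StandardCharsets.UTF_8));
            for (String k : keys) {
                db.put(null, new DatabaseEntry(k.getBytes(StandardCharsets.UTF_8)), value);
            }
            db.sync(); // flush the deferred writes before closing
        } finally {
            db.close();
            env.close();
        }
    }
}
```

Note that deferred writes trade durability for speed: anything not yet synced is lost on a crash, which is usually acceptable for a reloadable bulk load.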
Bulk inserts on BDB-JE are an order of magnitude faster if you group them into a single transaction. The reason is that each individual commit causes (by default) a synchronous write to disk, whereas a transaction is synced only once, on commit. In my application, writing 100'000 small keys as individual commits took more than a minute, while within a single transaction it takes just a few seconds.
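For the transactional case, here is a hedged sketch of the same idea: group many puts into one transaction and commit periodically, so the synchronous log write happens once per batch instead of once per key (names, paths and the batch size are placeholders):

```java
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.Transaction;

import java.io.File;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class BatchedCommit {
    public static void load(File envHome, List<String> keys) {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        envConfig.setTransactional(true);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        dbConfig.setTransactional(true);

        Environment env = new Environment(envHome, envConfig);
        Database db = env.openDatabase(null, "batched", dbConfig);
        try {
            DatabaseEntry value = new DatabaseEntry("1".getBytes(StandardCharsets.UTF_8));
            Transaction txn = env.beginTransaction(null, null);
            int inTxn = 0;
            for (String k : keys) {
                db.put(txn, new DatabaseEntry(k.getBytes(StandardCharsets.UTF_8)), value);
                // One commit per 100,000 puts instead of one commit per put:
                // only the commit forces a synchronous write of the log.
                if (++inTxn == 100_000) {
                    txn.commit();
                    txn = env.beginTransaction(null, null);
                    inTxn = 0;
                }
            }
            txn.commit();
        } finally {
            db.close();
            env.close();
        }
    }
}
```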
I'm running a fairly substantial SSIS package against SQL 2008 - and I'm getting the same results both in my dev environment (Win7-x64 + SQL-x64-Developer) and the production environment (Server 2008 x64 + SQL Std x64).
The symptom is that initial data loading screams along at between 50K and 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.)
The problem has the following characteristics:
Problem persists no matter what the target table is.
RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
Perfmon shows buffers are not spooling, disk response times are normal, disk availability is high.
CPU usage is low (hovers around 25% shared between sqlserver.exe and DtsDebugHost.exe)
Disk activity primarily on TempDB.mdf, but I/O is very low (< 600 Kb/s)
OLE DB destination and SQL Server Destination both exhibit this problem.
To sum it up, I expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem.
I'll gratefully reward any reasonable answers / suggestions.
We finally found a solution... the problem lay in the fact that my client was using VMware ESX, and despite the VM reporting plenty of free CPU and RAM, the VMware gurus had to pre-allocate (i.e. guarantee) the CPU for the SSIS guest VM before it really started to fly. Without this, SSIS would be running but VMware would scale back the resources - an odd quirk, because other processes and software kept the VM happily awake. Not sure why SSIS was different, but as I said, the VMware gurus fixed the problem by reserving RAM and CPU.
I have some other feedback by way of a checklist of things to do for great performance in SSIS:
1. Ensure the SQL login has permission to bulk load data, else the load will be very slow. Also check that the target database uses the Simple or Bulk Logged recovery model (see the sketch after this checklist).
2. Avoid Sort and Merge components on large data - once they start swapping to disk, performance tanks.
3. Provide input data pre-sorted according to the target table's primary key, disable non-clustered indexes on the target table, and set MaximumInsertCommitSize to 0 on the destination component. This avoids TempDB and keeps logging minimal.
4. If you cannot meet the requirements for 3, then simply set MaximumInsertCommitSize to the same size as the data flow's DefaultMaxBufferRows property.
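For point 1, a hedged sketch of checking (and, if appropriate, switching) the recovery model over JDBC; the connection details and database name are placeholders, and switching to SIMPLE breaks the log backup chain, so only do it if that is acceptable:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckRecoveryModel {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details and database name.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://dbhost;databaseName=master", "loader", "secret")) {

            // Which recovery model is the target database actually using?
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT recovery_model_desc FROM sys.databases WHERE name = ?")) {
                ps.setString(1, "TargetDb");
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        System.out.println("Recovery model: " + rs.getString(1));
                    }
                }
            }

            // Switch to SIMPLE for the load if full logging isn't required
            // (note: this breaks the transaction log backup chain).
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("ALTER DATABASE TargetDb SET RECOVERY SIMPLE");
            }
        }
    }
}
```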
The best way to diagnose performance issues with SSIS Data Flows is with decomposition.
Step 1 - measure your current package performance. You need a baseline.
Step 2 - Backup your package, then edit it. Remove the Destination and replace it with a Row Count (or other end-of-flow-friendly transform). Run the package again to measure performance. Now you know the performance penalty incurred by your Destination.
Step 3 - Edit the package again, removing the next transform "up" from the bottom in the data flow. Run and measure. Now you know the performance penalty of that transform.
Step 4...n - Rinse and repeat.
You probably won't have to climb all the way up your flow to get an idea as to what your limiting factor is. When you do find it, then you can ask a more targeted performance question, like "the X transform/destination in my data flow is slow, here's how it's configured, this is my data volume and hardware, what options do I have?" At the very least, you'll know exactly where your problem is, which stops a lot of wild goose chases.
Are you issuing any COMMITs? I've seen this kind of thing slow down when the working set gets too large (a relative measure, to be sure). A periodic COMMIT should keep that from happening.
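For illustration, a hedged sketch of that pattern with plain JDBC batching (outside SSIS); the connection details, table and batch size are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class PeriodicCommitLoad {
    // Insert rows in batches and commit every batchSize rows, so the open
    // transaction (and what the server has to keep track of) stays bounded.
    public static void load(List<String[]> rows, int batchSize) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://dbhost;databaseName=TargetDb", "loader", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO dbo.Staging (col1, col2) VALUES (?, ?)")) { // placeholder table
            conn.setAutoCommit(false);
            int pending = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++pending == batchSize) {
                    ps.executeBatch();
                    conn.commit(); // periodic COMMIT keeps the working set small
                    pending = 0;
                }
            }
            ps.executeBatch();
            conn.commit();
        }
    }
}
```

In SSIS itself, the equivalent knob is the destination's commit size (e.g. the MaximumInsertCommitSize property mentioned above), which controls how many rows go into each committed batch.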
First thoughts:
Are the database files growing (without instant file initialization for MDFs)?
Is the upload batched/transactioned? (AKA, is it one big transaction?)