Performance with sequentially increasing primary key

Looking for guidance on selecting a database provider for a specific key pattern.
The only key field will be a pre-allocated unique sequentially-increasing number.
Between 50 and 100 thousand items will be added each day, processed (updated), and then retained for a week or so, after which the lowest-numbered records will usually be deleted. The number of
records will not fluctuate by very much from day to day but may drop at weekends.
The numbers will probably wrap back to 1 after 100M or so.
I need to find a database implementation where the efficiency of the index lookup,
addition and deletion remains constant. Should I be worried that the performance might drop off as the key value range moves continuously upwards?

index lookup, addition and deletion remains constant
You could ensure it remains constant by rebuilding the indexes after every insert (just constantly really slow - no performance drop-off at all :)), or keep it close to constant by running index maintenance every hour/day etc.
that the performance might drop off as the key value range moves continuously upwards?
As long as you've got an index, lookups should be O(log N) - e.g. searching for an indexed value among 1,000,000 rows will be around half the speed of searching among 1,000 rows (log2 of 1,000,000 is roughly twice log2 of 1,000), and 1,000,000,000,000 rows will be half that speed again.
So no, you shouldn't need to worry about performance.
The numbers will probably wrap back to 1 after 100M or so.
Ok - if you want. Generally, no need really - just use a big int.
As always with performance: test what you want to do. Make a script that inserts 10,000,000 rows, and see what happens.
My point here being that if you're going to wrap ids at 100M records, the worst case is actually having them all allocated. That would also represent the fragmented-index condition (where you only have, say, 100K records, but they're distributed in a space of 10M) - but you will do index/database maintenance, right?
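If you do decide to test it, a minimal sketch of such a script might look like the following, using SQLite purely as a stand-in (the table name, column names, row count and batch size are all made up). It inserts rows with a sequentially increasing primary key in batches and prints the rate for each batch, so you can see for yourself whether throughput changes as the key range climbs.

import sqlite3
import time

ROWS = 10_000_000        # scale down if this takes too long on your machine
BATCH = 100_000

conn = sqlite3.connect("seq_pk_test.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, payload TEXT)")

next_id = 1
while next_id <= ROWS:
    batch = [(i, "x" * 100) for i in range(next_id, next_id + BATCH)]
    start = time.perf_counter()
    with conn:  # one transaction per batch
        conn.executemany("INSERT INTO items (id, payload) VALUES (?, ?)", batch)
    rate = BATCH / (time.perf_counter() - start)
    print(f"ids {next_id}-{next_id + BATCH - 1}: {rate:,.0f} inserts/s")
    next_id += BATCH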

Related

Partition index to reduce buffer busy waits?

From time to time our Oracle response times get significantly worse for a minute or two, without any extra load.
We were able to identify an insert statement that produces a lot of buffer busy waits.
From the ADDM report, we got the following hint:
Consider partitioning the INDEX "IDX1" with object
ID 4711 in a manner that will evenly distribute concurrent DML across
multiple partitions.
To be honest: I am not sure what that means. I don't know what a partitioned index is; I can only imagine that it means creating a partitioned table with a local index.
Can you help me out here?
There is a very high frequency of reading and writing to that table; no updates or deletes are used.
Thanks,
E.
I am not sure what that means.
Oracle is telling you that there is a lot of concurrent ("at the same time") activity on a very small part of your index. This happens a lot.
Consider an index column TAB1_PK on table TAB1 whose values are inserted from a sequence TAB1_S. Suppose you have 5 database sessions all inserting into TAB1 at the same time.
Because TAB1_PK is indexed, and because the sequence is generating values in numeric order, what happens is that all those sessions have to read and update the same blocks of the index at the same time.
This can cause a lot of contention -- way more than you would expect, due to the way indexes work with multi-version read consistency. I mean, in some rare situations (depending on how the transaction logic is written and the number of concurrent sessions), it can really be crippling.
The (really) old way to avoid this problem was to use a reverse key index. That way, the sequential column values did not all go to the same index blocks.
However, that is a two-edged sword. On the one hand, you get less contention because you're inserting all over the index (good). On the other hand, your rows are going all over the index, meaning you cannot cache them all. You've just turned a big logical I/O problem into a physical I/O problem!
Nowadays, we have a better solution -- a GLOBAL HASH PARTITION on the index.
With a GHP, you can specify the number of hash buckets and use that to trade-off between how much contention you need to handle vs how compact you want the index updates (for better buffer caching). The more index hash partitions you use, the better your concurrency but the worse your index block buffer caching will be.
I find a number (of global hash partitions) around 16 is pretty good.
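In Oracle the DDL is along the lines of CREATE INDEX idx1 ON tab1 (id) GLOBAL PARTITION BY HASH (id) PARTITIONS 16 (check the exact syntax for your version). As a toy illustration of why it helps - this is plain Python, not anything Oracle runs - here is how consecutive sequence values get spread across 16 hash partitions instead of all landing on the same right-hand edge of the index:

import hashlib

PARTITIONS = 16  # the "around 16" suggested above

def partition_for(key: int) -> int:
    # Hash the key and map it onto one of the partitions.
    digest = hashlib.md5(str(key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % PARTITIONS

for key in range(1000, 1010):  # ten consecutive sequence values
    print(key, "-> partition", partition_for(key))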

Google datastore - index a date created field without having a hotspot

I am using Google Datastore and will need to query it to retrieve some entities. These entities will need to be sorted by newest to oldest. My first thought was to have a date_created property which contains a timestamp. I would then index this field and sort on this field. The problem with this approach is it will cause hotspots in the database (https://cloud.google.com/datastore/docs/best-practices).
Do not index properties with monotonically increasing values (such as a NOW() timestamp). Maintaining such an index could lead to hotspots that impact Cloud Datastore latency for applications with high read and write rates.
Obviously sorting data by date is probably the most common sorting performed on a database. If I can't index timestamps, is there another way I can accomplish sorting my queries from newest to oldest without hotspots?
As you note, indexing monotonically increasing values doesn't scale and can lead to hotspots. Whether you are potentially impacted by this depends on your particular usage.
As a general rule, the hotspotting point of this pattern is 500 writes per second. If you know you're definitely going to stay under that you probably don't need to worry.
If you do need higher than 500 writes per second, but have an upper limit in mind, you could attempt a sharded approach. Basically, if your upper bound on writes per second is x, then n = ceiling(x/500), where n is the number of shards. When you write your timestamp, prepend random(1, n) at the start. This creates n random key ranges that can each sustain up to 500 writes per second. When you query your data, you'll need to issue n queries and do some client-side merging of the result streams.
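A rough sketch of that idea (the key format, the write-rate figure and the merge helper are illustrative; this is not the actual Datastore client API):

import heapq
import math
import random

MAX_WRITES_PER_SEC = 2000                        # your expected upper bound, x
N_SHARDS = math.ceil(MAX_WRITES_PER_SEC / 500)   # n = ceiling(x / 500)

def sharded_key(timestamp_iso: str) -> str:
    # Prepend a random shard number so writes spread over n key ranges.
    shard = random.randint(1, N_SHARDS)
    return f"{shard}|{timestamp_iso}"

def merge_newest_first(results_per_shard):
    # results_per_shard: one iterable per shard, each already sorted
    # newest-first on the timestamp; merge them client-side.
    return heapq.merge(*results_per_shard, key=lambda r: r[0], reverse=True)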

MongoDB insert performance with 2nd index

I'm trying to insert about 250 million documents that are each roughly 400 bytes into MongoDB 3.0 with WiredTiger. I need to search on only one short string key, _user_lower. Although I'm using WiredTiger now, which is much better than MMAPv1, I did use MMAPv1 first and had similar issues.
My server (a very cheap VPS) has:
250 GB magnetic disk
1 GB RAM
2 GB Swap
2.1 GHz single-core CPU
I know that this machine is really slow, and I'm asking it to do something a bit unrealistic. But I'm confused about how it started so fast with one index, and the second just ruined the performance:
I inserted all the data that I had at the time (about 250M rows) without any index except on _id. This performed very well, considering my awful hardware:
Approximately 5000 inserts per second (totally acceptable)
This rate was nearly constant for the 14 hours it took to complete
The index size on _id once complete was nearly 2.5GB. Note that this is more than double my physical RAM.
The RES of the process didn't exceed 450 MB according to mongostat.
No swapping
top seemed to indicate that CPU time wasn't all being spent waiting for the disk (so a significant amount was spent in userspace, presumably with WiredTiger in the snappy code)
Then I built a (non-unique) index on the only field I need to query by, _user_lower. This took 7.7 hours, which is fine since that's a one-time deal. The index ended up being 1.6 GB, which seems really low to me when compared to the _id index. The RES went up to about 750 MB.
Then, I downloaded a new data set to load. It was only 102 MB (238 K documents). I loaded it in the same way, using mongoimport, but this time:
Only 80 inserts per second (slower at times)
RES stayed at around 750 MB
top says almost 100% of the CPU was spent waiting for IO
Of course, load went through the roof.
I could understand a sizable performance hit, since that index has to be updated. But I didn't expect this much. I've read all over the place that my indexes should fit in RAM, but the performance was great during the initial insert, where the index quickly outgrew my memory.
Can I optimize the _user_lower index at all? I don't know what this would even mean, but maybe only index the first few characters? I'm definitely willing to halve the query performance in exchange for tripling the insert performance.
What accounts for the massive performance hit? How do I fix it without new hardware? I'm not really attached to MongoDB, so alternatives that don't have these performance characteristics are fine. I have an idea that just uses flat files which would probably work but I don't want to write all that code.
When adding new items to a collection, the database will have to keep the index up-to-date. Since the index in MongoDB is a B-Tree by default, that means it will have to insert an item in the tree. While that isn't a particularly expensive operation in the best case, it comes with two potential performance problems:
performance jitter: from time to time, the B-Tree bucket might be full, requiring a bucket split and hence a lot more operations than the 'simple' insert
the insert destination must be readily available
In this case, the latter is likely to cause trouble: because the insertion of a name hits a random node in the tree (i.e, the name insertion doesn't follow a pattern) and your RAM is smaller than the index, chances are high that the destination must be fetched from disk. Unfortunately, the performance of disk seeks is orders of magnitude lower than main memory references. If you're unlucky, the first ref location requires another disk seek such that for a single insert multiple disk reads are required before MongoDB can even begin writing. That can take hundreds of milliseconds, with spinning disks or some contention on typical IaaS infrastructure even seconds.
Because ObjectIds are generated monotonically (the timestamp is the most significant part), the insertion always happens at the end and it is possible to keep the destination largely in RAM. Performance jitter, i.e. problem 1 might still be an issue since a bucket split might require a disk seek, but it happens so rarely compared to the first case that it doesn't wreck average performance, which should explain the observed behavior.
Also, when the bucket is filled by a monotonically increasing value, MongoDB will split the bucket when it is 90% filled; with random insertion, splits will happen a lot earlier, at 50%, so the tree is a little more 'dense' in that case.
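A toy model of that difference (not MongoDB internals, just the working-set argument above): treat the index as a sorted list of keys cut into fixed-size pages and count how many distinct pages a batch of inserts has to touch, once for keys arriving in increasing order and once for random keys.

import bisect
import random

PAGE_SIZE = 128          # keys per page (arbitrary)
EXISTING = 1_000_000     # keys already in the index
BATCH = 10_000           # new inserts to simulate

existing_keys = list(range(EXISTING))

def pages_touched(new_keys):
    touched = set()
    for k in new_keys:
        pos = bisect.bisect_left(existing_keys, k)  # where the key would land
        touched.add(pos // PAGE_SIZE)               # page that must be in memory
    return len(touched)

sequential = [EXISTING + i for i in range(BATCH)]                  # like _id / ObjectId
random_keys = [random.uniform(0, EXISTING) for _ in range(BATCH)]  # like _user_lower

print("pages touched, sequential keys:", pages_touched(sequential))
print("pages touched, random keys:    ", pages_touched(random_keys))

With the sequential keys every insert lands on the same final page; with the random keys nearly every insert lands on a different page, and once those pages no longer fit in RAM each one can mean a disk seek.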

Smart chunking from a huge table

I have a huge table in a data warehouse (Vertica). I am accessing this table in chunks for optimization purposes. The way I decide my chunks is pretty straightforward: I have a primary key column, say A, and I take MAX(A). With a chunk size of 20000 I end up with (MAX(A)/20000)+1 chunks, frame a query for each chunk, and retrieve the data.
The problems with this approach are as follows:
The number of chunks depends on MAX(A), and MAX(A) is growing very fast, so the number of chunks grows with it.
I settled on 20000 because that is what gives me optimal performance, but the distribution of primary keys within those ranges of 20000 is very scattered. For example, the range 0-20000 might contain only 3 rows while the range 20000-40000 contains 500, and no range comes close to 20000 rows.
I am trying to figure out whether there is a good approximation algorithm for this problem that minimizes the number of chunks while packing close to 20000 primary keys into each chunk.
Any pointers towards a solution are appreciated.
I'm not sure what optimization purposes means, but I think the best approach would be to create a timestamp column, or use an eligible timestamp column to partition on. You could then partition on a larger frame of reference so there isn't a wide range between the partitions.
If the table is partitioned, it will be able to benefit from partition pruning. This means that Vertica can eliminate the storage containers during query execution which do not match on the timestamp predicate.
Otherwise, you can look at the segmentation clause and use the max/min from the storage containers. This could be slightly more complicated.
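As a sketch of how the chunking could look once there is a timestamp to partition on (table and column names are made up): cut the chunks by date instead of by primary-key range, so each chunk query carries a timestamp predicate that partition pruning can use.

from datetime import date, timedelta

def daily_chunk_queries(first_day: date, last_day: date):
    # One chunk per day; the predicate on created_at is what lets the
    # database prune storage containers that cannot contain matching rows.
    day = first_day
    while day <= last_day:
        nxt = day + timedelta(days=1)
        yield ("SELECT * FROM big_table "
               f"WHERE created_at >= '{day}' AND created_at < '{nxt}'")
        day = nxt

for sql in daily_chunk_queries(date(2015, 6, 1), date(2015, 6, 3)):
    print(sql)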

Time for updating a sqlite3 index fluctuates too much

I have a large-ish sqlite3 (3.6.22) database (about 1 GB, 5 million rows) with a single table indexed on one column. The problem is that the time to do a typical INSERT transaction fluctuates widely. I insert about 10000 rows at a time (wrapped in a transaction, of course). Often it takes about 1.5 seconds, but about every fifth transaction it suddenly takes several minutes for the very same kind of transaction to complete. I've done a lot of experimentation, and I've discovered that the phenomenon only occurs if there is an index, which makes me think that updating the index is what takes so long.
I need more consistent performance. Somewhat higher average insertion times would be OK, if I could only avoid some transactions suddenly taking 200x as long as the previous one... What should I do?
Here's the schema. The strings in blocks.md5 are always exactly 32 bytes long and likely unique. The rolling.value column will contain very large 64-bit integers.
CREATE TABLE blocks (blob char(32) NOT NULL,
offset long NOT NULL,
md5 char(32) NOT NULL,
row_md5 char(32));
CREATE TABLE rolling (value INT NOT NULL);
CREATE INDEX index_md5 ON blocks (md5);
CREATE UNIQUE INDEX index_rolling ON rolling (value);
I don't know exactly how sqlite indexes are implemented, but I'd expect the behavior you describe if they were storing the index on disk or reordering the data.
Imagine a scenario where, when allocating blocks for the index, they start a page with N slots for data. When the page fills up, they have to allocate another page and split the data between the two.
When you're inserting your data, the ordering of the MD5 values will be as random as it gets, so every page fills up independently, and there isn't any reasonable way for the indexing strategy to anticipate that.
Other databases will even recommend using different indexing strategies than normal for strings, especially in the case of something like random MD5s.
Trying this in an all-in-memory database would tell you whether the cost is algorithmic or disk access.
I've only really tried to avoid this in an offline system where I could sort the data before inserting. After it was all inserted, I would build the index, and that was as fast as anything I could find. If you're inserting 10k rows at a time, that might fit your use case, though I don't know.
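A minimal sketch of that load-then-index approach against the schema from the question (the generated rows are made up): insert everything without the md5 index, then build the index once at the end, so the random md5 values are sorted in a single pass instead of causing page splits on every batch.

import hashlib
import os
import sqlite3

conn = sqlite3.connect("blocks_test.db")
conn.execute("""CREATE TABLE IF NOT EXISTS blocks (blob char(32) NOT NULL,
    offset long NOT NULL,
    md5 char(32) NOT NULL,
    row_md5 char(32))""")

def fake_row(i):
    md5 = hashlib.md5(os.urandom(8)).hexdigest()   # 32 hex chars, effectively random
    return ("b" * 32, i * 4096, md5, md5)

with conn:  # bulk load with no index on md5 in place
    conn.executemany(
        "INSERT INTO blocks (blob, offset, md5, row_md5) VALUES (?, ?, ?, ?)",
        (fake_row(i) for i in range(100_000)),
    )

with conn:  # one sorted pass over all md5 values, instead of incremental random inserts
    conn.execute("CREATE INDEX IF NOT EXISTS index_md5 ON blocks (md5)")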
