Which server parameters to tweak in Solr if I expect heavy writes and light reads? - performance

I am facing scalability issues designing a new Solr cluster, and I need the master to be able to handle a relatively high rate of updates with almost no reads - reads can go to the slaves.
My existing Solr instance occupies a huge amount of RAM; in fact, it started swapping at only 4.5 million documents. I want to make the RAM footprint as small as possible, even if it hurts search performance.
So, which Solr config values can I tweak in order to accomplish this?
Thank you.

It's hard to say without knowing the specifics of your environment (the schema, custom indexers, query functions, etc.) and what a "huge amount of RAM" actually is, but you could start by:
Setting filterCache, queryResultCache and documentCache to 0 in solrconfig.xml. This will severely impact the performance of queries executed on that Solr instance.
Setting compression to true on the TextField and StrField types that you store, then setting compressThreshold to a low integer value. This will decrease the size of stored documents at the cost of increased CPU usage (see http://wiki.apache.org/solr/SchemaXml#head-73cdcd26354f1e31c6268b365023f21ee8796613 for more details).
Turning off all autowarming queries and not running any read queries against the master.
Making sure you commit often enough.
Obviously these are all things to do on the master, not on the slaves. A rough solrconfig.xml sketch along these lines is shown below.
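As an illustration only, a write-optimized master might use a fragment like the following. This is a sketch, not a drop-in config: the cache classes and the autoCommit values are assumptions to check against your Solr version's default solrconfig.xml.
<!-- Hypothetical solrconfig.xml fragment for a write-heavy, read-light master -->
<query>
  <!-- caches sized to zero: saves RAM, hurts query performance on this node -->
  <filterCache class="solr.FastLRUCache" size="0" initialSize="0" autowarmCount="0"/>
  <queryResultCache class="solr.LRUCache" size="0" initialSize="0" autowarmCount="0"/>
  <documentCache class="solr.LRUCache" size="0" initialSize="0" autowarmCount="0"/>
  <!-- remove any newSearcher/firstSearcher warming listeners as well -->
</query>
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- commit regularly so pending documents don't accumulate in RAM -->
  <autoCommit>
    <maxDocs>10000</maxDocs>  <!-- placeholder values; tune to your update rate -->
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>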

Related

HibernateSearch: Reindex 50 million rows from a single table into Elasticsearch

We are currently using the default settings (10 objects to load per query, per thread) of the MassIndexer with 7 threads to reindex data from 1 table (8-10 fields) into Elasticsearch. The table currently has 25 million rows and will grow to a few hundred million.
MassIndexer indexer = searchSession.massIndexer(Entity.class)
        .threadsToLoadObjects(7);
indexer.start()
        .thenRun(() -> log.info("Mass Indexing Entity Complete"))
        .exceptionally(throwable -> {
            log.error("Mass Indexing Entity Failed", throwable);
            return null;
        });
The database is Postgres on RDS, and we are using AWS Elasticsearch. The Hibernate Search version is 6.
Recently we hit a bottleneck during the reindexing process, as it ran for hours with 20 million rows in the table. One of the reasons was that we had a connection pool with a maximum of 10 connections. With the current mass indexer setup (7 threads), only 2 connections were left over (1 for ID lookup + 7 for entity lookup), causing timeouts for other operations waiting for a connection. We will increase the pool size to 20 and test.
What is the best strategy to reindex very large datasets? Can MassIndexer scale to this high volume with some configuration settings? Or should we look at other strategies? What has worked in the past for someone with the same requirements?
UPDATE: Also it looks like the IDLoader thread is not batched, so for 50 million rows, it will load all 50 million IDs in memory in 1 query?
And, what is the use of idFetchSize? Looks like it is not used in the indexing process.
What is the best strategy to reindex very large datasets? Can MassIndexer scale to this high volume with some configuration settings?
With that many entities, things are definitely going to take more than just a few minutes.
Whether it can scale... the thing is, the mass indexer is just a middleman between your database and Elasticsearch. Assuming your database scales, and Elasticsearch scales, then the only thing required for the mass indexer to scale is to do more work in parallel. And you can control that.
Now, you probably meant "can it reindex in a satisfying amount of time", and that of course will depend on what your expectations are, as well as how much effort you put into tuning it.
The performance of mass indexing will be affected by the configuration you pass to the mass indexer, of course, but also by the schema and data of your entities, your RDBMS and its configuration, your Elasticsearch cluster and its configuration, the machines they run on, ... Really, no one knows what's possible: the only way to know is to try, assess the results, tune, and iterate.
I'd advise you to first concentrate on addressing lazy loading issues, since those have a tremendous impact on performance; be sure to set hibernate.default_batch_fetch_size in order to reduce the impact of lazy loading.
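For example, if you bootstrap JPA programmatically, the property can be passed via javax.persistence.Persistence when building the EntityManagerFactory; it can equally go in persistence.xml or your framework's configuration. A minimal fragment in the same style as the snippet above; the persistence unit name "my-persistence-unit" and the batch size of 100 are placeholder assumptions:
// Hypothetical JPA bootstrap fragment: enable batch fetching of lazy associations
// to cut down on N+1 selects while the mass indexer loads entities.
Map<String, Object> properties = new HashMap<>();
properties.put("hibernate.default_batch_fetch_size", "100"); // placeholder value; tune it
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-persistence-unit", properties);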
Then, I can't do much more than repeating what the reference documentation says:
The MassIndexer was designed to finish the re-indexing task as quickly as possible, but there is no one-size-fits-all solution, so some configuration is required to get the best of it.
Performance optimization can get quite complex, so keep the following in mind while you attempt to configure the MassIndexer:
Always test your changes to assess their actual effect: advice provided in this section is true in general, but each application and environment is different, and some options, when combined, may produce unexpected results.
Take baby steps: before tuning mass indexing with 40 indexed entity types with two million instances each, try a more reasonable scenario with only one entity type, optionally limiting the number of entities to index to assess performance more quickly.
Tune your entity types individually before you try to tune a mass indexing operation that indexes multiple entity types in parallel.
Beyond tuning the mass indexer, remember that it only loads data from the database to push it to Elasticsearch. So sure, the mass indexer might be the bottleneck, but so could be the database or Elasticsearch, if they are under-dimensioned. Make sure that both can provide satisfying throughput as well: decent machines, clustering if necessary, server-side configuration, ...
Anyway, there are many things you can do, but before you do, try to find out what the bottleneck is. Is your database always at 100% CPU? Then tune your database: change settings, use a beefier machine, ... Is Elasticsearch I/O clearly reaching its limits? Then tune Elasticsearch: change settings, add more nodes, ... Are both PostgreSQL and Elasticsearch doing just fine? Then maybe you need even more DB connections, more ES connections, or more threads in your mass indexer. Or maybe it's something else; performance is hard.
Or should we look at other strategies?
I would leave that as a last resort. If you don't understand what is wrong exactly with the performance of the mass indexer, then you're unlikely to find a better solution.
If you don't trust the MassIndexer to do a good job, you can try doing it yourself: set up one thread that loads IDs and other threads that load the corresponding entities, then index them manually. That's not exactly simple to get right, but it's possible.
If you do just that, I doubt you will improve anything. But, assuming entity loading is the bottleneck, and not indexing (you must check that first!), I imagine that you could get better throughput by leveraging the specifics of your database:
If lazy loading seems to be the problem, you could use entity graphs to make sure all parts of your entity that are indexed will be loaded eagerly. The MassIndexer cannot currently do that, though hopefully it will someday (HSEARCH-521).
If there are some JDBC query hints that improve performance in your case, you could try setting them.
If the database is more than capable of handling the load, and the bottleneck seems to be the processing of entities into documents, then you can try to partition the IDs and run your "custom indexing process" on multiple machines. E.g. reindex IDs 1 to 25,000,000 on one machine, and IDs 25,000,001 to 50,000,000 on another. You couldn't do that with the mass indexer, as it does not allow filtering the IDs (at least not in Hibernate Search 6.0, but it will in 6.1: HSEARCH-499).
UPDATE: Also it looks like the IDLoader thread is not batched, so for 50 million rows, it will load all 50 million IDs in memory in 1 query?
No, IDs are loaded in batches. Each batch is then pushed to an internal queue and consumed by a loading thread. The size of the batches is controlled by batchSizeToLoadObjects.
The one exception is MySQL, whose default configuration is to load the whole result set in memory (don't ask me why), but that doesn't affect PostgreSQL. And anyway, that can be fixed (see below).
More information about the parameters here.
And, what is the use of idFetchSize? Looks like it is not used in the indexing process.
This is the JDBC fetch size. IDs are retrieved using a scroll (cursor), and the JDBC fetch size is the size of result pages (~ low-level buffers) for this scroll in your JDBC driver.
To be honest, it's mostly useful for MySQL (and perhaps MariaDB?), whose JDBC driver will load all results in memory even if we're using a cursor, unless the fetch size is set to Integer#MIN_VALUE. I know, it's weird.
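Putting those knobs together, a tuned call might look like the fragment below (same style as the snippet in the question). The thread count, batch size, and fetch size are placeholder assumptions to validate against your own bottleneck measurements, not recommended values:
MassIndexer indexer = searchSession.massIndexer(Entity.class)
        .threadsToLoadObjects(7)     // entity-loading threads; each one holds a DB connection
        .batchSizeToLoadObjects(25)  // entities loaded per query on each of those threads
        .idFetchSize(150);           // JDBC fetch size for the ID scroll
        // (on MySQL you would pass Integer.MIN_VALUE here, per the note above)
indexer.start()
        .thenRun(() -> log.info("Mass Indexing Entity Complete"))
        .exceptionally(throwable -> {
            log.error("Mass Indexing Entity Failed", throwable);
            return null;
        });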

Reason behind “Index creation no longer defaults to five shards”

What was the reasoning behind "Index creation no longer defaults to five shards but one shard"?
So far, the assumption was: more shards = more scalability = more parallelism.
Isn't that change defeating the whole purpose of distributed systems like ES?
Yes, you can say more shards = more scalability = more parallelism, but this only holds when those shards actually make use of additional cores or additional machines (data nodes) in the cluster.
This is the default config, which is meant for basic workloads and obviously needs more fine-tuning for advanced use cases; that is the whole point of making it configurable. It's very difficult to design the perfect Elasticsearch cluster, and since it depends on many factors, Elasticsearch provides default values that work for the most general use cases.
Either you start with a modest workload that gradually increases, or you start with a huge workload from the beginning (in which case you will configure more shards anyway to get the benefits listed above; that is the advanced use case).
The first case is more common, and the beauty of Elasticsearch is that you can get started with little knowledge: the default settings work quite well for modest workloads, and often you don't have to change them or even understand them in detail.
Having a larger number of shards for a small number of documents with heavy search traffic caused issues (a single search spawned 5 threads, since the default was 5 shards), and that is the common situation for most basic and modest applications out there.
So it makes sense to change the default to 1 shard, as that fits the more common use case; beyond that, you need to go deeper to scale your cluster anyway, which requires fine-tuning Elasticsearch further. If you do need more shards, you can set the count explicitly at index creation time, as shown below.
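A minimal sketch using the Java high-level REST client; the host, index name, and shard/replica counts are placeholder assumptions, and the exact client API depends on your Elasticsearch version:
import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.common.settings.Settings;

public class CreateIndexWithShards {
    public static void main(String[] args) throws Exception {
        // Hypothetical example: override the 1-shard default at index creation time.
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
            CreateIndexRequest request = new CreateIndexRequest("my-index")
                    .settings(Settings.builder()
                            .put("index.number_of_shards", 3)    // placeholder: size to your data nodes and volume
                            .put("index.number_of_replicas", 1));
            client.indices().create(request, RequestOptions.DEFAULT);
        }
    }
}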

Elasticsearch with different java heaps, does it matter?

Say I've got 2 servers, one of which has -Xmx and -Xms set to 4G and the other to 2G.
Will Elasticsearch handle those performance differences when balancing? Or will both servers be used purely based on indices, resulting in a (much more) likely OOM on the latter than the former?
By the way, I've set the properties indices.fielddata.cache.size, indices.breaker.fielddata.limit, indices.breaker.request.limit, and indices.breaker.total.limit on both servers, as Elasticsearch suggests.
This is important to me because, if it does matter, I'd have to change the index sharding based on guessed index strain, which would be a hassle (if not impossible).
Elasticsearch treats every node the same and balances the documents equally between them. This means that Elasticsearch won't readjust based on hardware, so you won't get optimal performance out of the box.
One thing to remember here is that a herd of bulls is only as fast as its slowest bull, and the same applies here. That said, if the load is small enough that it does not exhaust the hardware of the 2 GB machine, you should not see any issues. Otherwise, you will see a difference in memory-intensive operations like aggregations.

Strategy for "user data" in couchbase

I know that a big part of Couchbase's performance comes from serving in-memory documents, and for many of my data types that seems like an entirely reasonable aspiration. But considering how user data scales and is used, I'm wondering if it's reasonable to plan for only a small percentage of the user documents to be in memory at any given time - I'm thinking maybe only 10-15%. Is this a reasonable assumption considering:
At any given time, only a fraction of users will be using the system.
In this case, users only access their own data (or predominantly so)
Recently entered data is exponentially more likely to be viewed than historical user documents
UPDATE:
Some additional context:
Let's assume there's a user base of 1 million customers: 20% rarely if ever access the site, 40% access it once a week, and 40% access it every day.
At any given moment, only 5-10% of the user population would be logged in
When a user logs in they are likely to re-query certain documents within a single session (although the client does do some object caching to minimise this)
For any user, the most recent records are very active, the very old records very inactive
In summary, I would say the majority of user-triggered transactional documents are queried quite infrequently, but there is a core set -- records produced in the last 24-48 hours and relevant to the currently "logged in" group -- that would benefit significantly from being in memory.
Two sub-questions are:
Is there a way to set a timestamp on a per-document basis to indicate its need to be kept in memory?
How does Couchbase handle the growing list of document IDs in memory? It is my understanding that all IDs must always be in memory - isn't this too memory-intensive for some apps?
First, one of the major benefits of Couchbase is the fact that it is spread across multiple nodes. This also means your queries are spread across multiple nodes, and you get a performance gain as a result (I know several other similar NoSQL stores are spread across nodes too - so maybe this is not relevant for your comparison?).
Next, I believe this question is a little too broad, as the answer will really depend on your usage. Does a given user only query their data once, at random? If so, then by your numbers there will only be an in-memory benefit 10-15% of the time. If instead, once a user is on the site, they might query their data multiple times, there is a definite performance benefit.
Regardless, Couchbase has pretty fast disk-access performance, particularly on SSDs, so it probably doesn't make much difference either way; but again, without specifics there is no way to be sure. If the documents are relatively small and a user is waiting for one of them to load, the user will certainly not notice whether it came from RAM or disk.
Here is an interesting article on benchmarks for CB against similar nosql platforms.
Edit:
After reading your additional context, I think your scenario lines up pretty much exactly how Couchbase was designed to operate. From an eviction standpoint, CB keeps the newest and most-frequently accessed items in RAM. As RAM fills up with new and/or old items, oldest and least-frequently accessed are "evicted" to disk. This link from the Couchbase Manual explains more about how this works.
I think you are on the right track with Couchbase - in any regard, its flexibility with scaling will easily allow you to tune the database to your application. I really don't think you can go wrong here.
Regarding your two questions:
Not in Couchbase 2.2
You should use relatively small document IDs. It is true that all keys are stored in RAM, but as long as your document IDs are small this is not a problem; your deployment is not "right-sized" if a significant percentage of the available cluster RAM is going to storing keys. This link talks about keys and gives details relevant to key size (e.g. the 250-byte limit on key size, per-key metadata, etc.).
Basically, the decision you are making is how to size the Couchbase cluster for bucket RAM while allowing a reduced residency ratio (the % of document values in RAM) and relying on cache misses to pull the rest from disk; a rough sizing sketch is shown below.
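As a back-of-the-envelope illustration of that trade-off, every number below is a placeholder assumption (especially the per-key metadata overhead, which varies by Couchbase version); check them against the Couchbase sizing documentation for your release:
public class BucketSizingSketch {
    public static void main(String[] args) {
        // Hypothetical estimate: keys + metadata always stay in RAM,
        // while only a fraction of the document values needs to be resident.
        long documents     = 1_000_000L; // user base from the question
        int avgKeyBytes    = 40;         // placeholder: keep document IDs short
        int metadataBytes  = 56;         // placeholder: per-key overhead, varies by version
        int avgValueBytes  = 2_048;      // placeholder: average document size
        double residency   = 0.15;       // plan for ~15% of values resident in RAM

        long keysAndMetaRam   = documents * (avgKeyBytes + metadataBytes);
        long residentValueRam = (long) (documents * avgValueBytes * residency);

        System.out.printf("Keys + metadata: %d MB%n", keysAndMetaRam / (1024 * 1024));
        System.out.printf("Resident values: %d MB%n", residentValueRam / (1024 * 1024));
    }
}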
However, there are caveats in this scenario as well. You will basically also have relatively constant "cache eviction" where "not recently used" values are being removed from RAM cache as you pull cache missed documents from disk into RAM. This is because you will always be floating at the high water mark for the Bucket RAM quota. If you also simultaneously have a high write velocity (new/updated data) they will also need to be persisted. These two processes can compete for Disk I/O if the write velocity exceeds your capacity to evict/retrieve, and your SDK client will receive a Temporary OOM error if you actually cannot evict fast enough to open up RAM for new writes. As you scale horizontally, this becomes less likely as you have more Disk I/O capacity spread across more machines all simultaneously doing this process.
If when you say "queried" you mean querying indexes (i.e. Views), this is a separate data structure on disk that you would be querying and of course getting results back is not subject to eviction/NRU, but if you follow the View Query with a multi-get the above still applies. (Don't emit entire documents into your Index!)

Does this BIG mongodb storage cause low performance?

We have a MongoDB database with 336 GB of data in it.
Unfortunately, there is only 8 GB of memory on that server.
Is it true to say that this will slow the db down, especially when I try to traverse the entire collection?
What can I do to improve performance?
To get things right, this isn't a "BIG" production setup; it is actually relatively small.
That aside:
Is it true to say that this will slow the db down, especially when I try to traverse the entire collection?
Yes, it is true. As you iterate the collection, MongoDB will need to page in your data; this is true even if you have indexes on the collection.
The exception is when you use index-only cursors, whereby all the data, including the returned fields, comes only from the index; these are otherwise known as covered queries. An example is sketched below.
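Here is a minimal sketch of a covered query with the MongoDB Java driver; the connection string, collection, and field names are placeholder assumptions. The key point is that both the filter and the projection touch only indexed fields, with _id excluded:
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Projections.excludeId;
import static com.mongodb.client.model.Projections.fields;
import static com.mongodb.client.model.Projections.include;

public class CoveredQueryExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost")) {
            MongoCollection<Document> users = client.getDatabase("test").getCollection("users");

            // Index the field we will both filter and project on.
            users.createIndex(Indexes.ascending("email"));

            // Filter and projection use only the indexed field, and _id is excluded,
            // so results can be served from the index alone (a covered query).
            for (Document doc : users.find(eq("email", "a@example.com"))
                                     .projection(fields(include("email"), excludeId()))) {
                System.out.println(doc.toJson());
            }
        }
    }
}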
The problem you have here is that your dataset is 42x larger than your RAM, and that is assuming you can use all of it (which you can't, of course; the OS and other programs reserve some for themselves). This means that if you expect to iterate the entire collection, you will not be able to do it performantly; instead, MongoDB could end up page-thrashing its allocated memory.
What can I do to improve performance?
Get a little more RAM.
You could also try a bit of sharding if adding that much RAM to that one server is a pain.
I would aim for about 20x more data than RAM; that shouldn't be too bad in most cases.
You should index your collection (http://docs.mongodb.org/manual/applications/indexes/) to improve performance, but bear in mind that MongoDB uses memory when querying indexes, so make sure each index you create can fit within the memory you have on your server.
You could also shard your collection, but you will need more servers to do this: http://docs.mongodb.org/manual/sharding/
And I know it's obvious, but get more memory - it's cheap!
MongoDB uses memory-mapped files to map the data into the system's virtual memory. If you try to access more data than the system's available memory, performance will be poor. You'll have to consider other options like sharding, indexing, increasing RAM, etc. Indexing may improve performance, but not by much on a large data set, because indexes also need memory. A few references:
First 3 questions talk about memory-mapped files: http://docs.mongodb.org/manual/faq/storage/
On sharding: http://docs.mongodb.org/manual/faq/sharding/
Ensuring indexes fit in RAM: http://docs.mongodb.org/manual/applications/indexes/#ensure-indexes-fit-ram
The other answers say either "have enough memory to fit your data", "have enough memory for each index", or "have some multiple of your RAM in data". None of those is very effective or very precise for capacity planning.
You need to know what your access patterns will be and then decide what indexes you will need to effectively be able to use your data. If all of your indexes fit in available RAM with some room to spare for most recently touched documents, then you should be okay.
When your working set (accessed data + indexes) cannot fit in RAM then your performance will be correlated more with disk access speed than anything else. Depending on how fast your disks are and on your throughput and latency requirements, it may work out okay or it may not.
While there is not enough information to say with certainty whether you will succeed or fail on this particular machine, you should be able to collect enough information to determine that for yourself by analyzing your indexing needs, etc.
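For example, one concrete starting point for that analysis is comparing index sizes against available RAM. A minimal sketch with the MongoDB Java driver; the database and collection names are placeholder assumptions, and collStats reports sizes in bytes:
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class IndexSizeCheck {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost")) {
            MongoDatabase db = client.getDatabase("test");

            // collStats reports sizes in bytes; "users" is a placeholder collection name.
            Document stats = db.runCommand(new Document("collStats", "users"));

            long totalIndexSize = ((Number) stats.get("totalIndexSize")).longValue();
            System.out.printf("Total index size: %d MB%n", totalIndexSize / (1024 * 1024));

            // Per-index breakdown, useful to spot a single index that dominates.
            Document indexSizes = stats.get("indexSizes", Document.class);
            indexSizes.forEach((name, size) ->
                    System.out.printf("  %s: %d MB%n", name, ((Number) size).longValue() / (1024 * 1024)));
        }
    }
}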
