Solr deployment strategies for 100% uptime while rebuilding the whole index - asp.net-mvc-3

I'm working on an e-commerce project using Solr 3.6 with ASP.NET MVC3.
I have an index of approximately 100,000 (1 lakh) products in Solr. Some requirements have changed, and we need to rebuild the whole index. A full reindex takes almost an hour and a half, during which the site needs to be down.
How can I rebuild the index while keeping the site live, serving content from the older index? What are the best practices for reducing downtime while rebuilding the whole index? Ideally I'd like to do it with 100% uptime.
Edit
I'm storing several URLs in Solr as stored fields, and they are generated dynamically while the data is added to Solr. If I deploy the application on a different subdomain like test.example.com, it produces the wrong URLs, since they only work with example.com. So hosting another copy of the application is not an option for me.

You can leverage the concept of multiple cores in Solr and thereby have a live core that users are currently searching against and a standby core where you can make schema changes, re-index, etc. Then using the SWAP command you can switch the live and standby cores without any user downtime. The swap will be handled internally by Solr and your users will never notice a difference.
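To make the SWAP concrete: the switch itself is one call to Solr's CoreAdmin handler. A minimal sketch from the .NET side, assuming two cores named product_live and product_standby on the default URL (both names and the URL are assumptions):

using System.Net;

class SolrCoreSwapper
{
    // Assumed core names and Solr URL -- adjust to your setup.
    const string SolrRoot = "http://localhost:8983/solr";

    public static void SwapLiveAndStandby()
    {
        // 1. Rebuild the full index against the standby core first, e.g. by pointing
        //    a second ISolrOperations<T> instance at SolrRoot + "/product_standby",
        //    adding all documents and committing.

        // 2. Ask Solr's CoreAdmin handler to swap the two cores atomically.
        using (var web = new WebClient())
        {
            web.DownloadString(SolrRoot +
                "/admin/cores?action=SWAP&core=product_live&other=product_standby");
        }

        // From this point on, queries against "product_live" are served by the
        // freshly built index; the old one stays around as the new standby.
    }
}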

As I see it, there are several ways to correctly solve this problem:
Do not rebuild the whole index - just update the necessary records on the fly when they change; Solr can do that quite easily.
Create 2 Solr instances on different ports and alternate between them. While the first is rebuilding, you can keep serving the old index from the second. Once the first is rebuilt, use it until the index on the second instance is rebuilt.
Add a boolean field to your index named, for example, "old_index". When reindexing starts, update all current records to set old_index=1, and configure your queries to look only for records with old_index=1. Then start reindexing, and afterwards delete the old records. This can be done with Solr's deleteByQuery and either atomic updates in Solr 4.x or a manual update (a sketch of the cleanup step follows).
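For option 3, the cleanup step might look roughly like this with SolrNet. This is a sketch only; the boolean old_index field, the ProductSolr document type, and the way the ISolrOperations instance is obtained are assumptions based on the question's setup:

using Microsoft.Practices.ServiceLocation;
using SolrNet;

public class OldIndexCleanup
{
    // Assumes the schema has a boolean "old_index" field and that freshly
    // reindexed documents are added with old_index=false.
    public static void DeleteOldRecords()
    {
        var solr = ServiceLocator.Current.GetInstance<ISolrOperations<ProductSolr>>();

        // ... the full reindex runs before this point, adding new documents ...

        // deleteByQuery: drop everything still flagged as belonging to the old index.
        solr.Delete(new SolrQuery("old_index:true"));
        solr.Commit();
    }
}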

Related

How do you perform hitless reindexing in Elasticsearch that avoids races and keeps consistency?

I'm implementing Elasticsearch for a blog where documents can be updated.
I need to perform hitless reindexing in Elasticsearch that avoids races and keeps consistency. (By consistency, I mean that if the application does a write followed by a query, the query should show the changes even during reindexing.)
The best advice I've been able to find is to use aliases to atomically switch which index the application is using, and to have the application write to both the old index (via a write_alias) and the new index (via a special write_next_version alias) during a reindexing operation, while reading from the old index (via a read_alias). Any races in the concurrent writes between the reindex and the application are resolved by document version numbers, as long as the application writes to the old index first and the new index second. When the reindexing is done, just atomically switch the application's read and write aliases to the new index and delete the write_next_version alias.
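For concreteness, the final atomic switch described above is a single _aliases request; roughly as follows (the index names, the URL, and the use of a plain WebClient are assumptions for the sketch):

using System.Net;

class AliasSwitcher
{
    // One atomic _aliases request: move read_alias and write_alias to the new
    // index and drop the temporary write_next_version alias. The index names
    // ("posts_v1"/"posts_v2") and the ES URL are assumptions.
    public static void PromoteNewIndex()
    {
        const string body = @"{
          ""actions"": [
            { ""remove"": { ""index"": ""posts_v1"", ""alias"": ""read_alias""  } },
            { ""remove"": { ""index"": ""posts_v1"", ""alias"": ""write_alias"" } },
            { ""remove"": { ""index"": ""posts_v2"", ""alias"": ""write_next_version"" } },
            { ""add"":    { ""index"": ""posts_v2"", ""alias"": ""read_alias""  } },
            { ""add"":    { ""index"": ""posts_v2"", ""alias"": ""write_alias"" } }
          ]
        }";

        using (var web = new WebClient())
        {
            web.Headers[HttpRequestHeader.ContentType] = "application/json";
            web.UploadString("http://localhost:9200/_aliases", "POST", body);
        }
    }
}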
However, there are still races and performance issues.
My application doesn't know there's a reindex occurring; the reindex and the alias switching involved are a separate, long-running process. I could use a HEAD command to find out whether the special write_next_version alias exists and only write if it exists. However, that's an extra round trip to the ES servers. There's also still a race between the HEAD command and the reindex process described above that deletes the write_next_version alias. I could just do both writes every time and silently handle the error on the usually non-existent write_next_version alias. I'd do this via the bulk API if my documents were small, but they are blog entries and could be fairly large.
So should I just write twice every time and swallow the error on the second write? Or should I use the HEAD API to determine whether the application needs to perform the second write for consistency? Or is there some better way of doing this?
The general outline of this strategy is shown in this article. This older article also shows how to do it, but they don't use aliases, which is not acceptable. There is a related issue on the Elasticsearch GitHub, but it does not address the problem that two writes need to be done in order to maintain consistency. It also doesn't address the races or performance issues. (They closed the issue...)

My CouchDB view is rebuilding for no reason

I have a CouchDB instance with a database containing ~20M documents. It takes ~12h to build a single view.
I have saved 6 views successfully. They returned results quickly, at first.
After 2 days idle, I added another view. It took much longer to build, and it was a "nice-to-have", not a requirement, so I killed it after ~60% completion (restarted the Windows service).
My other views now start rebuilding their indexes when accessed.
Really frustrated.
Additional info: the disk had gotten within 65GB of full (1TB local disk).
Sorry, you have no choice but to wait for the views to rebuild here. However, I will try to explain why this is happening. It won't solve your problem, but perhaps it will help you understand what is happening and how to prevent it in the future.
From the wiki:
CouchDB view index filenames are based on the contents of the design document (not its name, ID or revision). This means that two design documents with identical view code will share view index files.
It follows that if you change the contents of a design document, by adding a new view or updating an existing one, CouchDB will rebuild the indexes for every view in that document.
So I think the most obvious solution is to add new views in new design docs. This will prevent re-indexing of existing views, and the new one will take whatever time it needs to index anyway.
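To make that concrete, here is a minimal sketch of pushing a new view in its own design document over CouchDB's HTTP API; the database name, design doc name, view name, and map function are all made up:

using System.Net;

class CouchDesignDocs
{
    // Pushes a new view in its own design document so the index files of the
    // existing design docs stay untouched.
    public static void AddNewViewInSeparateDesignDoc()
    {
        const string designDoc = @"{
          ""language"": ""javascript"",
          ""views"": {
            ""by_created_at"": {
              ""map"": ""function(doc) { if (doc.created_at) emit(doc.created_at, null); }""
            }
          }
        }";

        using (var web = new WebClient())
        {
            web.Headers[HttpRequestHeader.ContentType] = "application/json";
            // PUT /{db}/_design/{name} creates the design document.
            web.UploadString("http://localhost:5984/mydb/_design/reports_v2", "PUT", designDoc);
        }
    }
}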
Here is another helpful answer that sheds light on how to effectively use CouchDB design documents and views.

Pattern to load data to Elasticsearch from SQL server

Here is what we came up with, using a 3-value status column:
0 = Not indexed
1 = Indexed
2 = Updated (needs re-indexing)
There will be 2 jobs...
Job 1 will select the top X records where status = 0 and pop them into a queue like RabbitMQ.
Then a consumer will bulk insert those records to ES and update the status of DB records to 1.
For updates, since we have control of our data... the SQL stored proc that updates a particular record will set its status to 2. Job 2 will select the top X records where status = 2 and pop them onto RabbitMQ. Then a consumer will bulk insert those records to ES and update the status of the DB records to 1.
Of course, we may need an intermediate status for "queued" so that neither job picks up the same record again, although the same job should not run again if it hasn't completed. The chances of a queued record being updated are slim to none, since updates only happen at the end of the day, usually for the next day. (A rough sketch of Job 1 follows below.)
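A rough sketch of Job 1 under this scheme. The table, column, and queue names are assumptions, and the consumer's ES bulk insert is left out:

using System.Data.SqlClient;
using System.Text;
using RabbitMQ.Client;

class IndexJobs
{
    // Job 1: pick up not-yet-indexed rows (status = 0) and queue them for the
    // consumer, which bulk inserts into ES and then flips the status to 1 (Indexed).
    public static void QueueUnindexedProducts(string sqlConnectionString, string rabbitHost)
    {
        var factory = new ConnectionFactory { HostName = rabbitHost };
        using (var db = new SqlConnection(sqlConnectionString))
        using (var mq = factory.CreateConnection())
        using (var channel = mq.CreateModel())
        {
            channel.QueueDeclare("es-index", durable: true, exclusive: false, autoDelete: false);
            db.Open();

            // Hypothetical Products table with Id, Name and Status columns.
            var cmd = new SqlCommand("SELECT TOP 500 Id, Name FROM Products WHERE Status = 0", db);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Naive JSON construction, just to keep the sketch short.
                    var json = "{\"id\":" + reader.GetInt32(0) +
                               ",\"name\":\"" + reader.GetString(1) + "\"}";
                    channel.BasicPublish(exchange: "",
                                         routingKey: "es-index",
                                         basicProperties: null,
                                         body: Encoding.UTF8.GetBytes(json));
                }
            }
            // Job 2 is identical except that it selects WHERE Status = 2 (Updated).
        }
    }
}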
So I know there are rivers (but they are being deprecated and are probably not as flexible as an ETL tool).
I would like to bulk insert records from my SQL Server into Elasticsearch.
I could write a scheduled batch job of some sort, with an ETL tool or anything else; that part doesn't matter.
A query like select from table where id > lastIdInsertedToElasticSearch would allow me to load the latest records into Elasticsearch at a scheduled interval.
But what if a record is updated in SQL Server? What would be a good pattern to track updated records in SQL Server and then push the updated records to ES? I know ES keeps document versions when putting to the same Id, but I can't seem to visualize a pattern.
So IMHO, batch inserts are good for building or rebuilding the index. For the initial load, you can run batch jobs that run SQL queries and perform bulk updates. Rivers, as you correctly pointed out, don't provide a lot of flexibility in terms of transformation.
If the entries in your SQL data store are created by you (i.e. some codebase in your control), it would be better for the same codebase to update the documents in Elasticsearch as well, maybe not directly but by notifying some other service or using queues, so as not to waste time while responding to requests (if that's the kind of setup you have).
We have a pretty similar use case for Elasticsearch. We provide search inside our app, which performs search across different categories of data. Some of this data is actually created by the users of our app through our app, so we handle this easily. Our app writes that data to our SQL data store and pushes the same data to RabbitMQ for indexing/updating in Elasticsearch. On the other side of RabbitMQ, we have a consumer written in Python that basically replaces the entire document in Elasticsearch. The corresponding rows in our SQL data store and documents in Elasticsearch share the same ID, which enables us to update the document.
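The "replace the entire document" step boils down to re-indexing the JSON under the same ID. Sketched here in C# rather than Python, for consistency with the rest of this page; the index name, URL, and document path are assumptions:

using System.Net;

class SearchDocumentWriter
{
    // Re-indexing under the same id fully replaces the previous document, so
    // the SQL row and the Elasticsearch document stay in sync via their shared id.
    // Older ES versions use /{index}/{type}/{id} instead of /{index}/_doc/{id}.
    public static void ReplaceDocument(string id, string documentJson)
    {
        using (var web = new WebClient())
        {
            web.Headers[HttpRequestHeader.ContentType] = "application/json";
            web.UploadString("http://localhost:9200/app-search/_doc/" + id, "PUT", documentJson);
        }
    }
}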
Another case is where a few of the data types we perform search on come from a 3rd-party service which exposes the data over its HTTP API. The data creation is in our control, but we don't have an automated mechanism for updating the entries in Elasticsearch. In this case, we basically run a cron job that takes care of it. We have had to tune the cron's schedule because we also have a limited API query quota. But in this case our data is not really updated that much per day, so this kind of system works for us.
Disclaimer: I co-developed this solution.
I needed something like the jdbc-river that could do more complex "roll-ups" of data. After careful consideration of what it would take to modify the jdbc-river to suit my needs, I ended up writing the river-net.
Here are a few of the features:
It gets fairly decent performance (comparable to the jdbc-river; we get upwards of 6k rows/sec).
It can join many tables to create complex nested arrays of documents without creating duplicate child documents
It follows a lot of the same conventions as the jdbc-river.
It also supports reading from files.
It's written in C#
It uses Quartz.Net and supports cron expressions for scheduling.
This project is open source, and we already have a second project (also to be open sourced) that does generic job scheduling with RabbitMQ. We have ported over a lot of this project, and plan to use the RabbitMQ river for better performance and stability when indexing into Elasticsearch.
To cope with large updates, we aren't hitting tables directly. Instead, we use stored procedures that only grab deltas. We also have an option on the stored procedure to reset the delta and reindex everything.
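Calling such a delta stored procedure from a scheduled job might look like this; the procedure and parameter names are invented for illustration:

using System;
using System.Data;
using System.Data.SqlClient;

class DeltaLoader
{
    // Pulls only the rows changed since the last successful run.
    // "dbo.GetProductDeltas" and "@SinceUtc" are hypothetical names; a reset
    // flag (or passing DateTime.MinValue) would re-pull everything for a full reindex.
    public static SqlDataReader ReadDeltas(SqlConnection openConnection, DateTime lastRunUtc)
    {
        var cmd = new SqlCommand("dbo.GetProductDeltas", openConnection)
        {
            CommandType = CommandType.StoredProcedure
        };
        cmd.Parameters.AddWithValue("@SinceUtc", lastRunUtc);
        return cmd.ExecuteReader();
    }
}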
The project is fairly young with only a few commits, but we are open to collaboration and new ideas.

Torquebox Infinispan Cache - Too many open files

I looked around and apparently Infinispan has a limit on the number of keys you can store when persisting data to the FileStore. I get the "too many open files" exception.
I love the idea of TorqueBox and was anxious to slim down the stack and just use Infinispan instead of Redis. I have an app that needs to cache a lot of data. The queries are computationally expensive and need to be re-computed daily (phone and other productivity metrics by agent in a call center).
I don't run a cluster, though I understand the cache would persist if I had at least one app running. I would still like to persist the cache. Has anybody run into this issue and found a workaround?
Yes, Infinispan's FileCacheStore used to have an issue with opening too many files. The new SingleFileStore in 5.3.x solves that problem, but it looks like Torquebox still uses Infinispan 5.1.x (https://github.com/torquebox/torquebox/blob/master/pom.xml#L277).
I am also using the Infinispan cache in a live application.
Basically, we are storing database queries and their results in the cache, for tables which are not updatable and are smaller in data size.
There are two approaches to designing it:
Use the query as the key and its data as the value.
This leads to too many entries in the cache when many different queries are placed into it.
Use some key xyz as the key and a Map as the value (the Map contains the queries as keys and their data as values).
This leads to a single entry in the cache. Whenever data is needed from this cache (I call it a query cache), retrieve the Map first using key xyz, then look up the query in the Map itself.
We are using the second approach (sketched below).
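Illustrated abstractly, in C# for consistency with the rest of this page, with a plain dictionary standing in for the Infinispan cache and all names made up:

using System.Collections.Concurrent;
using System.Collections.Generic;

class QueryCacheExample
{
    // A dictionary stands in for the cache here; the point is the shape of the
    // data: one cache entry ("xyz") whose value is a map of query text -> result
    // rows, i.e. the "second approach" described above.
    static readonly ConcurrentDictionary<string, Dictionary<string, List<object[]>>> cache =
        new ConcurrentDictionary<string, Dictionary<string, List<object[]>>>();

    public static List<object[]> Lookup(string query)
    {
        Dictionary<string, List<object[]>> queryMap;
        if (cache.TryGetValue("xyz", out queryMap) && queryMap.ContainsKey(query))
            return queryMap[query];     // hit: found inside the single map entry
        return null;                    // miss: run the query and repopulate the map
    }
}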

Updating Solr Index when product data has changed

We are working on implementing Solr on an e-commerce site. The site is continuously updated with new data, either through updates to existing product information or by adding new products altogether.
We are using it in an ASP.NET MVC3 application with SolrNet.
We are facing an issue with indexing. We are currently committing using the following:
private static ISolrOperations<ProductSolr> solrWorker;

public void ProductIndex()
{
    // Initialize the Solr connection if it hasn't been set up yet
    if (solrWorker == null)
    {
        Startup.Init<ProductSolr>("http://localhost:8983/solr/");
        solrWorker = ServiceLocator.Current.GetInstance<ISolrOperations<ProductSolr>>();
    }
    var products = GetProductIdandName();
    solrWorker.Add(products);    // add (or replace) the product documents
    solrWorker.Commit();         // make them visible to searches
}
This is just a simple test application where we have inserted only the product name and id into the Solr index. Every time it runs, the new products get added all at once and are available when we search. I think this creates the new data index in Solr every time it runs? Correct me if I'm wrong.
My Question is:
Does this recreate the Solr index data as a whole, or does it just update the data that is changed/new? How? Even if it only updates changed/new data, how does it know which data has changed? With a large data set, this must have some issues.
What is the alternative way to track what has changed since the last commit, and is there any way to add only the products that have changed to the Solr index?
What happens when we update an existing record in Solr? Does it delete the old data, insert the new, and recreate the whole index? Is this resource intensive?
How do big e-commerce retailers do this with millions of products?
What is the best strategy to solve this problem?
When you do an update, only that record is deleted and re-inserted; Solr does not update records in place. The other records are untouched. When you commit the data, new segments are created with the new data. On optimize, the data is merged into a single segment.
You can use an incremental (delta) build technique to add/update records changed since the last build. DIH provides this out of the box; if you are handling it manually through jobs, you can maintain a timestamp and run the builds yourself (a sketch follows below).
Solr does not have an in-place update operation. It performs a delete and an add, so you have to send the complete document again, not just the updated fields. It's not resource intensive; usually only commit and optimize are.
Solr can handle any amount of data. You can use sharding if your data grows beyond the capacity of a single machine.
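A manually managed delta build, as suggested in point 2, could look roughly like this with SolrNet. The GetProductsModifiedSince helper, the ModifiedDate column, and the timestamp storage are assumptions:

using System;
using System.Collections.Generic;
using Microsoft.Practices.ServiceLocation;
using SolrNet;

public class ProductDeltaIndexer
{
    // Re-sends only the products changed since the last run. Solr replaces each
    // document by its unique key, so untouched products stay as they are.
    public void IndexDelta(DateTime lastRunUtc)
    {
        var solr = ServiceLocator.Current.GetInstance<ISolrOperations<ProductSolr>>();

        var changed = GetProductsModifiedSince(lastRunUtc);   // hypothetical repository call
        if (changed.Count > 0)
        {
            solr.Add(changed);    // delete + add per document, handled by Solr
            solr.Commit();        // make the changes searchable
        }
        // Persist DateTime.UtcNow as the new "last run" timestamp for the next job.
    }

    private List<ProductSolr> GetProductsModifiedSince(DateTime sinceUtc)
    {
        // Query the product database for rows with ModifiedDate > sinceUtc (assumption).
        return new List<ProductSolr>();
    }
}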
