Algolia Magento site search disabled - record quota exceeded

I installed Algolia on my Magento store a while ago. Suddenly it stopped working and reverted back to the original Magento search.
When I checked the settings, it showed that the Admin API Key was missing. I re-entered it, but this error appeared:
An error occurred while saving this configuration: Record quota exceeded, change plan or delete records.
http://prntscr.com/d300va
When checking the Algolia dashboard I can see that there are only 6k out of the 10k records allowed.
http://prntscr.com/d304fw
Does anybody have any suggestions?
I'm using Magento 1.9.2.0 and Algolia extension version 1.7.2.
Thanks a bunch.
I.

During the Magento re-indexing process, the number of records will increase if there are multiple sort orders defined (to achieve the best performance possible, the index for each sort is pre-computed and stored and queried separately). For quota purposes, this means one record per sort order. This could be the cause of going over quota when it appears that you have fewer records.
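
To make that arithmetic concrete, here is a rough sketch in Python. The product count and number of sort orders below are hypothetical, not taken from the question; the point is only that each configured sort order contributes its own copy of every product record towards the quota.

    # Hypothetical numbers -- adjust to your catalog and sorting configuration.
    products = 6_000    # records visible for the primary index on the dashboard
    sort_orders = 3     # e.g. relevance, price ascending, price descending (assumed)

    # One record per product per sort order, as described above.
    records_counted_for_quota = products * sort_orders
    print(records_counted_for_quota)   # 18000 -- already over a 10,000-record quota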

Related

Retrieve sorted search results from Elasticsearch

I am facing a problem with Elasticsearch. I am using Elasticsearch 5.6.
When I search an index on some fields, I get more than 40,000 results.
I found 2 problems:
When trying to access page 1001 (result 10,001) I get an error. I understand I can increase the default 10,000 limit; however, I can accept this limitation and expose only the first 10,000 results back to the user.
When I try to sort by a specific field, the sort does not work. This is a huge problem for me, as this search is used by a client UI and I must enable paging through the results. I read about the scroll API, but it does not fit my requirements (user requests from a UI).
Do you have any idea how to solve this problem?
Thank you.
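
Not an authoritative answer, but a sketch of the two mechanics involved (the host, index name, field names, and numbers are assumptions, not the asker's mapping): a from/size page must stay inside the 10,000-result window, and in Elasticsearch 5.x sorting on an analyzed text field needs the .keyword sub-field (or fielddata enabled), which is a common reason a sort appears not to work.

    # A minimal sorted, paged query against Elasticsearch 5.6 over HTTP.
    import requests

    ES_SEARCH_URL = "http://localhost:9200/my_index/_search"   # hypothetical index

    page = 5           # zero-based page requested by the UI
    page_size = 100    # from + size must stay <= index.max_result_window (10,000 by default)

    body = {
        "from": page * page_size,
        "size": page_size,
        "query": {"match": {"description": "search terms"}},   # hypothetical field
        # Sort on the keyword sub-field; sorting on the analyzed text field
        # itself would fail or give meaningless ordering.
        "sort": [{"title.keyword": {"order": "asc"}}],
    }

    response = requests.post(ES_SEARCH_URL, json=body)
    hits = response.json()["hits"]["hits"]
    print(len(hits))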

GSA limit reached with "noindex"

Recently the GSA I manage reached its limit of indexed URLs, and from what I can see the total number of URLs with actual content is very low compared to the number of page listings (mostly by-date listings that are not content and only show results for users to navigate).
I have already added the robots meta tag with the "noindex" attribute, and many of the URLs show as "Excluded":
So I assume those documents are not being counted towards the licensed total, but without that amount my crawled URLs cannot possibly reach the 500K limit.
My other guess is that having multiple collections makes documents count towards the total, even when documents are duplicated across a couple of collections.
Has somebody else faced a similar problem?
Are you receiving a warning that you have exceeded your index? There is a limit on how many URLs over your license the GSA will crawl, but you should be able to have about 1M docs in your license (between Crawled/Errors/Excluded). Only 500K can be in the "Crawled URLs".

Seeing latest results in Kibana after the page limit is reached

I am new to Logstash. I have set up Logstash to populate Elasticsearch and have Kibana read out of it. The problem I am facing is that after
number of records = results per page x page limit
the UI stops getting new results. Is there a way to set up Kibana so that it discards the old results instead of the latest after the limit is reached?
To have Kibana read the latest results, reload the query.
To have more pages available (or more results per page), edit the panel.
Make sure the table is reverse sorted by @timestamp.
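
For a concrete sense of the cutoff described in the question (the numbers here are made up, not the asker's panel settings):

    # The table panel never shows more than results_per_page * page_limit rows,
    # so with the hypothetical settings below only 2,000 records are reachable.
    results_per_page = 100
    page_limit = 20

    visible_records = results_per_page * page_limit
    print(visible_records)   # 2000 -- newer records only appear if the sort puts them first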

Magento and Solr reindexing issue

I'm having trouble reindexing Magento with Solr. I'm getting the following error via SSH (all other indexes complete successfully):
Error reindexing Solr: Solr HTTP error: HTTP request failed, Operation timed out after 5001 milliseconds with 0 bytes received
Any ideas how to fix this?
Many thanks
It looks like there is a time limit of 5000 milliseconds, whereas your Solr indexing needs more time.
Increase the time limit.
While indexing is running, check the Solr log using the tail command.
Using the Solr admin interface, query Solr to see whether new products or data updates are in place.
You can also add some logging to the addDoc function in the Solr client PHP class to check whether it is being called at all.
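As a small illustration of the "query Solr while the reindex runs" suggestion above (the core name, host, and port are assumptions; adjust them to your install), this polls the standard /select handler and prints numFound so you can see whether documents are actually arriving before the timeout hits:

    import time
    import requests

    SOLR_SELECT_URL = "http://localhost:8983/solr/magento/select"   # hypothetical core name

    for _ in range(10):
        params = {"q": "*:*", "rows": 0, "wt": "json"}
        response = requests.get(SOLR_SELECT_URL, params=params)
        # numFound grows while documents are being added and drops to 0 if the
        # index is wiped, e.g. after a failed commit.
        print(response.json()["response"]["numFound"])
        time.sleep(30)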
Having the same issue... I'm assuming you're using Magento Solarium. I opened an issue on GitHub with the dev; I'll update you if he responds with a solution. In the meantime, if you were able to fix it, please let us know.
Since this is the only relevant hit from Google on this issue, I'll add my findings here. The issue arises when you have a large database of products (or many stores together with many products). I noticed Solr was filling up until the error occurred, after which the Solr index was empty. I then found in the code that the indexing process ends by committing all the changes; this is where the timeout happens.
Just set the timeout in System -> Configuration -> Catalog -> Solarium search to a large number (like 500 seconds), do a full reindex, and then put the timeout back to a more reasonable number (2 seconds).
Although there are two options, one for search and one general timeout setting, this doesn't seem to work as expected: if you change the search timeout setting, it still affects the indexing process.
You don't want to leave the timeout at 500 seconds, as this can cause serious issues for your server performance.

Magento indexes out of date less than an hour after a manual index rebuild

All Magento indexes in this particular store we are working on are configured to "Update on Save", and very often they will go "out of date" and some products simply disappear from the frontend.
Having just performed a manual index rebuild of all affected indexes, in less than an hour the indexes are supposedly out of date again.
I would appreciate any ideas as to what could be causing this erratic behavior.
