Magento reindex does not populate Solr search

We have Solr installed as a Tomcat application. With a default installation of Magento EE with sample data, everything runs perfectly.
However, with a couple of our stores running, the reindex does not touch the /solr/data folder. Normally, when a reindex is called, all files in /data are cleared and repopulated. However, on the live site this does not happen.
I have stripped back a lot of extensions and run the index again. This time the files are recreated, but almost no data is captured... even less than for the sample stores, despite a much larger database.
Does anyone have any clues as to where we should be looking?

Related

Retention policy for TFS Code Search Server (Elasticsearch)

We have TFS 2017.3 with a separate Code Search server.
We have a huge TFS DB (about 1.6 TB), and the Code Search server has 700 GB of disk space.
After a few weeks the disk space runs out and code search stops working in TFS.
After we increase the disk space, search starts working again.
How can we set up a retention policy to delete old code search data (the index)? We don't want to keep increasing the disk space.
Search indexing (Code and Work Item) works in two phases:
Bulk Indexing (BI), where all code and work item artifacts in all projects/repositories under a collection are indexed. This is a time-consuming operation and depends on the size of the artifacts under the collection.
Continuous Indexing (CI), which handles all incremental updates to the artifacts (add/update/delete) and indexes them. This is a notification-based model where the indexer listens to TFS events and operates on those event notifications. CI handles almost all update operations, including CRUD operations at the Project/Repository/Collection level (such as repository renames, project adds/deletes, etc.). The time a CI pass takes again depends on the size of the incremental update. BI always precedes CI, i.e. CI will never execute on a project/repository until BI has completed for it.
To clean up the index data and re-index, follow the steps below:
Pause indexing for all collections by running the following script on the TFS configuration DB:
https://github.com/Microsoft/Code-Search/blob/master/PauseIndexing.ps1
Log in to the machine where Elasticsearch (ES) is running.
Stop the ES service.
Delete the entire search index folder (something like C:\TfsData\Search\IndexStore, or wherever you configured it to be).
Restart the TFS Job Agent service(s) on the AT machines.
Delete the contents of the following tables in each of the collection DBs:
DELETE FROM [Search].[tbl_IndexingUnit]
DELETE FROM [Search].[tbl_IndexingUnitChangeEvent]
DELETE FROM [Search].[tbl_IndexingUnitChangeEventArchive]
DELETE FROM [Search].[tbl_JobYield]
DELETE FROM [Search].[tbl_TreeStore]
DELETE FROM [Search].[tbl_DisabledFiles]
DELETE FROM [Search].[tbl_ResourceLockTable]
Restart the ES service.
Run this script on the TFS configuration DB:
https://github.com/Microsoft/Code-Search/blob/master/ResumeIndexing.ps1
Run this script (picking it from the correct TFS release folder) on each of the collections:
https://github.com/Microsoft/Code-Search/blob/master/TFS_2017Update2/MissingIndexFolderTriggerCollectionIndexing.ps1
Try the last script on a smaller collection first (one with fewer repositories) so that you can verify that indexing happened correctly and the results are queryable.
For more details, refer to this MSDN blog post: Resetting Search Index in Team Foundation Server.
I was able to reduce the disk usage by deleting the ES folders and reinstalling the Code Search extension, and sometimes I also had to run MissingIndexFolderTriggerCollectionIndexing.ps1.
But I came to the conclusion that it was not worth doing: the disk usage grew rapidly back to its original size, so I did not save anything.
Although Microsoft recommends allocating disk space equal to 35% of the DB size, that is not enough for us, and we increase the size whenever the disk fills up completely (currently about 45% of the DB size).
My conclusion: don't touch ES; if the disk fills up, increase the disk size.
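Before deciding between a full reset and simply growing the disk, it can help to see how much space each search index actually uses. As a rough sketch, assuming the Search Elasticsearch instance listens on its default port 9200 and is reachable locally without authentication, you can list index sizes with the cat API:
curl "http://localhost:9200/_cat/indices?v&h=index,docs.count,store.size"
The store.size column shows the on-disk size per index, which makes it easier to tell whether one collection's index is responsible for most of the growth.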

Elasticsearch - I need to import indexes from an old server that is no longer online to the new server

So I first attempted to shut down Elasticsearch on the original server, bring up the replacement, and copy everything to the same location, but that didn't work, so I just started fresh. I now need to be able to search those old indexes to show that we have a year's worth of log retention. I don't think snapshot or reindex will work because the old indexes are not currently attached to a live server... but I may be wrong.
Any help is appreciated.
The old server's indexes begin with graylog2_2xx and graylog2_1xx.
The new server's begin with graylog_1x.

TYPO3: clear indexed search cache

When I use the search function on my website, it shows results with old content. I tried clearing all caches, but that doesn't solve the issue.
I am not sure whether I should truncate some tables in the DB, and how safe that is.
By the way, I don't have any "indexed search" option in the backend under "Info".
Which TYPO3 version are you using? With TYPO3 8.7 (not sure if this also applies to 7.6) the module moved to its own entry called "Indexing".
Anyway, you shouldn't have any problem truncating the indexed_search tables in the DB, as they ship without data when you install indexed_search.
To be sure, just make a backup of the tables first.
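As a minimal sketch of what that could look like from the command line (assuming shell access to MySQL; the database name, user, and the exact set of index_* tables are placeholders and may differ between TYPO3 versions, so check which indexed_search tables actually exist in your installation first):
# back up the indexed_search tables before touching them
mysqldump -u typo3user -p typo3_db index_phash index_fulltext index_rel index_words index_section index_grlist > indexed_search_backup.sql
# then empty them so the next crawl rebuilds the index from scratch
mysql -u typo3user -p typo3_db -e "TRUNCATE index_phash; TRUNCATE index_fulltext; TRUNCATE index_rel; TRUNCATE index_words; TRUNCATE index_section; TRUNCATE index_grlist;"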

Solr new Core from UI

I'm trying to create a new core with Solr 5.3. I had no experience working with Solr until a few days ago. I think I need this broken down Barney-style. I've been through the system docs, wikis, YouTube, and random discussion boards. The information I've found is either not current or not what I'm seeing in my UI. I've now wasted five hours trying to get this to work. I'm out of options. I'm about ready to drop this project and start from scratch. I'm completely exasperated and throwing myself on the mercy of my betters. Can anyone just show me how to do it?
I followed these steps to add a core using the Solr Admin UI:
Start the Solr server using ~/solr-5.2.0/bin/solr start. This will start Solr on port 8983.
Now go to the Solr home directory: cd ~/solr-5.2.0/server/solr.
Create a new folder that will contain the Solr core configuration: mkdir newCore.
Now create a conf directory inside newCore and copy your schema.xml and solrconfig.xml into it, along with any other necessary files.
Go to the Solr Admin UI, Core Admin section. Specify the core name of your choice and newCore (the name of the directory we just created) in the instanceDir field, then click the Add Core button.
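The same sequence can also be done entirely from the shell. This is only a sketch, assuming a default local install on port 8983 (the copied file paths are placeholders), using Solr's CoreAdmin API in place of the Add Core button:
cd ~/solr-5.2.0/server/solr
mkdir -p newCore/conf
cp /path/to/your/schema.xml /path/to/your/solrconfig.xml newCore/conf/
# register the core without the UI
curl "http://localhost:8983/solr/admin/cores?action=CREATE&name=newCore&instanceDir=newCore"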
I found a tutorial here: apache-solr-tutorial-beginners
I followed the exact instructions the author gives for creating a new core via the command line from solr-5.3.0/bin:
solr create -c jcg -d basic_configs
jcg then appeared in my Solr UI.
I went back and tried the same thing with my project specs and it worked! I still have no idea how to do this from the UI, but at least I can move forward an inch!

Reduce inode count on Magento website

I am getting errors on my website and my website's inode count is over the limit. The hosting inode limit is 200,000, but my website's inode count is 909,496, and I can't even open phpMyAdmin. The hosting support asked me to remove unused files. How can I decrease the inode count, and which files are unused on a Magento-based website?
This is usually an indicator that you need a more capable hosting provider.
The major places that Magento creates files during operation are in the var/ folder and your product image cache.
If you've never checked before, the following areas can accumulate a phenomenal amount of detritus. Using an FTP client, check the following areas in your var/ folder (see the shell sketch after this list):
Check that you don't have a bazillion session files in var/session; remove anything older than the current date.
Check that there isn't an excessive number of files in var/report; you might want to find out why Magento is generating them and fix the issue. Delete them all.
Logging will generate several huge files in var/log over time; delete them, then look at the new ones to find out what errors are being generated.
Imports and other operations can cause temporary files to accumulate in var/tmp; delete them. Also check var/import for old imports that can be deleted.
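As a rough sketch of that cleanup from the shell (run from the Magento root; this assumes a standard Magento 1 directory layout, and the retention periods are just examples to adjust to your needs):
# sessions older than a day
find var/session -type f -mtime +1 -delete
# old error reports
find var/report -type f -delete
# old log files
find var/log -type f -name "*.log" -delete
# leftover temporary files
find var/tmp -type f -delete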
Stored database backups are kept in var/backup. Using the admin backend, go to System > Tools > Backups:
Download the latest database backups to a local workstation and delete all backups.
Magento uses a lot of caching to store information; the biggest will be the image cache if you have a large catalog, and it will contain cached images from the beginning of time, plus lots of useless ones if you've deleted products over time. Using the admin backend, go into System > Cache Management:
Clear the Magento Cache.
Flush Catalog Images Cache.
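If the admin backend is unreachable (for example because the site is already erroring out), the equivalent cleanup can be done from the shell. This is only a sketch assuming a standard Magento 1 layout, where var/cache holds the Magento cache and media/catalog/product/cache holds the resized catalog images:
# flush the Magento cache
rm -rf var/cache/*
# flush the catalog image cache (Magento regenerates these on demand)
rm -rf media/catalog/product/cache/*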
Magento does not delete product images when you delete a product. In fact, Magento would be a prime candidate for appearing on one of those hoarder programs that were prevalent on TV for a while.
After you get the site working, consider installing ImageClean.
Hopefully this will have reduced your inode count enough to carry out the following operations. Before proceeding, make a couple of database backups and store them off-server!
The next step is to ask your hosting provider whether they include your database in that inode count. If they do, you are somewhat stuck: Magento uses InnoDB, and they have likely (cheaply) not set up MySQL with innodb_file_per_table, which is what lets you shrink the InnoDB files by optimizing each table. Ask them if they enabled file-per-table when they set up MySQL; if they don't know what it is, develop that sinking feeling in the pit of your stomach.
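If you have MySQL access yourself, you can check this directly rather than asking; a quick sketch (credentials are placeholders):
mysql -u magento_user -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"
A value of ON means each table lives in its own .ibd file, so optimizing a table can actually return space to the filesystem; OFF means everything sits in the shared ibdata file, which never shrinks.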
Some tables get excessively huge, especially if you haven't properly set up the Magento master cron job trigger in your cPanel and made sure log table cleaning is enabled in System > Configuration > Advanced > System > Log Cleaning. These tables are as follows:
'dataflow_batch_export',
'dataflow_batch_import',
'log_customer',
'log_quote',
'log_summary',
'log_summary_type',
'log_url',
'log_url_info',
'log_visitor',
'log_visitor_info',
'log_visitor_online',
'index_event',
'report_event',
'report_viewed_product_index',
'report_compared_product_index',
'catalog_compare_item',
'catalogindex_aggregation',
'catalogindex_aggregation_tag',
'catalogindex_aggregation_to_tag'
Magento has a built-in script to clean the logs (see the sketch below). If running it crashes with a memory error because you've never set up the cron job and there's too much bloat to clean out, Crucial Web Host has a script that can be run to manually delete all log table contents, including the dataflow tables, which won't be cleaned out by the Magento log cleaning process. If you use dataflow import/export a lot, Nexcess has a script that can check the dataflow tables' size and clear them as well.
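For reference, the built-in cleaner ships as shell/log.php in Magento 1; a minimal sketch of running it from the Magento root (the 7-day retention is just an example):
# show how much space the log tables currently take
php -f shell/log.php -- status
# purge log entries older than 7 days
php -f shell/log.php -- clean --days 7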
After cleaning the database, you will need to use phpMyAdmin to optimize each table in your Magento database. If the hosting provider hasn't enabled innodb_file_per_table in MySQL, this will do squat for reducing your inode count.
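If phpMyAdmin itself won't load (as in the question), the same optimization can be done from the shell; a sketch with placeholder credentials, covering a few of the log tables listed above:
mysql -u magento_user -p magento_db -e "OPTIMIZE TABLE log_visitor, log_visitor_info, log_url, log_url_info, report_event;"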
After all that, don't bother messing with deleting application files or anything else Magento uses. They don't really accumulate much aside from the var/ folders and the image cache, and you will likely end up with a dead website.
At this point, you're at the mercy of a shared hosting plan that has decided to be fair to everyone by limiting what can be done in each account and doesn't allow enough resources to run Magento. Start looking for a hosting provider that supports Magento; often they don't bother limiting your inode count (a cheap trick to allow too many people to share a hard drive), as they offer plenty of disk space for you to run your e-commerce website.
