Reindex for multiple Magento roots on a single server using a shell script - magento

I am trying to implement a shell script for reindexing.
Concern:
I have five sites on a single server (html-abc, html-cdf, html-xyz, etc., under /var/www/). I want to run the reindex for each site using a shell script, and it should record the output with a timestamp in the reindex.log file.
Currently I am using cron to run the reindex for each site, but I cannot see any timestamps in the log file. Can anyone help?
The sample cron:
*/30 * * * * /usr/bin/php7.4 /var/www/html-abc/bin/magento indexer:reindex 2>&1 | grep -v "Ran jobs by schedule" >> /var/log/magento_index.log
The result I got:
test@testaz:~/scripts$ tail -n 50 /var/log/magento_index.log
Design Config Grid index has been rebuilt successfully in 00:00:00
Customer Grid index has been rebuilt successfully in 00:00:00
process error during indexation process: Product Flat Data index is locked by another reindex process. Skipping.
Category Products index has been rebuilt successfully in 00:00:01
Product Categories index has been rebuilt successfully in 00:00:00
Catalog Rule Product index has been rebuilt successfully in 00:00:00
Product EAV index has been rebuilt successfully in 00:00:02
Stock index has been rebuilt successfully in 00:00:01
Inventory index has been rebuilt successfully in 00:00:00
Catalog Product Rule index has been rebuilt successfully in 00:00:00
Product Price index has been rebuilt successfully in 00:00:09
Catalog Search index has been rebuilt successfully in 00:00:03
Design Config Grid index has been rebuilt successfully in 00:00:00
Customer Grid index has been rebuilt successfully in 00:00:01
process error during indexation process: Product Flat Data index is locked by another reindex process. Skipping.
Category Products index has been rebuilt successfully in 00:00:01
Product Categories index has been rebuilt successfully in 00:00:00
Catalog Rule Product index has been rebuilt successfully in 00:00:00
Product EAV index has been rebuilt successfully in 00:00:01
Stock index has been rebuilt successfully in 00:00:00
Inventory index has been rebuilt successfully in 00:00:00
Catalog Product Rule index has been rebuilt successfully in 00:00:00
Product Price index has been rebuilt successfully in 00:00:02
Catalog Search index has been rebuilt successfully in 00:00:02
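Since the question asks for a timestamped per-site reindex log, here is a minimal sketch of such a script. The site directory names, PHP binary path, and log path are taken from the question as examples; adjust them for your server.

```shell
#!/bin/sh
# Prefix every line read from stdin with a timestamp and the site name.
stamp() {
    site="$1"
    while IFS= read -r line; do
        printf '%s [%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$site" "$line"
    done
}

LOG=/var/log/magento_index.log               # example log path
for site in html-abc html-cdf html-xyz; do   # example Magento roots
    [ -d "/var/www/$site" ] || continue      # skip roots that do not exist
    /usr/bin/php7.4 "/var/www/$site/bin/magento" indexer:reindex 2>&1 \
        | grep -v "Ran jobs by schedule" \
        | stamp "$site" >> "$LOG"
done
```

Save it as a single script and call that one script from cron instead of one cron line per site; each log line then carries both the time and the site it came from.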

Related

Kibana gauge with dynamic maximum value

I have data coming from Logstash that shows how much space is used in a database table and the maximum allocated capacity for that table. I want to create gauges in Kibana for every table that show how much space is currently occupied.
The problem is that the maximum available space sometimes changes, so the gauge's limit has to be a variable, and I can't figure out how to do this. I also don't know how to show only the current day's data on a dashboard for a time range. The data coming from Logstash looks like this:
time | table_name | used_gb | max_gb
---------+------------+---------+--------
25.04.18 | table_1 | 1.2 | 10.4
25.04.18 | table_2 | 4.6 | 5.0
26.04.18 | table_1 | 1.4 | 14.6
26.04.18 | table_2 | 4.9 | 5.0
I want the gauge for every table to show used_gb relative to max_gb.
This problem can be solved using the Time Series Visual Builder.
Choose Gauge, then under Panel options specify 1 as the max value. Then, in the gauge's data settings, compute the dynamic ratio per table with a Bucket Script aggregation.
In older versions of Kibana, use a Calculation aggregation instead of a Bucket Script.
Reference:
https://discuss.elastic.co/t/gauge-with-dynamic-maximum-value/130634/2

Kibana Visualize - How to aggregate data with two string field values?

I have data as follows in ElasticSearch:
timestamp item_id item_status
January 24th 2018, 12:06:34.287 1 Processing
January 24th 2018, 12:10:14.310 1 Completed
January 25th 2018, 07:21:30.876 2 Cancelled
January 26th 2018, 09:11:55.775 3 Completed
I want to query this data such that I can get all items that have had both Processing and Completed as their status. In my case, my query result would just be:
item_id
1
timestamp is a timestamp field and item_id & item_status are string fields.
How can I do this with Kibana Visualization? I have been doing something similar to https://discuss.elastic.co/t/how-can-i-make-visualization-with-group-by/43569/2 and Run a simple sql group by query in kibana 4 but it did not really get me what I wanted.
Thanks in advance!
In a Kibana visualization, if you add a query string or a filter and save the visualization, the visualization will apply these on top of any other filters in use on a dashboard.
If you plan to apply these filters to multiple visualizations, you can first create a saved search in Discover mode and then build the visualization from it (Visualize > New > From a saved search).

New Algolia trial account on Magento: ran through 100,000 operations in 1 day with 300 products

With great enthusiasm I installed the Algolia Magento extension in a store that was having issues with its search. It worked amazingly for me: within minutes, search results came back superbly.
It is a small store, with around 100 unique visitors a day and 300 unique products. Within a day I received notifications about reaching 50% and then 80% of my trial quota of 100,000 operations.
Only a few minutes later it passed 100,000.
These are the stats
Delete Record 67 %
Update Record 33 %
Query 0.17 %
Set Settings 0.04 %
Get Settings 0.01 %
With a lot of these lines:
/1/indexes/magento_default_products/batch
/1/indexes/magento_default_products/batch
/1/indexes/magento_default_products/batch
/1/indexes/magento_default_products/batch
/1/indexes/magento_default_products/batch
/1/indexes/magento_default_products/batch
What's going on? I checked all the settings and everything seems to be working just fine.
The extension follows changes in the Magento instance to keep the index up to date:
- When a product is saved in the database, it triggers an operation to update it in Algolia; the same applies to creation and deletion.
- The same goes for categories.
When Magento products are updated by a third-party import service, depending on the implementation this can generate a lot of operations.
You should contact support at algolia dot com for more details.

Complex Queries in ELK?

I've successfully set up the ELK stack, and it gives me great insights into my data. However, I'm not sure how to fetch the following result.
Say I have the columns user_id and action. The values in action can be installed, activated, engagement, and click. I want that, if a particular user performed the activity installed on 21 May and again on 21 June, then when fetching results for the month of June, ELK should not return the users who already performed that activity earlier. For example, for the following table:
Date     UserID   Activity
1 May 1 Activated
3 May 2 Activated
6 May 1 Click
8 May 2 Activated
11 June 1 Activated
12 June 1 Activated
13 June 1 Click
User 1 and User 2 activated on 1 May and 3 May respectively. User 2 also activated on 8 May. So, when I filter the users for the month of May with activity Activated, it should return a count of 2, i.e.
1 May 1 Activated
3 May 2 Activated
The 8 May row for User 2 is removed because that user performed the same activity before.
Now, if I run the same query for the month of June, it should return nothing, because the same users performed the same activity earlier as well.
How can I write this query in ELK?
This type of relational query is not possible with Elasticsearch.
You would need to add another column (FirstUserAction) and either populate it when the data is loaded, or schedule a task (in whatever scripting or programming language you're comfortable with) to periodically calculate and update the values for this column.
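The "schedule a task" suggestion above can be sketched with a one-line awk filter. This assumes the events have been exported as whitespace-delimited lines of the form "day month user_id activity", sorted chronologically; the function name and the export format are hypothetical, not part of Elasticsearch.

```shell
# Keep only the first occurrence of each (user_id, activity) pair;
# awk prints a line the first time its key is seen and suppresses repeats.
first_actions() {
    awk '!seen[$3 FS $4]++'
}
```

Feeding the sample table through this filter keeps the 1 May, 3 May, and 6 May rows and drops the later repeats; the surviving rows are the ones you would flag in the extra FirstUserAction column.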

TFServer : Cannot update data because the server clock may have been set incorrectly

I am working on a project that lives on the TFS server. Everything was fine until today, when the server time was changed to 4 hours ahead. I can't check in anything since then, because I get the following error:
TFS2010 TF54000: Cannot update data because the server clock may have been set incorrectly. Contact your Team Foundation Server administrator.
I searched on the internet and found this:
http://www.windows-tech.info/4/d1a37cfc6cf38a79.php
So I looked into tbl_Changeset; I have two records for today:
780 1 2013-12-09 11:13:56.930 807 1
781 1 2013-12-09 11:16:40.727 808 1
I am writing this post at 14:00, which is definitely later than 11:13, so why can't I check in again?
This usually occurs when changesets have future times, possibly due to a system clock being temporarily set in the future. To fix it, first run this query against the relevant collection DB (e.g. Tfs_DefaultCollection):
SELECT TOP 20 *
FROM tbl_ChangeSet
ORDER BY CreationDate DESC
You will probably see a row with a CreationDate in the future.
Update the offending rows to a sensible time in the past:
UPDATE tbl_ChangeSet SET CreationDate = '2014-07-10 05:51:04.160' WHERE ChangeSetId = 73
That happens when the TFS server clock has been tampered with. A common scenario is the server clock being changed backwards after a check-in has been submitted. The times you see in tbl_Changeset are in UTC. You can try moving the records several hours backwards, i.e. from 2013-12-09 11:13:56.930 to 2013-12-09 00:13:56.930.
About the TFS clock error: it happens when your server machine's time changed, because at check-in and check-out TFS records dates according to the server clock in the SQL Server database hosting the TFS project collection. It updates multiple tables, not only tbl_ChangeSet or tbl_PendingChangeSet. I got it working by updating the dates in multiple tables. What I did:
First, I generated a script at once covering every table column with data_type = 'datetime'.
Then I copied the resulting script into a new query window and turned it into a common table expression to limit the list accordingly.
The result is a list of tables, with the field names you need to update wherever a date is mismatched. You can change the dates with a T-SQL query or edit them manually directly in the tables.
Note: please take a backup of your database first. :) Enjoy!
