Shell script to wait till the command executes and the status changes - bash

I am creating a shell script to back up an OpenStack Cinder volume to a Glance image, as shown below.
test1 is the volume name in the script.
#!/bin/bash
DATE=`date +%Y-%m-%d`
BACKUPDIR=/mnt/osbk/
declare -a VMS=(
test1
)
source /root/admin-openrc
echo $DATE `date` Starting Backup process of images
for vmname in "${VMS[@]}"
do
echo Backing up $vmname
echo cinder upload-to-image ${vmname} ${vmname}-vol-bkp --disk-format qcow2 --container-format bare --force True
cinder upload-to-image ${vmname} ${vmname}-vol-bkp --disk-format qcow2 --container-format bare --force True
echo glance image-download ${vmname}-vol-bkp --file $BACKUPDIR/${vmname}-vol-bkp-${DATE}.qcow2
glance --os-image-api-version 1 image-download ${vmname}-vol-bkp --file $BACKUPDIR/${vmname}-vol-bkp-${DATE}.qcow2
done
Output looks like this:
2018-12-29 Sat Dec 29 16:37:45 IST 2018 Starting Backup process of images
Backing up test1
cinder upload-to-image test1 test1-vol-bkp --disk-format qcow2 --container-format bare --force True
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| container_format | bare |
| disk_format | qcow2 |
| display_description | |
| id | 26c90209-8151-4136-b5de-f2ad7419b100 |
| image_id | 01e88175-a3fa-4354-8c0f-e4fafd9c9fc3 |
| image_name | test1-vol-bkp |
| is_public | False |
| protected | False |
| size | 2 |
| status | uploading |
| updated_at | 2018-12-29T11:07:00.000000 |
| volume_type | None |
+---------------------+--------------------------------------+
glance image-download test1-vol-bkp --file /mnt/osbk//test1-vol-bkp-2018-12-29.qcow2
404 Not Found
The resource could not be found.
Image 01e88175-a3fa-4354-8c0f-e4fafd9c9fc3 is not active (HTTP 404)
From the above output, the status is uploading...
I need the script to wait, checking until the image status changes to active; only then should the glance image-download command run.
What am I doing wrong?
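One way to do this (a sketch, not from the original script: it assumes glance image-show accepts the image name on this deployment and prints a table like the one above) is to poll the image status between the upload and the download:

# Wait until the uploaded image leaves the "uploading" state.
wait_for_image() {
    local image="$1" status=""
    while [ "$status" != "active" ]; do
        sleep 15
        # Parse the status cell out of the glance table output.
        status=$(glance image-show "$image" 2>/dev/null \
            | awk -F'|' '/ status / {gsub(/ /, "", $3); print $3}')
        echo "image $image status: ${status:-unknown}"
    done
}

wait_for_image "${vmname}-vol-bkp"

Called inside the loop, right before the glance image-download line, this holds the script until the image is active.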

Related

How to ssh into and issue command to list of ip addresses in a txt file Jenkins

I have a list of ip addresses in a text file that I wish to use in a script.
Here is the code that outputs the ip addresses to the text file:
openstack server list | grep agent | awk '{print $9}' >> ${STACK}_list.txt
I would like to retrieve the ip addresses and use them in a loop by SSHing into them, but I am not sure how to do that.
Please refer to this post:
script to read a file with IP addresses and login
It might be helpful for you.
Thanks,
Subhadeep
You can use a regex to filter all ip addresses from the server list output:
openstack server list | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*'
You could pipe this output into a file if you need to; or, to use it within a bash script without writing a file, you could do something like this:
#!/bin/bash
#
ADDRESSES=$(openstack server list | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*')
for ADDRESS in $ADDRESSES
do
echo "ip: $ADDRESS"
done
It reads all ip addresses from the server list output, iterates over them in the for loop, and prints each ip separately on the terminal. Instead of the echo you could insert your ssh command.
Example server list on my deployment:
root@m1r1:~# openstack server list
+--------------------------------------+-----------------------+--------+--------------------------+----------------+--------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-----------------------+--------+--------------------------+----------------+--------+
| 46d04a77-4d33-4bb3-8214-b1444eed33a3 | server1 | ACTIVE | l2-network=192.168.4.131 | cirros | XS |
| e9489aca-00c3-4fc9-afc5-515c08b17406 | server2 | ACTIVE | l2-network=192.168.4.61 | | XS |
| ea8cec6a-a8d5-4bbb-970e-aaf65d7374b2 | server3 | ACTIVE | l2-network=192.168.4.163 | cirros | S |
| 7d934ec4-1d53-467b-9220-d67b4b68a832 | server4 | ACTIVE | l2-network=192.168.4.184 | | XS |
| 74d3036e-372a-4566-8ba2-10a0760c5562 | server5 | ACTIVE | l2-network=192.168.4.232 | cirros | XS |
| e08e1637-f4df-478d-a478-6578d038cb22 | server6 | ACTIVE | l2-network=192.168.4.190 | | XS |
| 8307a481-679e-4df0-a64e-3a497b13ac81 | server7 | ACTIVE | l2-network=192.168.4.202 | | XS |
| 38d10b12-daa5-483e-b9a5-9a16ba14d841 | server8 | ACTIVE | l2-network=192.168.4.250 | cirros | XS |
+--------------------------------------+-----------------------+--------+--------------------------+----------------+--------+
Output of this example:
ip: 192.168.4.131
ip: 192.168.4.61
ip: 192.168.4.163
ip: 192.168.4.184
ip: 192.168.4.232
ip: 192.168.4.190
ip: 192.168.4.202
ip: 192.168.4.250
#!/bin/sh
openstack server list | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' > stack
while [ $(wc -l stack | cut -d' ' -f1) -gt 0 ]
do
ipnumber=$(sed -n '1p' stack)
echo "${ipnumber}"
sed -i '1d' stack
done
The echo command there is just a placeholder. You can replace it with ssh, or whatever else you want to do, with the IP number in the variable.
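For instance, replacing the echo with ssh might look like this (a sketch; "cloud-user" is a placeholder login, and key-based authentication is assumed to be set up already):

# Run a command on each host; BatchMode avoids hanging on a password prompt.
ssh -o BatchMode=yes -o ConnectTimeout=5 "cloud-user@${ipnumber}" 'hostname; uptime'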

Openstack snapshot create and restore scripting with bash commands

First of all, it amazes me that there is so little information about OpenStack and script examples, but that is not my question.
I want to create a snapshot and a simple way to restore it. Because of the way our hosting provider uses the underlying storage, I am unable to use the rebuild command, so I need to destroy the running VM and recreate it with the snapshot image as a base. Creating the instance only works when all information about the running VM is provided as input parameters, and that is where my trouble starts.
The information needed is provided by three commands.
Command 1: nova show
Output:
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| NWFINFRA_1600 network | 10.0.0.39 |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | gn3a |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-10-02T14:25:21.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2019-10-02T14:25:05Z |
| description | - |
| flavor:disk | 0 |
| flavor:ephemeral | 0 |
| flavor:extra_specs | {"ostype": "win", "hw:cpu_cores": "1", "hw:cpu_sockets": "2"} |
| flavor:original_name | win.2large |
| flavor:ram | 8192 |
| flavor:swap | 0 |
| flavor:vcpus | 2 |
| hostId | 18aa94c61106a53b2d9e672e93619a6fce76abb1ee6ba9da471491f9 |
| id | 70941fbf-9143-4f1c-a5e7-979f818ace23 |
| image | IFW039-InstanceSnapshot (8ee1104d-55e4-4c99-93e5-ceb4a53ce13f) |
| key_name | - |
| locked | False |
| metadata | {} |
| name | IWF039 |
| os-extended-volumes:volumes_attached | [{"id": "1134fe12-777b-4c26-ac2b-e6ecb6ad4f70", "delete_on_termination": false}, {"id": "f610a46e-46ad-460f-81b3-e2b34acfbbfc", "delete_on_termination": false}] |
| progress | 0 |
| status | ACTIVE |
| tags | [] |
| tenant_id | 4c15fd467dde4bd6a25427d6bab64a7f |
| trusted_image_certificates | - |
| updated | 2019-10-02T14:25:21Z |
| user_id | ddff2ce854114bef873bac9a1476805e |
+--------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Command 2: ./Scripts/openstack port show NWFINFRA_1600_IWF039
where NWFINFRA_1600_IWF039 is a combination of the NWFINFRA_1600 network from the previous output and the server name IWF039.
Output:
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | None |
| binding_profile | None |
| binding_vif_details | None |
| binding_vif_type | None |
| binding_vnic_type | normal |
| created_at | 2019-10-02T06:49:08Z |
| data_plane_status | None |
| description | |
| device_id | 70941fbf-9143-4f1c-a5e7-979f818ace23 |
| device_owner | compute:gn3a |
| dns_assignment | fqdn='iwf039.rijkscloud.local.', hostname='iwf039', ip_address='10.0.0.39' |
| dns_domain | |
| dns_name | iwf039 |
| extra_dhcp_opts | |
| fixed_ips | ip_address='10.0.0.39', subnet_id='3298e8d0-b317-465c-8757-c1a4f2cad298' |
| id | b49a7d3a-bb0d-49cb-a04b-64c5dbf9df20 |
| location | cloud='', project.domain_id='default', project.domain_name=, project.id='4c15fd467dde4bd6a25427d6bab64a7f', project.name='vws-pgb', region_name='Groningen3', zone= |
| mac_address | fa:16:3e:99:cc:3c |
| name | NWFINFRA_1600_IWF039 |
| network_id | 450dcc7a-5e55-4e38-9f4e-de9a9c685502 |
| port_security_enabled | False |
| project_id | 4c15fd467dde4bd6a25427d6bab64a7f |
| propagate_uplink_status | None |
| qos_policy_id | None |
| resource_request | None |
| revision_number | 15 |
| security_group_ids | |
| status | ACTIVE |
| tags | |
| trunk_details | None |
| updated_at | 2019-10-03T11:10:00Z |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
With these outputs I can create the boot command to restore the snapshot:
nova boot --poll --flavor win.2large --image IFW039-InstanceSnapshot --security-groups default --availability-zone gn3a --nic net-id=450dcc7a-5e55-4e38-9f4e-de9a9c685502 IWF039
Note: the image is the name of the created snapshot.
I am trying to script this so I can have a simple snapshot create and restore procedure,
but I get stuck on the table layout of the output. It displays nicely, but I cannot use it in my scripting to redirect the output into input variables.
I tried using this: { read foo ; read ID Name MAC IP status;} < <(./Scripts/openstack port list --server IWF039 | sed 's/+--------------------------------------+----------------------+-------------------+--------------------------------------------------------------------------+-------- +//' | sed 's/|//' | sed 's/MAC Address/MAC/' | sed 's/Fixed IP Addresses/IP/')
But the variables end up containing things like the '|' character.
So echo $Name gives '|' as output.
There must be a simpler way, but I am unable to see it.
Please help ...
I managed to get it almost working by using awk instead of grep.
I now have this code:
#!/bin/bash
#
# Query needed variables
#
echo -e "\nQuery needed information"
NETWORK=$(nova show IWF039 | awk '/network/ {print $2}')
ZONE=$(nova show IWF039 | awk '/OS-EXT-AZ:availability_zone/ {print $4}')
FLAVOR=$(nova show IWF039 | awk '/flavor:original_name/ {print $4}')
SERVERID=$(nova show IWF039 | awk -F '|' '/id/ {print $3; exit}')
NETWORKPORT=$(nova interface-list IWF039 | awk -F '|' '/ACTIVE/ {print $3}')
# Print out variables
echo "network: $NETWORK"
echo "zone: $ZONE"
echo "flavor: $FLAVOR"
echo "server_id: $SERVERID"
echo "network_port_id: $NETWORKPORT"
# Remove current instance
echo -e "\nRemove current instance"
nova delete $SERVERID
# Rebuild instance from snapshot image
echo -e "\nRebuild instance from snapshot"
nova boot --poll --flavor $FLAVOR --image IFW039-InstanceSnapshot2 --security-groups default --availability-zone $ZONE --nic port-id=$NETWORKPORT IWF039
If I run the script, however, the last item, e.g. IWF039 (the name of the instance I want to use), throws an error:
error: unrecognized arguments: IWF039
Can anyone tell me why?
If I run the line on the command line it works; it fails only from the bash script.
#!/bin/bash
#
# Script restores snapshot created in OpenStack
# Script created by Lex IT, Alex Flora
# usage: snapshot-restore.sh <snapshot image name> <server name to restore to>
# to use: make sure to install the following python openstack modules:
# pip install python-openstackclient python-keystoneclient python-glanceclient python-novaclient python-neutronclient
#
#
# Query needed variables
#
if [ "$#" -eq "0" ]
then
echo -e "usage: restore_snapshot <name of snapshot> <name of server>"
echo -e "Querying available snapshots, one moment please ..."
glance image-list
echo -e "\n\033[0;33mGive name of snapshot to restore"
echo -e "\033[0m"
read SNAPSHOT
echo -e "\n\033[0;33mGive server name to restore"
echo -e "\033[0m"
read SERVER
else
SNAPSHOT=$1
SERVER=$2
fi
echo -e "\n\033[0mQuery needed server information from server $SERVER, one moment please ..."
NETWORK=$(nova show "$SERVER" | awk '/network/ {print $2}' | sed -e 's/^[[:space:]]*//')
ZONE=$(nova show "$SERVER" | awk '/OS-EXT-AZ:availability_zone/ {print $4}' | sed -e 's/^[[:space:]]*//')
FLAVOR=$(nova show "$SERVER" | awk '/flavor:original_name/ {print $4}' | sed -e 's/^[[:space:]]*//')
SERVERID=$(nova show "$SERVER" | awk -F '|' '/\<id\>/ {print $3; exit}' | sed -e 's/^[[:space:]]*//')
NETWORKPORT=$(nova interface-list "$SERVER" | awk -F '|' '/ACTIVE/ {print $3}' | sed -e 's/^[[:space:]]*//')
# Print out variables
echo -e "\033[0mnetwork: \033[0;32m$NETWORK"
echo -e "\033[0mzone: \033[0;32m$ZONE"
echo -e "\033[0mflavor: \033[0;32m$FLAVOR"
echo -e "\033[0mserver_id: \033[0;32m$SERVERID"
echo -e "\033[0mnetwork_port_id: \033[0;32m$NETWORKPORT"
echo -e "\033[0mSnapshot image: \033[0;32m$SNAPSHOT"
echo -e "\033[0mServer naam: \033[0;32m$SERVER"
# Ask confirmation
echo -e "\n\033[0mGoing to restore snapshot image $SNAPSHOT to server $SERVER"
read -p "Is this correct (y/n) ? " -n 1 -r
if [[ $REPLY =~ ^[Yy]$ ]]
then
# Remove current instance
echo -e "\n\033[0mRemove current instance"
nova delete $SERVERID
sleep 3
# Rebuild instance from snapshot image
echo -e "\nRebuild instance from snapshot using command:"
echo -e "\033[0mnova boot --poll --flavor $FLAVOR --image $SNAPSHOT --security-groups default --availability-zone $ZONE --nic port-id=$NETWORKPORT $SERVER"
nova boot --poll --flavor $FLAVOR --image $SNAPSHOT --security-groups default --availability-zone $ZONE --nic port-id=$NETWORKPORT $SERVER
fi
This is my complete script. The problem was that the output contained spaces in front of the strings. I resolved this with the following sed command: sed -e 's/^[[:space:]]*//'
Hopefully someone finds the script useful.
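As an aside, a sketch of an alternative that avoids table parsing entirely, assuming the unified openstack client is installed (column names as in the standard client output):

# Ask the client for bare values instead of scraping the ASCII table.
SERVERID=$(openstack server show "$SERVER" -f value -c id)
ZONE=$(openstack server show "$SERVER" -f value -c OS-EXT-AZ:availability_zone)
NETWORKPORT=$(openstack port list --server "$SERVER" -f value -c ID | head -n 1)

With -f value, no borders or padding are emitted, so no sed cleanup is needed.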

Is there any feasible and easy option to use a local folder as a Hadoop HDFS folder

I have a massive chunk of files on an extremely fast SAN disk that I would like to run Hive queries on.
An obvious option is to copy all files into HDFS by using a command like this:
hadoop dfs -copyFromLocal /path/to/file/on/filesystem /path/to/input/on/hdfs
However, I don't want to create a second copy of my files just to be able to run Hive queries on them.
Is there any way to point an HDFS folder at a local folder, such that Hadoop sees it as an actual HDFS folder? Files keep being added to the SAN disk, so Hadoop needs to see the new files as they arrive.
This is similar to Azure's HDInsight approach that you copy your files into a blob storage and HDInsight's Hadoop sees them through HDFS.
For playing around with small files, using the local file system might be fine, but I wouldn't do it for any other purpose.
Putting a file in HDFS means that it is split into blocks which are replicated and distributed.
This later gives you both performance and availability.
Locations of [external] tables can be pointed at the local file system using file:///.
Whether it works smoothly or you'll start getting all kinds of errors, that remains to be seen.
Please note that for this demo I'm using a little trick to point the location at a specific file, but your basic use will probably be for directories.
Demo
create external table etc_passwd
(
Username string
,Password string
,User_ID int
,Group_ID int
,User_ID_Info string
,Home_directory string
,shell_command string
)
row format delimited
fields terminated by ':'
stored as textfile
location 'file:///etc'
;
alter table etc_passwd set location 'file:///etc/passwd'
;
select * from etc_passwd limit 10
;
+----------+----------+---------+----------+--------------+-----------------+----------------+
| username | password | user_id | group_id | user_id_info | home_directory | shell_command |
+----------+----------+---------+----------+--------------+-----------------+----------------+
| root | x | 0 | 0 | root | /root | /bin/bash |
| bin | x | 1 | 1 | bin | /bin | /sbin/nologin |
| daemon | x | 2 | 2 | daemon | /sbin | /sbin/nologin |
| adm | x | 3 | 4 | adm | /var/adm | /sbin/nologin |
| lp | x | 4 | 7 | lp | /var/spool/lpd | /sbin/nologin |
| sync | x | 5 | 0 | sync | /sbin | /bin/sync |
| shutdown | x | 6 | 0 | shutdown | /sbin | /sbin/shutdown |
| halt | x | 7 | 0 | halt | /sbin | /sbin/halt |
| mail | x | 8 | 12 | mail | /var/spool/mail | /sbin/nologin |
| uucp | x | 10 | 14 | uucp | /var/spool/uucp | /sbin/nologin |
+----------+----------+---------+----------+--------------+-----------------+----------------+
You can mount your HDFS path onto a local folder, for example with an HDFS mount.
Please follow this for more info.
But if you want speed, it isn't an option.
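For reference, a sketch of mounting HDFS locally through the HDFS NFS gateway (it assumes the gateway is configured for your cluster; the mount options follow the Hadoop documentation, and /mnt/hdfs is a placeholder mount point):

# Start the gateway services (normally as the hdfs superuser), then mount.
hdfs portmap &
hdfs nfs3 &
sudo mkdir -p /mnt/hdfs
sudo mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync localhost:/ /mnt/hdfs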

Linux - Postgres psql retrieving undesired table

I've got the following problem:
There is a Postgres database which I need to get data from, via a Nagios Linux distribution.
My intention is to have the result of a SELECT saved to a .txt file, which would then be sent to me by email using mutt.
So far, this is what I have:
#!/bin/sh
psql -d roaming -U thdroaming -o saida.txt << EOF
\d
\pset border 2
SELECT central, imsi, mapver, camel, nrrg, plmn, inoper, natms, cba, cbaz, stall, ownms, imsi_translation, forbrat FROM vw_erros_mgisp_totalizador
EOF
My problem is:
The .txt "saida.txt" is bringing me info about the database, as follows:
List of relations
Schema | Name | Type | Owner
---------+----------------------------------+-----------+------------
public | apns | table | jmsilva
public | config_imsis_centrais | table | thdroaming
public | config_imsis_sgsn | table | postgres
(3 rows)
+---------+---------+----------+---------+---------+--------+------------+-------+---------+----------+-------+-------+------------------+-----------+
| central | imsi | mapver | camel | nrrg | plmn | inoper | natms | cba | cbaz | stall | ownms | imsi_translation | forbrat |
+---------+---------+----------+---------+---------+--------+------------+-------+---------+----------+-------+-------+------------------+-----------+
| MCTA02 | 20210 | | | | | INOPER-127 | | | | | | | |
| MCTA02 | 20404 | | | | | INOPER-127 | | | | | | | |
| MCTA02 | 20408 | | | | | INOPER-127 | | | | | | | |
| MCTA02 | 20412 | | | | | INOPER-127 | | | | | | | |
.
.
.
How can I keep this first table from being written to the .txt file?
Remove the \d line from the script; it is what lists the tables in the DB that you see at the top of your output. Your script then becomes:
#!/bin/sh
psql -d roaming -U thdroaming -o saida.txt << EOF
\pset border 2
SELECT central, imsi, mapver, camel, nrrg, plmn, inoper, natms, cba, cbaz, stall, ownms, imsi_translation, forbrat FROM vw_erros_mgisp_totalizador
EOF
To get the output CSV-formatted in a file named /tmp/output.csv, you can do the following:
#!/bin/sh
psql -d roaming -U thdroaming -o saida.txt << EOF
\pset border 2
COPY (SELECT central, imsi, mapver, camel, nrrg, plmn, inoper, natms, cba, cbaz, stall, ownms, imsi_translation, forbrat FROM vw_erros_mgisp_totalizador) TO '/tmp/output.csv' WITH (FORMAT CSV)
EOF
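To then send the CSV by email with mutt, as the question intends, something like this sketch should work (the address is a placeholder, mutt must already be configured, and the script must run on the database host, since COPY writes the file server-side):

# Attach the exported CSV and mail it; "me@example.com" is a placeholder.
mutt -s "vw_erros_mgisp_totalizador report" -a /tmp/output.csv -- me@example.com < /dev/null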

Magento Indexes Issue - Can't reindex

I have a problem with index management in my Magento 1.6.2.0 store. Basically, I can't get the indexes to update. The status has said Processing for over three weeks now.
And when I try to reindex I get the message Stock Status Index process is working now. Please try run this process later, but "later" has been three weeks. It looks like the process is frozen, but I don't know how to restart it.
Any ideas?
cheers
Whenever you start an indexing process, Magento writes out a lock file to the var/locks folder.
$ cd /path/to/magento
$ ls var/locks
index_process_1.lock index_process_4.lock index_process_7.lock
index_process_2.lock index_process_5.lock index_process_8.lock
index_process_3.lock index_process_6.lock index_process_9.lock
The lock file prevents another user from starting an indexing process. However, if the indexing request times out or fails before it can complete, the lock file will be left behind in a locked state. That's probably what happened to you. I'd recommend you check the last-modified dates on the lock files to make sure someone else isn't running the re-indexer right now, and then remove the lock files. This will clear up your
Stock Status Index process is working now. Please try run this process later
error. After that, run the indexers one at a time to make sure each one completes.
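A sketch of the cleanup, assuming a standard Magento 1.x root (check the timestamps first, as described above):

cd /path/to/magento
ls -l var/locks                          # check last-modified times first
rm -f var/locks/index_process_*.lock     # remove the stale locks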
Hello. Did you run the script manually? If not, create a file in your root folder and put this code in it:
require_once 'app/Mage.php';
umask(0);
Mage::app('default');
$process = Mage::getSingleton('index/indexer')->getProcessByCode('catalog_product_flat');
$process->reindexAll();
This code manually reindexes your Magento store. Sometimes, if your store contains a large number of products, reindexing takes a lot of time, so when you go to Index Management in the admin it shows some indexes stuck in the Processing state; this code may help move them from Processing back to Ready.
Or you can also do the indexing over SSH if you have the rights for it; it's faster for indexing, too.
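For example, with the shell indexer that ships with Magento 1.x (a sketch; the path assumes a standard install root):

cd /path/to/magento
php -f shell/indexer.php -- --status       # show the state of each index
php -f shell/indexer.php -- --reindexall   # rebuild all indexes one by one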
For newer versions of Magento, i.e. 2.1.3, I had to use this solution:
http://www.elevateweb.co.uk/magento-ecommerce/magento-error-sqlstatehy000-general-error-1205-lock-wait-timeout-exceeded
This might happen if you are running a lot of custom scripts and killing them before the database connection gets a chance to close.
If you log in to MySQL from the CLI and run the command
SHOW PROCESSLIST;
you will get the following output
+---------+---------+-------------------+---------+---------+------+-------+------+-----------+---------------+-----------+
| Id      | User    | Host              | db      | Command | Time | State | Info | Rows_sent | Rows_examined | Rows_read |
+---------+---------+-------------------+---------+---------+------+-------+------+-----------+---------------+-----------+
| 6794372 | db_user | 111.11.0.65:21532 | db_name | Sleep   | 3800 |       | NULL | 0         | 0             | 0         |
| 6794475 | db_user | 111.11.0.65:27488 | db_name | Sleep   | 3757 |       | NULL | 0         | 0             | 0         |
| 6794550 | db_user | 111.11.0.65:32670 | db_name | Sleep   | 3731 |       | NULL | 0         | 0             | 0         |
| 6794797 | db_user | 111.11.0.65:47424 | db_name | Sleep   | 3639 |       | NULL | 0         | 0             | 0         |
| 6794909 | db_user | 111.11.0.65:56029 | db_name | Sleep   | 3591 |       | NULL | 0         | 0             | 0         |
| 6794981 | db_user | 111.11.0.65:59201 | db_name | Sleep   | 3567 |       | NULL | 0         | 0             | 0         |
| 6795096 | db_user | 111.11.0.65:2390  | db_name | Sleep   | 3529 |       | NULL | 0         | 0             | 0         |
| 6795270 | db_user | 111.11.0.65:10125 | db_name | Sleep   | 3473 |       | NULL | 0         | 0             | 0         |
| 6795402 | db_user | 111.11.0.65:18407 | db_name | Sleep   | 3424 |       | NULL | 0         | 0             | 0         |
| 6795701 | db_user | 111.11.0.65:35679 | db_name | Sleep   | 3330 |       | NULL | 0         | 0             | 0         |
| 6800436 | db_user | 111.11.0.65:57815 | db_name | Sleep   | 1860 |       | NULL | 0         | 0             | 0         |
| 6806227 | db_user | 111.11.0.67:20650 | db_name | Sleep   | 188  |       | NULL | 1         | 0             | 0         |
+---------+---------+-------------------+---------+---------+------+-------+------+-----------+---------------+-----------+
15 rows in set (0.00 sec)
You can see, as an example, that for Id 6794372 the command is Sleep and the time is 3800. This is preventing other operations.
These processes should be killed one by one using the command:
KILL 6794372;
Once you have killed all the sleeping connections, things should start working as normal again.
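If there are many of them, a sketch like this can generate and run the KILL statements in one pass (an assumption-laden sketch: it presumes mysql CLI access with credentials supplied via ~/.my.cnf, and you should review the generated statements before piping them back in):

# Kill every connection that has been sleeping for over an hour.
mysql -N -e "SHOW PROCESSLIST" \
    | awk '$5 == "Sleep" && $6 > 3600 {print "KILL " $1 ";"}' \
    | mysql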
You need to do two steps:
give 777 permissions to the var/locks folder
delete all files in the var/locks folder
Whenever you start an indexing process, Magento writes out a lock file to the var/locks folder. So you need to do two steps:
Give 777 permissions to the var/locks folder.
Delete all files in the var/locks folder.
Now refresh the Index Management page in the admin panel.
Enjoy!!
