I am using Elasticsearch 7.17.
I am trying to create a snapshot of an index (I know it shouldn't be a single shard, but for now, that's how it is):
$ curl -s -k "http://localhost:9200/_cat/indices"
yellow open myIndex vVr6ojDCQTi9ASOUGkkRBA 1 1 679161903 0 140.8gb 140.8gb
I have already registered an S3 bucket for snapshots, which I named backups.
I ran the following command:
$ curl -s -k -X PUT "http://localhost:9200/_snapshot/backups/myIndex?pretty&wait_for_completion=false" -H "content-type:application/json" -d'{"indices": "myIndex"}'
{
"accepted" : true
}
Now, I want to have a look at the progress of that backup's upload:
$ curl -s -k "http://localhost:9200/_cat/snapshots/backups/myIndex"
myIndex IN_PROGRESS 1676385605 14:40:05 0 00:00:00 8.6m 1 0 0 0
$ curl -s -k "http://localhost:9200/_cat/recovery"
myIndex 0 37ms empty_store done n/a n/a 172.24.0.3 7529c7447620 n/a n/a 0 0 0.0% 0 0 0 0.0% 0 0 0 100.0%
It's been in this state, with no change, for the past hour.
I don't understand why 0 bytes are transferred. Am I missing something obvious?
I also don't know what empty_store refers to - shouldn't it be existing_store?
Other people were right - it just took its time.
The snapshot ended in "SUCCESS" status, but _cat/recovery still reports empty_store.
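For byte-level progress, the _status endpoint is more informative than _cat/snapshots: GET _snapshot/backups/myIndex/_status reports processed vs. total sizes. A minimal sketch of parsing it; the JSON below is a hand-written sample of the 7.x response shape (the field names are my assumption, not captured output), and against a live cluster you would pipe curl -s "http://localhost:9200/_snapshot/backups/myIndex/_status" in instead:

```shell
# hand-written sample of the _status response shape (assumed, not real output)
sample='{"snapshots":[{"snapshot":"myIndex","state":"IN_PROGRESS",
  "stats":{"processed":{"file_count":10,"size_in_bytes":1073741824},
           "total":{"file_count":120,"size_in_bytes":151182201856}}}]}'
echo "$sample" | python3 -c '
import json, sys
st = json.load(sys.stdin)["snapshots"][0]
done = st["stats"]["processed"]["size_in_bytes"]
total = st["stats"]["total"]["size_in_bytes"]
print("%s: %.1f%% (%d / %d bytes)" % (st["state"], 100.0 * done / total, done, total))
'
```

This prints a one-line percentage that is easier to watch in a loop than the _cat columns.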
I have a Gearman worker in a shell script started with perp in the following way:
runuid -s gds \
/usr/bin/gearman -h 127.0.0.1 -t 1000 -w -f gds-rel \
-- xargs /home/gds/gds-rel-worker.sh < /dev/null 2>/dev/null
The worker only does some input validation and calls another shell script run.sh that invokes bash, curl, Terragrunt, Terraform, Ansible and gcloud to provision and update resources in GCP like this:
./run.sh --release 1.2.3 2>&1 >> /var/log/gds-release
The script is intended to run unattended. The problem I have is that after the job finishes successfully (that is, both shell scripts run.sh and gds-rel-worker.sh) the Gearman job remains executing, because the child process becomes a zombie (see the last line below).
root 144748 1 0 Apr29 ? 00:00:00 perpboot -d /etc/perp
root 144749 144748 0 Apr29 ? 00:00:00 \_ tinylog -k 8 -s 100000 -t -z /var/log/perp/perpd-root
root 144750 144748 0 Apr29 ? 00:00:00 \_ perpd /etc/perp
root 2492482 144750 0 May14 ? 00:00:00 \_ tinylog (gearmand) -k 10 -s 100000000 -t -z /var/log/perp/gearmand
gearmand 2492483 144750 0 May14 ? 00:00:08 \_ /usr/sbin/gearmand -L 127.0.0.1 -p 4730 --verbose INFO --log-file stderr --keepalive --keepalive-idle 120 --keepalive-interval 120 --keepalive-count 3 --round-robin --threads 36 --worker-wakeup 3 --job-retries 1
root 2531800 144750 0 May14 ? 00:00:00 \_ tinylog (gds-rel-worker) -k 10 -s 100000000 -t -z /var/log/perp/gds-rel-worker
gds 2531801 144750 0 May14 ? 00:00:00 \_ /usr/bin/gearman -h 127.0.0.1 -t 1000 -w -f gds-rel -- xargs /home/gds/gds-rel-worker.sh
gds 2531880 2531801 0 May14 ? 00:00:00 \_ [xargs] <defunct>
So far I have traced the problem to run.sh, because if I replace its call with something simpler (e.g. echo "Hello"; sleep 5) the worker does not hang. Unfortunately, I have no clue what is causing the problem. The script run.sh is rather long and complex, but has been working without a problem so far. Tracing the worker process, I see this:
getpid() = 2531801
write(2, "gearman: ", 9) = 9
write(2, "gearman_worker_work", 19) = 19
write(2, " : ", 3) = 3
write(2, "gearman_wait(GEARMAN_TIMEOUT) ti"..., 151) = 151
write(2, "\n", 1) = 1
sendto(5, "\0REQ\0\0\0'\0\0\0\0", 12, MSG_NOSIGNAL, NULL, 0) = 12
recvfrom(5, "\0RES\0\0\0\n\0\0\0\0", 8192, MSG_NOSIGNAL, NULL, NULL) = 12
sendto(5, "\0REQ\0\0\0\4\0\0\0\0", 12, MSG_NOSIGNAL, NULL, 0) = 12
poll([{fd=5, events=POLLIN}, {fd=3, events=POLLIN}], 2, 1000) = 1 ([{fd=5, revents=POLLIN}])
sendto(5, "\0REQ\0\0\0'\0\0\0\0", 12, MSG_NOSIGNAL, NULL, 0) = 12
recvfrom(5, "\0RES\0\0\0\6\0\0\0\0\0RES\0\0\0(\0\0\0QH:terra-"..., 8192, MSG_NOSIGNAL, NULL, NULL) = 105
pipe([6, 7]) = 0
pipe([8, 9]) = 0
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fea38480a50) = 2531880
close(6) = 0
close(9) = 0
write(7, "1.2.3\n", 18) = 6
close(7) = 0
read(8, "which: no terraform-0.14 in (/us"..., 1024) = 80
read(8, "Identity added: /home/gds/.ssh/i"..., 1024) = 54
read(8, 0x7fff6251f5b0, 1024) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=2531880, si_uid=1006, si_status=0, si_utime=0, si_stime=0} ---
read(8,
So the worker continues reading standard output even though the child has finished successfully and presumably closed it. Any ideas how to track down what is causing this problem?
I was able to solve it. The script run.sh was starting ssh-agent, which inherited the pipe that Gearman uses to redirect all output; because the agent kept the write end open, the worker continued reading the open file descriptor even after the script had completed successfully.
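The mechanism is easy to reproduce outside Gearman: anything that reads a pipe until EOF stays blocked for as long as any process holds the write end open, even after the process you actually waited for has exited. A minimal bash sketch, with sleep standing in for ssh-agent:

```shell
# the backgrounded sleep inherits the write end of the command-substitution
# pipe, so EOF only arrives when sleep exits ~3s later
start=$(date +%s)
out=$( { sleep 3 & } ; echo done )
echo "held open for $(( $(date +%s) - start ))s"

# fix: point the long-lived child's stdout/stderr away from the pipe,
# and the substitution returns as soon as echo finishes
start=$(date +%s)
out=$( { sleep 3 >/dev/null 2>&1 & } ; echo done )
echo "released after $(( $(date +%s) - start ))s"
```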
I found it by examining the open file descriptors of the Gearman worker process after it hung:
# ls -l /proc/2531801/fd/*
lr-x------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/0 -> /dev/null
l-wx------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/1 -> 'pipe:[9356665]'
l-wx------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/2 -> 'pipe:[9356665]'
lr-x------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/3 -> 'pipe:[9357481]'
l-wx------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/4 -> 'pipe:[9357481]'
lrwx------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/5 -> 'socket:[9357482]'
lr-x------. 1 gds devops 64 May 17 11:26 /proc/2531801/fd/8 -> 'pipe:[9369888]'
Then I identified the processes using the inode of the pipe behind file descriptor 8, which the Gearman worker continued reading:
# lsof | grep 9369888
gearman 2531801 gds 8r FIFO 0,13 0t0 9369888 pipe
ssh-agent 2531899 gds 9w FIFO 0,13 0t0 9369888 pipe
And finally I listed the files opened by ssh-agent and found what stands behind its file descriptor 3:
# ls -l /proc/2531899/fd/*
lrwx------. 1 root root 64 May 17 11:14 /proc/2531899/fd/0 -> /dev/null
lrwx------. 1 root root 64 May 17 11:14 /proc/2531899/fd/1 -> /dev/null
lrwx------. 1 root root 64 May 17 11:14 /proc/2531899/fd/2 -> /dev/null
lrwx------. 1 root root 64 May 17 11:14 /proc/2531899/fd/3 -> 'socket:[9346577]'
# lsof | grep 9346577
ssh-agent 2531899 gds 3u unix 0xffff89016fd34000 0t0 9346577 /tmp/ssh-0b14coFWhy40/agent.2531898 type=STREAM
As a solution I added a kill of the ssh-agent before exiting the run.sh script, and now there are no more jobs hanging due to a zombie process.
Here is the smem command I run on a RedHat/CentOS Linux system. I want the rows with a zero Swap value filtered out of the output, but I would still expect the heading columns.
smem -kt -c "pid user command swap"
PID User Command Swap
7894 root /sbin/agetty --noclear tty1 0
9666 root ./nimbus /opt/nimsoft 0
7850 root /sbin/auditd 236.0K
7885 root /usr/sbin/irqbalance --fore 0
11205 root nimbus(hdb) 0
10701 root nimbus(spooler) 0
8446 trapsanalyzer1 /opt/traps/analyzerd/analyz 0
50316 apache /usr/sbin/httpd -DFOREGROUN 0
50310 apache /usr/sbin/httpd -DFOREGROUN 0
3971 root /usr/sbin/lvmetad -f 36.0K
63988 root su - 0
7905 ntp /usr/sbin/ntpd -u ntp:ntp - 4.0K
7876 dbus /usr/bin/dbus-daemon --syst 44.0K
9672 root nimbus(controller) 0
7888 root /usr/lib/systemd/systemd-lo 0
63990 root -bash 0
59978 postfix pickup -l -t unix -u 0
3977 root /usr/lib/systemd/systemd-ud 736.0K
9016 postfix qmgr -l -t unix -u 0
50303 root /usr/sbin/httpd -DFOREGROUN 0
3941 root /usr/lib/systemd/systemd-jo 52.0K
8199 root //usr/lib/vmware-caf/pme/bi 0
8598 daemon /opt/quest/sbin/.vasd -p /v 0
8131 root /usr/sbin/vmtoolsd 0
7881 root /usr/sbin/NetworkManager -- 8.0K
8364 root /opt/puppetlabs/puppet/bin/ 0
8616 daemon /opt/quest/sbin/.vasd -p /v 0
23290 root /usr/sbin/rsyslogd -n 3.8M
64091 root python /bin/smem -kt -c pid 0
7887 polkitd /usr/lib/polkit-1/polkitd - 0
8363 root /usr/bin/python2 -Es /usr/s 0
53606 root /usr/share/metricbeat/bin/m 0
24631 nagios /usr/local/ncpa/ncpa_passiv 0
24582 nagios /usr/local/ncpa/ncpa_listen 0
7886 root /opt/traps/bin/authorized 76.0K
7872 root /opt/traps/bin/pmd 12.0K
8374 root /opt/puppetlabs/puppet/bin/ 0
7883 root /opt/traps/bin/trapsd 64.0K
----------------------------------------------------
54 10 5.1M
Like this?:
$ awk '$NF!=0' file
PID User Command Swap
7850 root /sbin/auditd 236.0K
...
7883 root /opt/traps/bin/trapsd 64.0K
----------------------------------------------------
54 10 5.1M
But instead of the awk ... file form, you'd probably want to pipe directly: smem -kt -c "pid user command swap" | awk '$NF!=0'.
Could you please try the following; as an extra precaution it strips any trailing space from the last field (in case it is there).
smem -kt -c "pid user command swap" | awk 'FNR==1{print;next} {sub(/[[:space:]]+$/,"")} $NF==0{next} 1'
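A quick way to sanity-check the filter on a captured sample (the lines below are hand-typed, not live smem output): the header survives because awk compares the non-numeric field Swap to 0 as a string, so the test is not equal and the line prints.

```shell
printf '%s\n' \
  'PID User Command Swap' \
  '7894 root /sbin/agetty 0' \
  '7850 root /sbin/auditd 236.0K' \
| awk '$NF!=0'
# prints the header and the auditd line; the agetty row (Swap 0) is dropped
```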
My rsync script for creating daily incremental backups is working pretty well now. But I have noticed that after a week or so I am left with hundreds of sleeping rsync processes. Does this have to do with my script? Is there a command I can add to the script to stop this?
Here is the Bash script:
#!/bin/bash
LinkDest=/home/backup/files/backupdaily/monday
WeekDay=$(date +%A)
case $WeekDay in
    Monday)
        rsync -avz --delete --exclude backup --exclude virtual_machines /home /home/backup/files/backupdaily/monday
        ;;
    Tuesday|Wednesday|Thursday|Friday|Saturday)
        rsync -avz --exclude backup --exclude virtual_machines --link-dest=$LinkDest /home /home/backup/files/backupdaily/$WeekDay
        ;;
    Sunday)
        exit 0
        ;;
esac
Here is my entry in the crontab (crontab -e, logged in as root):
#Backup Schedule
# Daily
* 0 * * * /usr/local/src/backup/backup_daily_v3.sh
This is the Process View
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ COMMAND
1096 root 20 0 116M 1720 716 S 0.0 0.0 14:26.33 |- SCREEN
5169 root 20 0 105M 1428 1084 S 0.0 0.0 0:00.07 | |- /bin/bash
4012 root 20 0 105M 1188 968 S 0.0 0.0 0:00.00 | |- /bin/bash
1097 root 20 0 105M 980 676 S 0.0 0.0 0:00.34 | |- /bin/bash
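One thing stands out in the crontab entry: with * in the minute field, * 0 * * * fires every minute from 00:00 to 00:59, so up to 60 overlapping rsync runs can start each night (0 0 * * * would run it once). Independent of that, a hedged sketch (lock path is illustrative) of guarding the script with flock(1), so an invocation that finds a previous run still active exits instead of piling up:

```shell
#!/bin/bash
# take a non-blocking lock so overlapping cron invocations exit
# immediately instead of accumulating as extra rsync processes
exec 9> /tmp/backup_daily.lock
if ! flock -n 9; then
    echo "previous backup still running, skipping" >&2
    exit 0
fi

# ... the rsync case statement from the script above goes here ...
echo "backup ran"
```

The lock is released automatically when the script (and fd 9) exits, so no cleanup step is needed.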
I need to extract certain fields from the output of the more +n command on Windows. The output of the more command is shown below.
Backup SAP L01_xyzabc_d01p001_PBW_ON_Daily Completed full 9/17/2013 6:00:05 PM 0:00 5:49 2360.00 1 0 0 254 100% 2013/09/17-135
Backup SAP L01_xyzabc_d01p001_PEC_ON_Daily Completed full 9/17/2013 7:00:05 PM 0:00 1:37 549.89 1 0 0 75 100% 2013/09/17-142
Backup SAP L01_xyzabc_d01p001_PPI_ON_Daily Completed full 9/17/2013 7:00:07 PM 0:00 2:04 656.00 1 0 0 104 100% 2013/09/17-143
Backup SAP L01_xyzabc_d01p001_PEP_ON_Daily Completed full 9/17/2013 8:00:05 PM 0:00 0:09 12.89 1 0 0 15 100% 2013/09/17-148
Backup SAP L01_xyzabc_d01p001_PDI_ON_Daily Completed full 9/17/2013 9:00:05 PM 0:00 0:07 5.63 1 0 0 14 100% 2013/09/17-156
Backup SAP L01_xyzabc_d01p001_PSM_ON_Daily Completed full 9/17/2013 10:00:06 P 0:00 0:22 92.08 1 0 0 21 100% 2013/09/17-161
Backup SAP L01_xyzabc_d01p001_PMD_ON_Daily Completed full 9/17/2013 11:00:06 P 0:00 0:09 9.53 1 0 0 26 100% 2013/09/17-169
Can this be done without installing anything or without using PowerShell?
-Louie
Try a for /f loop. This is a batch file version:
@echo off
for /f "tokens=1,2,3" %%a in ('more +n ...') do (
    echo %%a %%b %%c
)
Which tokens you pick would depend on the columns you want. You can see more info by typing help for on the command line.