While loop is not working when using ssh [duplicate] - shell

This question already has answers here:
ssh breaks out of while-loop in bash [duplicate]
(2 answers)
Closed 7 years ago.
In the code below, the while loop executes only once, even though there are several entries in the crawler.cfg file.
When I comment out the VAR=$(ssh ${HOSTS} ps ax | grep -v grep | grep $SERVICE|wc -l) line, the loop runs three times and works fine.
So what is the problem with this line?
Please assist.
while read crawler_info
do
    PATH_OF_SERVER="/home/Crawler/"
    THREAD_SCRIPT=crawler18.sh
    HOSTS=$(echo $crawler_info|cut -d'|' -f1)
    SERVICE="java"
    VAR=$(ssh ${HOSTS} ps ax | grep -v grep | grep $SERVICE|wc -l)
    if [ $VAR -ne 0 ]
    then
        echo "$SERVICE service running, everything is fine on ${HOSTS} node"
    else
        echo "java services is not running on ${HOSTS} node. process is triggering java services"
        ssh ${HOSTS} ${PATH_OF_SERVER}${THREAD_SCRIPT} > /dev/null 2> /dev/null &
        if [ $? -eq 0 ]
        then
            echo "java services triggered successfully on ${HOSTS} node"
        else
            echo "process is unable to trigger the java services on ${HOSTS} node"
        fi
    fi
done < crawler.cfg
crawler.cfg contains:
n0007
n00011
n0000023

This is rather a FAQ.
The easiest way to solve it is to use a FD other than 0:
while read -u 3 ...
done 3<crawler.cfg
Alternatively, redirect ssh's stdin from /dev/null in both invocations:
VAR=$(ssh ${HOSTS} ps ax </dev/null | grep -v grep | grep $SERVICE|wc -l)
ssh ${HOSTS} ${PATH_OF_SERVER}${THREAD_SCRIPT} </dev/null >/dev/null 2>&1
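If neither ssh invocation needs anything on its stdin, a third option (a minimal sketch; note that -n cannot be used when ssh has to prompt for a password) is ssh's -n flag, which redirects ssh's own stdin from /dev/null:
VAR=$(ssh -n ${HOSTS} ps ax | grep -v grep | grep $SERVICE | wc -l)
ssh -n ${HOSTS} ${PATH_OF_SERVER}${THREAD_SCRIPT} > /dev/null 2>&1 &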

Related

How do I prevent my bash script (tailing a file) from repeatedly acting on the same line?

I was working on a script that keeps monitoring logins to my server or laptop via ssh.
This is the code that I was working with.
slackmessenger() {
    curl -X POST -H 'Content-type: application/json' --data '{"text":"'"$1"'"}' myapilinkwashere
    ## API link removed due to Slack restrictions
}
while true
do
    tail /var/log/auth.log | grep sshd | head -n 1 | while read LREAD
    do
        echo ${LREAD}
        var=$(tail -f /var/log/auth.log | grep sshd | head -n 1)
        slackmessenger "$var"
    done
done
The issue I'm facing is that it keeps sending the old log entries because of the while loop. Can there be a condition so that the loop only sends new/updated entries, as opposed to sending the old ones over and over again? I could not think of a condition that would skip the old entries and report only the new ones.
Instead of using head -n 1 to extract a line at a time, iterate over the filtered output of tail -f /var/log/auth.log | grep sshd and process each line once as it comes through.
#!/usr/bin/env bash
# ^^^^- this needs to be a bash script, not a sh script!
case $BASH_VERSION in '') echo "Needs bash, not sh" >&2; exit 1;; esac

while IFS= read -r line; do
    printf '%s\n' "$line"
    slackmessenger "$line"
done < <(tail -f /var/log/auth.log | grep --line-buffered sshd)
See BashFAQ #9 describing why --line-buffered is necessary.
You could also write this as:
#!/usr/bin/env bash
case $BASH_VERSION in '') echo "Needs bash, not sh" >&2; exit 1;; esac
tail -f /var/log/auth.log |
grep --line-buffered sshd |
tee >(xargs -d $'\n' -n 1 slackmessenger)
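One caveat with the tee/xargs variant: xargs runs external commands, so a shell function like slackmessenger is not visible to it unless it is exported. A hedged sketch of one workaround, exporting the function and calling it through bash -c (here $SLACK_WEBHOOK_URL is a hypothetical stand-in for the redacted API link):
#!/usr/bin/env bash
slackmessenger() {
    curl -X POST -H 'Content-type: application/json' --data '{"text":"'"$1"'"}' "$SLACK_WEBHOOK_URL"
}
export -f slackmessenger    # make the function available to the bash -c children started by xargs

tail -f /var/log/auth.log |
grep --line-buffered sshd |
xargs -d $'\n' -n 1 bash -c 'slackmessenger "$1"' _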

Empty command output [duplicate]

This question already has answers here:
Check if command error contains a substring
(3 answers)
Closed 4 years ago.
I am doing some bash scripting and would like to know how to recognize empty output or a specific string output when running a command in the bash script.
Example: if I ping google.com and, due to no connectivity, get the message "No route to host", I would like the program to echo "You have done a boo boo".
What I have tried:
if [[ "$(ping -c 1 -n -q $address | grep -q 'ping: sendto: No route to
host')" > /dev/null ]];
then echo " Your server is up and working, please proceed to the next
step"
elif [-n "$(ping -c 1 -n -q $address | grep -q 'ping: sendto: No route
to host')" == *java* ];
then echo "Your server is down, please fix this issue"
else
echo "No response"
fi
Also, looked at other methods of achieving this but couldn't find a working solution.
You can check the exit status of grep. grep exits 0 if it has a match.
if ! ping -c 1 -n -q $address 2>&1 | grep -q 'ping: sendto: No route to host' > /dev/null
then
    echo " Your server is up and working, please proceed to the next step"
fi
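As a side note (a sketch, not taken from the answer above): ping itself exits non-zero when it gets no reply, so you can often branch on its exit status directly instead of grepping the error text:
if ping -c 1 -n -q "$address" > /dev/null 2>&1
then
    echo "Your server is up and working, please proceed to the next step"
else
    echo "Your server is down, please fix this issue"
fi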

Bash script not killing all PIDs in specified file or allowing partial names for input [duplicate]

This question already has answers here:
How to kill all processes with a given partial name? [closed]
(14 answers)
Closed 6 years ago.
Right now, my bash script works only for processes with a single PID, and I must use an exact process name as input. It will not accept *firefox*, for example. Also, I run a bash script that opens multiple rsync processes, and I would like this script to kill all of those processes. But this script only works on processes with one PID.
Here is the script:
#!/bin/bash

createProcfile() {
    ps -eLf | grep -f process.tmp | grep -v 'grep' | awk '{print $2,$10}' | sort -u | egrep -o '[0-9]{4,}' > pid.tmp
    # pgrep "$(cat process.tmp)" > pid.tmp
}

PIDFile=pid.tmp

echo "Enter a process name"
read -r process
echo "$process" > process.tmp

# node_process_id=$(pidof "$process")
node_process_id=$(ps -eLf | grep $process | grep -v 'grep' | awk '{print $2,$10}' | sort -u | egrep -o '[0-9]{4,}')
if [[ -z "$node_process_id" ]]; then
    echo "Please enter a valid process."
    rm process.tmp
    exit 0
fi

ps -eLf | grep $process | awk '{print $2,$10}' | sort -u | grep -v 'grep'
# pgrep "$(cat process.tmp)"

echo "Would you like to kill this process(es)? (y/n)"
read -r answer
if [[ "$answer" == y ]]; then
    createProcfile
    pkill -F "$PIDFile"
    rm "$PIDFile"
    sleep 1
    createProcfile
    node_process_id=$(pidof "$process")
    if [[ -z $node_process_id ]]; then
        echo "Process terminated successfully."
        rm process.tmp
        exit 0
    else
        echo "Process not terminated. Kill process manually."
        ps -eLf | grep $process | awk '{print $2,$10}' | sort -u | grep -v 'grep'
        # pgrep "$(cat process.tmp)"
        rm "$PIDFile"
        rm process.tmp
        exit 0
    fi
fi
I edited the script. Thanks to your comments, it now works and does the following:
Accepts a partial name as input
Kills more than one PID
Thank you!
pkill exists to solve your problem. It accepts a pattern to match against the process name, or the entire command line if -f is specified.
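For example, a minimal sketch matching the processes mentioned in the question:
pkill firefox      # matches the pattern against process names, so any process name containing firefox
pkill -f rsync     # -f matches against the entire command line instead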
It will not accept *firefox*
Use the killall command. Example:
killall -r "process.*"
This will kill all processes whose names begin with process followed by anything.
The manual says:
-r, --regexp
    Interpret process name pattern as an extended regular expression.
Side note: we have to double-quote the regular expression to prevent filename globbing. (Thanks @broslow for pointing this out.)

SSH in a script - commands not running on remote server [duplicate]

This question already has answers here:
Execute a command on remote hosts via ssh from inside a bash script
(4 answers)
Closed 7 years ago.
I need help with a bash script that connects to a server as root, executes some commands, and then exits from the server.
I tried this script, but after the login to the server is performed, the commands do not run!
#!/bin/bash
sudo ssh -o ConnectTimeout=10 $1 'exit'
if [ $? != 0 ]; then
    echo "Could not connect to $1 , script stopped"
    exit
fi
sudo ssh $1
echo "SRV=`cat /etc/puppet/puppet.conf | grep -i srv_domain | awk '{print $3}'`"
echo $SRV
echo "puppetMaster=`host -t srv _x-puppet._tcp.$SRV | head -1 | awk '{print $8}' | cut -f1 -d"."`"
echo $puppetMaster
'exit'
I'm surprised nobody has suggested a heredoc yet.
sudo ssh "$1" <<'EOF'
SRV=`cat /etc/puppet/puppet.conf | grep -i srv_domain | awk '{print $3}'`
echo $SRV
echo "puppetMaster=`host -t srv _x-puppet._tcp.$SRV | head -1 | awk '{print $8}' | cut -f1 -d"."`"
echo $puppetMaster
EOF
This feeds everything from the <<'EOF' until the line starting with EOF into the stdin of ssh, to be received and run by the remote shell.
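Note that quoting the delimiter (<<'EOF' rather than <<EOF) keeps the local shell from expanding $SRV and the backquoted commands before they are sent; everything is expanded by the remote shell instead.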
The commands following ssh machine in a script are not run on the machine. They will be run on the local machine once the ssh exits.
Either specify the commands to run as an argument of ssh, or alternatively, run ssh and make it read the commands from standard input, and send the commands to it.
ssh machine ls
# or
echo ls | ssh machine
You seem to be a little confused as to what runs where.
ssh -o ConnectTimeout=10 $1 'exit'
will connect to $1, run exit, and disconnect.
ssh -o ConnectTimeout=10 $1 'echo hello world'
will print hello world on the server and then disconnect.
ssh $1
will open up a shell on the remote. After the shell has ended, the following commands will run locally.
echo "SRV=`cat /etc/puppet/puppet.conf | grep -i srv_domain | awk '{print $3}'`"
echo $SRV
echo "puppetMaster=`host -t srv _x-puppet._tcp.$SRV | head -1 | awk '{print $8}' | cut -f1 -d"."`"
echo $puppetMaster
'exit'
What you probably want is to start bash on the remote machine and feed it the commands you want to run via stdin.
echo "my commands" | ssh $1 bash
Technically, you don't need that bash -- ssh will start bash even without it (but with different rc files).
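Putting the pieces together with the original script, a minimal sketch (assuming key-based access, as in the question) could look like:
#!/bin/bash
sudo ssh -o ConnectTimeout=10 "$1" 'exit' || { echo "Could not connect to $1 , script stopped"; exit 1; }

sudo ssh "$1" bash <<'EOF'
SRV=$(grep -i srv_domain /etc/puppet/puppet.conf | awk '{print $3}')
echo "$SRV"
puppetMaster=$(host -t srv _x-puppet._tcp.$SRV | head -1 | awk '{print $8}' | cut -f1 -d'.')
echo "$puppetMaster"
EOF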

FTP File Transfers Using Piping Safely

I have a file forwarding system where a bunch of files are downloaded to a directory, de-multiplexed and copied to individual machines.
The files are forwarded as they are received by the master server, and files normally arrive in bursts. (Authentication is by ssh keys.)
This script creates the sftp session and uses tail -f to watch a FIFO and feed its contents to sftp.
HOST=$1
pipe=/tmp/pipes/${HOST%%.*}

ps aux | grep -v grep | grep sftp | grep "user@$HOST" > /dev/null
if [[ $? == 0 ]]; then
    echo "FTP is Running on this Server"
    exit
else
    pid=`ps aux | grep -v grep | grep tail | tr -s ' ' | grep $pipe`
    [[ $? == 0 ]] && kill -KILL `echo $pid | cut -f2 -d' '`
fi

if [[ ! -p $pipe ]]; then
    mkfifo $pipe
fi

tail -n +1 -f $pipe | sftp -o 'ServerAliveInterval 60' user@$HOST > /dev/null &
echo cd /tmp/data >> $pipe    # Sends command to host
echo "Started FTP to $HOST"
Update: I ended up changing the cleanup code to use ps aux to see if an ftp session is running, and subsequently whether the tail -f is still running, grepping by user@host and the name of the pipe respectively. This is done when the script is called, and the script is called whenever I try to upload a file.
I.e.:
FILENAME=`basename $1`

function transfer {
    echo cd /apps/data >> $2    # For safety
    echo put $1 .$FILENAME >> $2
    echo rename .$FILENAME $FILENAME >> $2
    echo chmod 0666 $FILENAME >> $2
}

./ftp.sh host
[ -p $pipedir/host ] && transfer $1 $pipedir/host
Files received on the master server are caught by Incron, which writes a put command and the received file's location to the fifo pipe, to be sent by sftp (a rename is also performed).
My question is: is this safe? Could it crash on ftp errors/events? I'm not really worried about login errors.
The goal is to reduce the number of ftp logins to a single session per minute (or longer) interval, and to allow files to be forwarded as they're received, with dynamic commands.
I'd prefer to use standard Ubuntu libraries, if possible.
EDIT: After testing and working through some issues, the server now simply runs with:
[[ -p $pipe ]] && echo FTP is Running on this Server
ln -s $pipe $lock &> /dev/null || (echo FTP is Running on this Server && exit)
[[ ! -p $pipe ]] && mkfifo $pipe
( tail -n +1 -F $pipe & echo $! > $pipe.pid ) |
    tee >( sed "/tail:/ q" >/dev/null && kill $(cat $pipe.pid) |& rm -f $pipe >/dev/null; ) |
    sftp -i ~/.ssh/$HOST.rsa -oServerAliveInterval=60 user@$HOST &
rm -f $lock
It's rather simple but works nicely.
You might be interested in setting up a simpler (and more robust) synchronization infrastructure:
If a given host is not connected when a file arrives, it never receives it (if I understand your code correctly).
I would do something like
rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
on the client machines, either periodically or triggered by an event. rsync intelligently synchronizes the files by their timestamp and size (-a includes -t).
The event would be some process termination, like this:
The client does (configure private-key usage for the host in ~/.ssh/config):
#!/bin/bash
while :; do
    ssh user@host /srv/bin/sleepListener 600
    rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
done
On the server, /srv/bin/sleepListener is a symbolic link to /bin/sleep.
After receiving a new file, the server runs:
killall sleepListener
Note: a full check is performed every 10 minutes anyway, so it doesn't matter if nodes go offline or come back online.
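If you want the killall sleepListener step to fire automatically, the same Incron mechanism mentioned in the question could drive it on the server; a hedged sketch of an incrontab entry (the watched path /apps/data is an assumption about your layout):
/apps/data IN_CLOSE_WRITE killall sleepListener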
