Bash script to check running process [duplicate]

This question already has answers here:
Linux Script to check if process is running and act on the result
(8 answers)
Closed 5 years ago.
I wrote a bash script to check whether a process is running. It doesn't work, since the ps command always returns exit code 1. When I run the ps command from the command line, $? is set correctly, but within the script it is always 1. Any idea?
#!/bin/bash
SERVICE=$1
ps -a | grep -v grep | grep $1 > /dev/null
result=$?
echo "exit code: ${result}"
if [ "${result}" -eq "0" ] ; then
echo "`date`: $SERVICE service running, everything is fine"
else
echo "`date`: $SERVICE is not running"
fi
Bash version: GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)

There are a few really simple methods:
pgrep procname && echo Running
pgrep procname || echo Not running
killall -q -0 procname && echo Running
pidof procname && echo Running
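If you need the check inside a script, here is a minimal sketch wrapping pgrep (the -x flag matches the exact process name; drop it for substring matching):
#!/bin/bash
# Report whether the process named in $1 is running, using pgrep's exit status.
if pgrep -x "$1" > /dev/null; then
    echo "$1 is running"
else
    echo "$1 is not running"
fi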

This trick works for me; I hope it helps you. Save the following as checkRunningProcess.sh:
#!/bin/bash
ps_out=`ps -ef | grep "$1" | grep -v 'grep' | grep -v "$0"`
result=$(echo "$ps_out" | grep "$1")
if [[ "$result" != "" ]]; then
    echo "Running"
else
    echo "Not Running"
fi
Make checkRunningProcess.sh executable, and then use it.
Example:
20:10 $ checkRunningProcess.sh proxy.py
Running
20:12 $ checkRunningProcess.sh abcdef
Not Running

I tried your version on bash version 3.2.29 and it worked fine. However, you could do something like the suggestion above; an example here:
#!/bin/sh
SERVICE="$1"
RESULT=`ps -ef | grep "$SERVICE" | grep -v 'grep' | grep -v "$0"`
if [ -n "$RESULT" ]; then
    echo "Running"
else
    echo "Not Running"
fi

I use this one to check every 10 seconds whether a process is running, and to start it if not; it allows multiple arguments:
#!/bin/sh
PROCESS="$1"
PROCANDARGS=$*
while :
do
    RESULT=`pgrep "${PROCESS}"`
    if [ "${RESULT:-null}" = null ]; then
        echo "${PROCESS} not running, starting $PROCANDARGS"
        $PROCANDARGS &
    else
        echo "running"
    fi
    sleep 10
done
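A hypothetical invocation (the script name, process name, and arguments are made up for illustration):
./watchdog.sh myserver --port 8080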

Check that your script's name doesn't contain $SERVICE. If it does, it will show up in the ps results, causing the script to always think the service is running. You can grep it against the current filename like this:
#!/bin/sh
SERVICE=$1
if ps ax | grep -v grep | grep -v "$0" | grep "$SERVICE" > /dev/null
then
    echo "$SERVICE service running, everything is fine"
else
    echo "$SERVICE is not running"
fi

A working one:
#!/bin/bash
CHECK=$0
SERVICE=$1
DATE=`date`
OUTPUT=$(ps aux | grep -v grep | grep -v "$CHECK" | grep "$SERVICE")
echo "$OUTPUT"
if [ "${#OUTPUT}" -gt 0 ]; then
    echo "$DATE: $SERVICE service running, everything is fine"
else
    echo "$DATE: $SERVICE is not running"
fi

Despite some success with the /dev/null approach in bash, when I pushed the solution to cron it failed. Checking the length of the returned command output worked perfectly, though. The ampersand allows bash to exit.
#!/bin/bash
SERVICE=/path/to/my/service
result=$(ps ax | grep -v grep | grep "$SERVICE")
echo ${#result}
if [ ${#result} -gt 0 ]
then
    echo " Working!"
else
    echo "Not Working.....Restarting"
    /usr/bin/xvfb-run -a /opt/python27/bin/python2.7 "$SERVICE" &
fi

Something like this:
#!/bin/bash
SERVICE=$1
ps axho comm | grep "$SERVICE" > /dev/null
result=$?
echo "exit code: ${result}"
if [ "${result}" -eq 0 ]; then
    echo "`date`: $SERVICE service running, everything is fine"
else
    echo "`date`: $SERVICE is not running"
    /etc/init.d/"$SERVICE" restart
fi

Those are helpful hints. I just needed to know if a service was running when I started the script, so I could leave the service in the same state when I left. I ended up using this:
HTTPDSERVICE=$(ps -A | grep httpd | head -1)
[ -z "$HTTPDSERVICE" ] && echo "No apache service running."

I found the problem: ps -ae instead of ps -a works.
I guess it has to do with my rights in the shared hosting environment. There's apparently a difference between executing "ps -a" from the command line and executing it from within a bash script.
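For completeness, a minimal sketch of the original check with that one change applied:
#!/bin/bash
SERVICE=$1
# ps -ae lists every process, not just those attached to a terminal
if ps -ae | grep -v grep | grep "$SERVICE" > /dev/null; then
    echo "`date`: $SERVICE service running, everything is fine"
else
    echo "`date`: $SERVICE is not running"
fi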

A simple script version of one of Andor's above suggestions:
#!/bin/bash
pgrep "$1" && echo Running
If the above script is called test.sh then, in order to test, type:
test.sh NameOfProcessToCheck
e.g.
test.sh php

I was wondering if it would be a good idea to have progressive attempts at terminating a process: you pass this function a process name, func_terminate_process "firefox", and it tries things more nicely first, then moves on to kill.
func_terminate_process() {
    # -- NICE: try to use killall to stop the process(es)
    killall "${1}" > /dev/null 2>&1; sleep 10
    # -- If we no longer see the process, just end the function
    pgrep "${1}" > /dev/null 2>&1 || return
    # -- UGLY: step through every PID and use kill -9 on each individually
    for PID in $(pidof "${1}"); do
        echo "Terminating Process: [${1}], PID [${PID}]"
        kill -9 "${PID}"; sleep 10
        # -- NASTY: if kill -9 fails, try SIGTERM on the PID
        if ps -p "${PID}" > /dev/null; then
            echo "${PID} is still running, forcefully terminating with SIGTERM"
            kill -SIGTERM "${PID}"; sleep 10
        fi
    done
    # -- If after all that we still see the process, report it.
    pgrep "${1}" > /dev/null 2>&1 && echo "Error, unable to terminate all or any of [${1}]" || echo "Terminate process [${1}] : SUCCESSFUL"
}
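A hypothetical call, matching the description above:
func_terminate_process "firefox"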

I need to do this from time to time and end up hacking the command line until it works.
For example, here I want to see if I have any SSH connections (the 8th column returned by "ps" is the running "path-to-procname", filtered by "awk"):
ps | awk -e '{ print $8 }' | grep ssh | sed -e 's/.*\///g'
Then I put it in a shell script ("eval"-ing the command line inside backticks), like this:
#!/bin/bash
VNC_STRING=`ps | awk -e '{ print $8 }' | grep vnc | sed -e 's/.*\///g'`
if [ ! -z "$VNC_STRING" ]; then
    echo "The VNC STRING is not empty, therefore your process is running."
fi
The "sed" part trims the path to the exact token and might not be necessary for your needs.
Here's the example I used to arrive at your answer. I wrote it to automatically create 2 SSH tunnels and launch a VNC client for each.
I run it from my Cygwin shell to do admin to my backend from my windows workstation, so I can jump to UNIX/LINUX-land with one command, (this also assumes the client rsa keys have already been "ssh-copy-id"-ed and are known to the remote host).
It's idempotent in that each proc/command only fires when its $VAR evals to an empty string.
It appends " | wc -l" to store the number of matching running procs (i.e., the number of lines found) instead of the proc name for each $VAR, to suit my needs. I keep the "echo" statements so I can re-run and diagnose the state of both connections.
#!/bin/bash
SSH_COUNT=`eval ps | awk -e '{ print $8 }' | grep ssh | sed -e 's/.*\///g' | wc -l`
VNC_COUNT=`eval ps | awk -e '{ print $8 }' | grep vnc | sed -e 's/.*\///g' | wc -l`
if [ "$SSH_COUNT" = "2" ]; then
    echo "There are already 2 SSH tunnels."
elif [ "$SSH_COUNT" = "1" ]; then
    echo "There is only 1 SSH tunnel."
elif [ "$SSH_COUNT" = "0" ]; then
    echo "connecting 2 SSH tunnels."
    ssh -L 5901:localhost:5901 -f -l USER1 HOST1 sleep 10;
    ssh -L 5904:localhost:5904 -f -l USER2 HOST2 sleep 10;
fi
if [ "$VNC_COUNT" = "2" ]; then
    echo "There are already 2 VNC sessions."
elif [ "$VNC_COUNT" = "1" ]; then
    echo "There is only 1 VNC session."
elif [ "$VNC_COUNT" = "0" ]; then
    echo "launching 2 vnc sessions."
    vncviewer.exe localhost:1 &
    vncviewer.exe localhost:4 &
fi
This is very perl-like to me and possibly more unix utils than true shell scripting. I know there are lots of "MAGIC" numbers and cheezy hard-coded values but it works, (I think I'm also in poor taste for using so much UPPERCASE too). Flexibility can be added with some cmd-line args to make this more versatile but I wanted to share what worked for me. Please improve and share. Cheers.

A solution using service and awk that takes a comma-delimited list of service names.
First, it's probably a good bet that you'll need root privileges to do what you want. If you don't need the check, you can remove that part.
#!/usr/bin/env bash
# First parameter is a comma-delimited string of service names i.e. service1,service2,service3
SERVICES=$1
ALL_SERVICES_STARTED=true
if [ "$EUID" -ne 0 ]; then
    echo "root privileges are required" 1>&2
    exit 1
fi
for service in ${SERVICES//,/ }
do
    STATUS=$(service ${service} status | awk '{print $2}')
    if [ "${STATUS}" != "started" ]; then
        echo "${service} not started"
        ALL_SERVICES_STARTED=false
    fi
done
if ${ALL_SERVICES_STARTED}; then
    echo "All services started"
    exit 0
else
    echo "Check Failed"
    exit 1
fi
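A hypothetical invocation (the script name and service names are made up for illustration):
sudo ./check_services.sh httpd,sshd,crond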

The simplest check, by process name:
bash -c 'checkproc ssh.exe ; while [ $? -eq 0 ] ; do echo "proc running";sleep 10; checkproc ssh.exe; done'
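checkproc is not available everywhere (it comes with SUSE-style init tools); a sketch of an equivalent loop using pgrep, assuming an exact process name:
bash -c 'while pgrep -x ssh.exe > /dev/null; do echo "proc running"; sleep 10; done'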

Related

How to supply password from shell script without using expect command

I'm writing a bash script to stop/start my postgres DB service. Initially I succeeded in creating one, but as soon as I enabled the SSL certificate it prompts me to enter the pass phrase.
I know the easiest solution is to use expect, but in my environment I am not authorized to use it.
Can someone help me with the scripting, as to how I can supply the PEM pass phrase without user intervention?
This is what I have worked out so far.
-bash-4.2$ cat start_postgres_db.sh
cd `dirname $0`
. `dirname $0`/parameter.env
${POSTGREBIN}/pg_ctl -D ${POSTGREDATAPATH} start -w
while true
do
    sleep 1
    loopcnt=0
    loopcnt=`expr ${loopcnt} + 1`
    PRCCNT=`ps -ef | grep ${DBEXENAME} | grep -v grep | wc -l`
    if [ ${PRCCNT} -eq 1 ]
    then
        echo "PostgreSQL process started successfully"
        exit
    fi
    if [ ${loopcnt} -gt 11 ]
    then
        echo "PostgreSQL process not started successfully"
        echo "su to postgres and run ${POSTGREBIN}/pg_ctl -D ${POSTGREDATAPATH} restart"
        exit
    fi
done
Execution:
bash-4.2$ ./start_postgres_db.sh
waiting for server to start....Enter PEM pass phrase:.........
You can provide a password to pg_ctl as an argument on the command line with the option -P. I will assume it is contained in the variable ${POSTGREPASSWORD}.
start_postgres_db.sh
cd `dirname $0`
. `dirname $0`/parameter.env
${POSTGREBIN}/pg_ctl start -w -D ${POSTGREDATAPATH} -P ${POSTGREPASSWORD}
while true; do
    sleep 1
    (( loopcnt++ ))
    PRCCNT=$(ps -ef | grep ${DBEXENAME} | grep -v grep | wc -l)
    if [ ${PRCCNT} -eq 1 ]; then
        echo "PostgreSQL process started successfully"
        exit 0
    fi
    if [ ${loopcnt} -gt 11 ]; then
        echo "PostgreSQL process not started successfully"
        echo "su to postgres and run ${POSTGREBIN}/pg_ctl -D ${POSTGREDATAPATH} restart"
        exit 1
    fi
done
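Alternatively, since pg_ctl start -w waits for startup and reports the result in its exit status, the polling loop can arguably be dropped entirely (a sketch, reusing the same parameter.env variables and the -P option above):
cd `dirname $0`
. `dirname $0`/parameter.env
if ${POSTGREBIN}/pg_ctl start -w -D ${POSTGREDATAPATH} -P ${POSTGREPASSWORD}; then
    echo "PostgreSQL process started successfully"
else
    echo "PostgreSQL process not started successfully"
    exit 1
fi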

Nested if statement inside a for loop in bash script

I'm writing a bash script that loops over a list of hostnames and tests whether each one is responding on port 22; if it is, it executes an ssh session. However, both the first and second if statements are only executed on the first host in the list, not the rest. If a host isn't responding on port 22, I want the script to continue to the next host. Any ideas how to ensure the script runs the ssh on each host in the list? Should this be another for loop?
#!/bin/bash
hostlist=$(cat '/local/bin/bondcheck/hostlist_test.txt')
for host in $hostlist; do
    test=$(nmap $host -P0 -p 22 | egrep 'open|closed|filtered' | awk '{print $2}')
    if [[ $test = 'open' ]]; then
        cd /local/bin/bondcheck/
        mv active.current active.fixed
        ssh -n $host echo -n "$host: ; cat /proc/net/bonding/bond0 | grep Active" >> active.current
        result=$(comm -13 active.fixed active.current)
        if [ "$result" == "" ]; then
            exit 0
        else
            echo "$result" | cat -n
        fi
    else
        echo "$host is not responding"
    fi
done
exit 0 exits the entire script; you just want to move on to the next iteration of the loop. Use continue instead.
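A minimal sketch of that change inside the loop:
if [ "$result" == "" ]; then
    continue    # move on to the next host instead of exiting the whole script
else
    echo "$result" | cat -n
fi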
Your problem is most likely in the lines
if [ "$result" == "" ]
then
    exit 0
else
    echo "$result" | cat -n
fi
Here the exit 0 causes the entire script to exit when $result is empty. You could work around it using:
if [ "$result" != "" ] # proceed on non-empty 'result'
then
    echo "$result" | cat -n
fi

Bash while loop - break and exit (DIE script DIE!!!)

All I want from this script is to ssh to the host, check if the process is alive, and if it is not, I want the little script to die.
It does not die, though. It stops, and then starts up again once the ssh is successful again.
I want death, though.
#!/bin/bash
iterate=0
while [ $iterate -le 20000 ]
do
    rc=$?
    ssh -q -T coolhost "ps -ef | egrep '[i]cool-process' | grep wrapper "
    if [[ $rc -eq 0 ]]; then
        sleep 2
        iterate=$((iterate+1))
    else
        break
        exit 1
    fi
done
It will iterate to 20000; however, if the remote process breaks, it will not die. It will not break and exit.
The following will work, but won't sleep; if I put a sleep in, the rc goes to 0 and it never dies.
So this works, but is too basic.
#!/bin/bash
set -e
while : ; do
    ssh -q -T coolhost "ps -ef | egrep '[i]cool-process' | grep wrapper" > /dev/null 2>&1
done
You set rc=$? before the ssh command, so it captures the status of the previous command; the last command run was the test ([) command, which just succeeded, so when you test if [[ $rc -eq 0 ]] the answer is always yes.
It's best to test the status of ssh directly:
#!/bin/bash
iterate=0
while [ $iterate -le 20000 ]
do
    if ssh -q -T coolhost "ps -ef | egrep '[i]cool-process' | grep wrapper"; then
        sleep 2
        ((iterate++))
    else
        break # or exit 1
    fi
done

Shellscript - Check Running services

I'm new to shell scripting, and I'm having some problems. I need a script to check whether the services are running or not; if a service is not running and the flag doesn't exist, start all services. What am I doing wrong?
#!/bin/bash
file= "$PIN_HOME/apps/DE_BILL_MI_BRM/alarmistica/flags/intervencao.flag"
# Check if services are running
for service in $BRM_SERVICES
do
    if [ps -ef | grep $service | grep -v grep | awk 'NR>1{for (i=1;i<=NF;i++)if($i==1) print "Services not running", i}' ]; then
        echo $service " is not running correctly"
    else
        if [ -f "$file" ]; then
            echo "Flag exists. The service will not start"
        else
            echo "$file not found. Starting all services"
            pin_ctl start all
        fi
    fi
done
When ($i==1), the service is not running!
But the results don't correspond. For example, when the services are down, the script doesn't start them...
For checking process tables, use pgrep instead.
#!/bin/bash
file="$PIN_HOME/apps/DE_BILL_MI_BRM/alarmistica/flags/intervencao.flag"
# Check if services are running
for service in $BRM_SERVICES
do
    pgrep -f "$service"
    exstat=$? # This checks the exit status
    if [ "$exstat" -eq 0 ] && ! [ -f "$file" ]; then
        echo "pgrep returned exit status $exstat"
    else
        echo "$file not found. Starting all services"
        pin_ctl start all
    fi
done

bash: running cURLs in parallel slower than one after another

We have to cache quite a big database of data after each upload, so we created a bash script that should handle it for us. The script should start 4 parallel curls to the site and, once they're done, start the next one from the URL list we store in a file.
In theory everything works, and the concept works if we run 4 processes from our local machines against the target site.
If I set MAX_NPROC=1, the curl takes as long as it would if a browser hit the URL,
i.e. 20s.
If I set MAX_NPROC=2, the time the request takes triples.
Am I missing something? Is there an Apache setting that is slowing us down? Or is there a secret cURL setting that I'm missing?
Any help will be appreciated. Please find the bash script below.
#!/bin/bash
if [[ -z $2 ]]; then
    MAX_NPROC=4 # default
else
    MAX_NPROC=$2
fi
if [[ -z $1 ]]; then
    echo "File with URLs is missing"
    exit
fi
NUM=0
QUEUE=""
DATA=""
URL=""
declare -a URL_ARRAY
declare -a TIME_ARRAY
ERROR_LOG=""
function queue {
    QUEUE="$QUEUE $1"
    NUM=$(($NUM+1))
}
function regeneratequeue {
    OLDREQUEUE=$QUEUE
    echo "OLDREQUEUE:$OLDREQUEUE"
    QUEUE=""
    NUM=0
    for PID in $OLDREQUEUE
    do
        process_count=`ps ax | awk '{print $1 }' | grep -c "^${PID}$"`
        if [ $process_count -eq 1 ]; then
            QUEUE="$QUEUE $PID"
            NUM=$(($NUM+1))
        fi
    done
}
function checkqueue {
    OLDCHQUEUE=$QUEUE
    for PID in $OLDCHQUEUE
    do
        process_count=`ps ax | awk '{print $1 }' | grep -c "^${PID}$"`
        if [ $process_count -eq 0 ]; then
            wait $PID
            my_status=$?
            if [[ $my_status -ne 0 ]]
            then
                echo "`date` $my_status ${URL_ARRAY[$PID]}" >> $ERROR_LOG
            fi
            current_time=`date +%s`
            old_time=${TIME_ARRAY[$PID]}
            time_difference=$(expr $current_time - $old_time)
            echo "`date` ${URL_ARRAY[$PID]} END ($time_difference seconds)" >> $REVERSE_LOG
            #unset TIME_ARRAY[$PID]
            #unset URL_ARRAY[$PID]
            regeneratequeue # at least one PID has finished
            break
        fi
    done
}
REVERSE_LOG="$1.rvrs"
ERROR_LOG="$1.error"
echo "Cache STARTED at `date`" > $REVERSE_LOG
echo "" > ERROR_LOG
while read line; do
    # create the command to be run
    DATA="username=user#server.com&password=password"
    URL=$line
    CMD=$(curl --data "${DATA}" -s -o /dev/null --url "${URL}")
    echo "Command: ${CMD}"
    # Run the command
    $CMD &
    # Get PID for process
    PID=$!
    queue $PID;
    URL_ARRAY[$PID]=$URL;
    TIME_ARRAY[$PID]=`date +%s`
    while [ $NUM -ge $MAX_NPROC ]; do
        checkqueue
        sleep 0.4
    done
done < $1
echo "Cache FINISHED at `date`" >> $REVERSE_LOG
exit
The network is almost always the bottleneck; spawning more connections usually makes it slower.
You can try to see if parallelizing it will do you any good by spawning several of these:
time curl ...... &
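A minimal sketch of that test (the URLs are placeholders):
time curl -s -o /dev/null "http://example.com/page1" &
time curl -s -o /dev/null "http://example.com/page2" &
wait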
