Bash: Check up, Run a process if not running [duplicate] - bash

This question already has answers here:
How do I write a bash script to restart a process if it dies?
(10 answers)
Closed 6 years ago.
Hi,
My requirement is that if the Memcache server goes down for any reason in production, I want to restart it immediately.
Typically I start the Memcache server as user nobody, with replication, as shown below:
memcached -u nobody -l 192.168.1.1 -m 2076 -x 192.168.1.2 -v
So I added an entry to crontab like this:
(crontab -e)
*/5 * * * * /home/memcached/memcached_autostart.sh
memcached_autostart.sh
#!/bin/bash
ps -eaf | grep 11211 | grep memcached
# if not found - equals to 1, start it
if [ $? -eq 1 ]
then
memcached -u nobody -l 192.168.1.1 -m 2076 -x 192.168.1.2 -v
else
echo "eq 0 - memcache running - do nothing"
fi
My question: is there any problem with the above memcached_autostart.sh script for auto-restarting the memcached server?
Or, if there is a better approach for achieving this (rather than using a cron job), please share your experience.

Yes, there is a problem: in ps -eaf | grep 11211 | grep memcached, the 11211 is, I assume, a process ID, which changes on every start. What you should do instead is ps -ef | grep memcached.
Hope that helped.
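For what it's worth, a slightly more robust sketch of the same check uses pgrep, which avoids matching the grep command itself (this assumes pgrep is installed and memcached is on the cron user's PATH):
#!/bin/bash
# Sketch: restart memcached only if no process named exactly "memcached" exists.
if ! pgrep -x memcached > /dev/null
then
    # same start command as in the question; consider adding -d so it daemonizes under cron
    memcached -u nobody -l 192.168.1.1 -m 2076 -x 192.168.1.2 -v
else
    echo "memcached running - do nothing"
fi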

Instead of running it from cron you might want to create a proper init script; see /etc/init.d/ for examples. If you do this, most systems already provide functionality to handle most of the work: starting, restarting, stopping, checking for an already running process, etc.
Most daemon scripts save the pid to a special file (e.g. /var/run/foo), and then you can check for the existence of that file.
On Ubuntu, you can look at /etc/init.d/skeleton for an example script that you can copy.
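As a rough sketch of the pid-file approach (the path /var/run/memcached.pid and the -P/-d flags here are assumptions; adjust to however your init script starts memcached):
#!/bin/bash
# Sketch: check a pid file instead of grepping ps.
PIDFILE=/var/run/memcached.pid
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null
then
    echo "memcached already running (pid $(cat "$PIDFILE"))"
else
    # -d daemonizes, -P writes the pid file
    memcached -d -P "$PIDFILE" -u nobody -l 192.168.1.1 -m 2076 -x 192.168.1.2
fi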

Related

ip command could not get IP address in shell script in crontab

I use a Cloudflare API script to update DDNS on my Raspberry Pi via crontab. The shell script works fine on Debian but fails on CentOS/Fedora, even though it works when run from a terminal.
I found that ip addr returns no data from cron, and I could not work out why. I did find that if I replace ip addr with hostname -I, it works.
But I am wondering why ip doesn't work in a .sh / bash script run from cron.
This is a Fedora 28 server on a Raspberry Pi.
I tried every fix I could google; none worked.
#!/bin/bash
#this works
ip=$(hostname -I | awk '{print $NF;exit}')
echo $ip>>/usr/local/bin/cloudflare.log
#this fails
ips=$(ip route get 1:: | awk '{print $(NF-4);exit}')
echo $ips>>/usr/local/bin/cloudflare.log
# crontab -l
#automatic update ddns per 1 min
* */1 * * * /usr/local/bin/cf-ddns.sh >/dev/null 2>&1
cat cloudflare.log
xx.xx.xxx.xx
<Blank_None>
cron doesn't set a full PATH, so it cannot find the binaries. Add a PATH line at the top of your script, or export it at the top of your crontab.
# for example
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
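So the script could start like this (a sketch; alternatively you can call the binary by its full path, e.g. /usr/sbin/ip on Fedora, which you can confirm with "which ip"):
#!/bin/bash
# Give cron's minimal environment a full PATH so ip, awk, etc. are found.
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

ips=$(ip route get 1:: | awk '{print $(NF-4);exit}')
echo "$ips" >> /usr/local/bin/cloudflare.log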

if statement with variable comparison not working in bash shell

I'm trying to write a very simple script to check whether iptables are already updated for Synergy to work. The current script is:
if [[ $SYNERGY = "yes" ]]
then
echo "Synergy is active"
else
sudo iptables -I INPUT -p tcp --dport 24800 -j ACCEPT
export SYNERGY=yes
fi
But it does not work (I'm always asked for the sudo password each time I open a new terminal)
I also tried with this modified version, but the result is the same
syn="yes"
if [ "$SYNERGY" = "$syn" ]
then
echo "Synergy is active"
else
sudo iptables -I INPUT -p tcp --dport 24800 -j ACCEPT
export SYNERGY=yes
fi
Where is the issue?
If you are expecting this to be run in one terminal/shell session and to affect other, unrelated terminals/shell sessions, then the issue is that this isn't how export works.
export sets the variable in the environment of the current process so that any processes spawned from this process also have it in their environment. Notice how I said "spawned from"? It only applies to processes that process spawns. Unrelated processes aren't affected.
If you want something globally checkable then you either need a flag/lock/state file of some sort or an actual runtime check of the iptables configuration.
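For instance, here is a sketch that checks the live iptables state with iptables -C (rule check; it returns 0 when the rule already exists) instead of an environment variable. It still needs sudo itself, so it is most useful in a root-run script or with passwordless sudo for iptables:
#!/bin/bash
# Sketch: only insert the Synergy rule if it is not already present.
if sudo iptables -C INPUT -p tcp --dport 24800 -j ACCEPT 2>/dev/null
then
    echo "Synergy rule already present"
else
    sudo iptables -I INPUT -p tcp --dport 24800 -j ACCEPT
fi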
Just to help those who have the same question, this is how I managed to persist the firewall settings:
sudo apt-get install iptables-persistent
The rules specified in the files rules.v4 and rules.v6 under /etc/iptables are then loaded automatically at startup.
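With that package installed you can also write the currently loaded rules into those files yourself, for example (a sketch; the paths are the Debian/Ubuntu defaults):
# Save the current IPv4 rules so they are reloaded at boot
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
# And the IPv6 rules, if you use any
sudo sh -c 'ip6tables-save > /etc/iptables/rules.v6'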

How to execute shell script on two servers

I want to automate the scenario below:
I have two servers connected over a network, and I want to fetch /var/log/messages from both of them. I tried to automate it as shown below, but I couldn't get past step 4, because after logging into the other server the rest of the script doesn't run.
1 echo "Hello $HOSTNAME"
2 date
3 echo -n > /var/log/messages
4 ssh 10.30.3.2;echo -n > /var/log/messages;exit
5 ls
6 cp /var/log/messages > my_bug_log.txt
7 ssh 10.30.3.2;cp /var/log/messages > my_bug_log.txt
How can I automate fetching the logs from both servers?
EDITED :
#!/bin/bash

echo "Hello $HOSTNAME"
date
echo -n > /var/log/messages
ssh 10.30.3.2 "echo -n > /var/log/messages ";exit
echo "welcome"
The echo "welcome" is not executed after exiting from the other host.
EDITED :
ssh 10.30.3.2 "cd /var/log" "touch bug_iteration_$i" "cp /var/log/messages > bug_iteration_$i"
Fetching both message logs from the remote servers is fairly easy. However, you don't seem to be using the right tool for the job. This answer presumes that you are familiar with creating keys with ssh-keygen to allow passwordless connections between the hosts, and that this is set up and working properly. I will also presume you have the permissions needed to copy the message logs.
The correct tool for the job is rsync (others will work, but rsync is the de facto standard). What you want to do to retrieve the files is:
rsync -uav 10.30.3.2:/var/log/messages /path/to/store/messages.10.30.3.2
This will get /var/log/messages on 10.30.3.2 and save it on the local machine at /path/to/store/messages.10.30.3.2.
Now if you want to modify it in some way, as your echo -n > /var/log/messages suggests, before using rsync to retrieve the messages log, remember that ssh will execute any command you tell it to on the remote host. So if you want to enter something in the remote log before retrieving it, you can use:
ssh 10.30.3.2 "echo -n 'something' > /var/log/messages"
(I'm not sure of your reason for suppressing the newline in echo... but to each his own.) Another trick for executing multiple commands on 10.30.3.2 easily is to create a script on 10.30.3.2 that does what you need and make sure it has the execute bit set. Then you can run the script on 10.30.3.2 from your machine via ssh:
ssh 10.30.3.2 /path/to/your/script.sh
If this hasn't answered your question, leave a comment. It was somewhat unclear from your post what you were actually attempting to do.
after comment
It is still unclear what you are trying to do. It appears that you want to echo the hostname and date, truncate the local messages file with echo -n > /var/log/messages, then have ssh 10.30.3.2 truncate its /var/log/messages as well. But when ssh 10.30.3.2 "echo -n > /var/log/messages " completes, the exit command that follows it causes the script you are running to exit, before it ever reaches echo "welcome". You don't need that exit there.
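So a minimal corrected version of your edited script, with that exit removed, would be something like (sketch):
#!/bin/bash
echo "Hello $HOSTNAME"
date
echo -n > /var/log/messages                  # truncate the local log
ssh 10.30.3.2 "echo -n > /var/log/messages"  # truncate the remote log
echo "welcome"                               # now this line is reached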
second addendum:
Let's do it this way. You want to run the same commands on each host and you want to be able to run those commands on a remote host via ssh. So let's create a script on each box in /usr/local/bin/empty_message_log.sh that contains the following:
#!/bin/bash
echo "Hello $HOSTNAME"         # echoes Hello hostname to the terminal
date                           # echoes the date to the terminal
echo -n > /var/log/messages    # truncates /var/log/messages to (empty)
if [ "$HOSTNAME" == "fillin localhost hostname" ]; then
    # runs this same script on 10.30.3.2
    # only run if called on the local machine
    ssh 10.30.3.2 /usr/local/bin/empty_message_log.sh
fi
echo "welcome"                 # echoes welcome and exits
Now make sure it has the execute bit set:
chmod 0755 /usr/local/bin/empty_message_log.sh
# adjust permissions as required
Put this script on all the hosts you want this capability on. Now you can call this script on your local machine and it will run the same set of commands on the remote host at 10.30.3.2. It will only call and execute the script remotely if "fillin localhost hostname" matches the box it is run on.
Did you consider using scp to fetch the files you need?
Apart from that, if you need to perform the same actions on multiple machines, you might look at Ansible (http://www.ansibleworks.com).
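For example, a sketch of the scp route (assuming passwordless keys are set up, as in the answer above):
#!/bin/bash
# Sketch: keep a copy of the local log and pull the remote one next to it.
cp /var/log/messages "messages.$HOSTNAME"
scp 10.30.3.2:/var/log/messages messages.10.30.3.2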

Make who in all jails

I'm looking for a script that will show all logged-in users, grouped by the FreeBSD jail they are logged in to. So I need to run the who command in every currently running FreeBSD jail and on the main host too.
I made this:
who #main host
jls | grep -v JID | while read jid ip host path
do
echo $jid $host
jexec $jid who
done
but jexec needs to be run as root, and I usually log in as non-root; running su every time is painful...
Is there any other simple way?
The who command in FreeBSD accepts a file argument telling it where to read information about logged-in users; the default is /var/run/utx.active, and that file is usually world-readable...
The following script will probably be enough:
#!/usr/local/bin/bash
while read jpath
do
echo JWHO: ${jpath:-$(hostname)}
who "${jpath}/var/run/utx.active"
done < <( jls -h path | sed '1s:.*::' )
example output:
JWHO: marvin.example.com
smith pts/0 7 nov 20:55 (adsl2343-some-another.example.com)
JWHO: /jails/jail1
JWHO: /jails/testjail
root pts/2 7 nov 20:55 (someother.example.com)
JWHO: /jails/dbjail
Steps:
show the path to the "root filesystem" of every running jail
run who against /var/run/utx.active under the given jail's path
skip the header line from jls, so the 1st output will be the host.
Maybe someone knows a much simpler solution, e.g. by sorting the ps output or something like that...
Comment: you usually don't want to use constructions like command | while read - the pipe forks a new shell and you lose the values of any variables set inside the loop; done < <( commands ) is usually better...
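A quick sketch of that pitfall:
#!/usr/local/bin/bash
count=0
jls -h path | sed '1s:.*::' | while read jpath; do
    count=$((count + 1))    # increments inside a subshell created by the pipe
done
echo "$count"               # still prints 0

count=0
while read jpath; do
    count=$((count + 1))    # increments in the current shell
done < <( jls -h path | sed '1s:.*::' )
echo "$count"               # prints the number of lines actually read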
You can enable sudo on your system and change your script just a little, to:
sudo jexec $jid who
Then your script can run as a normal user.
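For example, a sudoers entry along these lines (edit it with visudo; "youruser" is a placeholder, and the FreeBSD sudo port keeps its configuration under /usr/local/etc) avoids the password prompt for jexec:
# sketch - allow youruser to run jexec as root without a password
# (note: this effectively grants root, since jexec runs commands as root)
youruser ALL=(root) NOPASSWD: /usr/sbin/jexec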

How to get memcached stats without nc?

this is how I'm getting the stats now:
echo -e "stats\nquit" | nc 127.0.0.1 11211
I can't use expect as it's not part of a default installation.
Is there a way to get memcached stats without nc?
Your question doesn't specify why you're looking for an alternative to netcat, so it's hard to tell exactly what you're after. You could do it in bash like this:
exec 3<>/dev/tcp/127.0.0.1/11211
echo -e "stats\nquit" >&3
cat <&3
You could do it using telnet:
(echo -e 'stats\nquit'; sleep 1) | telnet localhost 11211
The sleep is to prevent telnet from exiting before receiving a response from memcached.
You could also write something simple in Python or Perl or some other high-level scripting language. Or brush up on your C. There are lots of options.
Another, possibly simpler, way is with the memcached-tool script. It came installed with my installation of memcached 1.4.5 via yum, but with apt on Ubuntu I didn't get it. I found it here and put it on my system: https://raw.githubusercontent.com/memcached/memcached/master/scripts/memcached-tool
On the server, type the following to get memcached stats:
memcached-tool 127.0.0.1:11211 stats
or the following to get slabs:
memcached-tool 127.0.0.1:11211
assuming your server is listening on port 11211 and IP 127.0.0.1 (set config options in /etc/sysconfig/memcached)
article: http://www.cyberciti.biz/faq/rhel-fedora-linux-install-memcached-caching-system-rpm/
