Terminal Application to Keep Web Server Process Alive - bash

Is there an app that, given a command and options, will execute it for the lifetime of the process and ping a given URL indefinitely at a specified interval?
If not, could this be done as a bash script in the terminal? I'm almost positive it's doable, but I'm not fluent enough to whip it up within a few minutes.
I found this post that has a portion of the solution, minus the ping bits. On Linux, ping runs indefinitely until it is actively killed. How would I kill it from bash after, say, two pings?

General Script
As others have suggested, the approach in pseudocode is:
execute command and save PID
while PID is active, ping and sleep
exit
This results in the following script:
#!/bin/bash
# execute command, use '&' at the end to run in background
<command here> &
# store pid
pid=$!
while ps | awk '{ print $1 }' | grep -q "^${pid}$"; do
ping <address here>
sleep <timeout here in seconds>
done
Note that the placeholders inside <> should be replaced with actual values, be it a command or an IP address.
Break from Loop
To answer your second question, that depends on the loop. In the loop above, simply track the iteration count in a variable: add ((count++)) inside the loop, followed by [[ $count -eq 2 ]] && break. The loop will then break after the second ping.
Something like this:
...
while ...; do
...
((count++))
[[ $count -eq 2 ]] && break
done
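Putting the pieces together, a minimal sketch might look like this (the web server command, ping target, and interval are purely illustrative):
#!/bin/bash
# start an example server in the background and ping while it is alive, at most twice
python3 -m http.server 8000 &
pid=$!
count=0
while ps -p "$pid" > /dev/null; do
    ping -c 1 www.google.com
    sleep 60
    ((count++))
    [[ $count -eq 2 ]] && break
done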
Ping twice
To ping only a few times, use the -c option:
ping -c <count here> <address here>
Example:
ping -c 2 www.google.com
Use man ping for more information.
Better practice
As hek2mgl noted in a comment below, the solution above answers the question as asked, but may not solve the underlying problem. To address that, a cron job is suggested in which a simple wget or curl HTTP request is sent periodically. This results in a fairly easy script containing but one line:
#!/bin/bash
curl <address here> > /dev/null 2>&1
This script can be added as a cron job. Leave a comment if you'd like more information on how to set up such a scheduled job. Special thanks to hek2mgl for analyzing the problem and suggesting a sound solution.
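For illustration, assuming the one-liner above is saved as /usr/local/bin/keepalive.sh (name and interval are hypothetical), a crontab entry added via crontab -e that runs it every five minutes would be:
*/5 * * * * /usr/local/bin/keepalive.sh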

Say you want to start a download with wget and, while it is running, ping the URL:
wget http://example.com/large_file.tgz & # put the download in the background
pid=$!
while kill -0 "$pid" 2>/dev/null # test if the process is still running
do
ping -c 1 127.0.0.1 # ping your address once
sleep 5 # and sleep for 5 seconds
done

A nice little generic utility for this is Daemonize. Its relevant options:
Usage: daemonize [OPTIONS] path [arg] ...
-c <dir> # Set daemon's working directory to <dir>.
-E var=value # Pass environment setting to daemon. May appear multiple times.
-p <pidfile> # Save PID to <pidfile>.
-u <user> # Run daemon as user <user>. Requires invocation as root.
-l <lockfile> # Single-instance checking using lockfile <lockfile>.
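For example, a hypothetical invocation that daemonizes a pinger script with a PID file and a lock file (all paths are illustrative) might be:
daemonize -p /var/run/pinger.pid -l /var/run/pinger.lock -c /tmp /usr/local/bin/pinger.sh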
Here's an example of starting/killing in use: flickd
To get more sophisticated, you could turn your ping script into a systemd service, now standard on many recent Linuxes.

Related

Cron job won't start again after I stopped it?

I wrote a script to run constantly on startup. If for whatever reason the script were to fail, I wrote a second script to check if it has failed, and if so, run the first script again. I then set this second script as a cronjob to run every minute so that it is constantly checking if the first script is alive.
So to test this, I reboot my system. I can see in htop that the first script is running from start up as expected. Good. I kill the process to test the second script. Sure enough, the second script starts the first script again. Still good. I then kill this process, but the second script won't run again now. It still updates a txt file when I manually start the first script, but the second script just doesn't start the first script like it's supposed to. Is it because I killed the cronjob? Restarting the cron service doesn't fix anything though, so I don't know why my second script isn't running again at all.
First script:
#!/bin/bash
stamp=$(date +%Y%m%d-%H%M)
timeout 10d tcpdump -i eth0 -s 96 -z gzip -C 10 -w /home/user/Documents/${stamp}
Second script:
#!/bin/bash
echo "not running" > /home/working.txt
if (( $(ps -ef | grep -v grep | grep tcpdump.sh | wc -l) > 0 ))
then
echo "tcpdump is running!!!" > /home/working.txt
else
/usr/local/bin/tcpdump.sh start
fi
Any help?
You would probably be better off running a simple loop as the main script that kicks off the tcpdump script in the background when needed, so something like:
#!/bin/bash
while true; do
if ps -ef | grep -v grep | grep -q tcpdump; then
: tcpdump running OK
else
# tcpdump not running - start it off
nohup /usr/local/bin/firstscript.sh start &
fi
sleep 30
done
This checks that "tcpdump.sh" is in the output of the "ps -ef" command - if it is, then do nothing (note that you must have an actual command between the "then" and "else" - the ":" command, which just takes it s arguments and ignores them, is sufficient). If it isn't running, start the first script in the background. Then sleep 30 seconds and check again. (Yes, I could have inverted the test so that I didn't need an empty "then" arm, but it would have made the code less obvious)
You put this script as the one which starts at boot time.
Edit: Do you really want to check for "tcpdump.sh"? Is that what the first script is actually called? Assuming that you actually want to check for the tcpdump program, you could use:
if pgrep tcpdump; then
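A fuller sketch of the watcher loop using pgrep might then look like this (-x matches the exact process name; the script path is the one used in the loop above):
#!/bin/bash
while true; do
    # restart the capture script if no tcpdump process is running
    if ! pgrep -x tcpdump > /dev/null; then
        nohup /usr/local/bin/firstscript.sh start &
    fi
    sleep 30
done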

How to issue shell commands to slave machines from master and wait until all are finished?

I have 4 shell commands I need to run and they do not depend on each other.
I have 4 slave machines. So, I want to run one of the 4 commands on each of the 4 machines, and then I want to wait until all 4 of them are finished.
How do I distribute this processing? This is what I tried:
$1 is a file containing the list of IP addresses of the slave machines.
for host in $(cat $1)
do
echo $host
# ssh into each machine and launch command
ssh username@$host <command>;
done
But this seems as if it is waiting for the command to finish before moving on to the next host and launching the next command.
How do I accomplish this distributed processing that doesn't depend on each other?
I would use GNU Parallel like this - running hostname in parallel on each of 4 servers:
parallel -j 4 --nonall -S 192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4 hostname
If you need to pass parameters, use --onall and put arguments after :::
parallel -j 4 --onall -S 192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4 echo ::: hello
Add --tag if you want the output lines tagged by the hostname/IP.
Add -k if you want to keep the output in order.
Add : to the server list to run on local host too.
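Combining those options, a call that also runs locally, tags each line, and keeps the output in order might look like this (the IPs are illustrative):
parallel -k --tag --nonall -S :,192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4 hostname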
If you aren't concerned about how many commands run concurrently, just put each one in the background with &, then wait on them as a group.
while IFS= read -r host; do
ssh username@$host <command> &
done < "$1"
wait
Note the use of a while loop instead of a for loop; see Bash FAQ 001.
The ssh part of your script needs to look like this:
$ ssh -f user@host "sh -c 'sleep 30 ; nohup ls > foo 2>&1 &'"
This one sleeps for 30 seconds and writes the output of ls to the file foo. 30 seconds is enough time for you to go and see it for yourself. Just build your loop around that.
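Built into the loop over your host file, with a hypothetical long_job.sh standing in for the real command, a sketch might be:
while IFS= read -r host; do
    # -f sends ssh to the background; nohup keeps the remote job alive after the session ends
    ssh -f "username@$host" "sh -c 'nohup ./long_job.sh > /tmp/long_job.log 2>&1 &'"
done < "$1"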

bash script parallel ssh remote command

I have a script that fires remote commands on several different machines over an SSH connection. The script goes something like this:
for server in list; do
echo "output from $server"
ssh to server execute some command
done
The problem with this is evidently the time, as it needs to establish each SSH connection, fire the command, wait for the answer, and print it. What I would like is a script that tries to establish all the connections at once and returns echo "output from $server" plus the command's output as soon as it gets it, so not necessarily in list order.
I've been googling this for a while but didn't find an answer. I cannot cancel the SSH session after the command runs, as one thread suggested, because I need the output, and I cannot use GNU parallel as suggested in other threads. Also, I cannot use any other tool or bring/install anything onto this machine; the only usable tool is GNU bash, version 4.1.2(1)-release.
Another question: how are SSH sessions like this limited? If I simply paste 5 or more lines of "ssh connect, do some command", it actually doesn't do anything, or executes only on the first host from the list (it works if I paste 3-4 lines). Thank you.
Have you tried this?
for server in list; do
ssh user#server "command" &
done
wait
echo finished
Update: Start subshells:
for server in list; do
(echo "output from $server"; ssh user#server "command"; echo End $server) &
done
wait
echo All subshells finished
There are several parallel SSH tools that can handle that for you:
http://code.google.com/p/pdsh/
http://sourceforge.net/projects/clusterssh/
http://code.google.com/p/sshpt/
http://code.google.com/p/parallel-ssh/
Also, you could be interested in configuration deployment solutions such as Chef, Puppet, Ansible, Fabric, etc. (see this summary).
A third option is to use a terminal broadcast such as pconsole
If you can only use GNU commands, you can write your script like this:
for server in $servers ; do
( { echo "output from $server" ; ssh user#$server "command" ; } | \
sed -e "s/^/$server:/" ) &
done
wait
and then sort the output to reconcile the lines.
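Since every line is prefixed with "$server:", you can group the interleaved output afterwards with a stable sort on that prefix (the script name is hypothetical):
./run_on_all.sh | sort -t: -k1,1 -s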
I started with the shell hacks mentioned in this thread, then proceeded to something somewhat more robust: https://github.com/bearstech/pussh
It's my daily workhorse, and I basically run anything against 250 servers in 20 seconds (it's actually rate limited, otherwise the connection rate kills my ssh-agent). I've been using this for years.
See for yourself from the man page (clone it and run 'man ./pussh.1'): https://github.com/bearstech/pussh/blob/master/pussh.1
Examples
Show all servers' rootfs usage in descending order:
pussh -f servers df -h / |grep /dev |sort -rn -k5
Count the number of processors in a cluster:
pussh -f servers grep ^processor /proc/cpuinfo |wc -l
Show the processor models, sorted by occurrence:
pussh -f servers sed -ne "s/^model name.*: //p" /proc/cpuinfo |sort |uniq -c
Fetch a list of installed packages, in one file per host:
pussh -f servers -o packages-for-%h dpkg --get-selections
Mass copy a file tree (broadcast):
tar czf files.tar.gz ... && pussh -f servers -i files.tar.gz tar -xzC /to/dest
Mass copy several remote file trees (gather):
pussh -f servers -o '|(mkdir -p %h && tar -xzC %h)' tar -czC /src/path .
Note that the pussh -u feature (upload and execute) was the main reason why I programmed this; no other tool seemed to be able to do it. I still wonder if that's the case today.
You may like the parallel-ssh project with the pssh command:
pssh -h servers.txt -l user command
It will output one line per server when the command is successfully executed. With the -P option you can also see the output of the command.
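For example, to also stream each host's output as it arrives (host file and command are illustrative):
pssh -h servers.txt -l user -P uptime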

Why are my local commands failing to run after remote ssh commands in the same script?

I've done my homework, but I think I may be mixing apples and oranges here. My script is designed to run a remote inline series of commands, exit, and then run some additional LOCAL commands. It has to be done remotely first, as these services are for a fail-over agent. The problem is that after the remote ssh line disconnects, the entire script just stops. I'm not sure why the disconnect is halting the entire script. Perhaps the exit line is to blame?
#!/bin/bash
#
### Run remote svc restarts and then Local restarts
#
exec ssh -t REMOTEHOST 'stop svc1; restart svc2; start svc3; exit'
(SCRIPT FAILS HERE)
## Run local shell (This works independently, but not in the entire script)
rst=`pgrep -n failoversvc`
echo "Stopping 1st service at `date | awk '{print $2,$3,$4}'`" && service 1 stop >> SYNCLOG.txt
sleep 2
echo "Restarting 2nd service at `date | awk '{print $2,$3,$4}'`" && service 2 restart >> SYNCLOG.txt
if rst="";then
echo "Starting 3rd service at `date | awk '{print $2,$3,$4}'`" && service 3 start >> SYNCLOG.txt
else
echo "3rd Service PID not found! Check for functionality"
fi
I took a look at THIS, but I wasn't able to get the results I was looking for.
exec is a very brutal command: it completely replaces the current process (in this case, your shell that's running the script) with the command you specify. Unless exec fails, nothing after that line in your script will ever run. This is by design, that's what exec is for.
If you want your script to continue after the ssh, simply remove exec.
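A minimal sketch of the corrected start of the script, with exec removed so that control returns to the script once the remote commands finish:
#!/bin/bash
# Remote service restarts first; without 'exec', the script keeps running afterwards
ssh -t REMOTEHOST 'stop svc1; restart svc2; start svc3'
# ...the local restart commands from the original script follow here...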

How can I write a shell script to check whether my connection is up on Ubuntu?

I'm not very experienced with shell scripts.
I need to check with a cron job (no problem for this) whether my connection is up. If it isn't, I want to call some scripts to reconnect.
I was thinking about using grep and ping to some site (or IP), then checking the returned string.
Then call my command:
sudo pppd call speedtch
I need a clue, thanks! Or are there better methods to do this?
Since my command needs sudo, is there a non-interactive way to call it without a password prompt?
Pinging works well; use this command line:
if ping -W 5 -c 1 google.com >/dev/null; then
echo "Internet is up."
fi
This uses the following command line arguments to ping:
-W 5: Time out waiting for a response after five seconds.
-c 1: Only send one ping request. The default of most ping implementations is to keep sending packets indefinitely.
Also, the output of ping is redirected to /dev/null to avoid useless mail from cron.
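Putting that together with the reconnect command from your question, a sketch of the cron script could be:
#!/bin/bash
# re-dial if a single ping (5 second timeout) gets no reply
if ! ping -W 5 -c 1 google.com > /dev/null 2>&1; then
    sudo pppd call speedtch
fi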
As for your sudo question, you can configure that in the sudoers file.
Add the following line to the sudoers file (edit it with visudo):
fromuser ALL=(root) NOPASSWD: /usr/bin/mycommand
You'll need to replace the following strings:
fromuser: Replace with the user who runs the cronjob
/usr/bin/mycommand: Replace with the path of the command you want to execute
With this configuration the user "fromuser" will always be able to execute /usr/bin/mycommand without entering a password. You'll want to be careful with that of course for security reasons.
A quicker (and passive) way, if the box you are testing from is directly connected to the internet, is to check if the default route is present:
netstat -rn | grep '^default' && echo connection is up || echo connection is down
If you want to check the exit status in a script:
#!/bin/sh
netstat -rn | grep '^default' > /dev/null
connected=$?
if [ $connected -eq 0 ]; then
echo connected
fi
Another thing that I personally do is wait for a connection...
function wait_for_router() {
el=1
while [ $el -ne 0 ]; do
sleep 1
ping -c 1 "$1" > /dev/null 2>&1
el=$?
done
}
If you include this function in another script, you can simply do this:
#!/bin/sh
echo Waiting for connection...
wait_for_router 192.168.1.1
echo We have a connection... lets party

Resources