I have a test env with bash 4 where loops work, but the same loops do not run in the production env using bash 3

for i in `cat ip_list` ; do
ping -c 1 $i 2&>1 > /dev/nul && echo $i good || echo $i bad ;
done
This loop works in bash 4 but not in bash 3... what should I change in the loop for the older RedHat 5 machines running version 3?

Three problems:
/dev/nul should be /dev/null
2&>1 should be 2>&1, and to discard both streams it must come after the redirection to /dev/null (i.e. > /dev/null 2>&1)
some systems don't support the -c option of the ping command; confirm by running the command manually and checking for an error message:
ping -c 1 IP_ADDRESS
If ping -c runs successfully, the loop can be replaced by
for i in `cat ip_list` ; do
ping -c 1 $i > /dev/null 2>&1 && echo $i good || echo $i bad ;
done
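For what it's worth, a while read loop sidesteps the word-splitting pitfalls of for i in `cat ...` and behaves identically in bash 3 and bash 4; a minimal sketch, assuming one address per line in ip_list:
while read -r i ; do
ping -c 1 "$i" > /dev/null 2>&1 && echo "$i good" || echo "$i bad" ;
done < ip_list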

Related

Output not showing all echo commands

I'm using a bash script which runs on serverA and connects to serverB to run a file.
The results are saved in a variable and then echoed. However, it doesn't echo all of the data.
The script on serverA is running:
count=$(sshpass -p password ssh -t -q user@serverB cd /home/tom && ./count.sh)
echo "Count: $count"
This echoes 341, not Count: 341.
The count.sh script on serverB is looping through some folders and doing a count of files.
E.g.
total=0
count=$(ls -l | wc -l | xargs)
if [ "$count" > 0 ]; then
total=$(( total + count ))
fi
echo "$total"
How do I display the full echo on serverA?
You are attempting to run ./count.sh on the local machine, not the remote host. The && is a command separator that terminates the sshpass command. Use quotes to ensure your desired shell command is passed to the remote host.
count=$(sshpass -p password ssh -t -q user@serverB 'cd /home/tom && ./count.sh')
I don't see any way of producing the reported output, unless count.sh can run locally but something (are you using set -e?) prevents the following echo from executing at all.
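To see what the quoting changes, compare how the shell parses the two commands (the comments are illustrative):
# unquoted: ssh runs only 'cd /home/tom'; ./count.sh then runs locally on serverA
sshpass -p password ssh -t -q user@serverB cd /home/tom && ./count.sh
# quoted: the whole compound command is sent to serverB
sshpass -p password ssh -t -q user@serverB 'cd /home/tom && ./count.sh'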

SCP loop stops executing after some time

So I have these two versions of the same script. Both are attempting to copy my profile to all the servers on my infra (about 5k). The problem I am having is that no matter which version I use, the process always gets stuck somewhere around 300 servers. It does not matter whether I do it sequentially or in parallel: both versions fail, each at a random server. I don't get any error message (yes, I know I'm redirecting error messages to null now); it simply stops executing after reaching a random point close to 300 servers and just lingers there doing nothing.
The best run I could get did it for about 357 servers.
Probably there is some detail I'm unaware of that is causing this. Could someone advise?
Sequential
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" ./.bash_profile rouser@${server}:/home/rosuer/ && echo "$server - Done!" >> ./log.log || echo "$server - Failed!" >> ./log.log
done <<< "$( cat all_servers.txt )"
echo "$(date) - Process completed!!"
Parallel
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" ./.bash_profile rouser@${server}:/home/rosuer/ && echo "$server - Done!" >> ./log.log || echo "$server - Failed!" >> ./log.log &
done <<< "$( cat all_servers.txt )"
wait
echo "$(date) - Process completed!!"
Let's start with better input parsing. Instead of parsing a bash herestring from a POSIX command substitution via a while read loop, I've got the while read loop reading your server list directly via redirection (this assumes one server per line in that file; I can fix this if that's not the case). If the contents of all_servers.txt were too long for a command line, you'd experience an error and/or premature termination.
I've also removed extraneous ./ items and I assume that rouser's home directory on each server is in fact /home/rouser (scp defaults to the home directory if given a relative path or no path at all).
Sequential
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" .bash_profile rouser@${server}: \
&& echo "$server - Done!" >> log.log \
|| echo "$server - Failed!" >> log.log
done < all_servers.txt
echo "$(date) - Process completed!!"
Parallel
For the Parallel solution, I've enclosed your conditional in parentheses just in case the & was backgrounding the wrong process.
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
(
scp -B -o "StrictHostKeyChecking no" .bash_profile rouser@${server}: \
&& echo "$server - Done!" >> log.log \
|| echo "$server - Failed!" >> log.log
) &
done < all_servers.txt
wait
echo "$(date) - Process completed!!"
SSH keys
I highly recommend learning more about SSH. The scp -B flag was unknown to me because I'm used to using SSH keys and ssh-agent, which will make such connectivity seamless (use passwordless keys if you're running this in a cron job).
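A minimal sketch of that setup (the host name is a placeholder; repeat the ssh-copy-id for each server):
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519   # passwordless key, suitable for cron jobs
ssh-copy-id rouser@server01                        # installs the public key on the remote host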

The bash script only reboots the router without echoing whether it is up or down

#!/bin/bash
ip route add 10.105.8.100 via 192.168.1.100
date
cat /home/xxx/Documents/list.txt | while read output
do
ping="ping -c 3 -w 3 -q 'output'"
if $ping | grep -E "min/avg/max/mdev" > /dev/null; then
echo 'connection is ok'
else
echo "router $output is down"
then
cat /home/xxx/Documents/roots.txt | while read outputs
do
cd /home/xxx/Documents/routers
php rebootRouter.php "outputs" admin admin
done
fi
done
The other documents are:
lists.txt
10.105.8.100
roots.txt
192.168.1.100
When I run the script, the result is a reboot of the router I'm trying to ping. It doesn't ping.
Is there a problem with the bash script?
If your files only contain a single line, there's no need for the while-loop, just use read:
read -r router_addr < /home/xxx/Documents/list.txt
# the grep is unnecessary, the return-code of the ping will be non-zero if the host is down
if ping -c 3 -w 3 -q "$router_addr" &> /dev/null; then
echo "connection to $router_addr is ok"
else
echo "router $router_addr is down"
read -r outputs < /home/xxx/Documents/roots.txt
cd /home/xxx/Documents/routers
php rebootRouter.php "$outputs" admin admin
fi
If your files contain multiple lines, you should redirect the file from the right-side of the while-loop:
while read -r output; do
...
done < /foo/bar/baz
Also make sure your files contain a newline at the end, or use the following pattern in your while-loops:
while read -r output || [[ -n $output ]]; do
...
done < /foo/bar/baz
where || [[ -n $output ]] is true even if the file doesn't end in a newline.
Note that the way you're checking your router's status is somewhat brittle, as even a single failed check will force a reboot (for example, the checking computer returns from a sleep state just as the script is running: the ping fails because the network is still down, but the reboot script succeeds as the network comes up at just that moment).
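One illustrative hedge, reusing the variables from the snippet above, is to require a few consecutive failed checks before rebooting:
# sketch: only reboot after 3 failed checks in a row
fails=0
while (( fails < 3 )); do
ping -c 3 -w 3 -q "$router_addr" &> /dev/null && break
fails=$((fails + 1))
sleep 10   # give the network a moment to recover
done
if (( fails == 3 )); then
echo "router $router_addr is down"
cd /home/xxx/Documents/routers
php rebootRouter.php "$outputs" admin admin
fi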

Remove display of PID in bash when running in background

If we run a process in the background we see the process PID and its output:
# echo cho &
cho
19078
Is it possible to make:
# echo cho &
cho
Why do I need this?
I want to write a simple inline LAN scanner using only pings, for some PCs which have no utilities like nmap or arp-scan.
for ip in 192.168.1.{1..254}; do (ping -c 1 -t 1 $ip > /dev/null && echo ">>> ${ip} is up"; ) & done
It works, but the PIDs spoil the output.
Wrap the background job in a subshell; the interactive shell then has no job of its own to report, so no PID is printed:
(echo cho &)
In a loop:
for ip in 192.168.23.{1..254}; do (ping -c 1 -t 1 $ip > /dev/null && echo ">>> ${ip} is up" &) done
I’d just run the for loop itself as a single job in the background. There’s also no need to use parentheses to run any commands in a subshell (with Bash, using the & control operator automatically creates a subshell to run the commands). The fewer processes that are forked within the loop, the quicker it will run.
for ip in 192.168.1.{1..254}; do ping -c 1 -t 1 $ip > /dev/null &&
echo ">>> ${ip} is up"; done &
If you don’t want any job control feedback to be printed to the screen, you can enclose the backgrounded loop in parentheses so that it runs within another subshell level:
( for ip in 192.168.1.{1..254}; do ping -c 1 -t 1 $ip > /dev/null &&
echo ">>> ${ip} is up"; done & )
A better solution would be to redirect the output of the echo statements to a file and keep the job control output so that the shell can notify you when the loop has finished. You can keep using your shell and avoid having the terminal get cluttered with output printed by the loop running in the background.
for ip in 192.168.1.{1..254}; do ping -c 1 -t 1 $ip > /dev/null &&
echo ">>> ${ip} is up"; done > hosts_up &
Note: The above commands can be run as one-liners but I use two lines here to avoid horizontal scrolling (&& at the end of a line means the rest of the command continues on the following line).

Timeout command on Mac OS X?

Is there an alternative to the timeout command on Mac OS X? The basic requirement is to be able to run a command for a specified amount of time.
e.g:
timeout 10 ping google.com
This program runs ping for 10s on Linux.
You can use
brew install coreutils
And then whenever you need timeout, use
gtimeout
instead. To explain why, here's a snippet from the Homebrew Caveats section:
Caveats
All commands have been installed with the prefix 'g'.
If you really need to use these commands with their normal names, you
can add a "gnubin" directory to your PATH from your bashrc like:
PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
Additionally, you can access their man pages with normal names if you add
the "gnuman" directory to your MANPATH from your bashrc as well:
MANPATH="/usr/local/opt/coreutils/libexec/gnuman:$MANPATH"
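With that in place, the question's example becomes (the first form works immediately, the second only after the PATH addition):
gtimeout 10 ping google.com
timeout 10 ping google.com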
Another simple approach that works pretty much cross platform (because it uses perl which is nearly everywhere) is this:
function timeout() { perl -e 'alarm shift; exec @ARGV' "$@"; }
Snagged from here:
https://gist.github.com/jaytaylor/6527607
Instead of putting it in a function, you can just put the following line in a script, and it'll work too:
timeout.sh
perl -e 'alarm shift; exec @ARGV' "$@";
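Usage would then look something like this (assuming the script has been made executable):
chmod +x timeout.sh
./timeout.sh 10 ping google.com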
or a version that has built in help/examples:
timeout.sh
#!/usr/bin/env bash
function show_help()
{
IT=$(cat <<EOF
Runs a command, and times out if it doesn't complete in time
Example usage:
# Will fail after 1 second, and shows non zero exit code result
$ timeout 1 "sleep 2" 2> /dev/null ; echo \$?
142
# Will succeed, and return exit code of 0.
$ timeout 1 sleep 0.5; echo \$?
0
$ timeout 1 bash -c 'echo "hi" && sleep 2 && echo "bye"' 2> /dev/null; echo \$?
hi
142
$ timeout 3 bash -c 'echo "hi" && sleep 2 && echo "bye"' 2> /dev/null; echo \$?
hi
bye
0
EOF
)
echo "$IT"
exit
}
if [ "$1" == "help" ]
then
show_help
fi
if [ -z "$1" ]
then
show_help
fi
#
# Mac OS-X does not come with the delightfully useful `timeout` program. Thankfully a rough BASH equivalent can be achieved with only 2 perl statements.
#
# Originally found on SO: http://stackoverflow.com/questions/601543/command-line-command-to-auto-kill-a-command-after-a-certain-amount-of-time
#
perl -e 'alarm shift; exec @ARGV' "$@";
As kvz stated, simply use Homebrew:
brew install coreutils
Now the timeout command is ready to use - no aliases are required (and no gtimeout needed, although that is also available).
You can limit execution time of any program using this command:
ping -t 10 google.com & sleep 5; kill $!
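Here $! expands to the PID of the most recently backgrounded job, so the same pattern works for any command; a generic sketch (long_running_command is a placeholder):
long_running_command &    # start the job in the background
sleep 10                  # let it run for up to 10 seconds
kill $! 2> /dev/null      # kill it if it's still going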
The Timeout Package from Ubuntu / Debian can be made to compile on Mac and it works.
The package is available at http://packages.ubuntu.com/lucid/timeout
You can do ping -t 10 google.com > /dev/null
The > /dev/null discards the output, so instead of showing 64 bytes from 123.45.67.8 blah blah blah it'll just show nothing until it times out. The -t flag can be changed to any number.
