sending an ssh -t command to multiple systems simultaneously (without ansible) - bash

Because of the nature of the script (done at work, on a work RHEL machine) I cannot show the code, but I can at least provide pseudocode to help with a starting point. Currently:
start loop
  1) read the next line of a host text file and assign it to a variable (host name)
  2) send an ssh -t command to that host (which takes anywhere between 2 and 6 minutes to return a response)
  3) log the response to a text file (repeat the loop with the next host from the file)
end loop
Currently I have to run this script overnight because of how many systems it hits.
I want to achieve the same goal and still capture the response from the command per host, but I want the command to be sent to all hosts at the same time, so that the whole run takes between 2 and 6 minutes in total.
But because this is for work, I am not allowed to install Ansible on the system; is there another way to achieve this? If so, please point me in the right direction.

With GNU Parallel:
parallel -j0 --slf hosts.txt --nonall mycommand > out.txt
But maybe you want a bit more info:
parallel -j0 --slf hosts.txt --joblog my.log --tag --nonall mycommand > out.txt
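Here hosts.txt is GNU Parallel's --slf (ssh login file): one ssh login per line, either a plain hostname or user@host. A minimal example (these hostnames are made up):
server1.example.com
deploy@server2.example.com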

I did this using sh years ago using something like:
while true
do
    if [ numberOfFileinSomeDir -lt N ]
    then
        (touch SomeDir/hostname; ssh hostname ... > someotherDir/hostname.txt ; rm SomeDir/hostname) &
        ...
But this stops working after ~100 hosts. It sucks - don't do it. For fewer than about ~500 hosts pssh may be the easiest option - maybe you can install it in your home directory?
Google something like "python parallel execute process multiple" and someone is bound to have already written a script that does what you need.
For more than ~500 hosts you really need to start installing some tools, as others have mentioned in the comments.
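For a modest number of hosts, the background-job approach described above boils down to something like this minimal sketch (hosts.txt, mycommand and the logs/ directory are placeholders; -tt forces tty allocation since the script itself has no terminal, matching the question's ssh -t):
#!/bin/bash
# fire the command at every host at once, one log file per host, then wait for all of them
mkdir -p logs
while read -r host; do
    ssh -tt "$host" "mycommand" > "logs/$host.txt" 2>&1 &
done < hosts.txt
wait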

Related

How can I write a bash command that takes 15 mins to run before another command is executed in Ansible?

I am trying to update the version of some software through Ansible for a server.
- name: upgrade firmware version
  shell: bash -x bmc_firmware_update.sh -k -F BMC_0204.00.bin_enc -s 1
This could take about 15 mins to run. I have another command to run after that, i.e.
- name: something else
  shell: bash -x bmc_firmware_update.sh -k -F BMC_0204.00.bin_enc -s 2
I came across wait_for: timeout=300, but I want to know if there is a better way to make sure the first shell command has completed successfully before the second one runs. Please advise!
Apparently this has nothing to do with Ansible.
Programs run in the foreground by default.
@sorin this is Ansible related!
@op: You can extend the firmware_update.sh script so that, when it finishes, it writes a certain value to a specific path. Then let Ansible check that file for that value: if it is there, continue; if not, retry.
Another possibility is to write the logic in the bash scripts themselves.
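A minimal sketch of that marker-file idea, assuming the stage 1 script is changed to end with something like echo ok > /tmp/bmc_stage1.status on success (the marker path, value and timeout here are made up):
#!/bin/bash
# wait for stage 1's marker before running stage 2; give up after roughly 20 minutes
for i in $(seq 1 120); do
    [ "$(cat /tmp/bmc_stage1.status 2>/dev/null)" = "ok" ] && break
    sleep 10
done
bash -x bmc_firmware_update.sh -k -F BMC_0204.00.bin_enc -s 2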

linux script to send me an email every time a log file changes

I am looking for a simple way to constantly monitor a log file, and to send me an email notification every time this log file changes (new lines have been added to it).
The system runs on a Raspberry Pi 2 (Raspbian / Debian Stretch) and the log belongs to a GPIO python script running as a daemon.
I need something very simple and lightweight; I don't even care about having the text of the new log entry, because I know what it says: it is always the same 24 lines of text appended at the end.
Also, the log.txt file gets recreated every day at midnight, so that might be another issue.
I already have a working python script to send me a simple email via gmail (called it sendmail.py)
What I tried so far was creating and running the following bash script:
monitorlog.sh
#!/bin/bash
tail -F log.txt | python ./sendmail.py
The problem is that it just sends an email every time I execute it, but when the log actually changes, it just quits.
I am really new to Linux, so apologies if I missed something.
Cheers
You asked for simple:
#!/bin/bash
# remember the current line count, then poll every 5 seconds;
# whenever the count changes, send the notification email
cur_line_count="$(wc -l < myfile.txt)"
while true
do
    new_line_count="$(wc -l < myfile.txt)"
    if [ "$cur_line_count" != "$new_line_count" ]
    then
        python ./sendmail.py
    fi
    cur_line_count="$new_line_count"
    sleep 5
done
I've done this a bunch of different ways. One option is a cron job that runs every minute, counts the number of lines (wc -l), compares that to a stored count (e.g. in /tmp/myfilecounter), and sends an email when the numbers differ.
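A minimal sketch of that cron variant (the log path is a placeholder; /tmp/myfilecounter and sendmail.py are the ones mentioned above), meant to be run from a * * * * * crontab entry:
#!/bin/bash
# count the lines now, compare with the count stored on the previous run,
# send the mail if they differ, then remember the new count
count_file=/tmp/myfilecounter
current="$(wc -l < /path/to/log.txt)"
previous="$(cat "$count_file" 2>/dev/null || echo 0)"
if [ "$current" != "$previous" ]; then
    python /path/to/sendmail.py
fi
echo "$current" > "$count_file"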
If you have inotify, there are more direct ways to get "woken up" when the file changes, e.g https://serverfault.com/a/780522/97447 or https://serverfault.com/search?q=inotifywait.
If you don't mind adding a package to the system, incron is a very convenient way to run a script whenever a file or directory is modified, and it looks like it's supported on Raspbian (internally it uses inotify). https://www.linux.com/learn/how-use-incron-monitor-important-files-and-folders. Looks like it's as simple as:
sudo apt-get install incron
sudo vi /etc/incron.allow # Add your userid to this file (or just rm /etc/incron.allow to let everyone use incron)
incrontab -e # Add the following line to the incron table
/path/to/log.txt IN_MODIFY python ./sendmail.py
And you'd be done!

log of parallel computations, how do I prevent interleaved write? lockfile or flock?

I have seen it discussed several times how to keep scripts from running concurrently, but I have not seen the topic of concurrent writes.
I am doing some parallel computation, with xargs launching the commands for the actual computations. At the end of each computation I want that process to access a file and put its results in there. I am getting into trouble because the processes can write to the log file at the same time, resulting in interleaved entries: one line from one run, then a line from another run that finished at about the same time (which is likely to happen given the parallel nature of the xargs run).
So in practice, let's say that using xargs I run in parallel several instances of a script that reads:
#!/bin/bash
#### do something that takes some time
#### define content of the log
folder="<folder>$PWD</folder>\n"
datetag="<enddate>$(date)</enddate>\n"
#### store log in XML ####
echo -e "<myrun>\n""$folder""$datetag""</myrun>" >> "$outputfile"
At present I get output file with interleaved runs log like this
<myrun>
<myrun>
<folder>./generations/test/run1</folder>
<folder>./generations/test/run2</folder>
<enddate>Sun Jul 6 11:17:58 CEST 2014</enddate>
</myrun>
<enddate>Sun Jul 6 11:17:58 CEST 2014</enddate>
</myrun>
Is there a way to give "exclusive access" to one instance of the script at a time, so that each script writes its log without interference from the others?
I have seen flock and lockfile, but I am not sure which fits my case best, and I am seeking advice/suggestions.
Thanks,
Roberto
I will use traceroute as an example, as it prints output slowly, but any other command would also work. Compare:
(echo 8.8.8.8;echo 8.8.4.4) | xargs -P6 -n1 traceroute > traceroute.xarg
to:
(echo 8.8.8.8;echo 8.8.4.4) | parallel traceroute > traceroute.para
Make sure you install GNU Parallel and not another parallel, and that /etc/parallel/config is empty.
I think this does the job in the end. The loop keeps going until this instance of the script can lock the log file for itself. Then it writes to the log and unlocks the file.
The other instances of the script that are running in parallel and might be trying to write will find the lock in place ... or will be able to lock the file for themselves.
while true
do
    # lockfile keeps retrying (every second here) and returns 0 once it has created log.lock
    if lockfile -1 log.lock
    then
        echo -e "accessing file at $(date)"
        echo -e "$logblock" >> log
        rm -f log.lock
        break
    fi
done
Does anybody see any drawbacks in this type of solution?
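For comparison, a minimal sketch of the flock alternative mentioned in the question, using the same variables as the script above (log.lock is an arbitrary lock-file name):
(
    # block until this subshell holds an exclusive lock on file descriptor 200
    flock -x 200
    echo -e "<myrun>\n""$folder""$datetag""</myrun>" >> "$outputfile"
) 200>log.lock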

Why ftam service will start and return prompt from terminal but not from bash script?

I am starting an ftam server (ft820.rc on CentOS 5) using bash 3.0, and I am having an issue with starting it from a script; namely, in the script I do
ssh -nq root@$ip /etc/init.d/ft820.rc start
and the script won't continue after this line, although when I run the following directly on the machine defined by $ip,
/etc/init.d/ft820.rc start
I will get the prompt back just after the service is started.
This is the code for start in ft820.rc
SPOOLPATH=/usr/spool/vertel
BINPATH=/usr/bin/osi/ft820
CONFIGFILE=${SPOOLPATH}/ffs.cfg
# Set DBUSERID to any value at all. Just need to make sure it is non-null for
# lockclr to work properly.
DBUSERID=
export DBUSERID
# if startup requested then ...
if [ "$1" = "start" ]
then
    mask=`umask`
    umask 0000
    # startup the lock manager
    ${BINPATH}/lockmgr -u 16
    # update attribute database
    ${BINPATH}/fua ${CONFIGFILE} > /dev/null
    # clear concurrency locks
    ${BINPATH}/finit -cy ${CONFIGFILE} > /dev/null
    # startup filestore
    ${BINPATH}/ffs ${CONFIGFILE}
    if [ $? = 0 ]
    then
        echo Vertel FT-820 Filestore running.
    else
        echo Error detected while starting Vertel FT-820 Filestore.
    fi
    umask $mask
fi
I repost here (at the request of @Patryk) what I put in the comments on the question:
"is it the same when doing the ssh... on the commandline? ie, can you indeed connect without entering a password, using the pair of private_local_key and the corresponding public_key that you previously inserted in the destination root@$ip:~/.ssh/authorized_keys file? – Olivier Dulac 20 hours ago"
"you say that, at the commandline (and NOT in the script) you can ssh root@.... and it works without asking for your pwd? (ie, it can then be run from a script?) – Olivier Dulac 20 hours ago"
"try the ssh without the '-n' and even without -nq at all: ssh root@$ip /etc/init.d/ft820.rc start (you could even add ssh -v, which will show you local (1:) and remote (2:) events in a very verbose way, helping in knowing where it gets stuck exactly) – Olivier Dulac 19 hours ago"
"also: before the "ssh..." line in the script, make another line with, for example: ssh root@$ip "set ; pwd ; id ; whoami" and see if that works and shows the correct information. This may help to be sure the ssh part is working. The "set" part will also show you the running shell (ex: if it contains BASH=, you're running bash. Otherwise SHELL=... should give a good hint (sometimes not correct) about which shell gets invoked) – Olivier Dulac 19 hours ago"
"please try without the '-n' (= run in background and wait, instead of just run and then quit). If it doesn't work, try adding -t -t -t (3 times) to the ssh, to force it to allocate a tty. But first, please drop the '-n'. – Olivier Dulac 18 hours ago"
Apparently what worked was to add the -t option to the ssh command. (You can go as far as '-t -t -t' to further force it to try to allocate a tty, depending on the situation.)
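For reference, the working invocation would then look something like this (with the -n dropped, as suggested in the comments):
ssh -t -t root@$ip /etc/init.d/ft820.rc start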
I guess it's because the invoked command expected to be run within an interactive session, and so needed a "tty" as its stdout.
A possibility (but just a wild guess): the invoked rc script outputs information, but in a buffered environment (ie, when not launched via your terminal) the calling script couldn't see enough lines to fill the buffer and start printing anything out. It is like when you do a "grep something | something else" in a buffered environment and hit ctrl+c before the buffer is big enough to display anything: you end up thinking no lines were found by the grep, whereas there were maybe already a few lines in the buffer. There is a lot to be said about buffering, and I am just beginning to read about it all. Forcing ssh to allocate a tty made the called command think it was outputting to a live terminal session, and that may have turned off the buffering and allowed the result to show. Maybe in the first case it worked too, but you could never see the output?

Auto SSH and execute script

I have roughly 12 computers that each have the same script on them. This script merely pings all the other machines and prints out whether each machine is "reachable" or "unreachable". However, it is inefficient to log in to each machine manually using ssh just to execute this script.
Suppose I'm logged into node 1. Is there any way for me to log in to nodes 2-12 automatically using SSH, execute the ping script, pipe the results to a file, log out, and proceed to the next machine? Some kind of bash shell script?
I'm afraid I'm at a loss here, since I haven't had any experience with shell scripting before.
Since the script is on the other machines, you can just have ssh run the command for you there:
ssh $hostname my_script >> results_file
When you specify a command like that, it's executed instead of the login shell.
I'll leave it up to you to figure out how to loop over hostnames!
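For completeness, a minimal sketch of such a loop, assuming the hostnames sit one per line in a hosts.txt file and key-based authentication is already in place:
# -n keeps ssh from swallowing the rest of the host list on stdin
while read -r hostname; do
    ssh -n "$hostname" my_script >> results_file
done < hosts.txt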
One trick you'll need to use is setting up pre-authorized keys for each host. Then you can run a script on one host, running something like 'ssh hostname command > log.hostname'
This script might be what you are looking for: It allows you to execute one command (which can be your script) on multiple remote machines via ssh. It's a simple script with bash source available, so you should be able to customize it to your needs:
http://www.heinzi.at/projects/upgradebest.sh/
Yes you can.
You actually need 2 small scripts, as follows:
remote_ssh.sh (which takes the name of the machine as its first argument; the rest of the arguments are the script that you want to execute, with its own arguments)
Example: remote_ssh.sh node5 "echo hello world"
remote_ssh.sh is as follows:
#!/bin/bash
ALL_ARG="$*"                    # all arguments as one string
FST_ARG=$1                      # the target machine
REST_ARG=${ALL_ARG##$FST_ARG}   # everything after the machine name: the command to run
echo "Executing REMOTE COMMAND ON $FST_ARG"
# $(pwd) passes the local working directory so the remote side can cd to the same place
/usr/bin/ssh $FST_ARG bash execute_ssh_command.sh $FST_ARG $(pwd) $REST_ARG
execute_ssh_command.sh is as follows:
#!/bin/bash
ALL_ARG="$*"                    # all arguments as one string
FST_ARG=$1                      # the machine name (passed through for reference)
DIR_ARG=$2                      # the directory to run the command in
REM_ARG="$1 $2"
REST_ARG=${ALL_ARG##$REM_ARG}   # everything after machine and directory: the actual command
cd $DIR_ARG
$REST_ARG
Of course, you have to put these 2 scripts in your path on all your nodes (maybe ~/bin/).
Hope it's helpful.
