grep command inside EOF doesn't seem to execute on remote hosts [UNIX BASH] - bash

I have checked the variable values using echo and those look fine. What I want to achieve is searching logs on remote hosts using grep, but the grep produces no output. Here is the chunk of code for reference:
for dir in ${log_path}
do
for host in ${Host}
do
if [[ "${userinputserverhost}" == "${host}" ]]
then
ssh -q -T username@userinputserverhost "bash -s" <<-'EOF' 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
`\$(grep -A 5 -s "\${ID}" "\${dir}"/archive/*.log)`
EOF
fi
break
done
done

First, remove all the cruft around the grep - the backticks and the escaped $(...) wrapper; just run the grep itself.
Second, you're over-quoting your vars.
Third, skip the "bash -s" if you can.
ssh -q -T username@userinputserverhost <<-'EOF' 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF
Fourth, I don't see where $ID is set...so if that's being loaded on the remote system by the login or something, then that one would need the dollar sign backslashed.
Finally, be aware that here-docs are great, but sometimes just passing the commands as a quoted string is simpler if you can spare the quotes.
$: ssh 2>&1 dudeling@sandbox-server '
> date
> whoami
> ' | tee -a foo.txt
Fri Apr 30 09:23:09 EDT 2021
dudeling
$: cat foo.txt
Fri Apr 30 09:23:09 EDT 2021
dudeling
That one is more a matter of taste. Even better, if you can, write your remote-script to a local file & use that. And of course, you can always add set -vx into the script to see what gets remotely executed.
cat >tmpScript <<-'EOF'
echo -e "Fetching details: \n"
set -vx
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF
ssh <tmpScript 2>&1 -q -T username@userinputserverhost | tee -a ${LogFile}
Now you have an exact copy of what was issued for debugging.

Thanks, Paul, for spending the time and coming up with suggestions/solutions.
I managed to get it working a couple of days back. I would have been happy to say your solution worked 100%, but I'm just as satisfied that I sorted it out on my own, as it helped me learn some new things.
FYI - grep -A 5 -s "${ID}" "${dir}"/archive/*.log - this works, but only by using the shell built-in 'declare -p' to declare the variables within the EOF block. Also, I read that it is recommended to leave the EOF unquoted, as that lets the variables expand before the script is sent to the remote host without any trouble.
Below piece of code is working for me in bash:
ssh -q -T username@userinputserverhost <<-EOF 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
$(declare -p ID)
$(declare -p dir)
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF
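For anyone wondering why the declare -p trick works (a local illustration with made-up values, not part of the actual script): with the EOF unquoted, each $(declare -p ...) expands on the local machine into a re-declaration statement, and that statement is what the remote shell executes before the grep runs.
# Hypothetical local values, purely for illustration
ID="REQ-12345"
dir="/opt/app/logs"
declare -p ID dir
# prints the text that ends up inside the here-doc sent over ssh:
# declare -- ID="REQ-12345"
# declare -- dir="/opt/app/logs"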

Related

How do I prevent my bash script (tailing a file) from repeatedly acting on the same line?

I was working on a script that would keep monitoring logins to my server or laptop via ssh.
This is the code I was working with:
slackmessenger() {
curl -X POST -H 'Content-type: application/json' --data '{"text":"'"$1"'"}' myapilinkwashere
## removed the API link due to Slack restrictions
}
while true
do
tail /var/log/auth.log | grep sshd | head -n 1 | while read LREAD
do
echo ${LREAD}
var=$(tail -f /var/log/auth.log | grep sshd | head -n 1)
slackmessenger "$var"
done
done
The issue I'm facing is that it keeps sending the old logs because of the while loop. Can there be a condition so that the loop only sends new/updated entries instead of sending the old ones over and over again? I could not think of a condition that would skip the old entries and only show the new ones.
Instead of using head -n 1 to extract a line at a time, iterate over the filtered output of tail -f /var/log/auth.log | grep sshd and process each line once as it comes through.
#!/usr/bin/env bash
# ^^^^- this needs to be a bash script, not a sh script!
case $BASH_VERSION in '') echo "Needs bash, not sh" >&2; exit 1;; esac
while IFS= read -r line; do
printf '%s\n' "$line"
slackmessenger "$line"
done < <(tail -f /var/log/auth.log | grep --line-buffered sshd)
See BashFAQ #9 describing why --line-buffered is necessary.
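If your grep doesn't support --line-buffered, stdbuf -oL from GNU coreutils is a commonly available alternative (a sketch, assuming stdbuf is installed on the system):
done < <(tail -f /var/log/auth.log | stdbuf -oL grep sshd)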
You could also write this as:
#!/usr/bin/env bash
case $BASH_VERSION in '') echo "Needs bash, not sh" >&2; exit 1;; esac
tail -f /var/log/auth.log |
grep --line-buffered sshd |
tee >(xargs -d $'\n' -n 1 slackmessenger)

Execute command in variable with bash through ssh

I'm facing an issue while assigning the result of a command to a local variable over ssh. My variable doesn't receive the expected output of the executed instructions.
If someone has an idea of the correct syntax to use...
ssh -T $HR_HST -l $HR_USR -q -o StrictHostKeyChecking=no <<END_SSH_0
HermesShowCmd="sfe in;fname=$INPUT_WPPComOutFile;node=$Node;format=1" #ok
$HermesFolder/hcl \$HermesShowCmd >> $HR_LOG/$HRM_LOG #ok
HermesStatus=`$HermesFolder/hcl "$HermesShowCmd" | grep "Taken: \.\.\." | wc -l` #KO... variable not updated...
echo "$(date) --HermesStatus=\$HermesStatus" >> $HR_LOG/$HRM_LOG
END_SSH_0
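The likely cause is the same local-versus-remote expansion issue discussed above: with an unquoted here-doc delimiter, the backticks and the unescaped $HermesShowCmd are expanded on the local machine before ssh ever runs, so the variable set on the remote side is never used. A hedged sketch of one way to keep those expansions on the remote host (same variable names as the question, untested):
ssh -T "$HR_HST" -l "$HR_USR" -q -o StrictHostKeyChecking=no <<END_SSH_0
HermesShowCmd="sfe in;fname=$INPUT_WPPComOutFile;node=$Node;format=1"
$HermesFolder/hcl "\$HermesShowCmd" >> $HR_LOG/$HRM_LOG
# escape the command substitution and the variable so they run remotely
HermesStatus=\$($HermesFolder/hcl "\$HermesShowCmd" | grep "Taken: \.\.\." | wc -l)
echo "\$(date) --HermesStatus=\$HermesStatus" >> $HR_LOG/$HRM_LOG
END_SSH_0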

How to run for loop inside heredoc while accessing remote machine

Here is my script, in which I use local variables on a remote machine via a heredoc. But the loop under the heredoc takes only the first variable value. The loop itself runs fine inside the heredoc, just with the same values every time.
#!/bin/bash
prod_web=($(cat /tmp/webip.txt));
new_prod_app_private_ip=($(cat /tmp/ip.txt));
no_n=($(cat /tmp/serial.txt));
ssh -t -o StrictHostKeyChecking=no ubuntu@${prod_web[0]} -p 2345 -v << EOF
set -xv
for (( x = 0; x < '${#no_n[@]}'; x++ ))
do
sudo su
echo '${no_n[x]}'
echo '${new_prod_app_private_ip[x]}'
curl -fIkSs https://'${new_prod_app_private_ip[x]}':9002 | head -n 1
done
EOF
So, my ip.txt file contains values like:
10.0.1.0
10.0.2.0
10.0.3.0
My serial.txt file:
9
10
11
So, my loop runs only for the first IP (from /tmp/ip.txt) on the remote machine, three times. I want it to run for all three IPs. My remote IP is in the file /tmp/webip.txt.
I've been stuck on this for a long time; any help is appreciated. Is there any other solution I can go with?
There are 2 environments. On your local machine and on the remote machine. You need to think how to transfer data/variables/state/objects/handles between these machines.
If you set something on your local machine (i.e. prod_web=($(cat /tmp/webip.txt))) and then just ssh to the remote host (i.e. ssh user@host 'echo "${prod_web[@]}"'), the variable will not be visible/exported to the remote machine. You can:
scp the files {ip,serial}.txt, execute the whole script on the remote machine, then clean up, i.e. remove the {ip,serial}.txt files from the remote machine
pass the files {ip,serial}.txt, merged/joined/pasted together, to the stdin of ssh and then read stdin on the remote machine
create all the commands to run on your local machine and then pass the pre-prepared commands to the remote machine, like ssh ... "$(for ...; do echo curl ...; done)"
I would go with the second option, as I like passing everything through pipes and don't like cleaning up after myself - removing temporary files in case of error can be a mess.
My script would probably look like this:
#!/bin/bash
set -euo pipefail
read -r host _ <webip.txt
paste serial.txt ip.txt | ssh -t -o StrictHostKeyChecking=no -p 2345 -v ubuntu@"$host" '#!/bin/bash
set -euo pipefail
while read -r no_n ip; do
for ((i = 0; i < no_n; ++i)); do
printf "%s\n" "$no_n"
printf "%s\n" "$ip"
curl -fIkSs https://"$ip":9002 | head -n 1
done
done
'
As the remote script becomes larger and less quoting-friendly, I would save it into a separate file, remote_scripts.sh, and have ssh run that script on the remote host.
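One way to do that (a sketch, assuming the read loop above has been saved locally as remote_scripts.sh; the /tmp path is illustrative):
scp -P 2345 remote_scripts.sh ubuntu@"$host":/tmp/remote_scripts.sh
paste serial.txt ip.txt | ssh -T -o StrictHostKeyChecking=no -p 2345 ubuntu@"$host" bash /tmp/remote_scripts.sh
The paste output still arrives on the script's stdin, so the read loop works unchanged.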
I don't get what you are trying to do with that sudo su, which 100% does not do what you want.
If the no_n magic number is the number of times to execute that curl, and you have xargs and don't really care about errors, you can just do a magic and confusing one-liner:
#!/bin/bash
set -euo pipefail
read -r host _ <webip.txt
paste serial.txt ip.txt | ssh -t -o StrictHostKeyChecking=no -p 2345 -v ubuntu@"$host" 'xargs -n2 -- sh -c "seq 0 \"\$1\" | xargs -n1 -- sh -c \"curl -fIkSs https://\\\"\\\$1\\\":9002 | head -n 1\" -- \"\$2\"" --'
Preparing all the commands to run locally may actually be more readable and may save some nasty quoting. But this really depends on how big serial.txt and ip.txt are and how big the commands to be executed on the remote machine are, as you want to minimize the number of bytes transferred between the machines.
Here the commands to run are constructed on the local machine (i.e. "$(...)" is passed to ssh) and executed on the remote machine:
# semi-readable script, not as fast and no xargs
ssh -t -o StrictHostKeyChecking=no -p 2345 -v ubuntu@"$host" "$(paste serial.txt ip.txt | while read -r serial ip; do
seq 0 "$serial" | while read -r _; do
echo "curl -fIkSs \"https://$ip:9002\" | head -n 1"
done
done)"
A here-doc does not execute the commands written inside it, so:
$ cat <<EOF
> echo 1
> EOF
echo 1
but you can use command substitution $( ... ):
$ cat <<EOF
> $(echo 1)
> EOF
1
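For completeness (a small sketch, not part of the original answer): quoting the delimiter suppresses that expansion entirely, which is the difference the earlier here-doc answers rely on:
$ cat <<'EOF'
> $(echo 1)
> EOF
$(echo 1)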

tee -a : No such file or Directory

Please have a look at the following code.
#!/bin/sh
LOG_FILE=/home/admin/scriptLogs.log
rm -f ${LOG_FILE}
echo "`date`:: Script Execution Started" | tee -a ${LOG_FILE}
DATABASE ACCESS CODE 2>&1
echo "`date`:: Script Execution Successful " | tee -a {$LOG_FILE}
exit 0
It produces the following output:
> Tue Feb 7 12:14:49 IST 2017:: Script Execution Started
> tee:{/home/admin/scriptLogs.log}: No such file or directory
> Tue Feb 7 12:14:49 IST 2017:: Script Execution Successfull
However, the file is present in the specified location. It also gets appended to by everything except the last echo statement. Why this behavior?
Make sure to always quote your variables before expanding them to avoid reinterpretation (read about it here: http://www.tldp.org/LDP/abs/html/quotingvar.html)
Also, you should define LOG_FILE as a string (note the quotes below):
LOG_FILE="/home/admin/scriptLogs.log"
With that said, you have a typo in your script, as mentioned by @codeforester.
Also, you print that the execution was successful, without checking that it really was.
So your code should look like this:
#!/bin/sh
LOG_FILE="/home/admin/scriptLogs.log"
rm -f "$LOG_FILE"
echo "`date`:: Script Execution Started" | tee -a "$LOG_FILE"
DATABASE ACCESS CODE 2>&1
if [ $? -eq 0 ]; then
echo "`date`:: Script Execution Successful " | tee -a "$LOG_FILE"
else
...
fi
exit 0
Note: I have removed the curly brackets, as they are not needed here (although it is not a mistake to use them)
As pointed out by @codeforester, use:
#!/bin/sh
LOG_FILE=/home/admin/scriptLogs.log
rm -f ${LOG_FILE}
echo "`date`:: Script Execution Started" | tee -a ${LOG_FILE}
DATABASE ACCESS CODE 2>&1
echo "`date`:: Script Execution Successful " | tee -a ${LOG_FILE}
exit 0
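To spell out what the typo actually does (a small sketch, not part of either answer): {$LOG_FILE} keeps the literal braces and expands the variable inside them, so tee is asked to open a file whose name starts with {:
LOG_FILE=/home/admin/scriptLogs.log
echo test | tee -a {$LOG_FILE}     # tee tries to open "{/home/admin/scriptLogs.log}" and fails
echo test | tee -a "${LOG_FILE}"   # tee appends to "/home/admin/scriptLogs.log"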

While loop to execute nagios commands not working properly

I wrote a small bash script in this post: How to search for a string in a text file and perform a specific action based on the result
I noticed that when I ran the script and checked the logs, everything appeared to be working, but when I looked at the Nagios UI, almost half of the servers listed in my text file did not get their notifications disabled. A revised version of the script is below:
host=/Users/bob/wsus.txt
password="P#assw0rd123"
while read -r host; do
region=$(echo "$host" | cut -f1 -d-)
if [[ $region == *sea1* ]]
then
echo "Disabling host notifications for: $host"
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" https://nagios.$region.blah.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
else
echo "Disabling host notifications for: $host"
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" https://nagios.$region.blah02.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
fi
done < wsus.txt >> /Users/bob/disable.log 2>&1
If I run the command manually against the servers having the issue, they do get disabled in the Nagios UI, so I'm a bit confused. FYI, I'm not well versed in Bash, so this was my attempt at automating this process a bit.
1 - There is a missing double-quote before the first https occurrence:
You have:
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" https://nagios.$region.blah.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
Should be:
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios.$region.blah.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
2 - Your first variable host is never used (overwritten inside the while loop).
I'm guessing what you were trying to do was something like:
hosts_file="/Users/bob/wsus.txt"
log_file="/Users/bob/disable.log"
# ...
while read -r host; do
# Do stuff with $host
done < $hosts_file >> $log_file 2>&1
3 - This looks suspicious to me:
if [[ $region == *sea1* ]]
Note: I haven't tested it yet, so this is my general feeling about this, might be wrong.
The $region isn't double-quoted, so make sure there could be no spaces / funny stuff happening there (but this should not be a problem inside a double-bracket test [[).
The *sea1* on the right-hand side is treated as a glob-style pattern. If you want to test it as a regular expression instead, you should use the =~ operator or (my favorite for some reason) the grep command:
if grep -q ".*sea.*" <<< "$region"; then
# Your code if match
else
# Your code if no match
fi
The -q keeps grep quiet
There is no need for a test like [ or [[ because the return code of grep is already 0 if there is any match
The <<< simply redirects the right-hand string to the standard input of the left command (avoiding useless piping like echo "$region" | grep -q ".*sea.*").
If this doesn't solve your problem, please provide a sample of your input file hosts_file as well as some output logs.
You could also try to see what's really going on under the hood by enclosing your script with set -x and set +x to activate debug/trace mode.

Resources