I have a small shell script, shown below, that I am using to log in to multiple servers and capture whether the target server is running Red Hat or Ubuntu as its OS.
#!/bin/ksh
if [ -f $HOME/osver.report.txt ];then
rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
OSVER=$(ssh $USER@${x} "cat /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
echo -e "$x \t\t $OSVER" >> osver.report.txt
done
The above script works. However, if I attempt to add in some awk as shown below and the server is a Red Hat server, my results in osver.report.txt only show the hostname and no OS version. I have played around with the quoting, but nothing seems to work.
OSVER=$(ssh $USER@${x} "cat /etc/redhat-release | awk {'print $1,$2,$6,$7'} 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
If I change the script as suggested to the following:
#!/bin/bash
if [ -f $HOME/osver.report.txt ];then
rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
OSVER=$(
ssh $USER@${x} bash << 'EOF'
awk '{print $1,$2,$6,$7}' /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
echo -e "$x \t\t $OSVER" >> osver.report.txt
done
Then I get the following errors:
./test.bash: line 9: unexpected EOF while looking for matching `)'
./test.bash: line 16: syntax error: unexpected end of file
You're suffering from a quoting problem. When you pass a quoted command to ssh, you effectively lose one level of quoting (as if you passed the same arguments to sh -c "..."). So the command that you're running on the remote host is actually:
cat /etc/redhat-release | awk '{print ,,,}' 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
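You can reproduce the effect locally, with sh -c standing in for the remote shell (a minimal sketch):
# Inside the double quotes, $1 and $2 are expanded by the *local* shell
# (here they are unset, so empty), and the inner shell never sees them:
sh -c "echo '{print $1,$2}'"
# prints: {print ,}
# Escaping the dollar signs keeps them intact for the inner shell:
sh -c "echo '{print \$1,\$2}'"
# prints: {print $1,$2}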
One way of resolving this is to pipe your script into a shell, rather than passing it as arguments:
OSVER=$(
ssh $USER@${x} bash <<'EOF'
awk '{print $1,$2,$6,$7}' /etc/redhat-release 2>/dev/null ||
grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
The use of <<'EOF' here inhibits any variable expansion in the here document; without that, expressions like $1 would be expanded locally.
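A minimal local comparison of the two delimiter forms (the remote shell plays no part in the expansion):
# Unquoted delimiter: the local shell expands $USER before cat ever sees the text
cat <<EOF
$USER is expanded locally
EOF

# Quoted delimiter: the text reaches cat (or a remote bash) untouched
cat <<'EOF'
$USER is passed through literally
EOF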
A better solution would be to look into something like Ansible, which has built-in facilities for sshing to groups of hosts and collecting facts about them, including distribution version information.
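As a rough ad-hoc sketch (assuming Ansible is installed and that the hostlist file contains plain hostnames, which makes it usable as a simple inventory), the setup module already gathers the distribution facts:
# Collect only the distribution-related facts (name, version, major release, ...)
ansible all -i hostlist -m setup -a 'filter=ansible_distribution*' -u "$USER"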
Related
I want to execute a docker command on a remote server. The problem is that I don't know how to escape multiple levels of quotes.
ret=$(ssh root@server "docker exec nginx bash -c 'cat /etc/nginx/nginx.conf | grep 'ServerName' | cut -d '|' -f1'")
I get
bash: -f1: command not found
There's little need to execute so much on the remote host. The file you want to search isn't likely that big: just pipe the entire thing down via ssh to a local awk process:
ret=$(ssh root@server "docker exec nginx cat /etc/nginx/nginx.conf" |
awk -F'|' '/ServerName/ {print $1}')
Wrap your parameter string in N calls to "$(printf "%q" ...)", one for each level of recursive parsing.
ssh root@server "docker exec nginx bash -c 'cat /etc/nginx/nginx.conf | grep ServerName | cut -d | -f1'"
How many recursive calls does the above line have? I don't wish to set up docker just for the test, so I may have one of the following wrong:
ssh - certainly counts
docker - ??
nginx - ??
bash - certainly counts
If there are four, then you need four calls to "$(printf "%q" "str")"; don't forget to add all those " marks:
ssh root@server docker exec nginx bash -c "$(printf "%q" "$(printf "%q" "$(printf "%q" "$(printf "%q" "cat /etc/nginx/nginx.conf | grep ServerName | cut -d | -f1")")")")"
Explanation: ssh parses the string like bash -c does, stripping one level of quotes. docker and nginx may also each parse the string (or not). Finally, bash -c parses whatever the previous levels have parsed, and removes the final level of quotes. exec does not parse the strings, it simply passes them verbatim to the next level.
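To get a feel for what each wrapping layer does, you can run printf %q by hand (a local sketch; the exact escaping bash chooses may vary slightly between versions):
# One level of %q makes the string survive one round of parsing/quote removal:
printf "%q\n" "cut -d | -f1"
# prints something like: cut\ -d\ \|\ -f1

# Each additional wrapping escapes the previous result again, so the string
# survives one more level of quote removal:
printf "%q\n" "$(printf "%q" "cut -d | -f1")"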
Another solution is to put the line, that you want bash to execute, into a script. Then you can simply invoke the script without all this quoting insanity.
#!/bin/bash
< /etc/nginx/nginx.conf grep ServerName | cut -d '|' -f1
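One way to invoke it without copying anything into the container (a sketch; get_servername.sh is a hypothetical name for the script above) is to stream it to bash inside the container, much like the here-document answers below:
# ssh forwards our stdin (the script file) to the remote command, and
# docker exec -i forwards it on to bash inside the container, which
# reads and executes it.
ssh root@server docker exec -i nginx bash < get_servername.sh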
Consider using a here-document:
ret="$(ssh root#server << 'EOF'
docker exec nginx bash -c "grep 'ServerName' /etc/nginx/nginx.conf | cut -d '|' -f1"
EOF
)"
echo "$ret"
Or, simpler, as suggested by @MichaelVeksler:
ret="$(ssh root#server docker exec -i nginx bash << 'EOF'
grep 'ServerName' /etc/nginx/nginx.conf | cut -d '|' -f1
EOF
)"
echo "$ret"
What I want to achieve is to search logs on remote hosts using grep, but it does not give any output. I have checked the variable values using echo and those look fine. Here is the chunk of code for reference:
for dir in ${log_path}
do
for host in ${Host}
do
if [[ "${userinputserverhost}" == "${host}" ]]
then
ssh -q -T username@userinputserverhost "bash -s" <<-'EOF' 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
`\$(grep -A 5 -s "\${ID}" "\${dir}"/archive/*.log)`
EOF
fi
break
done
done
First, remove all the crap around the grep.
Second, you're overquoting your vars.
Third, skip the "bash -s" if you can.
ssh -q -T username@userinputserverhost <<-'EOF' 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF
Fourth, I don't see where $ID is set...so if that's being loaded on the remote system by the login or something, then that one would need the dollar sign backslashed.
Finally, be aware that here-docs are great, but sometimes here-strings are simpler if you can spare the quotes.
$: ssh 2>&1 dudeling@sandbox-server '
> date
> whoami
> ' | tee -a foo.txt
Fri Apr 30 09:23:09 EDT 2021
dudeling
$: cat foo.txt
Fri Apr 30 09:23:09 EDT 2021
dudeling
That one is more a matter of taste. Even better, if you can, write your remote-script to a local file & use that. And of course, you can always add set -vx into the script to see what gets remotely executed.
cat >tmpScript <<-'EOF'
echo -e "Fetching details: \n"
set -vx
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF
ssh <tmpScript 2>&1 -q -T username@userinputserverhost | tee -a ${LogFile}
Now you have an exact copy of what was issued for debugging.
Thanks, Paul, for spending time and coming up with suggestions and solutions.
I managed to get it working a couple of days back. I would have been happy to say that your solution worked 100%, but I am still satisfied that I got it sorted on my own, as it helped me learn some new stuff.
FYI - grep -A 5 -s "${ID}" "${dir}"/archive/*.log - this will work, but only by using the shell built-in 'declare -p' to declare the variables within the EOF block. Also, I read that it is recommended to leave the EOF delimiter unquoted, as that allows variable expansion to reach the remote hosts without any trouble.
Below piece of code is working for me in bash:
ssh -q -T username@userinputserverhost <<-EOF 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
$(declare -p ID)
$(declare -p dir)
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF
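To see why this works, it helps to look at what the $(declare -p ...) substitutions expand to before the here-document is sent (a local sketch with made-up values):
ID="INC12345"          # made-up example values
dir="/var/log/myapp"
declare -p ID dir
# prints:
# declare -- ID="INC12345"
# declare -- dir="/var/log/myapp"
Because the EOF delimiter is unquoted, those declare -- lines are substituted into the here-document locally and become ordinary variable assignments that the remote bash executes before running the grep.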
I have the below block of shell script code in a Jenkinsfile:
stage("Compose Source Structure")
{
sh '''
set -x
rm -vf config
wget -nv --no-check-certificate https://test-company/k8sconfigs/test-config
export KUBECONFIG=$(pwd)/test-config
kubectl config view
ns_exists=$(kubectl get namespaces | grep ${consider_namespace})
echo "Validating k8s namespace"
if [ -z "$ns_exists" ]
then
echo "No namespace ${consider_namespace} exists in the cluster ${source_cluster}"
exit 1
else
echo "scanning namespace \'${namespace}\'"
mkdir -p "${HOME}/cluster-backup/${namespace}"
while read -r resource
do
echo "scanning resource \'${resource}\'"
mkdir -p "${HOME}/sync-cluster/${namespace}/${resource}"
while read -r item
do
echo "exporting item \'${item}\'"
kubectl get "$resource" -n "$namespace" "$item" -o yaml > "${HOME}/sync-cluster/${namespace}/${resource}/${BUILD_NUMBER}-${source_cluster}-${consider_namespace}-$item.yaml"
done < <(kubectl get "$resource" -n "$namespace" 2>&1 | tail -n +2 | awk \'{print $1}\')
done < <(kubectl api-resources --namespaced=true 2>/dev/null | tail -n +2 | awk \'{print $1}\')
fi
'''
Unfortunately, I am getting an error like the one below:
++ kubectl get namespaces
++ grep test
+ ns_exists='test Active 2d20h'
+ echo 'Validating k8s namespace'
Validating k8s namespace
/home/jenkins/workspace/k8s-sync-from-cluster#tmp/durable-852103cd/script.sh: line 24: syntax error near unexpected token `<'
I did try to escape the "<" with a backslash, like this:
\<
But still no success. Any idea what I am doing wrong here?
From the docs for the sh step (emphasis mine):
Runs a Bourne shell script, typically on a Unix node. Multiple lines are accepted.
An interpreter selector may be used, for example: #!/usr/bin/perl
Otherwise the system default shell will be run, using the -xe flags (you can specify set +e and/or set +x to disable those).
The system default shell on your Jenkins server may be sh, not bash. POSIX sh will not recognize <(command) process substitution.
To specifically use the bash shell, you must include a #!/usr/bin/env bash shebang immediately after your triple quote. Putting a shebang on the next line will have no effect.
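A quick local check of the difference (a sketch; the exact error text depends on which shell /bin/sh points to on your system):
bash -c 'cat <(echo hello)'   # prints: hello
sh -c 'cat <(echo hello)'     # with dash or another strict POSIX sh, fails with
                              # a syntax error near the "(" (or similar)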
I also took the liberty of fixing shellcheck warnings for your shell code, and removing \' escapes that are not necessary.
Try this:
stage("Compose Source Structure")
{
sh '''#!/usr/bin/env bash
set -x
rm -vf config
wget -nv --no-check-certificate https://test-company/k8sconfigs/test-config
KUBECONFIG="$(pwd)/test-config"
export KUBECONFIG
kubectl config view
ns_exists="$(kubectl get namespaces | grep "${consider_namespace}")"
echo "Validating k8s namespace"
if [ -z "$ns_exists" ]
then
echo "No namespace ${consider_namespace} exists in the cluster ${source_cluster}"
exit 1
else
echo "scanning namespace '${namespace}'"
mkdir -p "${HOME}/cluster-backup/${namespace}"
while read -r resource
do
echo "scanning resource '${resource}'"
mkdir -p "${HOME}/sync-cluster/${namespace}/${resource}"
while read -r item
do
echo "exporting item '${item}'"
kubectl get "$resource" -n "$namespace" "$item" -o yaml > "${HOME}/sync-cluster/${namespace}/${resource}/${BUILD_NUMBER}-${source_cluster}-${consider_namespace}-$item.yaml"
done < <(kubectl get "$resource" -n "$namespace" 2>&1 | tail -n +2 | awk '{print $1}')
done < <(kubectl api-resources --namespaced=true 2>/dev/null | tail -n +2 | awk '{print $1}')
fi
'''
}
Hope this time it's not a duplicate. I didn't find anything.
My code:
#!/bin/bash
FILE=/home/user/srv.txt
TICKET=task
while read LINE; do
ssh -nT $LINE << 'EOF'
touch info.txt
hostname >> info.txt
ifconfig | grep inet | awk '$3 ~ "cast" {print $2}' >> info.txt
grep -i ^server /etc/zabbix/zabbix_agentd.conf >> info.txt
echo "- Done -" >> info.txt
EOF
ssh -nT $LINE "cat info.txt" >> $TICKET.txt
done < $FILE #End
My issue:
If I only use ssh $LINE, it will only ssh to the host on the first line, and it also displays the error Pseudo-terminal will not be allocated because stdin is not a terminal.
Using ssh -T fixes the error message above, and the file info.txt gets created.
Using ssh -nT fixes the problem where ssh only reads the first line, but then I get the error message cat: info.txt: No such file or directory. If I ssh to the hosts, I can confirm that there is no info.txt file in my home folder; with ssh -T, the file is there.
I also tried the option -t, and the here-document with EOF without the ' ... ', but no luck.
Do I miss something?
Thanks for your help,
Iswaren
You have two problems.
If you invoke ssh without -n it may consume the $FILE input (it drains its stdin)
If you invoke ssh with -n it won't read its stdin, so none of the commands will be executed
However, the first ssh has had its input redirected to come from a heredoc, so it does not need -n.
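The stdin-draining effect is easy to reproduce (a minimal sketch; true stands in for the real remote commands):
# Without -n or a redirect, the first ssh inherits the loop's stdin and
# swallows the remaining lines of srv.txt, so the loop runs only once:
while read LINE; do
    ssh -T "$LINE" true
done < srv.txt

# Giving ssh its own stdin (a heredoc, /dev/null, or the -n option) lets
# the read loop see every line:
while read LINE; do
    ssh -T "$LINE" true < /dev/null
done < srv.txt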
As stated in the comments, the second ssh call is not needed. Rather than piping into info.txt and then copying that into a local file, just output to the local file directly:
while read LINE; do
ssh -T $LINE >>$TICKET.txt <<'EOF'
hostname
ifconfig | grep inet | awk '$3 ~ "cast" {print $2}'
grep -i ^server /etc/zabbix/zabbix_agentd.conf
echo "- Done -"
EOF
done <$FILE
Running my script in a terminal works fine. It also works fine in Mint 18.2 when run at boot via /etc/rc.local, but on Mint 18.1 it doesn't work. Also on 18.1 it won't run via sudo crontab -e. I'm assuming it's got something to do with the tail/grep part.
Here is the relevant part of my script - up to this point the script works:
# Takes a screen capture every time I type a string that matches one from a list
sudo -i tail -fn0 "$path"k.log | \
while read line ; do
echo "$line" | egrep --line-buffered -i -e "$pattern"
if [ $? = 0 ]
then
matches=$(echo "$line" | egrep --line-buffered -i -o "$pattern")
cap_split=$(echo "$matches" | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/ /g')
cap_string=$(echo "$cap_split" | sed -e 's/[^A-Za-z0-9\\n._-]/_/g')
sleep 1
DISPLAY=:0.0 scrot "$path"cap/"$stamp"_"$cap_string".png
echo -e "### Match found \"$cap_split\" and cap created ###"
fi
done
Why will it only work in the terminal on Mint 18.1 and not from rc.local or cron?