Output not showing all echo commands - bash

I'm using a bash script which is run on serverA and connects to serverB to run a file.
The results are saved in a variable and then echoed. However, it doesn't echo all of the data.
The script on serverA is running:
count=$(sshpass -p password ssh -t -q user@serverB cd /home/tom && ./count.sh)
echo "Count: $count"
This echoes 341, not Count: 341.
The count.sh script on serverB is looping through some folders and doing a count of files.
E.g.
total=0
count=$(ls -l | wc -l | xargs)
if [ "$count" > 0 ]; then
total=$(( total + count ))
fi
echo "$total"
How do I display the full echo on serverA?

You are attempting to run ./count.sh on the local machine, not the remote host. The && is a command separator that terminates the sshpass command. Use quotes to ensure your desired shell command is passed to the remote host.
count=$(sshpass -p password ssh -t -q user@serverB 'cd /home/tom && ./count.sh')
I don't see any way of producing the reported output, unless count.sh can also run locally but something (are you using set -e?) prevents the following echo from executing at all.
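To see where the split happens, here is a minimal local sketch (echo stands in for the ssh call and count.sh; no remote host needed):
# Unquoted, && is consumed by the local shell: ssh receives only "cd /home/tom",
# and ./count.sh runs locally, inside the command substitution.
count=$(echo remote-command && echo local-command)
echo "Count: $count"
# Count: remote-command
# local-command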

How to run shell script via kubectl without interactive shell

I am trying to export a configuration from a service called keycloak by using shell script. To do that, export.sh will be run from the pipeline.
The script connects to the k8s cluster and runs the commands there.
So far everything goes okay; the export works perfectly.
But when I try to exit from the pod with exit, it directly ends the whole shell script: it moves back to the pipeline host without staying on the remote machine.
Running the command from the pipeline
ssh -t ubuntu#example1.com 'bash' < export.sh
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
kubectl -n keycloak exec -it keycloak-0 bash
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
exit
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
exit
exit
scp ubuntu#example1.com:/tmp/realm-export/* ./configuration2/realms/
After the first exit the whole shell script stops; the remaining commands don't run, and it won't stay on ubuntu@example1.com.
Are there any solutions?
Run the commands inside the pod without an interactive shell, using a heredoc (EOF).
Note that it's <<'EOF', not <<EOF: quoting the delimiter prevents variable expansion in the current (local) shell.
The /tmp/export/master-* in the inner script will therefore expand on the remote side, as you expect.
kubectl -n keycloak exec -i keycloak-0 -- bash <<'EOF'
<put your codes here, which you type interactively>
EOF
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
# the suggested code.
kubectl -n keycloak exec -i keycloak-0 -- bash <<'EOF'
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
EOF
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
scp ubuntu@example1.com:/tmp/realm-export/* ./configuration2/realms/
Whether the scp succeeds or not, the script will then exit.
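As a quick sanity check that the heredoc actually reaches the pod's shell, you can run something harmless first (a sketch, reusing the pod name from the question):
kubectl -n keycloak exec -i keycloak-0 -- bash <<'EOF'
echo "running as $(whoami) in $(pwd)"
EOF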

How to change name of file if already present on remote machine?

I want to change the name of a file if it is already present on a remote server via SSH.
I tried this from here (SuperUser)
ssh user@localhost -p 2222 'test -f /absolute/path/to/file' && echo 'YES' || echo 'NO'
This works well from a prompt: it echoes YES when the file exists and NO when it doesn't. But I want this to be launched from a crontab, so it must be in a script.
Let's assume the file is called data.csv. A condition is set in a loop: if there is already a data.csv file on the server, the file is renamed data_1.csv, then data_2.csv, ... until the name is unique.
The renaming part works, but the detection part doesn't:
while [[ $fileIsPresent!='false' ]]
do
((appended+=1))
newFileName=${fileName}_${appended}.csv
remoteFilePathname=${remoteFolder}${newFileName}
ssh pi@localhost -p 2222 'test -f $remoteFilePathname' && fileIsPresent='true' || fileIsPresent='false'
done
It always returns fileIsPresent='true' for any data_X.csv. All the paths are absolute.
Do you have any idea to help me?
This works:
$ cat replace.sh
#!/usr/bin/env bash
if [[ "$1" == "" ]]
then
echo "No filename passed."
exit
fi
if [[ ! -e "$1" ]]
then
echo "no such file"
exit
fi
base=${1%%.*} # get basename
ext=${1#*.} # get extension
for i in $(seq 1 100)
do
new="${base}_${i}.${ext}"
if [[ -e "$new" ]]
then
continue
fi
mv "$1" "$new"
exit
done
$ ./replace.sh sample.csv
no such file
$ touch sample.csv
$ ./replace.sh sample.csv
$ ls
replace.sh
sample_1.csv
$ touch sample.csv
$ ./replace.sh sample.csv
$ ls
replace.sh
sample_1.csv
sample_2.csv
However, personally I'd prefer to use a timestamp instead of a number, something like $(date +%Y%m%d_%H%M%S). Note that this sample will run out of names after 100; timestamps won't.
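For example, a timestamp-based variant of the rename, reusing the same variables as replace.sh (a sketch, untested against your setup):
new="${base}_$(date +%Y%m%d_%H%M%S).${ext}"
mv "$1" "$new"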
As you asked for ideas, it's worth mentioning that you probably don't want to start up to 100 ssh processes, each one logging into the remote machine. You might do better with a construct like this, which establishes a single ssh session that runs until complete:
ssh USER@REMOTE <<'EOF'
for ((i=0;i<10;i++)) ; do
echo $i
done
EOF
Alternatively, you can create and test a bash script locally and then run it remotely like this:
ssh USER@REMOTE 'bash -s' < LocallyTestedScript.bash
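Combining the two ideas, the whole detect-and-rename loop can run in a single SSH session (a sketch; the path is a placeholder, the port is taken from your question, and the heredoc body executes entirely on the server):
ssh -p 2222 pi@localhost 'bash -s' <<'EOF'
f=/absolute/path/to/data.csv
if [[ -e "$f" ]]; then
    i=1
    while [[ -e "/absolute/path/to/data_${i}.csv" ]]; do
        ((i+=1))
    done
    mv "$f" "/absolute/path/to/data_${i}.csv"
fi
EOF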

Variable Not Picking up when in Quotes

I'm trying to rsync a directory from one server to hundreds of servers using the script at the bottom.
But when I put single or double quotes around the ${host} variable, the hostnames are not picked up properly or are not resolved.
The error looks like this:
server1.example.com
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
When I run just the rsync command like below, it works. But the output doesn't contain the hostname, which is important for me to correlate the output with the associated host.
hostname -f && rsync -arpn --stats /usr/xyz ${host}:/usr/java
Can you please review and suggest how to make the script work even with quotes around the host variable,
so that the output contains the hostname and the rsync output together?
==============================================
#!/bin/bash
tmpdir=${TMPDIR:-/home/user}/output.$$
mkdir -p $tmpdir
count=0
while IFS= read -r host; do
ssh -n -o BatchMode=yes ${host} '\
hostname -f && \
rsync -arpn --stats /usr/xyz '${host}':/usr/java && \
ls -ltr /usr/xyz'
> ${tmpdir}/${host} 2>&1 &
count=`expr $count + 1`
done < /home/user/servers/non_java7_nodes.list
while [ $count -gt 0 ]; do
wait $pids
count=`expr $count - 1`
done
echo "Output for hosts are in $tmpdir"
exit 0
UPDATE:
Based on observation with set -x, the hostname is being resolved on the remote host itself; it is supposed to be resolved on the initiating host. I think this will work once the hostname is resolved on the initiating host even when the quotes are in place.
As far as I can tell, what you're looking for is something like:
#!/bin/bash
tmpdir=${TMPDIR:-/home/user}/output.$$
mkdir -p "$tmpdir"
host_for_pid=( )
while IFS= read -r host <&3; do
{
ssh -n -o BatchMode=yes "$host" 'hostname -f' && \
rsync -arpn --stats /usr/xyz "$host:/usr/java" && \
ssh -n -o BatchMode=yes "$host" 'ls -ltr /usr/java'
} </dev/null >"${tmpdir}/${host}" 2>&1 & host_for_pid[$!]=$host
done 3< /home/user/servers/non_java7_nodes.list
for pid in "${!host_for_pid[@]}"; do
if wait "$pid"; then
:
else
echo "ERROR: Process for host ${host_for_pid[$pid]} had exit status $?" >&2
fi
done
echo "Output for hosts are in $tmpdir"
Note that the rsync is no longer inside the ssh command, so it's run locally, not remotely.
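If you also want the hostname on every output line rather than only at the top of each file, you can tag the local rsync output yourself (a sketch, not part of the answer above):
rsync -arpn --stats /usr/xyz "$host:/usr/java" 2>&1 | sed "s/^/$host: /"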

How to run for loop inside heredoc while accessing remote machine

Here is my script, in which I use local variables inside a remote machine via a heredoc. But the loop under the heredoc only ever picks up the first variable value. The loop itself runs fine inside the heredoc, but with the same values each time.
#!/bin/bash
prod_web=($(cat /tmp/webip.txt));
new_prod_app_private_ip=($(cat /tmp/ip.txt));
no_n=($(cat /tmp/serial.txt));
ssh -t -o StrictHostKeyChecking=no ubuntu@${prod_web[0]} -p 2345 -v << EOF
set -xv
for (( x = 0; x < '${#no_n[@]}'; x++ ))
do
sudo su
echo '${no_n[x]}'
echo '${new_prod_app_private_ip[x]}'
curl -fIkSs https://'${new_prod_app_private_ip[x]}':9002 | head -n 1
done
EOF
So, my ip.txt file contains values like:
10.0.1.0
10.0.2.0
10.0.3.0
My serial.txt file:
9
10
11
So my loop runs only for the first IP (present in /tmp/ip.txt) on the remote machine, three times. I want to run it for all three IPs. My remote IP is in the file /tmp/webip.txt.
I've been stuck on this for a long time; any help is appreciated. Is there any other solution I can go with?
There are two environments: your local machine and the remote machine. You need to think about how to transfer data/variables/state between these machines.
If you set something on your local machine (i.e. prod_web=($(cat /tmp/webip.txt))) and then just ssh to the remote host (i.e. ssh user@host 'echo "${prod_web[@]}"'), the variable will not be visible/exported to the remote machine. You can:
scp the files {ip,serial}.txt, execute the whole script on the remote machine, then clean up, i.e. remove the {ip,serial}.txt files from the remote machine
pass the files {ip,serial}.txt somehow merged/joined/pasted to the stdin of the ssh and then read stdin on the remote machine
create all the commands to run on your local machine and then pass the pre-prepared commands to the remote machine, like ssh .... "$(for ...; do echo curl ...; done)"
I would go with the second option, as I like passing everything through pipes and don't like cleaning up after myself: removing temporary files in case of error can be a mess.
My script would probably look like this:
#!/bin/bash
set -euo pipefail
read -r host _ <webip.txt
paste serial.txt ip.txt | ssh -t -o StrictHostKeyChecking=no -p 2345 -v ubuntu@"$host" '#!/bin/bash
set -euo pipefail
while read -r no_n ip; do
for ((i = 0; i < no_n; ++i)); do
printf "%s\n" "$no_n"
printf "%s\n" "$ip"
curl -fIkSs https://"$ip":9002 | head -n 1
done
done
'
As the remote script becomes larger and less quoting-friendly, I would save it into a separate remote_script.sh and run it with something like ssh ... 'bash -s' < remote_script.sh.
I don't get what you are trying to do with that sudo su, which 100% does not do what you want.
If the no_n magic number is the number of times to execute that curl and you have xargs and you don't really care about errors, you can just do a magic and confusing oneliner:
#!/bin/bash
set -euo pipefail
read -r host _ <webip.txt
paste serial.txt ip.txt | ssh -t -o StrictHostKeyChecking=no -p 2345 -v ubuntu@"$host" 'xargs -n2 -- sh -c "seq 0 \"\$1\" | xargs -n1 -- sh -c \"curl -fIkSs https://\\\"\\\$1\\\":9002 | head -n 1\" -- \"\$2\"" --'
Preparing all the commands locally may actually be more readable and may save some nasty quoting. But this really depends on how big serial.txt and ip.txt are and how big the commands to be executed on the remote machine are, as you want to minimize the number of bytes transferred between the machines.
Here the commands to run are constructed on the local machine (i.e. "$(...)" is passed to ssh) and executed on the remote machine:
# semi-readable script, not as fast and no xargs
ssh -t -o StrictHostKeyChecking=no -p 2345 -v ubuntu@"$host" "$(paste serial.txt ip.txt | while read -r serial ip; do
seq 0 "$serial" | while read -r _; do
echo "curl -fIkSs \"https://$ip:9002\" | head -n 1"
done
done)"
A heredoc does not execute shell commands, so:
$ cat <<EOF
> echo 1
> EOF
echo 1
but you can use command substitution $( ... ):
$ cat <<EOF
> $(echo 1)
> EOF
1
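And, tying back to the <<'EOF' advice earlier: quoting the delimiter disables that command substitution too, so the body is passed through literally:
$ cat <<'EOF'
> $(echo 1)
> EOF
$(echo 1)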

The bash script only reboots the router, without echoing whether it is up or down

#!/bin/bash
ip route add 10.105.8.100 via 192.168.1.100
date
cat /home/xxx/Documents/list.txt | while read output
do
ping="ping -c 3 -w 3 -q 'output'"
if $ping | grep -E "min/avg/max/mdev" > /dev/null; then
echo 'connection is ok'
else
echo "router $output is down"
then
cat /home/xxx/Documents/roots.txt | while read outputs
do
cd /home/xxx/Documents/routers
php rebootRouter.php "outputs" admin admin
done
fi
done
The other documents are:
list.txt
10.105.8.100
roots.txt
192.168.1.100
When I run the script, the result is a reboot of the router I am trying to ping. It doesn't ping.
Is there a problem with the bash script?
If your files only contain a single line, there's no need for the while-loop, just use read:
read -r router_addr < /home/xxx/Documents/list.txt
# the grep is unnecessary, the return-code of the ping will be non-zero if the host is down
if ping -c 3 -w 3 -q "$router_addr" &> /dev/null; then
echo "connection to $router_addr is ok"
else
echo "router $router_addr is down"
read -r outputs < /home/xxx/Documents/roots.txt
cd /home/xxx/Documents/routers
php rebootRouter.php "$outputs" admin admin
fi
If your files contain multiple lines, you should redirect the file from the right-side of the while-loop:
while read -r output; do
...
done < /foo/bar/baz
Also make sure your files contain a newline at the end, or use the following pattern in your while-loops:
while read -r output || [[ -n $output ]]; do
...
done < /foo/bar/baz
where || [[ -n $output ]] is true even if the file doesn't end in a newline.
Note that the way you're checking your router's status is somewhat brittle: even a single missed ping will force a reboot. For example, if the checking computer returns from a sleep state just as the script runs, the ping fails because the network is still down, but the reboot script succeeds because the network comes up just at that time.
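One way to make the check less brittle (a sketch of the idea, not from the answer above): retry the ping a few times and only treat the router as down after several consecutive failures:
is_up=0
for attempt in 1 2 3; do
    if ping -c 3 -w 3 -q "$router_addr" &> /dev/null; then
        is_up=1
        break
    fi
    sleep 10    # give the network a moment before retrying
done
if (( ! is_up )); then
    echo "router $router_addr is down"
    # reboot logic goes here
fi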
