What am I doing wrong? I am trying to execute chmod o-w for multiple paths after ssh-ing to one server.
The file 1.txt contains two columns: the first holds the same server hostname, SERVER_hostname, and the second holds different paths. I want the script to ssh to that specific server, become root (either toor or sudo eksh), and then run the command 'chmod o-w' on the paths from the second column.
#!/bin/bash
read -r -a server < 1.txt
echo "${server[0]}"
echo "ssh -oBatchMode=yes -q "$(echo "${server[0]}")" '"$(cat 1.txt | awk '{print " sudo eksh ; \ chmod o-w " $NF";"}')"'" | sh
./254.sh
SERVER_hostname
awk: warning: escape sequence `\ ' treated as plain ` '
chmod: changing permissions of ‘/etc/nginx-controller/agent.configurator.conf.default’: Operation not permitted
chmod: changing permissions of ‘/etc/nginx-controller/agent.controller.conf.default’: Operation not permitted
chmod: changing permissions of ‘/etc/nginx-controller/copyright’: Operation not permitted
chmod: missing operand after ‘o-w’
Try 'chmod --help' for more information.
1.txt
SERVER_hostname /etc/nginx-controller/agent.configurator.conf.default
SERVER_hostname /etc/nginx-controller/agent.controller.conf.default
SERVER_hostname /etc/nginx-controller/copyright
There are various ways of achieving this, but all of them have pitfalls due to the use of ssh. When using ssh in loops or complex constructs, you always have to be aware that ssh will slurp your /dev/stdin.
The quickest way to implement this is to make multiple calls to ssh in a while loop that reads the file (see BashFAQ#001). However, we force ssh to use /dev/null as its input stream; this way we avoid breaking the while loop:
while read -r host file; do
[ "$host" ] || continue
[ "$file" ] || continue
</dev/null ssh -oBatchMode=yes -q "${host}" -- sudo eksh -c "chmod o-w -- ${file}"
done < file.txt
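The slurping behaviour is easy to reproduce locally without any server; in this sketch cat stands in for ssh, since both drain whatever is left on stdin:

```shell
# Local simulation of the pitfall: `cat` plays the role of ssh.
printf 'host1 /tmp/a\nhost2 /tmp/b\nhost3 /tmp/c\n' > /tmp/hosts_demo.txt

unguarded=0
while read -r host file; do
  unguarded=$((unguarded + 1))
  cat >/dev/null              # slurps the remaining lines, like ssh would
done < /tmp/hosts_demo.txt
echo "unguarded iterations: $unguarded"   # only 1

guarded=0
while read -r host file; do
  guarded=$((guarded + 1))
  </dev/null cat >/dev/null   # stdin redirected, loop input stays intact
done < /tmp/hosts_demo.txt
echo "guarded iterations: $guarded"       # all 3
```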
The above method performs multiple calls to ssh and might not be the most efficient way of doing things. You could instead build up the command using an array to hold the command arguments (see BashFAQ#050). In the case of the OP, these would be the different filenames:
file_list=()
while read -r h f; do [ "$f" ] && file_list+=( "${f}" ); [ "$h" ] && host="$h"; done < file.txt
ssh -oBatchMode=yes -q "${host}" -- sudo eksh -c "chmod o-w -- ${file_list[@]}"
But again, there is an issue here if your argument list becomes too long. The trick now is to use xargs directly over ssh. You could do something like this:
file_list=()
while read -r h f; do [ "$f" ] && file_list+=( "${f}" ); [ "$h" ] && host="$h"; done < file.txt
printf "%s\n" "${file_list[@]}" | ssh "$host" "cat - | sudo eksh -c 'xargs chmod o-w --'"
Note: for some reason the command ssh "$host" sudo eksh -c 'xargs chmod o-w --' does not work. That is why we introduce the cat -.
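A hedged guess at why the plain form misbehaves: ssh joins its argument vector with spaces into a single string for the remote shell, so one level of quoting disappears. A tiny local stand-in (fake_ssh is invented here purely for illustration) reproduces the effect:

```shell
# ssh hands the remote shell ONE flattened string; "$*" mimics that join.
fake_ssh() { sh -c "$*"; }

# Passed as separate words, the inner quotes are already gone locally,
# so -c receives only the word "echo" and prints nothing:
lost=$(fake_ssh sh -c 'echo hello')
echo "lost: '$lost'"

# Quoted as a single argument, the command survives the round trip:
kept=$(fake_ssh "sh -c 'echo hello'")
echo "kept: '$kept'"    # kept: 'hello'
```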
I have the below block of shell script code in Jenkinsfile
stage("Compose Source Structure")
{
sh '''
set -x
rm -vf config
wget -nv --no-check-certificate https://test-company/k8sconfigs/test-config
export KUBECONFIG=$(pwd)/test-config
kubectl config view
ns_exists=$(kubectl get namespaces | grep ${consider_namespace})
echo "Validating k8s namespace"
if [ -z "$ns_exists" ]
then
echo "No namespace ${consider_namespace} exists in the cluster ${source_cluster}"
exit 1
else
echo "scanning namespace \'${namespace}\'"
mkdir -p "${HOME}/cluster-backup/${namespace}"
while read -r resource
do
echo "scanning resource \'${resource}\'"
mkdir -p "${HOME}/sync-cluster/${namespace}/${resource}"
while read -r item
do
echo "exporting item \'${item}\'"
kubectl get "$resource" -n "$namespace" "$item" -o yaml > "${HOME}/sync-cluster/${namespace}/${resource}/${BUILD_NUMBER}-${source_cluster}-${consider_namespace}-$item.yaml"
done < <(kubectl get "$resource" -n "$namespace" 2>&1 | tail -n +2 | awk \'{print $1}\')
done < <(kubectl api-resources --namespaced=true 2>/dev/null | tail -n +2 | awk \'{print $1}\')
fi
'''
Unfortunately, I am getting error like below:
++ kubectl get namespaces
++ grep test
+ ns_exists='test Active 2d20h'
+ echo 'Validating k8s namespace'
Validating k8s namespace
/home/jenkins/workspace/k8s-sync-from-cluster@tmp/durable-852103cd/script.sh: line 24: syntax error near unexpected token `<'
I did try to escape "<" with "\", so I did it like the below
\<
But still having no success, any idea what I am doing wrong here?
From the docs for the sh step (emphasis mine):
Runs a Bourne shell script, typically on a Unix node. Multiple lines are accepted.
An interpreter selector may be used, for example: #!/usr/bin/perl
Otherwise the system default shell will be run, using the -xe flags (you can specify set +e and/or set +x to disable those).
The system default shell on your Jenkins server may be sh, not bash. POSIX sh will not recognize <(command) process substitution.
To specifically use the bash shell, you must include a #!/usr/bin/env bash shebang immediately after your triple quote. Putting a shebang on the next line will have no effect.
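You can reproduce the difference locally (assuming /bin/sh on the machine is a strict POSIX shell such as dash; if sh is a link to bash, the second command will succeed instead of failing):

```shell
# bash understands <(...) process substitution:
bash_out=$(bash -c 'while read -r line; do echo "got: $line"; done < <(echo hello)')
echo "$bash_out"    # got: hello

# A strict POSIX sh (dash, BusyBox ash) rejects the same construct with a
# syntax error near the unexpected "<" -- the error seen in the Jenkins log.
sh -c 'while read -r line; do echo "got: $line"; done < <(echo hello)' 2>&1 || true
```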
I also took the liberty of fixing shellcheck warnings for your shell code, and removing \' escapes that are not necessary.
Try this:
stage("Compose Source Structure")
{
sh '''#!/usr/bin/env bash
set -x
rm -vf config
wget -nv --no-check-certificate https://test-company/k8sconfigs/test-config
KUBECONFIG="$(pwd)/test-config"
export KUBECONFIG
kubectl config view
ns_exists="$(kubectl get namespaces | grep "${consider_namespace}")"
echo "Validating k8s namespace"
if [ -z "$ns_exists" ]
then
echo "No namespace ${consider_namespace} exists in the cluster ${source_cluster}"
exit 1
else
echo "scanning namespace '${namespace}'"
mkdir -p "${HOME}/cluster-backup/${namespace}"
while read -r resource
do
echo "scanning resource '${resource}'"
mkdir -p "${HOME}/sync-cluster/${namespace}/${resource}"
while read -r item
do
echo "exporting item '${item}'"
kubectl get "$resource" -n "$namespace" "$item" -o yaml > "${HOME}/sync-cluster/${namespace}/${resource}/${BUILD_NUMBER}-${source_cluster}-${consider_namespace}-$item.yaml"
done < <(kubectl get "$resource" -n "$namespace" 2>&1 | tail -n +2 | awk '{print $1}')
done < <(kubectl api-resources --namespaced=true 2>/dev/null | tail -n +2 | awk '{print $1}')
fi
'''
}
I have a bash script that executes commands through ssh.
FILENAMES=(
"export_production_20200604.tgz"
"export_production_log_20200604.tgz"
"export_production_session_20200604.tgz"
"export_production_view_20200604.tgz"
)
sshpass -p $PASSWORD ssh -T $LOGIN@$IP '/bin/bash' <<EOF
for f in "${FILENAMES[@]}"; do
echo Untar "$f"
done
EOF
The thing is when I execute the script, $f is empty.
I've looked at multiple solutions online to perform multiple command executions, but none works :
link 1
link 2
...
Could you help me figure it out ?
Note :
The execution of :
for f in "${FILENAMES[@]}"; do
echo Untar "$f"
done
outside the heredoc works.
On local :
bash 4.4.20(1)-release
Remote :
bash 4.2.46(2)-release
EDIT : Tricks
Having a tight timeline and no other choice, I implemented the solution provided by @hads0m; may it help fellow developers facing the same issue:
# $1 the command
function executeRemoteCommand() {
sshpass -p $DB_PASSWORD ssh $DB_LOGIN@$DB_SERVER_IP $1
}
for i in "${!FILENAMES[@]}"; do
f=${FILENAMES[$i]}
DB_NAME=${DB_NAMES[$i]}
# Untar the file
executeRemoteCommand '/usr/bin/tar xzvf '$MONGODB_DATA_PATH'/'$TMP_DIRECTORY'/'$f' --strip-components=1'
# Delete the tar
executeRemoteCommand 'rm -f '$MONGODB_DATA_PATH'/'$TMP_DIRECTORY'/'$f''
# Restore the database
executeRemoteCommand 'mongorestore --host 127.0.0.1:'$DB_PORT' --username "'$MONGODB_USER'" --password "'$MONGODB_PASSWORD'" --authenticationDatabase admin --gzip "'$DB_NAME'" --db "'$DB_NAME'"'
done
You need to escape the $ sign to avoid it being expanded locally, and pass the array to the remote shell.
This may be what you wanted :
#!/usr/bin/env bash
FILENAMES=(
"export_production_20200604.tgz"
"export_production_log_20200604.tgz"
"export_production_session_20200604.tgz"
"export_production_view_20200604.tgz"
)
sshpass -p $PASSWORD ssh -T $LOGIN@$IP '/bin/bash' <<EOF
$(declare -p FILENAMES)
for f in "\${FILENAMES[@]}"; do
echo Untar "\$f"
done
EOF
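What declare -p ships across can be inspected without any ssh; here a fresh local bash -s plays the role of the remote /bin/bash:

```shell
# Build the array and serialize it with declare -p (bash-only, so done in bash -c).
serialized=$(bash -c 'FILENAMES=("a.tgz" "b.tgz"); declare -p FILENAMES')
echo "$serialized"    # a declare statement that recreates the array

# Feed the serialized array plus the loop to a fresh bash, just as the
# heredoc feeds the remote /bin/bash; \$ keeps expansion on the "remote" side.
remote_out=$(bash -s <<EOF
$serialized
for f in "\${FILENAMES[@]}"; do echo "Untar \$f"; done
EOF
)
echo "$remote_out"
```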
Try running it like this:
for f in "${FILENAMES[#]}"; do
sshpass -p $PASSWORD ssh -T $LOGIN@$IP echo Untar "$f"
done
Also, don't forget to add #!/bin/bash into the first line of your script.
Here is my script, in which I use local variables inside a remote machine via a heredoc. But the loop under the heredoc takes only the first variable value. The loop itself runs fine inside the heredoc, but always with the same values.
#!/bin/bash
prod_web=($(cat /tmp/webip.txt));
new_prod_app_private_ip=($(cat /tmp/ip.txt));
no_n=($(cat /tmp/serial.txt));
ssh -t -o StrictHostKeyChecking=no ubuntu@${prod_web[0]} -p 2345 -v << EOF
set -xv
for (( x = 0; x < '${#no_n[@]}'; x++ ))
do
sudo su
echo '${no_n[x]}'
echo '${new_prod_app_private_ip[x]}'
curl -fIkSs https://'${new_prod_app_private_ip[x]}':9002 | head -n 1
done
EOF
So, my ip.txt file contains values like:
10.0.1.0
10.0.2.0
10.0.3.0
My serial.txt file:
9
10
11
So my loop runs only for the first IP (from /tmp/ip.txt) on the remote machine, three times. I want it to run for all three IPs. The remote IP is in the file /tmp/webip.txt.
Got stuck for a long time, any help is appreciated. Is there any other solution that I can go with?
There are two environments: your local machine and the remote machine. You need to think about how to transfer data/variables/state/objects/handles between these machines.
If you set something on your local machine (ie. prod_web=($(cat /tmp/webip.txt))) and then just ssh to the remote host (ie. ssh user@host 'echo "${prod_web[@]}"'), the variable will not be visible/exported to the remote machine. You can:
scp the files {ip,serial}.txt and execute the whole script on the remote machine, then cleanup , ie. remove the {ip,serial}.txt files from the remote machine
pass the files {ip,serial}.txt somehow merged/joined/pasted to the stdin of ssh and then read stdin on the remote machine
create all the commands to run on your local machine and then pass pre-prepared commands to remote machine, like ssh .... "$(for ...; do; echo curl ...; done)"
I would go with the second option, as I like passing everything through pipes and don't like cleaning up after myself; removing temporary files in case of error can be a mess.
My script would probably look like this:
#!/bin/bash
set -euo pipefail
read -r host _ <webip.txt
paste serial.txt ip.txt | ssh -t -o StrictHostKeyChecking=no -p 2345 -v ubuntu@"$host" '#!/bin/bash
set -euo pipefail
while read -r no_n ip; do
for ((i = 0; i < no_n; ++i)); do
printf "%s\n" "$no_n"
printf "%s\n" "$ip"
curl -fIkSs https://"$ip":9002 | head -n 1
done
done
'
As the remote script grows larger and less quoting-friendly, I would save it into a separate remote_scripts.sh and execute ssh ... -m remote_scripts.sh.
I don't get what you are trying to do with that sudo su, which 100% does not do what you want.
If the no_n magic number is the number of times to execute that curl and you have xargs and you don't really care about errors, you can just do a magic and confusing oneliner:
#!/bin/bash
set -euo pipefail
read -r host _ <webip.txt
paste serial.txt ip.txt | ssh -t -o StrictHostKeyChecking=no -p 2345 -v ubuntu@"$host" 'xargs -n2 -- sh -c "seq 0 \"\$1\" | xargs -n1 -- sh -c \"curl -fIkSs https://\\\"\\\$1\\\":9002 | head -n 1\" -- \"\$2\"" --'
Preparing all the commands to run may actually be more readable and may save some nasty quoting. But this really depends on how big serial.txt and ip.txt are, and how big the commands to be executed on the remote machine are, as you want to minimize the number of bytes transferred between the machines.
Here the commands to run are constructed on local machine (ie. "$(...)" is passed to ssh) and executed on remote machine:
# semi-readable script, not as fast and no xargs
ssh -t -o StrictHostKeyChecking=no -p 2345 -v ubuntu@"$host" "$(paste serial.txt ip.txt | while read -r serial ip; do
seq 0 "$serial" | while read -r _; do
echo "curl -fIkSs \"https://$ip:9002\" | head -n 1"
done
done)"
A heredoc does not expand shell commands, so:
$ cat <<EOF
> echo 1
> EOF
echo 1
but you can use command substitution $( ... ):
$ cat <<EOF
> $(echo 1)
> EOF
1
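Conversely, quoting the heredoc delimiter turns off all expansion, which is useful when the script text must reach the remote shell verbatim:

```shell
# Unquoted delimiter: $(...) runs locally before the text is passed on.
a=$(cat <<EOF
$(echo 1)
EOF
)
echo "$a"    # 1

# Quoted delimiter: everything is passed through literally.
b=$(cat <<'EOF'
$(echo 1)
EOF
)
echo "$b"    # $(echo 1)
```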
There are a number of files that I have to check if they exist in a directory. They follow a standard naming convention aside from the file extension so I want to use a wild card e.g:
YYYYMM=201403
FILE_LIST=`cat config.txt`
for file in $FILE_LIST
do
FILE=`echo $file | cut -f1 -d"~"`
SEARCH_NAME=$FILE$YYYYMM
ANSWER=`ssh -q userID@servername 'ls /home/to/some/directory/$SEARCH_NAME* | wc -l'`
returnStatus=$?
if [ $ANSWER=1 ]; then
echo "FILE FOUND"
else
echo "FILE NOT FOUND"
fi
done
The wildcard is not working, any ideas for how to make it visible to the shell?
I had much the same question just now. In despair, I just gave up and used pipes with grep and xargs to get wildcard-like functionality.
Was (none of these worked - and tried others):
ssh -t r@host "rm /path/to/folder/alpha*"
ssh -t r@host "rm \"/path/to/folder/alpha*\" "
ssh -t r@host "rm \"/path/to/folder/alpha\*\" "
Is:
ssh -t r@host "cd /path/to/folder/ && ls | grep alpha | xargs rm"
Note: I did much of my troubleshooting with ls instead of rm, just in case I surprised myself.
It's way better to use STDIN:
echo "rm /path/to/folder/alpha*" | ssh r@host sh
With this way you can still use shell variables to construct the command. e.g.:
echo "rm -r $oldbackup/*" | ssh r@host sh
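The quoting rule behind this is easy to verify locally; a child bash -c plays the role of the remote shell here, and oldbackup is just a placeholder path:

```shell
oldbackup=/backups/2014

# Double quotes: the local shell expands $oldbackup before ssh ever sees it.
sent_double=$(bash -c "echo rm -r $oldbackup")
echo "$sent_double"    # rm -r /backups/2014

# Single quotes: $oldbackup travels literally and is unset on the "remote" side.
sent_single=$(bash -c 'echo rm -r $oldbackup')
echo "$sent_single"    # rm -r
```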