In part of my script, I need to ssh to a host and delete files named by the elements of an array. My current code makes a separate ssh connection for each element of the array, which takes time.
I want to ssh to the host once and delete all elements of the array in a single connection.
How can I improve the code below from a performance point of view?
for x in "${Array[@]}"
do
echo "Value of array element : $x"
ssh user@abc.host.com "rm -rf $x"
done
Why the loop at all? Using * as subscript gives all elements of an array.
ssh user@example.com "rm -rf ${Array[*]}"
Note that either way (with or without the loop) will break if file names contain whitespace.
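If you need this to be whitespace-safe, one option (a sketch, not tested against your hosts) is to let printf '%q' quote each element locally, so the remote shell sees every file name as a single argument:
ssh user@example.com "rm -rf $(printf '%q ' "${Array[@]}")"
The command substitution runs on the local side and produces a safely quoted argument list, which the remote shell then splits back into the original file names.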
You can write the commands into a file on your local machine, then upload the file, and finally run the script on the remote host. Here is how it goes:
: > rmscript.sh
for x in "${Array[@]}"
do
echo "Value of array element : $x"
echo "rm -rf $x" >> rmscript.sh
done
#upload
scp rmscript.sh user@abc.host.com:~
#run script
ssh user@abc.host.com "sh ~/rmscript.sh"
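If you want to skip the scp step, you can also feed the generated script to the remote shell on standard input (same idea, just without leaving a copy on the server):
ssh user@abc.host.com 'sh -s' < rmscript.sh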
Related
I'm connected to a remote machine via SSH as part of a bash script. After navigating to the directory, I run ls which confirms matching files are found. However, I then try to loop through the files and run other commands on them, and the variable is now empty.
Code:
echo "DOING STUFF!"
cd /mnt/slowdata/ls8_processing
ls
for f in *.tar.gz
do
echo $f
done
Output:
DOING STUFF!
LC080330242019031901T1-SC20190606111327.tar.gz
LC080330242019042001T1-SC20190606111203.tar.gz
LC080330242019052201T1-SC20190606111130.tar.gz
LC080330252019030301T2-SC20190606111021.tar.gz
LC080330252019031901T1-SC20190606120750.tar.gz
LC080340232019031001T1-SC20190606111056.tar.gz
LC080340232019041101T1-SC20190606111215.tar.gz
LC080340242019031001T1-SC20190606111201.tar.gz
LC080340242019041101T1-SC20190606111250.tar.gz
LC080340242019052901T1-SC20190606111331.tar.gz
As can be seen from the output, $f is picking something up, since the correct number of blank lines appears. However, I wish to untar each file, which I cannot do.
TIA.
You have to escape the $ so it reaches the remote host as a literal '$'; otherwise the variable is expanded locally, before the command is sent to the remote host.
Keep in mind that the for loop will run regardless of whether the cd was successful.
ssh server1 << EOF
cd /mnt/slowdata/ls8_processing
ls
for f in *.tar.gz
do
echo \$f
done
EOF
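An alternative (a sketch; adjust the path and the tar options to your files) is to quote the here-document delimiter, which disables local expansion entirely so nothing needs escaping, and to guard the cd so the loop only runs if it succeeded:
ssh server1 <<'EOF'
cd /mnt/slowdata/ls8_processing || exit 1
for f in *.tar.gz
do
echo "$f"
tar -xzf "$f"
done
EOF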
My example shows the difference:
script.sh
#!/bin/bash
f=123
ssh -i .ssh/keyauth.pem root@server1 << EOF
for f in ./*.log
do
echo "\$f"
echo "$f"
done
EOF
Output
[edvin#server2 ~]$ ./script.sh
./sepap-install.log
123
./sepfl-upgrade.log
123
./sep-install.log
123
./sepjlu-install.log
123
./sepui-install.log
123
I'm using this script to copy virtual machines in my ESXi 6.5. The first argument of the script is the name of the directory to copy.
I would like to have a second argument giving the number of VMs I want to copy. For now, I need to modify the for loop every time I want to copy a different number of VMs. The script below creates 20 VMs by copying the directory of the VM given as the first argument. I run it like this: ./copy.sh CentOS1, but I would like something like this: ./copy.sh CentOS1 x, where x is the end condition of my for loop.
#!/bin/sh
for i in $(seq 1 1 20)
do
mkdir ./$1_$i/
cp $1/* $1_$i/
echo "Copying machine '$1_$i' ... DONE!"
done
NOTE: Please do not suggest other for-loop solutions, like those given, for instance, here: https://www.cyberciti.biz/faq/bash-for-loop/, because I checked them and they didn't work.
Thanks.
Use a C-style for loop if you are using bash.
for ((i=1; i<=$2; i++))
do
mkdir "./$1_$i/"
cp "$1"/* "$1_$i/"
echo "Copying machine '$1_$i' ... DONE!"
done
If you need POSIX compatibility (as implied by your shebang), then you probably can't rely on seq being available either; use a while loop.
i=1
while [ "$i" -le "$2" ]; do
mkdir ./"$1_$i"
cp "$1"/* "$1_$i"
i=$((i+1))
done
In spite of your protestations to the contrary, one of the solutions in your link would work fine:
for ((i=1; i<=$2; i++)); do
# body of loop goes here
done
This loops from 1 to the number given in the second argument.
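Putting it together, a minimal sketch of copy.sh with the count as the second argument and a basic check that it really is a number (the usage message is just an illustration):
#!/bin/bash
case $2 in
''|*[!0-9]*) echo "Usage: $0 <vm_dir> <count>" >&2; exit 1 ;;
esac
for ((i=1; i<=$2; i++)); do
mkdir "./$1_$i/"
cp "$1"/* "$1_$i/"
echo "Copying machine '$1_$i' ... DONE!"
done
Called as ./copy.sh CentOS1 5, this creates CentOS1_1 through CentOS1_5.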
I have the following bash script, which I launch from the terminal.
dataset_dir='/home/super/datasets/Carpets_identification/data'
dest_dir='/home/super/datasets/Carpets_identification/augmented-data'
# if dest_dir does not exist -> create it
if [ ! -d ${dest_dir} ]; then
mkdir ${dest_dir}
fi
# for all folder of the dataset
for folder in ${dataset_dir}/*; do
curr_folder="${folder##*/}"
echo "Processing $curr_folder category"
# get all files
for item in ${folder}/*; do
# if the class dir in dest_dir does not exist -> create it
if [ ! -d ${dest_dir}/${curr_folder} ]; then
mkdir ${dest_dir}/${curr_folder}
fi
# for each file
if [ -f ${item} ]; then
# echo ${item}
filename=$(basename "$item")
extension="${filename##*.}"
filename=`readlink -e ${item}`
# get a certain number of patches
for i in {1..100}
do
python cropper.py ${filename} ${i} ${dest_dir}
done
fi
done
done
It needs at least an hour to process all the files. What happens if I change the '100' to '1000' in the innermost for loop and launch another instance of the same script?
Will the first process count to 1000, or will it continue to count to 100?
Bash does not lock the script file, so you can edit it while it is running, but the already running process will keep counting to its original value, 100.
You do have to take care with the results, though: both instances write to the same output directory, so expect side effects.
"When you make changes to your script, you make the changes on the disk(hard disk- the permanent storage); when you execute the script, the script is loaded to your memory(RAM).
(see https://askubuntu.com/questions/484111/can-i-modify-a-bash-script-sh-file-while-it-is-running )
BUT "You'll notice that the file is being read in at 8KB increments, so Bash and other shells will likely not load a file in its entirety, rather they read them in in blocks."
(see https://unix.stackexchange.com/questions/121013/how-does-linux-deal-with-shell-scripts )
So, in your case, the whole script (well under the 8KB block size) is read into memory by the interpreter and then executed. This means that if you change the value and launch another instance, the first instance will still use the "old" value.
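If you want to check the behaviour on your own system, a quick experiment (a sketch; selfmod_test.sh is just a throwaway name) is a script that sleeps long enough for you to edit it from another terminal, then prints whichever version of its final line the running instance actually executes:
cat > selfmod_test.sh <<'EOF'
#!/bin/bash
sleep 30
echo "counting to 100"
EOF
bash selfmod_test.sh   # while it sleeps, edit the echo line from another terminal and observe the output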
I have a script which will run on remote servers:
df_command.sh:
#!/bin/bash
if [[ $1 == "" ]]; then
echo -e "No Argument passed:- Showing default disk usage\n"
df -k > /tmp/Global_df_tmp 2>&1
cat /tmp/Global_df_tmp
else
df -k "$1" > /tmp/Global_df_tmp 2>&1
cat /tmp/Global_df_tmp
fi
This is how I run the script:
$ ssh -oConnectTimeout=5 -oBatchMode=yes -l group servername 'bash -s -- /some/directory' < ./df_command.sh
This works fine and gives me the correct output in every scenario: if the user passes a valid directory, it shows the disk usage of that directory, and if they pass an invalid directory, the script returns a proper error message.
The problem arises when more than one user starts using the script at the same time against the same server, passing two different directories, e.g.:
User A:
$ ssh -oConnectTimeout=5 -oBatchMode=yes -l group servername 'bash -s -- /user_A/directory' < ./df_command.sh
User B:
$ ssh -oConnectTimeout=5 -oBatchMode=yes -l group servername 'bash -s -- /user_B/directory' < ./df_command.sh
Now, since the temp file used by the script is the same (/tmp/Global_df_tmp), whoever starts the script first gets the correct output, while the second user gets the same output as the first user.
I know one solution would be to generate a random number and use that instead of the hardcoded temp file, but if 100 users use the script, I'll end up with a huge number of temporary files on the remote servers.
Any other ideas?
Thank you!
The obvious solution is to not use a fixed name for temporary files. One common way to do that is to use the process identifier in a suffix, like /tmp/Global_df_tmp.$$.
Also, if you're using bash, you could use $RANDOM to get a random number.
Or just use the mktemp command to create a temporary file with a randomized name.
You could use the standard mktemp command to generate temporary filenames:
# Create a tempfile (in a BSD- and Linux-friendly way)
my_mktemp () {
mktemp || mktemp -t my_prefix
} 2> /dev/null
tempfile=$(my_mktemp) || echo "Cannot create temp file" >&2
declare -r tempfile
df -k > "$tempfile" 2>&1
Just a little explanation: my_mktemp is a bash function (you don't need it; just use mktemp directly if you are on Linux).
The tempfile=$(my_mktemp) line runs the function and captures its output in the variable, and declare -r then marks tempfile as read-only. (Doing the assignment inside declare itself would hide mktemp's exit status, which is why the two steps are separate.)
Afterwards you can use "$tempfile" to refer to the temp file name.
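Applied to the original script, a minimal sketch of df_command.sh using mktemp plus a cleanup trap (the trap is an addition; drop it if callers need the file to survive after the script exits):
#!/bin/bash
tmpfile=$(mktemp) || { echo "Cannot create temp file" >&2; exit 1; }
trap 'rm -f "$tmpfile"' EXIT
if [[ -z $1 ]]; then
echo -e "No Argument passed:- Showing default disk usage\n"
df -k > "$tmpfile" 2>&1
else
df -k "$1" > "$tmpfile" 2>&1
fi
cat "$tmpfile"
Each invocation now gets its own file, so concurrent users no longer overwrite each other's output, and the trap keeps /tmp from filling up with leftovers.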
The script uses scp to upload a file. That works.
Now I want to log in with ssh, cd to the directory that holds the uploaded file, and run md5sum on the file. The script keeps telling me that md5sum cannot find $LOCAL_FILE. I tried escaping it (\$LOCAL_FILE) and quoting the EOI (<<'EOI'). I partially understand this: no escaping means everything is expanded locally, and the unescaped `pwd` gives the local path. But why can I do "echo $MD5SUM > $LOCAL_FILE.md5sum" and it creates the file on the remote machine, yet "echo `md5sum $LOCAL_FILE` > md5sum2" does not work? And if it is the local md5sum that runs, how do I tell it to work on the remote file?
scp "files/$LOCAL_FILE" "$i@$i.567.net":"$REMOTE_FILE_PATH"
ssh -T "$i@$i.567.net" <<EOI
touch I_just_logged_in
cd $REMOTE_DIRECTORY_PATH
echo `date` > I_just_changed_directories
echo `whoami` >> I_just_changed_directories
echo `pwd` >> I_just_changed_directories
echo "$MD5SUM" >> I_just_changed_directories
echo $MD5SUM > $LOCAL_FILE.md5sum
echo `md5sum $LOCAL_FILE` > md5sum2
EOI
You have to think about when $LOCAL_FILE is being interpreted. In this case, because the here-document delimiter is unquoted, it is interpreted on the sending machine. You instead need to quote things so that $LOCAL_FILE appears literally in the command line that reaches the receiving machine. You also need to get your "here document" right: what you show simply feeds everything from touch onward to ssh as its standard input.
What you need will look something like
ssh -T address <<'EOF'
cd $REMOTE_DIRECTORY_PATH
...
EOF
The quoting rules in bash are somewhat arcane. You might want to read up on them in Mendel Cooper's Advanced Bash-Scripting Guide.
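For the concrete task of checksumming the uploaded file on the remote side, a minimal sketch (assuming $REMOTE_FILE_PATH puts the file inside $REMOTE_DIRECTORY_PATH) keeps the delimiter unquoted so the file names expand locally, but drops the backticks, which are what made md5sum run on the local machine:
ssh -T "$i@$i.567.net" <<EOI
cd "$REMOTE_DIRECTORY_PATH" || exit 1
md5sum "$LOCAL_FILE" > "$LOCAL_FILE.md5sum"
EOI
The variable values are substituted locally before the text is sent, but the md5sum command itself runs on the remote host, against the uploaded copy.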