SSH - Run multiple commands at once [duplicate] - bash

Say I have a file /templates/apple and I want to
put it in two different places and then
remove the original.
So, /templates/apple will be copied to /templates/used AND /templates/inuse
and then after that I’d like to remove the original.
Is cp the best way to do this, followed by rm? Or is there a better way?
I want to do it all in one line so I’m thinking it would look something like:
cp /templates/apple /templates/used | cp /templates/apple /templates/inuse | rm /templates/apple
Is this the correct syntax?

You are using | (pipe) to direct the output of a command into another command. What you are looking for is the && operator, which executes the next command only if the previous one succeeded:
cp /templates/apple /templates/used && cp /templates/apple /templates/inuse && rm /templates/apple
Or
cp /templates/apple /templates/used && mv /templates/apple /templates/inuse
To summarize (non-exhaustively) bash's command operators/separators (a short demonstration follows the list):
| pipes (pipelines) the standard output (stdout) of one command into the standard input of another one. Note that stderr still goes to its default destination, whatever that happens to be.
|& pipes both stdout and stderr of one command into the standard input of another one. Very useful, available in bash version 4 and above.
&& executes the right-hand command of && only if the previous one succeeded.
|| executes the right-hand command of || only if the previous one failed.
; executes the right-hand command of ; always, regardless of whether the previous command succeeded or failed. Unless set -e was previously invoked, which causes bash to exit on an error.
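A minimal demonstration of each (throwaway commands, safe to paste into any bash shell):
$ true && echo "runs because the left side succeeded"
runs because the left side succeeded
$ false || echo "runs because the left side failed"
runs because the left side failed
$ false ; echo "runs regardless"
runs regardless
$ printf 'b\na\n' | sort
a
b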

Why not cp to location 1, then mv to location 2? This takes care of "removing" the original.
And no, it's not the correct syntax. | is used to "pipe" output from one program and turn it into input for the next program. What you want is ;, which separates multiple commands.
cp file1 file2 ; cp file1 file3 ; rm file1
If you require that the individual commands MUST succeed before the next can be started, then you'd use && instead:
cp file1 file2 && cp file1 file3 && rm file1
That way, if either of the cp commands fails, the rm will not run.

Note that cp A B; rm A is essentially mv A B. It'll be faster too, as you don't have to actually copy the bytes (assuming the destination is on the same filesystem), just rename the file. So you want cp A B; mv A C.

Another option is typing Ctrl+V Ctrl+J at the end of each command.
Example (replace # with Ctrl+V Ctrl+J):
$ echo 1#
echo 2#
echo 3
Output:
1
2
3
This will execute the commands regardless of whether previous ones failed.
Same as: echo 1; echo 2; echo 3
If you want to stop execution on failed commands, add && at the end of each line except the last one.
Example (replace # with Ctrl+V Ctrl+J):
$ echo 1 &&#
failed-command &&#
echo 2
Output:
1
failed-command: command not found
In zsh you can also use Alt+Enter or Esc+Enter instead of Ctrl+V Ctrl+J.

Using pipes seems weird to me. Anyway, you should use the logical AND Bash operator:
$ cp /templates/apple /templates/used && cp /templates/apple /templates/inuse && rm /templates/apple
If either cp command fails, the rm will not be executed.
Or, you can make a more elaborate command line using a for loop and cmp, as sketched below.
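For illustration, such a loop might look like this (a sketch only; the loop structure and the cmp verification step are my assumptions, not spelled out above):
for dest in /templates/used /templates/inuse; do
    cp /templates/apple "$dest" || exit 1
    cmp -s /templates/apple "$dest" || exit 1  # verify the copy matches the original
done
rm /templates/apple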

Related

Why is the 'ls' command printing the directory content multiple times

I have the following shell script in which I want to check a specific directory's content on the remote machines and print it to a file.
file=serverList.csv
n=0
while [ $n -le 2 ]
do
    while IFS=: read -r f1 f2
    do
        # echo line is stored in $line
        if echo $f1 | grep -q "xx.xx.xxx";
        then
            ssh user@$f1 ls path/*war_* > path/$f1.txt < /dev/null; ls path/*zip_* >> path/$f1.txt < /dev/null;
            ssh user@$f1 ls -d /apps/jetty*_* >> path/$f1.txt < /dev/null;
        fi
    done < "$file"
    sleep 15
    n=$(( n+1 ))
done
I am using this script in a cron job that runs every 2 minutes, as follows:
*/2 * * * * /path/myscript.sh
but somehow I am ending up with the following output file:
/apps/jetty/webapps_wars/test_new.war
path/ReleaseTest.static.zip_2020-08-05
path/ReleaseTest.static.zip_2020-08-05
path/ReleaseTest.static.zip_2020-08-05
path/jetty_xx.xx_2020-08-05
path/jetty_new
path/jetty_xx.xx_2020-08-05
path/jetty_new
I am not sure why I am getting the files in the list twice, sometimes three times, but when I execute the script directly from PuTTY, it works fine. What do I need to change in order to correct this script?
Example:
~$ cd tmp
~/tmp$ mkdir test
~/tmp$ cd !$
cd test
~/tmp/test$ mkdir -p apps/jetty/webapp_wars/ && touch apps/jetty/webapp_wars/test_new.war
~/tmp/test$ mkdir path
~/tmp/test$ touch path/{ReleaseTest.static.zip_2020-08-05,jetty_xx.xx_2020-08-05,jetty_new}
~/tmp/test$ cd ..
~/tmp$ listpath=$(find test/path \( -name "*2020-08-05" -o -name "*new" \) )
~/tmp$ listapps=$(find test/apps/ -name "*war" )
~/tmp$ echo ${listpath[@]}" "${listapps[@]} | tr " " "\n" | sort > resultfile
~/tmp$
~/tmp$ cat resultfile
test/apps/jetty/webapp_wars/test_new.war
test/path/jetty_new
test/path/jetty_xx.xx_2020-08-05
test/path/ReleaseTest.static.zip_2020-08-05
~/tmp$ rm -rf test/ && unset listapps && unset listpath && rm resultfile
~/tmp$
This way you get only one result for each pattern you are looking for in your if...then...else block of code.
Just adapt the ssh ... find commands and take care of quotes and parentheses; this is the easiest solution, as you do not have to rewrite the script from scratch. And be careful about local vs. remote variables if you use them. A sketch follows.
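For example, the adapted remote command might look something like this (a sketch only: the find invocations are my guess at translating the three ls patterns from the question, GNU find is assumed for -maxdepth, and user@$f1 reuses the question's host variable):
ssh user@$f1 'find path -maxdepth 1 \( -name "*war_*" -o -name "*zip_*" \); find /apps -maxdepth 1 -name "jetty*_*"' \
    < /dev/null | sort -u > path/"$f1".txt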
You really should not use ls, but the fundamental problem is probably that three separate commands with three separate wildcards could match the same file up to three times.
Also, one of your commands is executed locally (you forgot to put ssh etc. in front of the second one), so if the wildcard matches on your local computer, that would produce a result which doesn't reflect the situation on the remote server.
Try this refactoring.
file=serverList.csv
n=0
while [ $n -le 2 ]
do
    while IFS=: read -r f1 f2
    do
        # echo line is stored in $line <- XXX this is not true
        if echo "$f1" | grep -q "xx.xx.xxx";
        then
            ssh user@$f1 "printf '%s\n' path/*war_* path/*zip_* /apps/jetty*_*" < /dev/null | sort -u > path/"$f1".txt
        fi
    done < "$file"
    sleep 15
    n=$(( n+1 ))
done
The sort gets rid of any duplicates. This assumes none of your file names contain newlines; if they do, you'd need to use something which robustly handles them (try printf '%s\0' and sort -z, but these are not portable; a sketch follows).
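For instance, the newline-safe variant might look like this (a sketch only; GNU sort is assumed for -z, and tr converts the NUL delimiters back to newlines for the output file):
ssh user@$f1 "printf '%s\0' path/*war_* path/*zip_* /apps/jetty*_*" < /dev/null |
    sort -zu | tr '\0' '\n' > path/"$f1".txt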
ls would definitely also accept three different wildcards, but as noted above, you really never want to use ls in scripts.

How to check if a file exists or not, and create it if it doesn't or delete it if it does, in shell

In shell, I want to check if a file exists or not, then create it if it doesn't exist or delete it if it exists. For this I need a one-liner and am trying to do something like:
ls | awk '\filename\' <if exist delete else create>
I need the ls because my problem has some command that outputs a list of strings that need to be piped to awk and then possibly touch/mkdir.
#!/bin/bash
if [ -f "$1" ]     # $1 is the input filename; -f checks whether it is an existing regular file
then
    rm "$1"        # the file exists: delete it
else
    touch "$1"     # the file does not exist: create it
fi
Save the file as filecreator.sh.
Change the permissions to allow execution with chmod a+rx filecreator.sh.
Run the script as ./filecreator.sh yourfile.extension.
You can see the file in your directory.
Using oc projects and oc new-project instead of ls and touch, as indicated in a comment:
oc projects |
while read -r proj; do
    if [ -d "$proj" ]; then
        rm -rf "$proj"
    else
        oc new-project "$proj"
    fi
done
I don't think there is a useful way to write this as a one-liner. If you like, you can replace the newlines with semicolons, except after then and else.
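Spelled out, that transformation gives (the same loop as above, just on one line):
oc projects | while read -r proj; do if [ -d "$proj" ]; then rm -rf "$proj"; else oc new-project "$proj"; fi; done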
You really should put your actual requirements in the question itself. ls is a superbly useless example because it cannot list a file which doesn't already exist, and you should not use ls in scripts at all.
rm yourfile 2>/dev/null || touch yourfile
If the file existed before, rm will succeed and erase the file, and the touch won't be executed. You end up with no file afterwards.
If the file did not exist before, rm will fail (but the error message is not visible, since it is redirected to the bit bucket, /dev/null), and due to the non-zero exit code of rm, the touch will be executed. You end up with an empty file afterwards.
Caveat: If the file exists, but you don't have permissions to remove it, you won't notice this error, due to the redirection of stderr. Hence, for debugging and later diagnosis, it might be better to redirect stderr to some file instead.
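For instance (the log file path here is just an example):
rm yourfile 2>>/tmp/file-toggle-errors.log || touch yourfile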

Understanding a docker entrypoint script

The script is located here: https://github.com/docker-library/ghost/blob/master/docker-entrypoint.sh
#!/bin/bash
set -e
if [[ "$*" == npm*start* ]]; then
    baseDir="$GHOST_SOURCE/content"
    for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
        targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
        mkdir -p "$targetDir"
        if [ -z "$(ls -A "$targetDir")" ]; then
            tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
        fi
    done
    if [ ! -e "$GHOST_CONTENT/config.js" ]; then
        sed -r '
            s/127\.0\.0\.1/0.0.0.0/g;
            s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
        ' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
    fi
    ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
    chown -R user "$GHOST_CONTENT"
    set -- gosu user "$@"
fi
exec "$@"
From what I can tell, it says that if you run some variation of npm start, it moves some files around from $GHOST_SOURCE to $GHOST_CONTENT, does something to the config.js file, links the config file, sets ownership of the content files, and then executes npm start as the user user. Otherwise, it just runs your commands normally.
The specifics are what are hard for me to understand because there are a lot of things from bash that I've never seen before. So I have a lot of questions.
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
In the above, why do they specify both /*/ and /themes/*/? Shouldn't /*/ contain themes? Is * not a wildcard for some reason?
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
In the above, what is the point of # in the variable expansion?
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
In the above, does this somehow save time? Why not use something like rsync? I understand the point of -C, but why -c and --one-file-system?
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
What does this sed command do? I know it's a replacement, but why the "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js" at the end?
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
In the above, what is the point of this symlink? Why try to link them to each other if both files already exist?
set -- gosu user "$@"
In the above, what does calling set with no args do?
I hope that's not too much. I felt making a separate question for each of these would be too much especially since it's all related to each other.
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
In the above, why do they specify both /*/ and /themes/*/? Shouldn't
/*/ contain themes? Is * not a wildcard for some reason?
"$baseDir"/*/ matches themes/ itself, but not the directories inside themes/, so you need the second pattern to include the contents of themes.
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
In the above, what is the point of # in the variable expansion?
It removes the $baseDir prefix from $dir. So for example:
bash$ dir=/home/bmitch/data/docker
bash$ echo $dir
/home/bmitch/data/docker
bash$ echo ${dir#/home/bmitch}
/data/docker
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
In the above, does this somehow save time? Why not use something like
rsync? I understand the point of -C, but why -c and --one-file-system?
rsync may not be installed on every machine by default; tar is fairly universal. The -c is to create an archive (vs. extract), and --one-file-system stops tar from crossing into an outside mount point (NFS, a symlink to root, etc.).
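In isolation, the copy idiom looks like this (src and dst are placeholder directories invented for the demonstration):
mkdir -p src dst
tar -c --one-file-system -C src . | tar xC dst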
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
What does this sed command do? I know it's a replacement, but why the
"$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js" as the
end?
config.example.js is the input (the last argument to sed), and config.js is the output (after the >). So it takes config.example.js and changes the IP address from 127.0.0.1 to 0.0.0.0, effectively listening on all interfaces/IPs instead of just internally on the loopback. The second half of the sed changes the path.join arguments from __dirname to process.env.GHOST_CONTENT.
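A toy run of the first substitution (the input line is invented for the demonstration):
$ echo "host: '127.0.0.1'" | sed -r 's/127\.0\.0\.1/0.0.0.0/g'
host: '0.0.0.0'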
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
In the above, what is the point of this symlink? Why try to link them
to each other if both files already exist?
The $GHOST_SOURCE/config.js is replaced (-f) with a link to $GHOST_CONTENT/config.js. Symbolic links give a file name reference to another actual file, so there will be two names, but one copy of the data, which means you will only have a single configuration in this situation.
set -- gosu user "$@"
In the above, what does calling set with no args do?
This changes the values of $1, $2, ... $n to be $1=gosu, $2=user, $3=the old $1, $4=the old $2, ..., essentially adding gosu and user to the beginning of the parameters passed to the script. The -- makes sure that set doesn't interpret any value from "$@" as a flag for itself.
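A quick demonstration in a throwaway shell:
$ set -- old1 old2
$ set -- gosu user "$@"
$ echo "$1 / $2 / $3 / $4"
gosu / user / old1 / old2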

executing a command on multiple paired files

Say I have a command, command.py, and it pairs together files, File_01_R1.fastq to File_01_R2.fastq. The command executed on a single pair looks like this:
command.py -f File_01_R1.fastq -r File_01_R2.fastq
I have many files, however, each with an R1 and an R2 version. How can I tell this command to go through every file I have, so it also executes
command.py -f File_02_R1.fastq -r File_02_R2.fastq
command.py -f File_03_R1.fastq -r File_03_R2.fastq
and so on.
You may use a simple parameter expansion:
for f in *_R1.fastq; do
    echo command.py -f "$f" -r "${f%_R1.fastq}_R2.fastq"
done
This will just print out what's to be executed. Remove the echo if you're happy with the result.
# Loop over all R1.fastq files
for f in File_*_R1.fastq; do
    # Replace R1 with R2 in the filename and run the command on both files.
    command.py -f "$f" -r "${f/_R1./_R2.}"
done; unset -v f
As @gniourf_gniourf indicates in his comment, my answer is slightly less safe than his, in that it may match at an incorrect location in the filename (whereas his is anchored at the end).

Deleting files with the same name using a shell script

I'm a total newbie at shell scripting.
I need to compare the file names in two directories and delete the files with the same name.
E.g.:
Directory1/
one
two
three
four
Directory2/
two
four
five
After running the script the directories will be:
Directory1/
one
three
Directory2/
five
Thanks
test -f tests if a file exists:
cd dir1
for file in *
do
    test -f ../dir2/"$file" && rm "$file" ../dir2/"$file"
done
cd ..
Quick and dirty:
while read fname
do
    rm -vf Directory{1,2}/"$fname"
done < <(sort <(cd Directory1/ && ls) <(cd Directory2/ && ls) | uniq -d)
This assumes a number of things about the filenames, but it should get you there with the input shown, and similar cases.
Tested too, now:
mkdir /tmp/stacko && cd /tmp/stacko
mkdir Directory{1,2}
touch Directory1/{one,two,three,four} Directory2/{two,four,five}
Running the command shows:
removed `Directory1/four'
removed `Directory2/four'
removed `Directory1/two'
removed `Directory2/two'
And the resulting tree is:
Directory1/one
Directory1/three
Directory2/five
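An alternative sketch uses comm, under the same assumptions about the filenames (comm expects sorted input, which ls provides; -12 keeps only the lines common to both listings):
comm -12 <(ls Directory1) <(ls Directory2) |
while read -r fname
do
    rm -v Directory1/"$fname" Directory2/"$fname"
done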
