do command 2 when command 1 fails in bash [duplicate] - bash

This question already has answers here:
How can I "try to do something and then detect if it fails" in bash?
(3 answers)
Closed 8 years ago.
I run this as part of an 'unlock' bash script, but it stops at the first command that fails -
# Variables
CHUNK="/media/backup/obnam-home"
BIGNUM="17580577608458113855"
LOGTO="/home/boudiccas/logs/unlock.txt"
####################################################
sudo rm $CHUNK/chunklist/lock; sudo rm $CHUNK/$BIGNUM/lock; sudo rm $CHUNK/chunksums/lock; sudo rm $CHUNK/chunks/lock>>'$(date -R)' $LOGTO
How can I get it to continue on to the second and further commands, even if command 'x' fails?

I think this is what you want:
# Variables
CHUNK="/media/backup/obnam-home"
BIGNUM="17580577608458113855"
LOGTO="/home/boudiccas/logs/unlock-$(date -R).txt"
####################################################
{
sudo rm $CHUNK/chunklist/lock
sudo rm $CHUNK/$BIGNUM/lock
sudo rm $CHUNK/chunksums/lock
sudo rm $CHUNK/chunks/lock
} 2>> $LOGTO
Each of the four rm commands will run, regardless of which ones succeed and which fail. Any error messages from all 4 will be redirected (2>>, not >>) to the named file. I'm assuming you want the current timestamp in the file name, so I moved the call to date to the definition of LOGTO.
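As an aside (not part of the original answer), the choice of separator is what decides whether later commands still run after a failure. A minimal sketch with placeholder commands cmd1 and cmd2:
cmd1 ; cmd2    # cmd2 always runs, whether or not cmd1 fails (same as putting them on separate lines)
cmd1 && cmd2   # cmd2 runs only if cmd1 succeeds
cmd1 || cmd2   # cmd2 runs only if cmd1 fails (literally "do command 2 when command 1 fails")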

Related

Cygwin BASH script file - unwanted single quotes added automatically to constant string - how to prevent

I have this BASH script which I run in a Cygwin terminal instance via the command
bash -f myfile.sh
All I need it to do is delete all *.txt files in the Cygwin /home/user directory.
#!/bin/bash
set -x
rm -rf /home/user/*.txt
This does not work. Running the file (I only added "set -x" to debug when it started failing) shows
+ rm -rf '/home/user/*.txt'
The problem is literally that I specify in my code in the Cygwin BASH script
rm -rf /home/user/*.txt
without any quotes, but when run in the Cygwin terminal from the BASH script, it resolves to
rm -rf '/home/user/*.txt'
i.e. single quotes are added by Cygwin BASH.
I've scoured other posts whose responses indicate the quotes are only there because "set -x" formats its output as a unitary string, but even without "set -x" in the script file the rm command still fails, i.e. the rm command string IS still quoted (or some other mangling is applied?), so the rm line in the script does not work.
I managed to confirm that by manually running in the Cygwin terminal
rm -rf '/home/user/*.txt'
which does nothing (it just returns, leaving the .txt files intact in /home/user/), and then running
rm -rf /home/user/*.txt
manually, which does work perfectly, deleting all .txt files in the /home/user/ directory under the Cygwin terminal.
How can I get the above command to remove all .txt files in /home/user/ from inside a Cygwin terminal BASH script file?
Thanks!
As intimated above, the answer to this is to not use -f when calling bash; bash's -f option disables filename expansion (the same as set -f), so the *.txt glob is passed to rm literally. Invoke the script as just
bash myfile.sh
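For context, the effect of -f is easy to reproduce (a sketch; the path is the one from the question and may not exist on your machine):
#!/bin/bash
set -f                          # same effect as invoking the script with 'bash -f'
echo rm -rf /home/user/*.txt    # prints the literal pattern; nothing expands
set +f                          # turn globbing back on
echo rm -rf /home/user/*.txt    # now the pattern expands to matching files, if any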

getting variable name in file name for bash [duplicate]

This question already has answers here:
When do we need curly braces around shell variables?
(7 answers)
Closed 4 years ago.
I wanted to change the name of my file from file.txt to file_4i.txt or file_5i.txt, according to the number I need, but when I use the commands below, the file name changes to file_.txt and the value of m is never included. I wanted to get 4i, but $mi does not work either.
sudo sh -c "m=4 ; mv file.txt file_$mi.txt"
sudo sh -c "m=4 ; mv file.txt file_$m.txt"
Use single quotes so the variable doesn't expand early, and use {} so mi isn't interpreted as the variable name:
sudo sh -c 'm=4 ; mv file.txt file_${m}i.txt'
sudo sh -c 'm=4 ; mv file.txt file_$m.txt'
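As an aside (a sketch, not part of the original answer): with the original double quotes, $m is expanded by the outer shell, where m is unset, before sh -c ever runs, which is why file_.txt appeared. If double quotes are needed, escaping the dollar sign defers the expansion to the inner shell:
sudo sh -c "m=4 ; mv file.txt file_\${m}i.txt"   # \$ passes a literal $ through to the inner sh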

Bash script sudo and variables [duplicate]

This question already has an answer here:
Bash foreach loop works differently when executed from .sh file [duplicate]
(1 answer)
Closed 4 years ago.
Totally new to Bash here, actually I've avoided it like a plague for 10 years.
Today, there is no way around it.
After a few hours of beating my head against the keyboard, I discovered that when the command is run with sudo, the bash expansion in it gets stripped out.
So I have something like
somescript.sh
for i in {1..5}
do
filename=somefilenumber"$i".txt
echo $filename
done
on the command line now if I run it
user@deb:~$ ./somescript.sh
I get the expected
somefilenumber1.txt
somefilenumber2.txt
somefilenumber3.txt
somefilenumber4.txt
somefilenumber5.txt
but if I run with sudo, like
user@deb:~$ sudo ./somescript.sh
I'll get this
somefilenumber{1..5}.txt
This is a huge problem because I'm trying to cp files and rm files in a loop with the variable.
So here is the code with cp and rm
for i in {1..10}
do
filename=somefilenumber"$i".txt
echo $filename
cp "$filename" "someotherfilename.txt"
rm "$filename"
done
I end up getting
cp: cannot stat 'somefilenumber{1..5}.txt': No such file or directory
rm: cannot remove 'somefilenumber{1..5}.txt': No such file or directory
I need to run sudo also because of other programs that require it.
Is there any way around this?
Even if nothing else required sudo and I didn't use it, the rm command would prompt me for every file to ask whether I'm sure I want to remove it. The whole point is to not be sitting here tied to the computer while it runs through hundreds of files.
You could try to replace {1..10} with seq 1 10:
for i in `seq 1 10`
do
filename=somefilenumber"$i".txt
echo $filename
cp "$filename" "someotherfilename.txt"
rm "$filename"
done
Your problem sounds like the script is being run by a different shell for root; brace expansion like {1..5} is a bash feature that plain sh does not have. Do you start the script with
#!/bin/bash
?
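A sketch of that fix, using the filenames from the question: with the shebang in place, sudo executes the script with bash rather than falling back to plain sh, so the brace expansion works as expected.
#!/bin/bash
# With the shebang present, 'sudo ./somescript.sh' is interpreted by bash,
# so {1..5} and "$i" expand the same way they do without sudo.
for i in {1..5}
do
    filename=somefilenumber"$i".txt
    echo "$filename"
done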

How to remove the file '--help' [duplicate]

This question already has answers here:
How to remove files starting with double hyphen?
(7 answers)
Closed 7 years ago.
Ok, I somewhat stupidly created the file '--help'.
I don't know exactly how I created the file, but it was probably something like an accidental redirection of output:
cat somefile.txt > --help
But what I mainly noted is that such a file is harder to delete than meets the eye.
rm --help
or
mv --help removeme.txt
rm removeme.txt
Both commands see --help not as a file but as an option.
The only method I could figure out was something like this
cd somedir
rm $OLDPWD/--help
Or, even more lame, delete it with fileroller or another file manager.
But there must be some way to do it right?
Use either of these:
rm -- --help
rm ./--help
That should do the trick:
rm -- --help
The -- basically says that everything afterwards shall not be treated as an option.

Bash make and open directory [duplicate]

This question already has answers here:
One command to create and change directory
(9 answers)
Closed 7 years ago.
Let's say I have the following command:
mkdir directory && cd directory
I normally do this a lot during the day, so I'm wondering if there is a simpler, shorter way of doing it.
Does anybody know?
You can reference the last argument of the previous command with $_:
mkdir directory && cd $_
This is the result:
system:/tmp # mkdir directory && cd $_
system:/tmp/directory #
Put the following code in your ~/.bashrc or ~/.zshrc:
mkcd () {
mkdir "$1"
cd "$1"
}
Then in your shell, enter a command such as mkcd foo. As you can see, this function needs one argument, which is the name of the directory to create.
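A slightly more defensive variant (a sketch, not from the original answer) creates parent directories as needed and only changes into the directory if mkdir actually succeeded:
mkcd () {
    # -p also creates intermediate directories; && skips the cd if mkdir failed
    mkdir -p "$1" && cd "$1"
}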
