I'm sure this has been asked but my search has been fruitless.
I want to run 3 bash commands in order, with the second and third running only if the first succeeds.
Example:
## Setup
mkdir so_test_tmp && cd so_test_tmp
echo "This is something" > something
cd ..
## Example commands
cd so_test_tmp ??? cat nothing ??? cd .. # 0.
cd so_test_tmp ??? cat something ??? cd .. # 1.
cd does_not_exist ??? cat nothing ??? cd .. # 2.
These three commands should always end in the original working directory. In 0., the first cd runs, then the last. In 1., all three commands run successfully. In 2., the first command fails, so the second and third are not run.
What about:
pushd .; cmd1 && cmd2 && ... && cmdn; popd
pushd . saves your current dir.
Then you execute your commands; you use && so that, if one fails, the others are not executed.
popd goes back to your initial dir.
EDIT: regarding your comment below, yes, this pushd .; popd construct is quite silly; it lets you forget about how the execution of each set of commands went.
pushd .; cd so_test_tmp && cat nothing; popd; # 0.
pushd .; cd so_test_tmp && cat something; popd; # 1.
pushd .; cd does_not_exist && cat nothing; popd; # 2.
You finish at your original dir after running the three sets of commands.
Within each set of commands, whenever a command fails, it short-circuits the execution of the commands that follow (note that they are separated by &&).
If you need to know if each set of commands succeeded or not, you can always test the result of the execution (and go to your initial dir and save it again before running the following set of commands):
pushd .;
cd so_test_tmp && cat nothing && cd .. ; # 0.
test $? -eq 0 || (popd; pushd .) ;
cd so_test_tmp && cat something && cd ..; # 1.
test $? -eq 0 || (popd; pushd .) ;
cd does_not_exist && cat nothing && cd ..; # 2.
test $? -eq 0 || (popd; pushd .) ;
Specifically for cd somewhere && somecommand && cd ..
The cd .. is only necessary because you're doing cd so_test_tmp inside your parent shell, as opposed to the subshell that's fork()ed off to then be replaced with a copy of /bin/cat.
By creating an explicit subshell with ( ... ), you can scope the cd to its contents. By using exec for the last command in the subshell, you can consume it, balancing out the performance overhead of that subshell's creation.
(cd so_test_tmp && exec cat nothing) # 0.
(cd so_test_tmp && exec cat something) # 1.
(cd does_not_exist && exec cat nothing) # 2.
Note that this applies only when the command you're running in a subdirectory doesn't change the state of the shell that started it (like setting a variable). If you need to set a variable, you might instead want something like output=$(cd /to/directory && exec some_command).
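As a concrete sketch of that last pattern, reusing the directory and file names from the question's setup:

```shell
# Set up a throwaway directory with a file in it.
mkdir -p so_test_tmp
echo "This is something" > so_test_tmp/something

# The cd happens inside the command substitution's subshell,
# so the parent shell's working directory never changes.
output=$(cd so_test_tmp && exec cat something)
echo "$output"   # This is something
pwd              # still the directory we started in
```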
Answering the more general question
Use && to connect the first command to a group containing the second and third commands, and use ; to combine those two within the group, if your goal is to ensure that both the second and third run if and only if the first succeeds.
cd so_test_tmp && { cat nothing; cd ..; } # 0.
cd so_test_tmp && { cat something; cd ..; } # 1.
cd does_not_exist && { cat nothing; cd ..; } # 2.
Setup:
$ cd /tmp
$ mkdir so_test_tmp
$ echo "This is something" > so_test_tmp/something
Wrapping an if/then/fi around the OP's current examples:
$ if cd so_test_tmp; then cat nothing; cd ..; fi ; pwd
cat: nothing: No such file or directory
/tmp
$ if cd so_test_tmp; then cat something; cd ..; fi ; pwd
This is something
/tmp
$ if cd does_not_exist; then cat something; cd ..; fi ; pwd
-bash: cd: does_not_exist: No such file or directory
/tmp
Related
I run a script which compresses certain files and renames them.
I don't want to copy it into each directory and run it from inside.
How can I run it from outside the target directory to act on it? I tried
'bash compress.sh target_dir/'
In a bash script you can access the arguments via $1 for the first one ($2 for the second, and so on). You could write your script to work with this. The easiest way might be to store the starting directory:
#!/usr/bin/env bash
baseDir=$PWD
cd "$1" || { echo 'Check input' >&2; exit 1; }
# ..your stuff..
cd "$baseDir" || :
To do this for all sub directories you can use find and a loop:
#!/usr/bin/env bash
baseDir=$PWD # or any other path
allSubDirectories=$(find "$baseDir" -type d)
for d in $allSubDirectories
do
cd "$d" || continue  # skip directories we cannot enter
# ..your stuff..
cd "$baseDir" || :
done
I have this for loop
for repository in ./*/; do
echo $repository && cd $repository && git checkout -b prod && cd - >/dev/null;
done
But if branch prod already exists, it prints a message and exits the loop.
How can I ignore this error and just go to the next directory?
Thanks
So the problem is that git checkout -b prod returns failure to the shell if the branch already exists. Since it's connected to the next command (cd -) with the conditional operator &&, that next command only runs if git succeeds. So when git fails, the cd doesn't run, and your shell is left in the wrong directory to continue its loop.
In general, when you want your code to continue even if a command fails, separate the commands with ; or newlines instead of &&.
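The difference in miniature, using false as a stand-in for the failing command:

```shell
false && echo "skipped"      # && short-circuits: prints nothing
false ;  echo "still runs"   # ; continues regardless: prints "still runs"
```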
But a better solution in this case is to just do the cd in a subshell so that it doesn't affect the outer loop's working directory and you don't have to cd - at all:
for repository in ./*/; do
echo "$repository" && (
cd "$repository" && git checkout -b prod
)
done
That will work fine even if the branch creation fails. It will still print out the error message; if you don't want to see those, add the redirect:
for repository in ./*/; do
echo "$repository" && (
cd "$repository" && git checkout -b prod
) 2>/dev/null
done
I've also quoted the expansion of $repository in the commands, which you should almost always do in shell scripts. With the unquoted version, you would get an error if any of the repo directory names had spaces in them, for instance.
Also, that "no side effects in a subshell" thing is great for doing part of your work in a different directory, but it applies more widely. If you had a more complicated loop that set any shell variables or anything while it was in the subdir, those would also be lost. Just something to keep in mind.
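A minimal demonstration of that caveat: assignments made inside the subshell vanish when it exits.

```shell
count=0
(
  cd /tmp || exit
  count=5            # set inside the subshell...
)
echo "$count"        # ...prints 0: the assignment died with the subshell
pwd                  # the cd is gone too; still the original directory
```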
Like this:
home=$PWD
for repository in "$home"/*/; do
basename "$repository" # to 'echo' $repository
cd "$repository" && git checkout -b prod
done
Better to use pushd and popd; additionally, it is safer to use find:
while read -r repository; do
pushd "${repository}"
if git checkout -b prod; then
echo "git checkout success"
else
echo "git checkout error"
fi
popd
done < <( find . -mindepth 1 -maxdepth 1 -type d -print )
I am trying to do a cd inside an if condition in a Bash script, but the shell stays in the same directory after the if. So I have to do the cd outside the if and then use the $? value in the condition. Is there a way to avoid this extra step? What is the shortest possible way of doing it?
See three variants of my code:
#!/bin/bash
set +e
# If I write the following:
if ! (rm -rf sim && mkdir sim && cd sim); then
echo $0: Cannot prepare simulation directory
exit 1
fi
# it does not change the current directory
echo $PWD
# Also, if I write the following:
if ! rm -rf sim || ! mkdir sim || ! cd sim; then
echo $0: Cannot prepare simulation directory
exit 1
fi
# it also does not change the current directory
echo $PWD
# In order to change the current directory, I need to write:
rm -rf sim && mkdir sim && cd sim
if [ $? -eq 1 ]; then
echo $0: Cannot prepare simulation directory
exit 1
fi
# Now it prints .../sim
echo $PWD
cd ..
# Is it possible to write it without an additional line with $?
exit
Parenthesis in bash create a subshell -- a fork()-ed off copy of the shell with its own environment variables, its own current directory, etc; thus, in your first attempt, the cd took effect only until the closing paren ended the subshell. (POSIX doesn't strictly require this subshell behavior, but it does require that the environment created by parenthesis have its own working directory, so cd's effects will be scoped on all standard-compliant shells, whether or not that shell actually uses a fork() here in all circumstances).
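You can watch the scoping difference directly (using /tmp as a stand-in target directory):

```shell
start=$PWD
(cd /tmp)       # runs in a subshell; the cd is thrown away with it
pwd             # prints the starting directory
{ cd /tmp; }    # braces group commands without forking
pwd             # prints /tmp
cd "$start"
```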
Use curly brackets, not parenthesis, for grouping when you don't want to create a subshell. That is:
if ! { rm -rf sim && mkdir sim && cd sim; }; then
echo "$0: Cannot prepare simulation directory"
exit 1
fi
That said, your second approach works perfectly well (though it's unwieldy to write and not a conventional idiom).
bash <<'EOF'
cd /tmp
echo "Starting in $PWD"
if ! rm -rf sim || ! mkdir sim || ! cd sim; then
echo "$0: Cannot prepare simulation directory"
exit 1
fi
echo "Ending in $PWD"
EOF
...properly emits:
Starting in /tmp
Ending in /tmp/sim
Let me first write a quick Makefile as a showcase:
#!/bin/make -f
folders := $(shell find -mindepth 1 -maxdepth 1 -type d -print)
make_dir:
@mkdir -p "test0"
pwd_test:
@cd "test0" && pwd
@pwd
pwd_all:
@for f in $(folders); do \
cd "$${f}" && pwd; \
pwd; \
cd ..; \
done
First do make make_dir and then see the different results:
➜ so make pwd_test
/data/cache/tmp/so/test0
/data/cache/tmp/so
➜ so make pwd_all
/data/cache/tmp/so/test0
/data/cache/tmp/so/test0
You see that in the for loop it is necessary to do cd ... Apparently, no child process is spawned for the cd X && pwd command here, while that is normally the case. Is this behaviour specific to make, or specific to my shell?
Make spawns a new process for each command in the rule. Since the for loop is one command you get only one process.
Take a look at Recipe Execution
Edit:
Each line in a makefile gets its own subshell. A trailing \ tells make that the next line should be part of the current line.
The reason the for loop gets a single subshell is that make sees the whole loop as one line:
@for f in $(folders); do cd "$${f}" && pwd; pwd; cd ..; done
MadScientist explains it fairly well. Any command that you can type in your shell on one line will be executed by make in one subshell or process. If you were to run this in ksh, ksh would be passed for f in $(folders); do cd "$${f}" && pwd; pwd; cd ..; done and it would be run in that one subshell. If ksh did not have a for loop implemented, this would probably error, and make would report that the command returned some error code.
Explanation of pwd_test:
pwd_test:
@cd "test0" && pwd
@pwd
@cd "test0" && pwd is seen as one line, so the subshell updates its current working directory and then prints it.
@pwd: at this line make spawns a new subshell that still has the old working directory (the directory make was called from), and pwd prints that directory.
Basically just like the title says...
I want to ls the directory that I'm currently running my script in, and for every folder in the directory, cd into that directory and execute my script using the folder name as the argument.
ie: In ${HOME} I have 2 directories say '31' and '32' (will always be numerical and incremental like that)
So, in my script I'm going to cd in 31, rsync some files into that directory from another machine, cd .., then cd into 32 and repeat till there are no more folders.
I have everything working with my current get.exp and running:
for x in `ls`; do cd $x ; get.exp $x ; cd .. ; done
as a bash alias. But I'd love to cut out the alias...
The Tcl equivalent of your loop is:
foreach x [glob -nocomplain *] {
cd $x
exec get.exp $x
cd ..
}
You might be able to simplify that (perhaps just using source get.exp or calling the appropriate proc) but it's the same basic idea.