How to wrap multiple subdirectories mkdir command - bash

I want to create multiple subdirectories.
My command is:
mkdir -p dir1/{dir1.1/{dir1.1.1,dir1.1.2},dir1.2,dir1.3}
It works; the result is:
dir1
dir1.1
dir1.1.1
dir1.1.2
dir1.2
dir1.3
However, I want to make this command look nicer (more readable), so I tried:
mkdir -p \
  dir1/{\
    dir1.1/{\
      dir1.1.1,\
      dir1.1.2},\
    dir1.2,\
    dir1.3}
And this doesn't work. Result is:
ls *
dir1 dir1.1 dir1.1.1, dir1.1.2}, dir1.2, dir1.3}
How can I wrap such a mkdir command?

Try the following:
eval mkdir -p `echo \
  dir1/{\
    dir1.1/{\
      dir1.1.1,\
      dir1.1.2},\
    dir1.2,\
    dir1.3}\
  | sed -E 's/\s*//g'`
Explanation: Your original code introduces spaces into the parameter, so instead of calling
mkdir -p dir1/{dir1.1/{dir1.1.1,dir1.1.2},dir1.2,dir1.3}
You are actually calling the command with the following parameters:
mkdir -p dir1/{ dir1.1/{ dir1.1.1, dir1.1.2}, dir1.2, dir1.3}
And that is why the wrong directories were created. To solve this, I first stripped the whitespace with sed and then used eval to evaluate the resulting command. This solution should work for simple cases, but special characters within the directory names (such as spaces) may cause issues.
Hope this helps!

If you want it readable, just call mkdir multiple times. I doubt that directory creation is going to be any kind of bottleneck in your program.
mkdir dir1
mkdir -p dir1/dir1.{1,2,3}
mkdir -p dir1/dir1.1/dir1.1.{1,2}

The problem is the whitespace at the beginning of each continuation line, which causes the lines to be treated as separate arguments to mkdir. To overcome this, you can do:
mkdir -p \
dir1/{\
dir1.1/{\
dir1.1.1,\
dir1.1.2},\
dir1.2,\
dir1.3}
with no whitespace at the beginning of the continuation lines. Whether this is more readable than the original one-liner is debatable.
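Along the same lines, an array of explicit leaf paths can also read well; this is a sketch (not part of the answers above), assuming bash, and it relies on mkdir -p creating the parent directories implicitly:
# List the leaf directories one per line; mkdir -p creates all parents.
dirs=(
  dir1/dir1.1/dir1.1.1
  dir1/dir1.1/dir1.1.2
  dir1/dir1.2
  dir1/dir1.3
)
mkdir -p "${dirs[@]}"
It trades the compact brace expression for paths that can be read and reordered line by line.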

Related

Understanding a docker entrypoint script

The script is located here: https://github.com/docker-library/ghost/blob/master/docker-entrypoint.sh
#!/bin/bash
set -e
if [[ "$*" == npm*start* ]]; then
baseDir="$GHOST_SOURCE/content"
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
mkdir -p "$targetDir"
if [ -z "$(ls -A "$targetDir")" ]; then
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
fi
done
if [ ! -e "$GHOST_CONTENT/config.js" ]; then
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
fi
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
chown -R user "$GHOST_CONTENT"
set -- gosu user "$@"
fi
exec "$@"
From what I can tell, it says that if you run some variation of npm start, it moves some files around from $GHOST_SOURCE to $GHOST_CONTENT, does something to the config.js file, links the config file, sets ownership of the content files, and then executes npm start as the user user. Otherwise, it just runs your commands normally.
The specifics are what are hard for me to understand because there are a lot of things from bash that I've never seen before. So I have a lot of questions.
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
In the above, why do they specify both /*/ and /themes/*/? Shouldn't /*/ contain themes? Is * not a wildcard for some reason?
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
In the above, what is the point of # in the variable expansion?
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
In the above, does this somehow save time? Why not use something like rsync? I understand the point of -C, but why -c and --one-file-system?
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
What does this sed command do? I know it's a replacement, but why the "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js" at the end?
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
In the above, what is the point of this symlink? Why try to link them to each other if both files already exist?
set -- gosu user "$@"
In the above what does calling set with no args do?
I hope that's not too much. I felt making a separate question for each of these would be too much especially since it's all related to each other.
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
In the above, why do they specify both /*/ and /themes/*/? Shouldn't
/*/ contain themes? Is * not a wildcard for some reason?
The first glob, "$baseDir"/*/, matches themes/ itself but not the directories inside it, so the second entry, "$baseDir"/themes/*/, is needed to include the contents of themes.
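A small sketch may help; the directory names here are made up rather than taken from the Ghost image:
# Hypothetical layout to show what each glob matches.
baseDir=/tmp/ghost-demo
mkdir -p "$baseDir"/apps "$baseDir"/data "$baseDir"/themes/casper
printf '%s\n' "$baseDir"/*/          # matches apps/, data/, themes/ -- themes itself, nothing inside it
printf '%s\n' "$baseDir"/themes/*/   # matches themes/casper/ -- hence the second glob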
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
In the above, what is the point of # in the variable expansion?
It removes the $baseDir prefix from $dir. So for example:
bash$ dir=/home/bmitch/data/docker
bash$ echo $dir
/home/bmitch/data/docker
bash$ echo ${dir#/home/bmitch}
/data/docker
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
In the above, does this somehow save time? Why not use something like
rsync? I understand the point of -C, but why -c and --one-file-system?
rsync may not be installed on every machine by default, while tar is fairly universal. The -c is to create an archive (as opposed to x, which extracts one), and --one-file-system keeps tar from crossing into an outside mount point (nfs, a symlink to root, etc.).
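As a rough sketch of the same pattern (directory names made up, and ./src is assumed to exist), the pipe copies the contents of one directory into another without a temporary archive file:
# The first tar writes an archive of ./src to stdout; the second extracts it in ./dest.
mkdir -p dest
tar -c --one-file-system -C src . | tar -x -C dest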
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
What does this sed command do? I know it's a replacement, but why the
"$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js" as the
end?
config.example.js is the input (the last argument to sed), and config.js is the output (after the >). So it takes config.example.js and changes the IP address from 127.0.0.1 to 0.0.0.0, effectively listening on all interfaces/IPs instead of just internally on the loopback. The second half of the sed changes the path.join arguments from __dirname to process.env.GHOST_CONTENT.
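Stripped down to its shape, it is an ordinary read-template, rewrite, write-new-file step; this sketch assumes a local copy of the template in the current directory:
# sed reads config.example.js, prints the rewritten text to stdout,
# and > redirects that output into config.js; the template itself is untouched.
sed -r 's/127\.0\.0\.1/0.0.0.0/g' config.example.js > config.js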
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
In the above, what is the point of this symlink? Why try to link them
to each other if both files already exist?
The $GHOST_SOURCE/config.js is replaced (-f) with a link to $GHOST_CONTENT/config.js. Symbolic links give a file name reference to another actual file, so there will be two names, but one copy of the data, which means you will only have a single configuration in this situation.
set -- gosu user "$@"
In the above what does calling set with no args do?
This changes the values of $1, $2, ... $n to be $1=gosu, $2=user, $3=the old $1, $4=the old $2, and so on, essentially adding gosu and user to the beginning of the parameters passed to the script. The -- makes sure that set doesn't interpret any value from "$@" as a flag for itself.
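A minimal sketch of that behaviour (this demo script is hypothetical, not part of the entrypoint):
#!/bin/bash
# Invoked as:  ./demo.sh npm start
echo "before: $*"        # prints: npm start
set -- gosu user "$@"    # prepend gosu and user to the positional parameters
echo "after:  $*"        # prints: gosu user npm start
# The entrypoint then runs `exec "$@"`, replacing itself with: gosu user npm start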

mkdir multiple subfolders in bash script

I have the following script:
#!/bin/bash
path="/parentfolder/{child_1,child_2}"
mkdir -p $path
mkdir -p /parentfolder/{child_3,child_4}
Running it creates the following folders:
/parentfolder/{child_1,child_2}
/parentfolder/child_3
/parentfolder/child_4
How can I make the script create the following folder structure:
/parentfolder/child_1
/parentfolder/child_2
/parentfolder/child_3
/parentfolder/child_4
Brace expansion is not performed on the result of a variable expansion (it happens before parameter expansion), so the braces stored in $path reach mkdir as literal characters. Either put the braces in the command itself, or assign the variable differently; if you need the values to be in a variable, an array would be suitable.
#!/bin/bash
paths=(/parentfolder/{child_1,child_2,child_3,child_4})
mkdir -p "${paths[@]}"
path=`echo /parentfolder/{child_1,child_2}`
Brace expansion does not happen in a plain variable assignment, so the echo inside the command substitution is what forces the expansion here; $path then holds the already-expanded, space-separated list, and it must stay unquoted in mkdir -p $path so that it splits into separate arguments.
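A quick sketch of the difference; only echo and printf are used, so nothing is actually created:
path="/parentfolder/{child_1,child_2}"
echo $path                      # /parentfolder/{child_1,child_2} -- the braces stay literal
paths=(/parentfolder/{child_1,child_2})
printf '%s\n' "${paths[@]}"     # /parentfolder/child_1 and /parentfolder/child_2, one per line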

Can I make parallel sub-directories under a mother-directory without change to that mother-directory?

I have been learning basic terminal commands these days.
I found that I can make parallel directories by adding a space between them:
mkdir dir_a dir_b
But when I try to make parallel directories under a mother directory, this fails.
mkdir dir_a/dir_a_1 dir_a_2
# Failed: dir_a_2 ends up at the top level
Is there a way I can make parallel sub-directories under a mother directory without changing into that mother directory (i.e. without using cd)?
Each argument to mkdir is resolved relative to the current directory. The behavior you are seeing is intentional; there is no special treatment of the first argument to mkdir.
However, there are several options available to achieve the result you are looking for.
You could use a loop:
for f in dir_a_1 dir_a_2; do mkdir -p "dir_a/$f"; done
You could use pushd:
mkdir dir_a; pushd dir_a; mkdir dir_a_1 dir_a_2; popd
You could use printf and command substitution:
mkdir -p $(printf 'dir_a/%s ' dir_a_1 dir_a_2)
You could use printf and xargs:
printf 'dir_a/%s ' dir_a_1 dir_a_2 | xargs mkdir -p
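Or, assuming bash rather than plain sh, you could use brace expansion, which is expanded before mkdir runs:
# Expands to: mkdir -p dir_a/dir_a_1 dir_a/dir_a_2
mkdir -p dir_a/{dir_a_1,dir_a_2}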
Any of these should work for you.

What difference between those two shell commands?

There are two shell commands:
[[ ! -d $TRACEDIR/pattrace/rpt/.tracking ]] && mkdir $TRACEDIR/pattrace/rpt/.tracking
[[ ! -d $TRACEDIR/pattrace/rpt/.tracking ]] && mkdir -p $TRACEDIR/pattrace/rpt/.tracking
Obviously, the only difference between these commands is the -p flag. But what does this flag do in this context?
Thanks.
From the mkdir man page:
-p, --parents
no error if existing, make parent directories as needed
In other words, if the directories needed don't exist, they will be created as required. If the directories already exist, it won't cause an error.
The man pages are a good place to look for this kind of information (in addition to searching online, of course).
mkdir with the -p option will create all necessary parent directories of the specified path, should they not exist (see man pages). Also, with -p you won't get an error if the directory itself already exists.
In your particular case, the first command might fail because testing only the complete path is not sufficient: the test still passes if, say, only $TRACEDIR/ exists, but the subsequent mkdir will then fail because it requires $TRACEDIR/pattrace/rpt/ to already exist.
The second command will work, because mkdir -p creates all missing directories "in between" as well.
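A short sketch of the difference (the path is made up; only mkdir is used):
TRACEDIR=/tmp/trace-demo
mkdir -p "$TRACEDIR"                          # only the top-level directory exists
mkdir "$TRACEDIR/pattrace/rpt/.tracking"      # fails: pattrace/rpt/ does not exist yet
mkdir -p "$TRACEDIR/pattrace/rpt/.tracking"   # succeeds: creates every missing parent
mkdir -p "$TRACEDIR/pattrace/rpt/.tracking"   # still succeeds: no error if it already exists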

Getting directory portion of a list of source files in a for loop

I have the following gnu make script:
for hdrfile in $(_PUBLIC_HEADERS) ; do \
echo $(dir $$hdrfile) ; \
done
The _PUBLIC_HEADERS variable has a list of relative paths, like so:
./subdir/myheader1.h
./subdir/myheader2.h
The output I get from the for loop above is:
./
./
I expect to see:
./subdir/
./subdir/
What am I doing wrong? Note that if I change the code to:
echo $(dir ./subdir/myheader1.h)
it works in this case. I think maybe it has something to do with the for loop but I'm not sure.
You are confusing make variables (and functions) with shell variables in the for-loop. $(dir ...) is a make construct that gets expanded by make before the command is handed to the shell; its argument at that point is the literal text $hdrfile ($$ becomes a single $), which contains no slash, so the function expands to ./ and the shell simply echoes that on every iteration. However, you want the shell to compute the directory inside the loop.
What you could do is replace $(dir) with the corresponding command dirname which gets executed by the shell. So it becomes:
for hdrfile in $(_PUBLIC_HEADERS) ; do \
dirname $$hdrfile ; \
done
This should give the desired result.
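As a quick shell-level sketch (header names taken from the question), dirname does the same job at run time, though it prints the directory without the trailing slash that make's dir function would include:
dirname ./subdir/myheader1.h     # prints: ./subdir
dirname ./subdir/myheader2.h     # prints: ./subdir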

Resources