What is the reason my expansion is not working - bash

I have this code to generate a random directory structure given some parameters, and I am running against the eval limits, so I am trying to use xargs to work around them as per @TheOtherGuy, but I am doing something wrong.
DIRCMD="mkdir -p $OUTDIR/\"$FLDIR\"/$FLCHILDREN"
In the VS Code watch window I see
declare -- DIRCMD="mkdir -p ./rndpath/\"LUl\"/{\"KYh\",\"NQ \",\"NU\",\"Hjn\",\"lS\",\"TEW\"}/{\"Rbf\",\"DU\",\"N4\",\"Da7o\",\"aNK\",\"2oS\"}"
And do
eval "$DIRCMD"
Everything works unless I hit the eval expansion limits.
As per @ThatOtherGuy, trying to work around the limitation, I tried
dircmd1="printf "%s\0 " $OUTDIR/\"$FLDIR\"/$FLCHILDREN"
and
eval "$dircmd1" | xargs -0 mkdir -p
[admin@119 rndpath]$ tree -a --dirsfirst -s ./
./
└── [ 4096]  LUl
    └── [ 4096]  {KYh,NQ ,NU,Hjn,lS,TEW}
        └── [ 4096]  {Rbf,DU,N4,Da7o,aNK,2oS}
What am I doing wrong?

Here is the answer:
DIRCMD="printf \"%s\0\" ./\'$FLDIR\'/$FLCHILDREN" then eval "$DIRCMD" | xargs -0 mkdir -p
Somehow, without the backslashes it did not work. I have no idea why, and it would be nice to know.
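For what it's worth, here is a minimal sketch (with hypothetical stand-in values, not the generated ones) of why the backslashes matter. The escapes keep the quote characters inside the string, so eval later sees them as quoting; without them, the first inner quote closes the string and the next unquoted space ends the assignment early. The xargs -0 part helps because it batches the NUL-separated paths into as many mkdir invocations as needed, staying under the kernel's per-exec argument length limit (see getconf ARG_MAX):

# Hypothetical stand-ins for the generated values
OUTDIR=./rndpath
FLDIR='LUl'
FLCHILDREN='{"a","b c"}/{"d","e"}'
# The escaped quotes survive into the string ...
DIRCMD="printf \"%s\0\" $OUTDIR/\"$FLDIR\"/$FLCHILDREN"
# ... so eval expands the braces and prints NUL-separated paths,
# and xargs feeds them to mkdir in batches.
eval "$DIRCMD" | xargs -0 mkdir -p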

Related

Why does Rsync of tree structure into root break filesystem on Raspberry Pi?

I have developed an application which I am trying to install on a Raspberry Pi via a script. The directory structure I have is this:
pi@raspberrypi:~/inetdrm $ tree files.rpi/
files.rpi/
├── etc
│   └── config
│       └── inetdrm
├── lib
│   └── systemd
│       └── system
│           └── inetdrm.service
└── usr
    └── local
        └── bin
            └── inetdrm
When I try to install the tree structure onto the Pi with this install.sh script:
#! /bin/bash
FILES="./files.rpi"
sudo rsync -rlpt "$FILES/" /
sudo chmod 644 /lib/systemd/system/inetdrm.service
sudo chmod +x /usr/local/bin/inetdrm
#sudo systemctl start inetdrm.service
#sudo systemctl enable inetdrm.service
The filesystem on the Pi breaks. I lose all access to commands, and the script fails, as shown in this transcript.
pi@raspberrypi:~/inetdrm $ ./install.sh
./install.sh: line 4: /usr/bin/sudo: No such file or directory
./install.sh: line 5: /usr/bin/sudo: No such file or directory
pi@raspberrypi:~/inetdrm $ ls
-bash: /usr/bin/ls: No such file or directory
pi@raspberrypi:~/inetdrm $ pwd
/home/pi/inetdrm
pi@raspberrypi:~/inetdrm $ ls /
-bash: /usr/bin/ls: No such file or directory
pi@raspberrypi:~/inetdrm $
Rebooting the Pi results in a kernel panic due to no init. Does anyone know what's going on?
I encountered the same issue. It turns out rsync is not the right tool for the job. My solution was to deploy with the script below. Before writing a file to the target destination, it checks whether the file contents differ, so it won't overwrite files that are already there. You could even run this automatically on every reboot.
#!/usr/bin/env bash
FILES="files.rpi"

deploy_dir () {
    shopt -s nullglob dotglob
    for SRC in "${1}"/*; do
        # Strip files dir prefix to get destination path
        DST="${SRC#$FILES}"
        if [ -d "${SRC}" ]; then
            if [ -d "${DST}" ]; then
                # Destination directory already exists,
                # go one level deeper
                deploy_dir "${SRC}"
            else
                # Destination directory doesn't exist,
                # copy SRC dir (including contents) to DST
                echo "${SRC} => ${DST}"
                cp -r "${SRC}" "${DST}"
            fi
        else
            # Only copy if contents aren't the same.
            # File attributes (owner, execution bit etc.) aren't considered by cmp!
            # So if they change somehow, this deploy script won't correct them.
            # (Note: a bare `cmp || echo && cp` would always copy, because
            # `a || b && c` groups as `(a || b) && c`.)
            if ! cmp --silent "${SRC}" "${DST}"; then
                echo "${SRC} => ${DST}"
                cp "${SRC}" "${DST}"
            fi
        fi
    done
}

deploy_dir "${FILES}"
OK, so after a good night's sleep, I worked out what is going on.
Rsync doesn't just do a simple copy-or-replace operation. It first makes a temporary copy of what it is replacing and then moves that temporary copy into place. When doing a folder merge, it seems to do something similar, causing (in my case) all the binaries in the /usr/* tree to be replaced while some were still in use.
The solution: use --inplace, i.e.:
sudo rsync --inplace -rlpt "$FILES/" /
which causes rsync to work on the files (and directories, it seems) in their existing location rather than doing a copy-and-move.
I have tested the solution and confirmed it works, but I cannot find any explicit mention of how rsync handles a directory merge without the --inplace flag, so if someone can provide more info, that'd be great.
UPDATE: I found that even with --inplace the issue still occurs if rsync is interrupted for some reason. I'm not entirely certain about the inner workings of directory merge in rsync, so I have concluded that it may not be the best tool for this job. Instead I wrote my own deployment function. Here it is, in case anyone stumbling across this post finds it useful:
#! /bin/bash
FILES="files.rpi"

installFiles(){
    FILELIST=$(find "$1" -type f)
    for SRC in $FILELIST; do
        DEST="/$(echo "$SRC" | cut -f 2- -d/)"
        DIR=$(dirname "$DEST")
        if [ ! -d "$DIR" ]; then
            sudo mkdir -p "$DIR"
        fi
        echo "$SRC => $DEST"
        sudo cp "$SRC" "$DEST"
    done
}
installFiles "$FILES"

Understanding a docker entrypoint script

The script is located here: https://github.com/docker-library/ghost/blob/master/docker-entrypoint.sh
#!/bin/bash
set -e

if [[ "$*" == npm*start* ]]; then
    baseDir="$GHOST_SOURCE/content"
    for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
        targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
        mkdir -p "$targetDir"
        if [ -z "$(ls -A "$targetDir")" ]; then
            tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
        fi
    done

    if [ ! -e "$GHOST_CONTENT/config.js" ]; then
        sed -r '
            s/127\.0\.0\.1/0.0.0.0/g;
            s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
        ' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
    fi

    ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"

    chown -R user "$GHOST_CONTENT"
    set -- gosu user "$@"
fi

exec "$@"
From what I know, it says that if you run some variation of npm start, it moves some files around from $GHOST_SOURCE to $GHOST_CONTENT, does something to the config.js file, links the config file, sets ownership of the content files, and then executes npm start as the user user. Otherwise, it just runs your commands normally.
The specifics are what are hard for me to understand because there are a lot of things from bash that I've never seen before. So I have a lot of questions.
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
In the above, why do they specify both /*/ and /themes/*/? Shouldn't /*/ contain themes? Is * not a wildcard for some reason?
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
In the above, what is the point of # in the variable expansion?
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
In the above, does this somehow save time? Why not use something like rsync? I understand the point of -C, but why -c and --one-file-system?
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
What does this sed command do? I know it's a replacement, but why the "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js" as the end?
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
In the above, what is the point of this symlink? Why try to link them to each other if both files already exist?
set -- gosu user "$@"
In the above what does calling set with no args do?
I hope that's not too much. I felt that making a separate question for each of these would be overkill, especially since it's all related.
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
In the above, why do they specify both /*/ and /themes/*/? Shouldn't
/*/ contain themes? Is * not a wildcard for some reason?
themes/ is in the first match, but themes/*/ is not, so you need the second entry to include the contents of themes.
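A quick illustration with hypothetical directory names:

$ mkdir -p base/a base/themes/x
$ echo base/*/
base/a/ base/themes/
$ echo base/themes/*/
base/themes/x/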
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
In the above, what is the point of # in the variable expansion?
It removes the $baseDir prefix from $dir. So for example:
bash$ dir=/home/bmitch/data/docker
bash$ echo $dir
/home/bmitch/data/docker
bash$ echo ${dir#/home/bmitch}
/data/docker
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
In the above, does this somehow save time? Why not use something like
rsync? I understand the point of -C, but why -c and --one-file-system?
rsync may not be installed on every machine by default, while tar is fairly universal. The -c is to create (vs. extract), and --one-file-system keeps tar from continuing into an outside mount point (NFS, a symlink to root, etc.).
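A sketch of the tar-pipe pattern itself, with hypothetical directory names:

# Copy the contents of srcdir into dstdir
mkdir -p srcdir dstdir
tar -c --one-file-system -C srcdir . | tar -x -C dstdir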
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
What does this sed command do? I know it's a replacement, but why the
"$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js" as the
end?
config.example.js is the input (the last argument to sed); config.js is the output (after the >). So it takes config.example.js and changes the IP address from 127.0.0.1 to 0.0.0.0, effectively listening on all interfaces/IPs instead of just internally on the loopback. The second half of the sed changes the path.join arguments from __dirname to process.env.GHOST_CONTENT.
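For example, on a made-up input line:

$ echo 'url: http://127.0.0.1:2368' | sed -r 's/127\.0\.0\.1/0.0.0.0/g'
url: http://0.0.0.0:2368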
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
In the above, what is the point of this symlink? Why try to link them
to each other if both files already exist?
The $GHOST_SOURCE/config.js is replaced (-f) with a link to $GHOST_CONTENT/config.js. A symbolic link gives a file name that references another actual file, so there will be two names but one copy of the data, which means you will only ever have a single configuration in this situation.
set -- gosu user "$@"
In the above what does calling set with no args do?
This changes the values of $1, $2, ... $n to be $1=gosu, $2=user, $3=the old $1, $4=the old $2, ..., essentially prepending gosu and user to the parameters passed to the script. The -- makes sure that set doesn't interpret any of the values from $@ as flags for itself.
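A transcript-style illustration (with hypothetical positional parameters):

$ set -- npm start
$ set -- gosu user "$@"
$ echo "$@"
gosu user npm start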

Can I make parallel sub-directories under a mother-directory without changing into that mother-directory?

I have been learning basic terminal commands these days.
I found that I can make parallel directories by adding a space between them:
mkdir dir_a dir_b
But when I try to make parallel directories under a mother-directory, this fails:
mkdir dir_a/dir_a_1 dir_a_2
# Failed: dir_a_2 is created at the top level
Is there a way that I can make parallel sub-directories under a mother-directory without changing into that mother-directory (without using cd)?
Each argument to mkdir is rooted in the current directory. The behavior you are seeing is intentional; there's no special treatment of the first argument to mkdir.
However, there are several options available to achieve the result you are looking for.
You could use a loop:
for f in dir_a_1 dir_a_2; do mkdir -p "dir_a/$f"; done
You could use pushd:
mkdir dir_a; pushd dir_a; mkdir dir_a_1 dir_a_2; popd
You could use printf and command substitution:
mkdir -p $(printf 'dir_a/%s ' dir_a_1 dir_a_2)
You could use printf and xargs:
printf 'dir_a/%s ' dir_a_1 dir_a_2 | xargs mkdir -p
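You could also use brace expansion, which the shell expands before mkdir even runs:
mkdir -p dir_a/{dir_a_1,dir_a_2}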
Any of these should work for you.

How to wrap multiple subdirectories mkdir command

I want to create multiple subdirectories.
My command is:
mkdir -p dir1/{dir1.1/{dir1.1.1,dir1.1.2},dir1.2,dir1.3}
It works; the result is:
dir1
    dir1.1
        dir1.1.1
        dir1.1.2
    dir1.2
    dir1.3
However, I want to make this command look nicer (more readable). I tried:
mkdir -p \
    dir1/{\
        dir1.1/{\
            dir1.1.1,\
            dir1.1.2},\
        dir1.2,\
        dir1.3}
And this doesn't work. Result is:
ls *
dir1 dir1.1 dir1.1.1, dir1.1.2}, dir1.2, dir1.3}
How can I wrap such mkdir command?
Try the following:
eval mkdir -p `echo \
    dir1/{\
        dir1.1/{\
            dir1.1.1,\
            dir1.1.2},\
        dir1.2,\
        dir1.3}\
    | sed -E 's/\s*//g'`
Explanation: Your original code introduces spaces into the parameter, so instead of calling
mkdir -p dir1/{dir1.1/{dir1.1.1,dir1.1.2},dir1.2,dir1.3}
You are actually calling the command with the following parameters:
mkdir -p dir1/{ dir1.1/{ dir1.1.1, dir1.1.2}, dir1.2, dir1.3}
And this is why you got the wrong directories created. Therefore, to solve this, I first stripped the whitespace using sed and then used eval to evaluate the resulting command. This should work for simple cases, but some special characters within the directory names (such as whitespace) may cause issues.
Hope this helps!
If you want readable, just call mkdir multiple times. I doubt that directory creation is going to form any kind of bottleneck in your program.
mkdir dir1
mkdir -p dir1/dir1.{1,2,3}
mkdir -p dir1/dir1.1/dir1.1.{1,2}
The problem is the whitespace in the beginning of each line, which causes the lines to be treated as different arguments of the mkdir command. To overcome this, you can do:
mkdir -p \
dir1/{\
dir1.1/{\
dir1.1.1,\
dir1.1.2},\
dir1.2,\
dir1.3}
with no whitespace in the beginning. Whether this is more readable than the first command is debatable.
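Whichever form you use, a quick way to check what the shell will actually pass to mkdir is to echo the expression first:

echo dir1/{dir1.1/{dir1.1.1,dir1.1.2},dir1.2,dir1.3}
# dir1/dir1.1/dir1.1.1 dir1/dir1.1/dir1.1.2 dir1/dir1.2 dir1/dir1.3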

bash for semantic file structure creation

Update 2010-11-02 7p: Shortened description; posted initial bash solution.
Description
I'd like to create a semantic file structure to better organize my data. I don't want to go a route like recoll, strigi, or beagle; I want no GUI and full control. The closest might be oyepa, or even closer, Tagsistant.
Here's the idea: one maintains a "regular" tree of their files. For example, mine are organized in project folders like this:
,---
| ~/proj1
| ---- ../proj1_file1[tag1-tag2].ext
| ---- ../proj1_file2[tag3]_yyyy-mm-dd.ext
| ~/proj2
| ---- ../proj2_file3[tag2-tag4].ext
| ---- ../proj1_file4[tag1].ext
`---
proj1, proj2 are very short abbreviations I have for my projects.
Then what I want to do is recursively go through the directory and get the following:
proj ID
tags
extension
Together, these form a complete "tag list" for each file.
Then in a user-defined directory, a "semantic hierarchy" will be created based on these tags. This gets a bit long, so just take a look at the directory structure created for all files containing tag2 in the name:
,---
| ~/tag2
| --- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| ---../tag1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../tag4
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| --- ../proj1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
`---
In other words, directories are created with all combinations of a file's tags, and each contains a symlink to the actual files having those tags. I have omitted the file-type directories, but these would also exist. It looks really messy in type, but I think the effect would be very cool. One could then find a given file along a number of "tag breadcrumbs."
My thoughts so far:
ls -R in a top directory to get all the file names
identify those files with a [ and ] in the filename (tagged files)
with what's left, enter a loop:
strip out the proj ID, tags, and extension
create all the necessary dirs based on the tags
create symlinks to the file in all of the dirs created
First Solution! 2010-11-3 7p
Here's my current working code. It only works on files in the top-level directory, does not figure out extension types yet, and only handles 2 tags plus the project ID, for a total of 3 tags per file. It is a hacked, manual-chug solution, but maybe it will help someone see what I'm doing and how this could be muuuuch better:
#!/bin/bash

########################
#### User Variables ####
########################

## set top directory for the semantic filer
## example: ~/semantic
## result will be ~/semantic/tag1, ~/semantic/tag2, etc.
top_dir=~/Desktop/semantic

## set document extensions, space separated
## example: "doc odt txt"
doc_ext="doc odt txt"

## set presentation extensions, space separated
pres_ext="ppt odp pptx"

## set image extensions, space separated
img_ext="jpg png gif"

#### End User Variables ####

#####################
#### Begin Script####
#####################

cd $top_dir
ls -1 | (while read fname;
do
    if [[ $fname == *[* ]]
    then
        tag_names=$( echo $fname | sed -e 's/-/ /g' -e 's/_.*\[/ /' -e 's/\].*$//' )
        num_tags=$(echo $tag_names | wc -w)
        current_tags=( `echo $tag_names | sed -e 's/ /\n/g'` )
        echo ${current_tags[0]}
        echo ${current_tags[1]}
        echo ${current_tags[2]}
        case $num_tags in
            3)
                mkdir -p ./${current_tags[0]}/${current_tags[1]}/${current_tags[2]}
                mkdir -p ./${current_tags[0]}/${current_tags[2]}/${current_tags[1]}
                mkdir -p ./${current_tags[1]}/${current_tags[0]}/${current_tags[2]}
                mkdir -p ./${current_tags[1]}/${current_tags[2]}/${current_tags[0]}
                mkdir -p ./${current_tags[2]}/${current_tags[0]}/${current_tags[1]}
                mkdir -p ./${current_tags[2]}/${current_tags[1]}/${current_tags[0]}
                cd $top_dir/${current_tags[0]}
                echo $PWD
                ln -s $top_dir/$fname
                ln -s $top_dir/$fname ./${current_tags[1]}/$fname
                ln -s $top_dir/$fname ./${current_tags[2]}/$fname
                cd $top_dir/${current_tags[1]}
                echo $PWD
                ln -s $top_dir/$fname
                ln -s $top_dir/$fname ./${current_tags[0]}/$fname
                ln -s $top_dir/$fname ./${current_tags[2]}/$fname
                cd $top_dir/${current_tags[2]}
                echo $PWD
                ln -s $top_dir/$fname
                ln -s $top_dir/$fname ./${current_tags[0]}/$fname
                ln -s $top_dir/$fname ./${current_tags[1]}/$fname
                cd $top_dir
                ;;
        esac
    fi
done
)
It's actually pretty neat. If you want to try it, do this:
create a dir somewhere
use touch to create a bunch of files with the format above: proj_name[tag1-tag2].ext
define the top_dir variable
run the script
play around!
ToDo
make this work using an "ls -R" in order to get into sub-dirs in my actual tree
robustness check
consider switching languages; hey, I've always wanted to learn perl and/or python!
Still open to any suggestions you have. Thanks!
Hmm, big problem, too big to do on a short break...
But I can give you an example of one of the various ways you could structure the script...
#!/bin/sh
ls -1 / | (while read fname; do
    echo "$fname"
    test=hello
    # example transformation...
    test2=`echo $fname | tr a-z A-Z`
    echo "$test2"
done
echo post-loop processing here, $test
# then finally close the subshell with a right paren
)
Maybe something like this for each tag?
find . -type f|grep -Z "[[-]$tag[]-]"| \
xargs -0 -I %%% ln -s "../../%%%" "tagfolder/$tag/"
Note: the second line doesn't really work; I don't know why.
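A likely culprit (an editor's guess, not confirmed in the thread): GNU grep's -Z/--null only NUL-terminates file names, such as those printed by -l; it does not NUL-terminate matched lines read from a pipe, so xargs -0 never sees a delimiter. A sketch that converts the newlines to NULs by hand (it assumes $tag is set, tagfolder/$tag exists, and file names contain no newlines):

find . -type f | grep "[[-]$tag[]-]" | tr '\n' '\0' | \
    xargs -0 -I %%% ln -s "../../%%%" "tagfolder/$tag/"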
