Say you had the following directory structure:
# directory structure
├── GIT-REPO
│ ├── dev
│ ├── production
│ ├── mgmt
I'm looking for a way in a Makefile to find the environment based on what directory it is living in. I found a way to do this in bash with the following:
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IFS='/' read -r -a DIR_ARRAY <<< "$DIR"
GIT_REPO=some-repo
for ((i=0; i < ${#DIR_ARRAY[@]}; i++)); do
    if [ "${DIR_ARRAY[$i]}" = "$GIT_REPO" ]; then
        echo "${DIR_ARRAY[$i+1]}"
    fi
done
But I'm having a hard time translating this into a Makefile. Each of these environment directories will have a Makefile as well as subdirectories. I want to be able to dynamically look up what environment it is under by finding the name of the directory to the right of the $GIT_REPO directory.
So here's an example:
/home/user/git_repo/mgmt
/home/user/git_repo/prod
/home/user/git_repo/prod/application/
/home/user/git_repo/dev/
/home/user/my_source_files/git_repo/prod/application
You'll see there are some similarities, but the overall depth of the directories differs. They all share git_repo, and all contain an environment (prod, dev, mgmt). At the top level of each directory above is a Makefile from which I want to pull the environment. My bash example was more complicated than it needed to be; sed works instead. This is what is in my Makefile now:
GIT_REPO="my_repo"
ENV=$(shell pwd | sed "s/^.*\/$(GIT_REPO)\///" | cut -d / -f 1)
What this does is look for the Git repository name and strip it, along with any leading directories before it. Then cut splits the remainder on '/' and grabs the first field. This always returns the environment directory.
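Run outside Make, the same pipeline can be sanity-checked against the sample paths above (repo name assumed to be git_repo here):

```shell
GIT_REPO="git_repo"
for p in /home/user/git_repo/mgmt \
         /home/user/git_repo/prod/application \
         /home/user/my_source_files/git_repo/prod/application; do
  # strip everything up to and including "git_repo/", keep first component
  echo "$p" | sed "s/^.*\/${GIT_REPO}\///" | cut -d / -f 1
done
# prints: mgmt, prod, prod
```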
I have a very specific use case where I want to dynamically get the environment in my Makefile rather than statically defining it each time.
I have the following directory structure
mkdir -p joe/{0,1}
mkdir -p john/0
tree
.
├── joe
│ ├── 0
│ └── 1
└── john
└── 0
And I want to call a program for each entry, in this case the program should be called 3 times, for program joe/0, program joe/1 and program john/0
How can I do this in pure bash script?
Thanks in advance.
Loop through ./*/*/.
for arg in ./*/*/; do
    program "$arg"
    # if you don't want the trailing slash:
    # program "${arg%/}"
done
This is called filename expansion (pathname expansion), and is documented in the bash manual.
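One caveat worth guarding against: if the pattern matches nothing, bash passes the literal string ./*/*/ through to the loop. Enabling nullglob makes the loop simply run zero times instead; a sketch (echo stands in for the real program):

```shell
shopt -s nullglob
for arg in ./*/*/; do
  # with nullglob set, this body never runs when there are no matches
  echo "program ${arg%/}"
done
```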
I have developed an application which I am trying to install on raspberry pi via a script. The directory structure I have is this:
pi@raspberrypi:~/inetdrm $ tree files.rpi/
files.rpi/
├── etc
│ └── config
│ └── inetdrm
├── lib
│ └── systemd
│ └── system
│ └── inetdrm.service
└── usr
└── local
└── bin
└── inetdrm
When I try to install the tree structure onto the pi with this install.sh script:
#! /bin/bash
FILES="./files.rpi"
sudo rsync -rlpt "$FILES/" /
sudo chmod 644 /lib/systemd/system/inetdrm.service
sudo chmod +x /usr/local/bin/inetdrm
#sudo systemctl start inetdrm.service
#sudo systemctl enable inetdrm.service
The filesystem on the pi breaks. I lose all access to commands, and the script fails, as shown in this transcript.
pi@raspberrypi:~/inetdrm $ ./install.sh
./install.sh: line 4: /usr/bin/sudo: No such file or directory
./install.sh: line 5: /usr/bin/sudo: No such file or directory
pi@raspberrypi:~/inetdrm $ ls
-bash: /usr/bin/ls: No such file or directory
pi@raspberrypi:~/inetdrm $ pwd
/home/pi/inetdrm
pi@raspberrypi:~/inetdrm $ ls /
-bash: /usr/bin/ls: No such file or directory
pi@raspberrypi:~/inetdrm $
Rebooting the pi results in kernel panic due to no init. Does anyone know what's going on?
I encountered the same issue. It turns out rsync is not the right tool for the job. My solution was to deploy with the script below. Before writing a file to the target destination, it checks whether the file contents differ, so it won't overwrite files that are already in place. You could even run this automatically on every reboot.
#!/usr/bin/env bash
FILES="files.rpi"

deploy_dir () {
    shopt -s nullglob dotglob
    for SRC in "${1}"/*; do
        # Strip files dir prefix to get destination path
        DST="${SRC#$FILES}"
        if [ -d "${SRC}" ]; then
            if [ -d "${DST}" ]; then
                # Destination directory already exists,
                # go one level deeper
                deploy_dir "${SRC}"
            else
                # Destination directory doesn't exist,
                # copy SRC dir (including contents) to DST
                echo "${SRC} => ${DST}"
                cp -r "${SRC}" "${DST}"
            fi
        else
            # Only copy if contents aren't the same.
            # File attributes (owner, execution bit etc.) aren't considered by cmp!
            # So if they change somehow, this deploy script won't correct them.
            if ! cmp --silent "${SRC}" "${DST}"; then
                echo "${SRC} => ${DST}"
                cp "${SRC}" "${DST}"
            fi
        fi
    done
}

deploy_dir "${FILES}"
Ok, so after a good night's sleep, I worked out what is going on.
Rsync doesn't just do a simple copy-or-replace operation. It first writes a temporary copy of the replacement file and then moves that temporary copy into place. When doing a folder merge it seems to do something similar, causing (in my case) all the binaries in the /usr/* tree to be replaced while some were still in use.
The solution:
use --inplace
ie:
sudo rsync --inplace -rlpt "$FILES/" /
which causes rsync to work on the files (and directories, it seems) in their existing location rather than doing a copy-and-move.
I have tested the solution and confirmed it works, but I can not find any explicit mention of how rsync handles directory merge without the --inplace flag, so if someone can provide more info, that'd be great.
UPDATE: I found that when using --inplace the issue still occurs if rsync is interrupted for some reason. I'm not entirely certain about the inner workings of directory merge in rsync, so I have concluded that it may not be the best tool for this job. Instead I wrote my own deployment function. Here it is in case anyone stumbling across this post finds it useful:
#! /bin/bash
FILES="files.rpi"

installFiles() {
    # Note: find's output is word-split here, so this assumes
    # there is no whitespace in the file names.
    FILELIST=$(find "$1" -type f)
    for SRC in $FILELIST; do
        DEST="/$(echo "$SRC" | cut -f 2- -d/)"
        DIR=$(dirname "$DEST")
        if [ ! -d "$DIR" ]; then
            sudo mkdir -p "$DIR"
        fi
        echo "$SRC => $DEST"
        sudo cp "$SRC" "$DEST"
    done
}

installFiles "$FILES"
The bash (thank you @Charles Duffy) stores each unique ID in an array and then passes them to %q to get the unique path. That seems to work; what I am having trouble with is renaming each .png with the unique value in %q. I thought it was working, but upon closer inspection only one .png file is being sent to scp, and with the wrong unique ID: the first .png is used, but with the last unique ID. In this example there are two IDs, but there may be more or fewer. I added a loop and that did not seem to work. I am at a loss. Thank you :).
I hope this helps, and thank you :).
├──/path/to/ ---- common path after ssh ---
│ ├── ID1* --- unique %q represents the unique id and * represents random text after it ---
│ │ └── /%q*/folder
│ ├── ID2* --- unique %q represents the unique id and * represents random text after it ---
│ │ └── /%q/folder
Description:
After ssh to the common path on the server, each unique ID from %q is used to navigate further to folder. In each folder there is a png (cn_results) that the unique ID from %q is appended to (ID-cn_results), and this appended file is copied via scp to xxx@xxx.xx.xx.xxx:/path/to/%q*/destination.
declare -p array='([0]="ID1" [1]="ID2")' --- this is where the rename values are ---
current output in each /path/to/%q*/folder --- on the server---
cn_results.png
desired output in each /path/to/%q*/destination after scp
uniqueid1-cn_results.png
uniqueid2-cn_results.png
I can manually ssh into directory and the .png is there, though it is only cn_results before the scp where it is renamed/ append with the array value and then scp. I tried to add the loop to scp and rename as such:
printf -v cmd_q '(cd /path/to/%q*/*/folder && for ID in "${array[@]}" ; do exec sshpass -f file.txt scp "$ID" xxx@xxx.xx.xx.xxx:path/to/destination/${ID}-cn_results.png)\n' "${array[@]}" ; done
sshpass -f out.txt ssh -o strictHostKeyChecking=no -t xxx@xxx.xx.xx.xx "$cmd_q"
Here's a command that I think might produce the desired results. It copies every file named cn_results.png under source-folder to the target folder, prepending a unique id generated from $(cat /proc/sys/kernel/random/uuid):
find ./source-folder -name 'cn_results.png' -exec sh -c 'cp "$1" "./target-folder/$(cat /proc/sys/kernel/random/uuid)-cn_results.png"' _ {} \;
Starting with this directory structure:
$ tree
.
├── 1
│ └── 2
│ └── foo.jar
└── a
└── b
└── c
└── setAlias
The goal is to come up with the contents of setAlias, so I can source the file, and it will create an alias that runs java -jar /absolute/path/to/foo.jar
Here's what I have so far:
FOO="java -jar $(realpath $(dirname $_)/../../../1/2/foo.jar)"
echo "Setting Alias:"
echo " foo -> $FOO"
alias foo='$FOO'
If I source setAlias from its own directory, everything works fine. But if I source it from the root directory, I have to run it twice before the absolute path is resolved:
$ source a/b/c/setAlias
realpath: ./../../../1/2/foo.jar: No such file or directory
Setting Alias:
foo -> java -jar
$ source a/b/c/setAlias
Setting Alias:
foo -> java -jar /home/MatrixManAtYrService/1/2/foo.jar
If I do this from ./a/b/c the path is resolved on the first try.
What is happening here? Why does realpath take two tries to find the file?
This is a very strange thing to do, but it's easily explained. Here's an excerpt from man bash under Special Parameters
$_ [..] expands to the last argument to the previous command, after expansion. [...]
In other words, it refers to the last argument of the most recently executed command:
$ echo foo bar baz
foo bar baz
$ echo $_
baz
In your case, you run some arbitrary command not shown in your post, followed by source twice:
$ true foobar # $_ now becomes "foobar"
$ source a/b/c/setAlias # fails because $_ is "foobar", sets $_ to a/b/c/setAlias
$ source a/b/c/setAlias # works because $_ is now "a/b/c/setAlias"
In other words, your source will only work when preceded by a command that uses the value you require of $_ as its last argument. This could be anything:
$ wc -l a/b/c/setAlias # also sets $_ to a/b/c/setAlias
4
$ source a/b/c/setAlias # Works, because $_ is set to the expected value
Maybe you wanted to get the current script's path instead?
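If the goal is the path of the sourced file itself, BASH_SOURCE avoids the $_ trap entirely. A sketch of setAlias rewritten this way (bash-specific):

```shell
# BASH_SOURCE[0] names the file being sourced, regardless of what the
# previous command was, so this resolves on the first try from any
# directory:
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
FOO="java -jar $(realpath "$DIR/../../../1/2/foo.jar")"
echo "Setting Alias:"
echo "  foo -> $FOO"
alias foo="$FOO"
```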
Okay,
I would like to do the following in the shell.
If I am in a subdir like css which is inside a subdir like mypage (e.g. projects/mypage/htdocs/css), I would like to go to the root dir of the project, which is mypage. I would like to write this as a function to use as a command. The only "fixed" value is projects.
So basically, if I am within any subdir of projects in the shell and I type the command goroot (or whatever), I want the function to check that it is in fact inside a subdir of projects and, if so, jump to that project's root dir.
E.g.
~/projects/mypage/htdocs/css › goroot [hit return]
~/projects/mypage > [jumped to here]
Is this at all possible and if so how could I achieve this?
Assuming I am understanding correctly, this should work:
goroot() { cd $(sed -r 's#(~/projects/[^/]*)/.*#\1#' <<< $PWD); }
This sed command effectively strips off everything after ~/projects/SOMETHING and then changes to that directory. If you're not in ~/projects/ then it will leave you in the current directory.
Note: this assumes that $PWD uses the ~ to denote home, if it is something like /home/user/ then amend the sed command appropriately.
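The substitution can be checked in isolation by feeding it sample paths (GNU sed assumed for -r):

```shell
# Paths inside ~/projects/<name>/... collapse to ~/projects/<name>;
# anything else passes through unchanged.
for p in '~/projects/mypage/htdocs/css' '~/projects/mypage' '~/other/dir'; do
  sed -r 's#(~/projects/[^/]*)/.*#\1#' <<< "$p"
done
# prints: ~/projects/mypage, ~/projects/mypage, ~/other/dir
```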
projroot=/home/user/projects

goroot() {
    local d=$PWD
    # Strip off project root prefix.
    local m=${d#$projroot/}
    if [ "$m" = "$d" ]; then
        echo "Not in ~/projects"
        return
    fi
    # Strip off project directory.
    local suf=${m#*/}
    if [ "$suf" = "$m" ]; then
        echo "Already in project root."
        return
    fi
    # cd to concatenation of project root, and project directory (stripped of sub-project path).
    cd "$projroot/${m%/$suf}"
}
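The two parameter expansions doing the work can be traced on a sample path:

```shell
projroot=/home/user/projects
d=/home/user/projects/mypage/htdocs/css
m=${d#$projroot/}    # strip the root prefix -> mypage/htdocs/css
suf=${m#*/}          # strip the project dir -> htdocs/css
echo "$projroot/${m%/$suf}"
# prints: /home/user/projects/mypage
```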