Why does sourcing this file behave differently the second time? - bash

Starting with this directory structure:
$ tree
.
├── 1
│   └── 2
│       └── foo.jar
└── a
    └── b
        └── c
            └── setAlias
The goal is to come up with the contents of setAlias, so I can source the file, and it will create an alias that runs java -jar /absolute/path/to/foo.jar
Here's what I have so far:
FOO="java -jar $(realpath $(dirname $_)/../../../1/2/foo.jar)"
echo "Setting Alias:"
echo " foo -> $FOO"
alias foo='$FOO'
If I source setAlias from its own directory, everything works fine. But if I source it from the root directory, I have to run it twice before the absolute path is resolved:
$ source a/b/c/setAlias
realpath: ./../../../1/2/foo.jar: No such file or directory
Setting Alias:
foo -> java -jar
$ source a/b/c/setAlias
Setting Alias:
foo -> java -jar /home/MatrixManAtYrService/1/2/foo.jar
If I do this from ./a/b/c the path is resolved on the first try.
What is happening here? Why does realpath take two tries to find the file?

This is a very strange thing to do, but it's easily explained. Here's an excerpt from man bash under Special Parameters:
$_ [...] expands to the last argument to the previous command, after expansion. [...]
In other words, it refers to the last argument of the most recently executed command:
$ echo foo bar baz
foo bar baz
$ echo $_
baz
In your case, you run some arbitrary command not shown in your post, followed by source twice:
$ true foobar # $_ now becomes "foobar"
$ source a/b/c/setAlias # fails because $_ is "foobar", sets $_ to a/b/c/setAlias
$ source a/b/c/setAlias # works because $_ is now "a/b/c/setAlias"
In other words, your source will only work when the preceding command's last argument happens to be the value you need $_ to hold. This could be anything:
$ wc -l a/b/c/setAlias # also sets $_ to a/b/c/setAlias
4 a/b/c/setAlias
$ source a/b/c/setAlias # Works, because $_ is set to the expected value
Maybe you wanted to get the current script's path instead?
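If so, here's a minimal sketch of setAlias using BASH_SOURCE instead of $_ (BASH_SOURCE[0] names the file being sourced, regardless of what command ran beforehand), assuming the same layout as above:
# Resolve the jar relative to this file rather than relying on $_
SCRIPT_DIR=$(dirname "${BASH_SOURCE[0]}")
FOO="java -jar $(realpath "$SCRIPT_DIR/../../../1/2/foo.jar")"
echo "Setting Alias:"
echo " foo -> $FOO"
alias foo="$FOO"   # double quotes so the alias captures the expanded command now
This works on the first source from any directory, because BASH_SOURCE[0] is set by bash itself rather than left over from the previous command.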

If/elif statement only works when command is inserted between semicolon and then

I'm seeing strange behaviour with a bash script I'm writing at the moment.
When running an if/elif statement, the "then" portion only triggers when I put a command between the semicolon and then.
The below code is the original, where the elif does not trigger
#!/bin/bash
set -e
echo "Checking all policies have test cases in directory: $1"
cd $1 # $1 is the first argument passed to this script, in this case the output of "pwd"
set +e
for policy in *.sentinel; do
    test_dir=$(echo $policy | awk -F. '{print $1}')
    if [ ! -d "test/$test_dir" ]; then
        echo "No test cases for policy $policy in subfolder $test_dir"
    else
        cd test/$test_dir
        if ! ls fail.hcl >/dev/null || ! ls pass.hcl >/dev/null; then
            echo "Missing at least one of fail*.hcl, pass*.hcl or mock*.sentinel for policy $policy in subfolder $test_dir"
        elif egrep -ir --include=*.sentinel "tfconfig/v2" .
        then
            cp ../../../sentinel_mocks/mock-tfconfig-v2.sentinel ./test-mock-tfconfig-pass-v2.sentinel
        fi
        cd ../../
    fi
done
And the below is the same code with a command inserted between the semicolon and then; with that line present, the elif does trigger:
#!/bin/bash
set -e
echo "Checking all policies have test cases in directory: $1"
cd $1 # $1 is the first argument passed to this script, in this case the output of "pwd"
set +e
for policy in *.sentinel; do
    test_dir=$(echo $policy | awk -F. '{print $1}')
    if [ ! -d "test/$test_dir" ]; then
        echo "No test cases for policy $policy in subfolder $test_dir"
    else
        cd test/$test_dir
        if ! ls fail.hcl >/dev/null || ! ls pass.hcl >/dev/null; then
            echo "Missing at least one of fail*.hcl, pass*.hcl or mock*.sentinel for policy $policy in subfolder $test_dir"
        elif egrep -ir --include=*.sentinel "tfconfig/v2" .
            echo hello # if this line isn't here then the below does not run
        then
            cp ../../../sentinel_mocks/mock-tfconfig-v2.sentinel ./test-mock-tfconfig-pass-v2.sentinel
        fi
        cd ../../
    fi
done
The folder structure is as follows; the script is being run from policy_set_1 with the command ./test.sh .
policy_set_1
├── authorised_vpc.sentinel
└── test
    ├── authorised_vpc
    │   ├── fail.hcl
    │   ├── mock-tfconfig-fail-v2.sentinel
    │   ├── mock-tfconfig-pass-v2.sentinel
    │   ├── pass.hcl
    │   └── sentinel.hcl
I've never seen this behaviour before so any insight would be appreciated.
I realised the egrep command I was using was looking in the wrong folder.
I amended it to the following and it worked:
egrep -ir --include=*.sentinel "tfconfig/v2" ../../$policy
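For what it's worth, the reason the inserted command changed the behaviour is that everything between elif and then is a command list, and the list's exit status is that of its last command. With echo hello in last place the status is always 0, so the branch runs regardless of what egrep found. A minimal illustration:
# The if/elif condition is the whole list before "then";
# its exit status is that of the last command in the list.
if false; echo hello; then
    echo "this branch always runs, because echo succeeded"
fi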

Why does Rsync of tree structure into root break filesystem on Raspberry Pi?

I have developed an application which I am trying to install on a Raspberry Pi via a script. The directory structure I have is this:
pi@raspberrypi:~/inetdrm $ tree files.rpi/
files.rpi/
├── etc
│   └── config
│       └── inetdrm
├── lib
│   └── systemd
│       └── system
│           └── inetdrm.service
└── usr
    └── local
        └── bin
            └── inetdrm
When I try to install the tree structure onto the Pi with this install.sh script:
#! /bin/bash
FILES="./files.rpi"
sudo rsync -rlpt "$FILES/" /
sudo chmod 644 /lib/systemd/system/inetdrm.service
sudo chmod +x /usr/local/bin/inetdrm
#sudo systemctl start inetdrm.service
#sudo systemctl enable inetdrm.service
The filesystem on the Pi breaks. I lose all access to commands and the script fails, as shown in this transcript.
pi@raspberrypi:~/inetdrm $ ./install.sh
./install.sh: line 4: /usr/bin/sudo: No such file or directory
./install.sh: line 5: /usr/bin/sudo: No such file or directory
pi@raspberrypi:~/inetdrm $ ls
-bash: /usr/bin/ls: No such file or directory
pi@raspberrypi:~/inetdrm $ pwd
/home/pi/inetdrm
pi@raspberrypi:~/inetdrm $ ls /
-bash: /usr/bin/ls: No such file or directory
pi@raspberrypi:~/inetdrm $
Rebooting the Pi results in a kernel panic due to no init. Does anyone know what's going on?
I encountered the same issue. It turns out rsync is not the right tool for the job here. My solution was to deploy with the script below. Before writing each file to its target destination, it checks whether the contents differ, so it won't overwrite files that are already in place. You could even run it automatically on every reboot.
#!/usr/bin/env bash
FILES="files.rpi"

deploy_dir () {
    shopt -s nullglob dotglob
    for SRC in "${1}"/*; do
        # Strip files dir prefix to get destination path
        DST="${SRC#$FILES}"
        if [ -d "${SRC}" ]; then
            if [ -d "${DST}" ]; then
                # Destination directory already exists,
                # go one level deeper
                deploy_dir "${SRC}"
            else
                # Destination directory doesn't exist,
                # copy SRC dir (including contents) to DST
                echo "${SRC} => ${DST}"
                cp -r "${SRC}" "${DST}"
            fi
        else
            # Only copy if contents aren't the same.
            # File attributes (owner, execution bit etc.) aren't considered by cmp!
            # So if they change somehow, this deploy script won't correct them.
            if ! cmp --silent "${SRC}" "${DST}"; then
                echo "${SRC} => ${DST}"
                cp "${SRC}" "${DST}"
            fi
        fi
    done
}

deploy_dir "${FILES}"
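Note that the prefix strip DST="${SRC#$FILES}" only produces absolute destination paths when the script is run from the directory containing files.rpi; for example, files.rpi/etc/config/inetdrm becomes /etc/config/inetdrm.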
OK, so after a good night's sleep, I worked out what is going on.
Rsync doesn't just do a simple copy or replace operation. It first writes a temporary copy of each file it is transferring and then moves that temporary copy into place. When doing a folder merge it seems to do something similar, causing (in my case) the binaries in the /usr/* tree to be replaced while some of them were still in use.
The solution: use --inplace, i.e.
sudo rsync --inplace -rlpt "$FILES/" /
which causes rsync to work on the files (and, it seems, directories) in their existing location rather than doing a copy-and-move.
I have tested the solution and confirmed it works, but I cannot find any explicit mention of how rsync handles directory merges without the --inplace flag, so if someone can provide more info, that'd be great.
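As a general precaution (not something from the original answer), rsync's dry-run mode can preview what such a command would touch before it is let loose on /:
# Show what would be transferred or overwritten, without changing anything
sudo rsync -rlpt --dry-run --itemize-changes "$FILES/" /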
UPDATE: I found that when using --inplace the issue still occurs if rsync is interrupted for some reason. I'm not entirely certain about the inner workings of directory merge in rsync, so I have concluded that it may not be the best tool for this job. Instead I wrote my own deployment function. Here it is in case anyone stumbling across this post finds it useful:
#! /bin/bash
FILES="files.rpi"

installFiles() {
    FILELIST=$(find "$1" -type f)
    for SRC in $FILELIST; do
        DEST="/$(echo "$SRC" | cut -f 2- -d/)"
        DIR=$(dirname "$DEST")
        if [ ! -d "$DIR" ]; then
            sudo mkdir -p "$DIR"
        fi
        echo "$SRC => $DEST"
        sudo cp "$SRC" "$DEST"
    done
}

installFiles "$FILES"

Makefile: Reading/splitting an array

Say you had the following directory structure:
# directory structure
├── GIT-REPO
│   ├── dev
│   ├── production
│   └── mgmt
I'm looking for a way in a Makefile to find the environment based on what directory it is living in. I found a way to do this in bash with the following:
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
IFS='/' read -r -a DIR_ARRAY <<< "$DIR"
GIT_REPO=some-repo
for ((i=0; i < ${#DIR_ARRAY[@]}; i++)); do
    if [ "${DIR_ARRAY[$i]}" = "$GIT_REPO" ]; then
        echo "${DIR_ARRAY[$i+1]}"
    fi
done
But I'm having a hard time translating this into a Makefile. Each of these environment directories will have a Makefile as well as subdirectories. I want to be able to dynamically look up what environment it is under by finding the name of the directory to the right of the $GIT_REPO directory.
So here's an example:
/home/user/git_repo/mgmt
/home/user/git_repo/prod
/home/user/git_repo/prod/application/
/home/user/git_repo/dev/
/home/user/my_source_files/git_repo/prod/application
You'll see there are some similarities, but the directories differ in depth. They all share git_repo and all contain an environment (prod, dev, mgmt). At the top level of each of the directories above is a Makefile where I want to pull the environment. My bash example was more complicated than it needed to be; sed will do instead. This is what is in my Makefile now:
GIT_REPO="my_repo"
ENV=$(shell pwd | sed "s/^.*\/$(GIT_REPO)\///" | cut -d / -f 1)
What this does is find the Git repository name in the path and strip it, along with everything before it. Then cut splits the remainder on '/' and grabs the first element. This always returns the environment folder.
I have a very specific use case where I want to dynamically get the environment in my Makefile rather than statically defining it each time.
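To illustrate with one of the example paths above (assuming the repository directory is literally named git_repo), the shell pipeline behind that ENV assignment resolves like this:
$ pwd
/home/user/git_repo/prod/application
$ pwd | sed "s/^.*\/git_repo\///" | cut -d / -f 1
prod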

Search up through directory path to find a specific directory

I understand how to recursively search down a hierarchy for a file or directory, but can't figure out how to search up the hierarchy and find a specific directory.
Given a path & file such as these fellas:
/Users/username/projects/project_name/lib/sub_dir/file.rb
/Users/username/projects/project_name/lib/sub_dir/2nd_sub_dir/3rd_sub_dir/file.rb
/Users/username/projects/project_name/spec/sub_dir/file.rb
How, using the terminal, can I get:
/Users/username/projects/project_name
N.B. I know that the next directory down from project_name is spec/ or lib/
Pure bash (no sub-process spawning or external commands). Depending on how flexible you want it to be, you may want to consider running the argument of the rootdir() function through readlink -fn first. Explained here.
#!/bin/bash
function rootdir {
    local filename=$1
    local parent=${filename%%/lib/*}
    if [[ $filename == $parent ]]; then
        parent=${filename%%/spec/*}
    fi
    echo "$parent"
}
# test:
# rootdir /Users/username/projects/project_name/lib/sub_dir/file.rb
# rootdir /Users/username/projects/project_name/spec/sub_dir/file.rb
# rootdir /Users/username/projects/project_name/lib/sub_dir/2nd_sub_dir/3rd_sub_dir/file.rb
# output:
# /Users/username/projects/project_name
# /Users/username/projects/project_name
# /Users/username/projects/project_name
You can use perl:
cat file | perl -pe "s#(.+)(?:spec|lib).+#\1#"
where file contains:
/Users/username/projects/project_name/lib/sub_dir/file.rb
/Users/username/projects/project_name/lib/sub_dir/2nd_sub_dir/3rd_sub_dir/file.rb
/Users/username/projects/project_name/spec/sub_dir/file.rb
Or you can use sed:
cat file | sed 's/\(^.*\)\(spec\|lib\).*/\1/'
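If you need to walk up from an arbitrary starting directory at run time rather than strip a known path, here is a minimal sketch of such a loop (find_project_root is a hypothetical helper, not part of the answers above; it expects an absolute path):
find_project_root() {
    local dir=$1
    # Climb toward / until a directory containing lib/ or spec/ is found
    while [ "$dir" != "/" ]; do
        if [ -d "$dir/lib" ] || [ -d "$dir/spec" ]; then
            echo "$dir"
            return 0
        fi
        dir=$(dirname "$dir")
    done
    return 1
}
# e.g. find_project_root "$(dirname /Users/username/projects/project_name/lib/sub_dir/file.rb)"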

Bash cd .. returns nothing

I have a Bash script:
src="/home/xubuntu/Documents"
mkdir -p "$src/folder1"
src="$src/folder1"
# Do something
printf "SRC IS: $src\n"
src=`cd ..` # RETURN TO PARENT DIRECTORY
printf "SRC IS: $src\n"
Basically I want to create a new folder, then do something inside the folder and after that's done I want to return to the parent directory Documents. For some reason however, src=`cd ..` returns nothing.
SRC IS: /home/xubuntu/Documents
SRC IS:
Any ideas why?
You can access the parent:
src=$(cd .. && pwd)
Much better and without using cd:
src=${src%/*} # src is the parent directory
cd is just to change directory, not to display it; that is done with pwd; i.e.
cd ..
src=`pwd`
# or slightly faster
src=$PWD
What is happening is that you are assigning the output of the command "cd .." to src, which (as you can see when you run it on the command line) is nothing; the command substitution also runs in a subshell, so the cd doesn't change the current shell's directory either. Use readlink -f to accomplish what you need.
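A quick demonstration of both points:
$ cd /home/xubuntu/Documents/folder1
$ out=$(cd ..)     # cd prints nothing and runs in a subshell
$ echo "out='$out'"
out=''
$ pwd              # the current shell never moved
/home/xubuntu/Documents/folder1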
What you want to do instead is this:
src="/home/xubuntu/Documents"
mkdir -p "$src/folder1"
src="$src/folder1"
# Do something
printf "SRC IS: $src\n"
src=`readlink -f "$src/.."` # RETURN TO PARENT DIRECTORY
printf "SRC IS: $src\n"
I assume that's what you wanted to do: set src back to its parent folder.
