When extracting an RPM on macOS using a bash script, how come I can't see the files? - bash

I have the following executable bash script
#!/usr/bin/env bash
function testRpm(){
    local rpm=$1
    local tempDir=$(mktemp -d)
    pushd $tempDir #>/dev/null
    rpm2cpio $rpm | cpio -idmuv
    find store -name "*.jar"
}
testRpm $1
It seems pretty straightforward to me: extract the RPM, show the files. The problem is that when I run it, the find doesn't show the files, though it does show the directories. If I manually enter the commands it works great.
eg.
bash -x ./test.sh myrpm.rpm
+ testRpm myrpm.rpm
+ local rpm=myrpm.rpm
++ mktemp -d
+ local tempDir=/var/folders/z4/7cl6z4_x5vq1dllx8l6vf73r0000gn/T/tmp.YWdEnKUG
+ pushd /var/folders/z4/7cl6z4_x5vq1dllx8l6vf73r0000gn/T/tmp.YWdEnKUG
/var/folders/z4/7cl6z4_x5vq1dllx8l6vf73r0000gn/T/tmp.YWdEnKUG
~/IdeaProjects
+ rpm2cpio myrpm.rpm
+ cpio -idmuv
./store/tmp/myfile1
./store/tmp/myfile2
33279 blocks
+ find store
store
store/tmp
The above script appears to work perfectly on Red Hat, but not on macOS. If anyone has any suggestions, tips, or solutions, I'd appreciate it.

What may be happening is that when you pushd, you lose access to the RPM file via its relative path: when you're in /home/me, "foo.rpm" refers to /home/me/foo.rpm, but once you change directory to /tmp, "foo.rpm" refers to /tmp/foo.rpm.
Solve this by resolving the RPM to an absolute path with realpath before extracting:
#!/usr/bin/env bash
function testRpm(){
    local rpm=$(realpath -- "$1")
    local tempDir=$(mktemp -d)
    pushd "$tempDir" #>/dev/null
    rpm2cpio "$rpm" | cpio -idmuv
    find store -name "*.jar"
}
testRpm "$1"
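If realpath isn't available (stock macOS has not always shipped one), a minimal sketch of the same fix using cd and pwd instead might look like this:
#!/usr/bin/env bash
function testRpm(){
    # resolve the RPM to an absolute path before changing directory
    local rpm="$(cd -- "$(dirname -- "$1")" && pwd -P)/$(basename -- "$1")"
    local tempDir=$(mktemp -d)
    pushd "$tempDir" >/dev/null
    rpm2cpio "$rpm" | cpio -idmuv
    find store -name "*.jar"
    popd >/dev/null
}
testRpm "$1"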

Related

mv/cp commands not working as expected with xargs in bash

Hi, I have 2 parent directories with these contents under /tmp:
Note that the parent directory names have ";" in them - not recommended on Unix-like systems, but those directories are pushed by an external application, and that's the way we have to deal with it.
I need to move these parent directories (along with their contents) to /tmp/archive - on a RHEL 7.9 (Maipo) machine.
My simple code:
ARCHIVE="/tmp/archive"
[ -d "${ARCHIVE}" ] && mkdir -p "${ARCHIVE}"
ls -lrth /tmp | awk "\$NF ~ /2021-.*/{print \$NF}" | xargs -I "{}" mv "{}" ${ARCHIVE}/
But when I run this script, mv moves one of the parent directories as-is, but for the other one it just moves the contents of the parent directory, not the directory itself.
I tried the same script with cp -pvr in place of mv, and it's the same behavior.
When I run the same script on an Ubuntu 18 system, the behavior is as expected, i.e. the parent directories get moved to the archive folder.
Why is there this difference in behavior between an Ubuntu and a RHEL system for the same script?
Try a simpler approach:
mkdir -p /tmp/archive
mv -v /tmp/2021-*\;*\;*/ /tmp/archive
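If you do want to keep a filtered pipeline rather than a glob, a sketch using find with null-delimited names (it avoids parsing ls output; the pattern and paths below are assumptions) could look like this:
mkdir -p /tmp/archive
find /tmp -maxdepth 1 -type d -name '2021-*' -print0 |
    xargs -0 -I '{}' mv -v '{}' /tmp/archive/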

Understanding a docker entrypoint script

The script is located here: https://github.com/docker-library/ghost/blob/master/docker-entrypoint.sh
#!/bin/bash
set -e
if [[ "$*" == npm*start* ]]; then
    baseDir="$GHOST_SOURCE/content"
    for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
        targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
        mkdir -p "$targetDir"
        if [ -z "$(ls -A "$targetDir")" ]; then
            tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
        fi
    done
    if [ ! -e "$GHOST_CONTENT/config.js" ]; then
        sed -r '
            s/127\.0\.0\.1/0.0.0.0/g;
            s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
        ' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
    fi
    ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
    chown -R user "$GHOST_CONTENT"
    set -- gosu user "$@"
fi
exec "$@"
From what I know, it says that if you use some variation of npm start, it moves some files around from $GHOST_SOURCE to $GHOST_CONTENT, does something to the config.js file, links the config file, sets ownership of the content files, and then executes npm start as the user user. Otherwise, it just runs your commands normally.
The specifics are what are hard for me to understand because there are a lot of things from bash that I've never seen before. So I have a lot of questions.
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
In the above, why do they specify both /*/ and /themes/*/? Shouldn't /*/ contain themes? Is * not a wildcard for some reason?
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
In the above, what is the point of # in the variable expansion?
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
In the above, does this somehow save time? Why not use something like rsync? I understand the point of -C, but why -c and --one-file-system?
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
What does this sed command do? I know it's a replacement, but why the "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js" at the end?
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
In the above, what is the point of this symlink? Why try to link them to each other if both files already exist?
set -- gosu user "$@"
In the above, what does calling set with no args do?
I hope that's not too much. I felt making a separate question for each of these would be too much, especially since they're all related to each other.
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
In the above, why do they specify both /*/ and /themes/*/? Shouldn't
/*/ contain themes? Is * not a wildcard for some reason?
themes/ is in the first match, but themes/*/ is not, so you need the second entry to include the contents of themes.
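For example, with a throwaway layout (the directory names here are made up):
# the first glob matches the top-level directories, including themes/ itself;
# the second matches the directories inside themes/
baseDir=$(mktemp -d)
mkdir -p "$baseDir"/data "$baseDir"/images "$baseDir"/themes/casper
echo "$baseDir"/*/          # .../data/ .../images/ .../themes/
echo "$baseDir"/themes/*/   # .../themes/casper/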
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
In the above, what is the point of # in the variable expansion?
It removes the $baseDir prefix from $dir. So for example:
bash$ dir=/home/bmitch/data/docker
bash$ echo $dir
/home/bmitch/data/docker
bash$ echo ${dir#/home/bmitch}
/data/docker
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
In the above, does this somehow save time? Why not use something like
rsync? I understand the point of -C, but why -c and --one-file-system?
rsync may not be installed on every machine by default, while tar is fairly universal. The -c is to create an archive (vs extract), and --one-file-system keeps tar from crossing into another mount point (NFS, a symlink to root, etc.).
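As a standalone illustration of that copy idiom (the directory names here are made up):
# copy the contents of ./srcdir into ./dstdir, preserving permissions,
# without descending into other filesystems mounted below srcdir
mkdir -p dstdir
tar -c --one-file-system -C srcdir . | tar -x -C dstdir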
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
What does this sed command do? I know it's a replacement, but why the
"$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js" as the
end?
config.example.js is the input (the last arg to sed), and config.js is the output (after the >). So it takes config.example.js and changes the IP address from 127.0.0.1 to 0.0.0.0, effectively listening on all interfaces/IPs instead of just internally on the loopback. The second half of the sed changes the path.join arguments from __dirname to process.env.GHOST_CONTENT.
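To see the second substitution in isolation, applied to a made-up sample line:
echo "contentPath: path.join(__dirname, '/content/themes')," |
    sed -r 's!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g'
# prints: contentPath: path.join(process.env.GHOST_CONTENT, '/themes'),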
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
In the above, what is the point of this symlink? Why try to link them
to each other if both files already exist?
The $GHOST_SOURCE/config.js is replaced (-f) with a link to $GHOST_CONTENT/config.js. A symbolic link gives a file name that references another actual file, so there will be two names but only one copy of the data, which means you will only have a single configuration in this situation.
set -- gosu user "$@"
In the above what does calling set with no args do?
This changes the values of $1, $2, ... $n to be $1=gosu, $2=user, $3=the old $1, $4=the old $2, and so on, essentially prepending gosu and user to the parameters passed to the script. The -- makes sure that set doesn't interpret any value from "$@" as a flag for itself.
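A quick way to see the effect of that set line is a throwaway script (invoked here, for example, as ./demo.sh npm start):
#!/bin/bash
echo "before: $*"        # before: npm start
set -- gosu user "$@"    # prepend gosu and user to the positional parameters
echo "after:  $*"        # after:  gosu user npm start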

copy files while preserving directory structure on macOS

How do you copy files from one directory to another while preserving the directory structure on macOS?
I found that you can use cp --parents on Ubuntu, but unfortunately that doesn't work on macOS.
I ended up using rsync -R to solve this.
On OS X you can use ditto <source> <destination>
See here:
http://osxdaily.com/2014/06/11/use-ditto-copy-files-directories-mac-command-line/
I'm tired of writing this manually, so I'm going to provide a non-rsync way for future reference.
#!/bin/bash
# usage: cpParents FILE... DEST_DIR
cpParents() {
    local dest="${@: -1}"          # the last argument is the destination directory
    local src=("${@:1:$#-1}")      # everything before it are the source files
    local filename dirPath
    for filename in "${src[@]}"; do
        [ -e "$filename" ] || continue
        dirPath=$(dirname "$filename")
        mkdir -p "$dest/$dirPath"
        cp "$filename" "$dest/$dirPath"
    done
}
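Usage would then look something like this (the paths are purely illustrative):
# copies src/conf/app.properties to /backup/src/conf/app.properties,
# creating the intermediate directories as needed
cpParents src/conf/app.properties /backup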

Source a bash script from another one [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Reliable way for a bash script to get the full path to itself?
I have a bash script test.sh which uses functions from another script, search.sh, via the following lines:
source ../scripts/search.sh
<call some functions from search.sh>
Both scripts are located in a git repository: search.sh in the <git_root>/scripts/ directory, and test.sh in the same directory (but, generally speaking, it could be located anywhere inside the <git_root> directory, so I can't rely on a plain source search.sh approach).
When I call the test.sh script from <git_root>/scripts/ everything works well, but as soon as I change the current working directory, test.sh fails:
cd <git_root>/scripts/
./test.sh //OK
cd ..
./scripts/test.sh //FAILS
./scripts/test.sh: line 1: ../scripts/search.sh: No such file or directory ...
Thus, what I have: the relative path of the search.sh script with respect to the <git_root> directory.
What I want: the ability to run test.sh from anywhere inside <git_root> without errors.
P.S.: It is not possible to use a permanent absolute path to search.sh, since the git repository can be cloned to any location.
If both scripts are in the same directory, then you can get the directory the running script is in and use that to locate the other script:
# Get the directory this script is in
pushd "$(dirname "$0")" > /dev/null
SCRIPTPATH=$(pwd -P)
popd > /dev/null
# Now use that directory to call the other script
source "$SCRIPTPATH/search.sh"
Taken from the accepted answer of the question I marked this question a duplicate of: https://stackoverflow.com/a/4774063/440558
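A variant of the same idea that also works when test.sh is itself sourced rather than executed (it relies on the bash-specific BASH_SOURCE variable instead of $0):
SCRIPT_DIR=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" > /dev/null && pwd -P)
source "$SCRIPT_DIR/search.sh"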
Is there a way to identify this Git repository's location? An environment variable that is set? You could set PATH in the script itself to include the Git repository:
PATH="$GIT_REPO_LOCATION/scripts:$PATH"
. search.sh
Once the script is complete, your PATH will revert to its old value, and $GIT_REPO_LOCATION/scripts will no longer be part of the PATH.
The question is finding this location to begin with. I guess you could do something like this in your script:
GIT_LOCATION=$(find $HOME -name "search.sh" | head -1)
GIT_SCRIPT_DIR=$(dirname $GIT_LOCATION)
PATH="$GIT_SCRIPT_DIR:$PATH"
. search.sh
By the way, now that $PATH is set, I can call the script as search.sh rather than ./search.sh, which you had to do when you were in the scripts directory and your PATH didn't include . (the current directory; and PATH shouldn't include ., because it is a security hole).
One more note: you could search for the .git directory too, which might be the Git repository you're looking for:
GIT_LOCATION=$(find $HOME -name ".git" -type d | head -1)
PATH="$GIT_LOCATION:$PATH"
. search.sh
You could do this:
# Get the path of the Git repo
GIT_ROOT=$(git rev-parse --show-toplevel)
# Load the search functions
source "$GIT_ROOT/scripts/search.sh"
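A slightly more defensive sketch of the same idea, in case the script is ever run outside a working tree:
# Abort early if we are not inside a Git repository
GIT_ROOT=$(git rev-parse --show-toplevel 2>/dev/null) || {
    echo "test.sh: not inside a Git repository" >&2
    exit 1
}
source "$GIT_ROOT/scripts/search.sh"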
See: How to get the Git root directory!
Or do it like @Joachim Pileborg says, but note that you then need to know the path from this script to the other one:
# Call the other script
source $SCRIPTPATH/../scripts/search.sh
# Or if it is in another path
source $SCRIPTPATH/../scripts/seachers/search.sh
The Apache Tomcat scripts use this approach:
# resolve links - $0 may be a softlink
PRG="$0"
while [ -h "$PRG" ] ; do
ls=`ls -ld "$PRG"`
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
PRG="$link"
else
PRG=`dirname "$PRG"`/"$link"
fi
done
PRGDIR=`dirname "$PRG"`
Anyway, you have to put this snippet in every script that uses other scripts.
For the people who would rather not use git's features for finding the parent directory: if you can be sure you'll always be running the script from within the git directory, you can use something like this:
git_root=""
while /bin/true ; do
if [[ "$(pwd)" == "$HOME" ]] || [[ "$(pwd)" == "/" ]] ; then
break
fi
if [[ -d ".git" ]] ; then
git_root="$(pwd)"
break
fi
cd ..
done
I haven't tested this, but it just loops upward until it hits your home directory or /, checking whether there is a .git directory in each parent directory along the way. If there is, it sets the git_root variable and breaks out. If it doesn't find one, git_root will just be an empty string. Then you can do:
if [[ -n "$git_root" ]] ; then
. ${git_root}/scripts/search.sh
fi
IHTH

Bash script to safely create symlinks?

I'm trying to store all my profile configuration files (~/.xxx) in git. I'm pretty horrible at bash scripting, but I imagine this will be pretty straightforward for you scripting gurus.
Basically, I'd like a script that will create symbolic links in my home directory to files in my repo. The twist is, I'd like it to warn and prompt for overwrite if the symlink would overwrite an actual file. It should also prompt if a symlink is going to be overwritten but its target path is different.
I don't mind manually editing the script for each link I want to create. I'm more concerned with being able to quickly deploy new config scripts by running this script stored in my repo.
Any ideas?
The ln command is already conservative about erasing, so maybe the KISS approach is good enough for you:
ln -s git-stuff/home/.[!.]* .
If a file or link already exists, you'll get an error message and this link will be skipped.
If you want the files to have a different name in your repository, pass the -n option to ln so that it doesn't accidentally create a symlink in an existing subdirectory of that name:
ln -sn git-stuff/home/profile .profile
...
If you also want to have links in subdirectories of your home directory, cp -as reproduces the directory structure but creates symbolic links for regular files. With the -i option, it prompts if a target already exists.
cp -i -as git-stuff/home/.[!.]* .
(My answer assumes GNU ln and GNU cp, such as you'd find on Linux (and Cygwin) but usually not on other unices.)
The following has race conditions, but it is probably as safe as you can get without filesystem transactions:
# create a symlink at $dest pointing to $source
# not well tested
set -e # abort on errors
if [[ ( -h $dest && $(readlink -n "$dest") != $source ) || -f $dest || -d $dest ]]
then
read -p "Overwrite $dest? " answer
else
answer=y
fi
[[ $answer == y ]] && ln -s -n -f -v -- "$source" "$dest"
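Wrapped into a reusable function (the function name and the repo layout below are just examples), deploying a couple of dotfiles could look like this:
safe_link() {
    local source=$1 dest=$2 answer
    if [[ ( -h $dest && $(readlink -n "$dest") != "$source" ) || -f $dest || -d $dest ]]; then
        read -p "Overwrite $dest? " answer
    else
        answer=y
    fi
    [[ $answer == y ]] && ln -s -n -f -v -- "$source" "$dest"
}
# deploy a couple of dotfiles from the repo into $HOME
safe_link "$PWD/git-stuff/home/.bashrc" "$HOME/.bashrc"
safe_link "$PWD/git-stuff/home/.vimrc"  "$HOME/.vimrc"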
