I am new to git and GitHub. I am working on a project where I need to commit my changes to a GitHub repository on a specific branch.
But I am getting the following error:
$ git commit
3.5.0.1
s/3.5.0.1/3.5.1.1/g
sed: can't read ../../build.gradle: No such file or directory
I have also attached the pre-commit hook code below.
#!/bin/sh
## finding the exact line in the gradle file
#ORIGINAL_STRING=$(cat ../../build.gradle | grep -E '\d\.\d\.\d\.\d')
## extracting the exact parts but with " around
#TEMP_STRING=$(echo $ORIGINAL_STRING | grep -Eo '"(.*)"')
## the exact numbering scheme
#FINAL_VERSION=$(echo $TEMP_STRING | sed 's/"//g') # 3.5.0.1
#Extract APK version
v=$(cat build.gradle | grep rtVersionName | awk '{print $1}')
FINAL_VERSION=$(echo ${v} | cut -d"\"" -f2)
echo ${FINAL_VERSION}
major=0
minor=0
build=0
assets=0
regex="([0-9]+).([0-9]+).([0-9]+).([0-9]+)"
if [[ $FINAL_VERSION =~ $regex ]]; then
major="${BASH_REMATCH[1]}"
minor="${BASH_REMATCH[2]}"
build="${BASH_REMATCH[3]}"
assets="${BASH_REMATCH[4]}"
fi
# increment the build number
build=$(echo $build + 1 | bc)
NEW_VERSION="${major}.${minor}.${build}.${assets}"
SED_ARGUMENT=$(echo "s/${FINAL_VERSION}/${NEW_VERSION}/g")
echo $SED_ARGUMENT
sed -i -e `printf $SED_ARGUMENT` ../../build.gradle
The error comes from the last line of this file. I am using Windows.
Things I tried:
sed -i -e `printf $SED_ARGUMENT` ../../build.gradle
sed -i ' ' -e `printf $SED_ARGUMENT` ../../build.gradle
I am unable to understand what I am actually doing wrong. Kindly help me out.
sed: can't read ../../build.gradle: No such file or directory
This one is rather simple. Your build.gradle file is not at ../../build.gradle.
The solution is to determine the actual path to the build.gradle file relative to the script, and change the path in the script.
To debug this, add echo "Current Directory: $PWD" to the script to see what the actual working directory is; then you should be able to determine the correct path to use.
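As a rough sketch (assuming a standard Git pre-commit hook, which Git normally invokes from the root of the working tree, and assuming build.gradle sits at that root, as the earlier grep suggests), you could anchor the path to the repository root instead of guessing with ../..:
#!/bin/sh
# Debug aid: show where the hook actually runs from
echo "Current Directory: $PWD"

# Resolve the repository root explicitly rather than relying on relative paths
REPO_ROOT=$(git rev-parse --show-toplevel)

# FINAL_VERSION and NEW_VERSION come from the earlier part of your hook
sed -i -e "s/${FINAL_VERSION}/${NEW_VERSION}/g" "$REPO_ROOT/build.gradle"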
First off, I'm not sure whether I'm doing something wrong or whether there are bugs that need to be worked out in the go-task project on GitHub.
Occasionally I run into an error that looks something like this:
context canceled
task: Failed to run task "start": task: Failed to run task "upstream:common": task: Failed to run task "upstream:common:merge:variables": task: Failed to run task "upstream:common:merge:variables:subtype": fork/exec /usr/bin/jq: bad file descriptor
Some sample code that causes this issue is:
function mergePackages() {
# Merge the files
TMP="$(mktemp)"
jq --arg keywords "$(jq '.keywords[]' "$1" "$2" | jq -s '. | unique')" -s -S '.[0] * .[1] | .keywords = ($keywords | fromjson) | .' "$1" "$2" > "$TMP"
mv "$TMP" "$3"
}
for FOLDER in project-*/; do
SUBTYPE="$(echo "$FOLDER" | sed 's/project-\(.*\)\//\1/')"
mergePackages "./.common/package.hbs.json" "project-$SUBTYPE/package.hbs.json" "./.common-$SUBTYPE/package.hbs.json" &
done
wait
The issue seems to occur most often when I add & to the end of a bash command to run a function in the background, although I have seen it happen even when a function is called without &.
Is there anything I am doing wrong?
How do you use a command line argument as a file path and check for file existence in Bash?
I have the simple Bash script test.sh:
#!/bin/bash
set -e
echo "arg1=$1"
if [ ! -f "$1" ]
then
echo "File $1 does not exist."
exit 1
fi
echo "File exists!"
and in the same directory, I have a data folder containing stuff.txt.
If I run ./test.sh data/stuff.txt I see the expected output:
arg1=data/stuff.txt
"File exists!"
However, if I call this script from a second script test2.sh, in the same directory, like:
#!/bin/bash
fn="data/stuff.txt"
./test.sh $fn
I get the mangled output:
arg1=data/stuff.txt
does not exist
Why does the call work when I run it manually from a terminal, but not when I run it through another Bash script, even though both are receiving the same file path? What am I doing wrong?
Edit: The filename does not have spaces. Both scripts are executable. I'm running this on Ubuntu 18.04.
The filename was getting an extra whitespace character added to it as a result of how I was retrieving it in my second script. I didn't note this in my question, but I was retrieving the filename from a folder listing over SSH, like:
fn=$(ssh -t "cd /project/; ls -t data | head -n1" | head -n1)
Essentially, I wanted to get the filename of the most recent file in a directory on a remote server. Apparently, head includes the trailing newline character. I fixed it by changing it to:
fn=$(ssh -t "cd /project/; ls -t data | head -n1" | head -n1 | tr -d '\n' | tr -d '\r')
Thanks to @bigdataolddriver for hinting that the problem was likely an extra character.
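A quick way to catch this kind of issue (a small debugging sketch, not part of the original fix) is to print the variable with bash's printf %q, which makes otherwise invisible characters such as carriage returns visible:
printf 'fn=%q\n' "$fn"   # prints e.g. fn=$'stuff.txt\r' if a carriage return sneaked in
# Stripping CR and LF with a parameter expansion is another option in bash:
fn=${fn//[$'\r\n']/}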
I have the following script. Unfortunately I am not able to get it to run on Jenkins.
#!/bin/bash
function pushImage () {
local serviceName=$1
local version=$(getImageVersionTag $serviceName)
cd ./dist/$serviceName
docker build -t $serviceName .
docker tag $serviceName gcr.io/$PROJECT_ID/$serviceName:$version
docker tag $serviceName gcr.io/$PROJECT_ID/$serviceName:latest
docker push gcr.io/$PROJECT_ID/$serviceName
cd ../..
}
function getImageVersionTag () {
local serviceName=$1
if [ $BUILD_ENV = "dev" ];
then
echo $(timestamp)
else
if [ $serviceName = "api" ];
then
echo $(git tag -l --sort=v:refname | tail -1 | awk -F. '{print $1"-"$2"-"$3"-"$4}')
else
echo $(git tag -l --sort=refname | tail -1 | awk -F. '{print $1"-"$2"-"$3"-"$4}')
fi
fi
}
function timestamp () {
echo $(date +%s%3N)
}
set -x
## might be api or static-content
pushImage $1
I'm receiving this error on Jenkins
10:10:17 + sh push-image.sh api
10:10:17 push-image.sh: 2: push-image.sh: Syntax error: "(" unexpected
I have already configured the Jenkins global parameter to use /bin/bash as the default shell execution environment, but I am still getting the same error.
The main issue here seems to be the usage of functions, as other scripts that have been executed successfully don't contain any.
How can this be fixed?
Short answer: make sure you're running bash and not sh.
Long answer: sh (which is what actually runs here despite the shebang, because the script is invoked as sh push-image.sh) is the Bourne shell and does not understand the function keyword. Simply removing that keyword will solve your issue.
Please note, however, that all your variable expansions should be quoted to protect against word splitting and globbing. Ex: local version=$(getImageVersionTag "$serviceName")
See shellcheck.net for more problems in your file (such as the usage of local var=$(...)) and an explicit list of the expansions that are missing quotes.
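As a rough sketch of both fixes (assuming the Jenkins step currently calls sh push-image.sh): either invoke the script with bash explicitly, or drop the function keyword (and the bash-only local) and quote the expansions:
# Option 1: run the script with bash so the shebang and bashisms apply
bash push-image.sh api

# Option 2: POSIX-style definition that sh/dash also accepts
pushImage() {
    serviceName=$1
    version=$(getImageVersionTag "$serviceName")
    cd "./dist/$serviceName" || exit 1
    docker build -t "$serviceName" .
    docker tag "$serviceName" "gcr.io/$PROJECT_ID/$serviceName:$version"
    docker tag "$serviceName" "gcr.io/$PROJECT_ID/$serviceName:latest"
    docker push "gcr.io/$PROJECT_ID/$serviceName"
    cd ../.. || exit 1
}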
I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You may try the entr tool to run arbitrary commands when files change. Example for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories use -d, but you have to use it in a loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
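If the command you want to rerun is a long-running process such as a dev server, entr's -r flag restarts it on every change; a small sketch with a hypothetical server.py:
$ ls *.py | entr -r python server.py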
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js" # might want to change this
function block_for_change {
inotifywait --recursive \
--event modify,move,create,delete \
$DIRECTORY_TO_OBSERVE
}
BUILD_SCRIPT=build.sh # might want to change this too
function build {
bash $BUILD_SCRIPT
}
build
while block_for_change; do
build
done
Uses inotify-tools. Check the inotifywait man page for how to customize what triggers the build.
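For example, inotifywait's --exclude option takes an extended regular expression of paths to ignore, which helps avoid retriggering on build output (the pattern below is only an assumption about your layout):
inotifywait --recursive \
  --event modify,move,create,delete \
  --exclude '(\.git|node_modules|build)/' \
  "$DIRECTORY_TO_OBSERVE"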
Use inotify-tools.
The linked GitHub page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
--timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
-e close_write /tmp/test |
while read -r date time dir file; do
changed_abs=${dir}${file}
changed_rel=${changed_abs#"$cwd"/}
rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
usernam@example.com:/backup/root/dir && \
echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to get a file's last status-change time (%Z) and runs a command whenever that timestamp changes, i.e. whenever the file is modified.
#!/bin/bash
while true
do
ATIME=$(stat -c %Z /path/to/the/file.txt)
if [[ "$ATIME" != "$LTIME" ]]
then
echo "RUN COMMAND"
LTIME=$ATIME
fi
sleep 5
done
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try to earn hacker XP by judicious application of tail -f.
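For instance (purely illustrative; watched.log and my-script.sh are hypothetical names), tail can drive a command from lines appended to a file, though it only reacts to appended content rather than arbitrary changes:
tail -n0 -F watched.log | while read -r line; do
  ./my-script.sh "$line"
done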
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1" # Name of file
command="${*:2}" # Command to run on change (takes rest of line)
t1="$(ls --full-time $file | awk '{ print $7 }')" # Get latest save time
while true
do
t2="$(ls --full-time $file | awk '{ print $7 }')" # Compare to new save time
if [ "$t1" != "$t2" ];then t1="$t2"; $command; fi # If different, run command
sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: The above was tested on Ubuntu 12.04; for macOS, change the ls lines to:
"$(ls -lT $file | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
if [ -z "$1" -o -z "$2" ]; then
echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
elif ! [ -r "$1" ]; then
echo "Can't react to $1, permission denied"
else
TARGET="$1"; shift
ACTION="$@"
while sleep 1; do
ATIME=$(stat -c %Z "$TARGET")
if [[ "$ATIME" != "${LTIME:-}" ]]; then
LTIME=$ATIME
$ACTION
fi
done
fi
}
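After re-sourcing ~/.bashrc, a hypothetical invocation (the file and command are placeholders) looks like:
react ./docs/index.rst make html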
Quick solution for fish shell users who want to track a single file:
while true
set old_hash $hash
set hash (md5sum file_to_watch)
if [ "$hash" != "$old_hash" ]
command_to_execute
end
sleep 1
end
Replace md5sum with md5 if on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do what you want.
Here is a typical example:
inotifywait -m /path -e create -e moved_to -e close_write | # -m is --monitor, -e is --event
while read path action file; do
if [[ "$file" =~ .*rst$ ]]; then # if suffix is '.rst'
echo ${path}${file} ': '${action} # execute your command
echo 'make html'
make html
fi
done
Suppose you want to run rake test every time you modify any ruby file ("*.rb") in app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0; while true; do t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1); if [ $t_ref != $t_curr ]; then t_ref=$t_curr; rake test; fi; sleep 1; done
Benefits
You can run any command or script when the file changes.
It works across filesystems and virtual machines (shared folders on VirtualBox using Vagrant), so you can use a text editor on your MacBook and run the tests on Ubuntu (VirtualBox), for example.
Warning
The -printf option works well on Ubuntu, but does not work on macOS.
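As a rough macOS workaround (an untested sketch; BSD stat prints the modification time in epoch seconds with -f %m), the -printf part can be replaced with stat via find -exec:
t_curr=$(find app/ test/ -type f -name "*.rb" -exec stat -f "%m %N" {} + | sort -rn | head -n1)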