Git pre-push to prevent merge - bash

I made this bash script for my git hooks in order to avoid merging by mistake.
At work we use git-extension, and it prefills the branch name; if you forget to prefix the branch name with refs/for/ or refs/drafts/, then your commits are merged on the remote repository, bypassing the code review.
This can only happen to people with merge rights, but sometimes someone with merge rights will make that mistake.
#!/bin/sh
if [[ `grep 'refs/for'` ]]; then
    echo "push to refs/for/ authorized"
elif [[ `grep 'refs/drafts/'` ]]; then
    echo "push to refs/drafts/ authorized"
else
    echo "PUSH DENIED, MERGE DETECTED, please merge using gerrit 'submit' button only"
    exit 1
fi
If I push to refs/for/v1.1.x, I get "push to refs/for/ authorized".
However, if I push to refs/drafts/v1.1.x, I get "PUSH DENIED, MERGE DETECTED, please merge using gerrit 'submit' button only".
How is this possible? The condition syntax is exactly the same.
Thanks.

As Anthony Sottile commented, your first grep command reads all the input, checking for refs/for. Your second grep reads the remaining input; since your first one read all of the input, none remains. The second one therefore never finds anything.
You can deal with this in many different ways. One is to copy stdin to a temporary file that you can read repeatedly. Another is to structure your hook so that you read the input only once, checking each line for each condition. This second method is obviously more efficient, though this "obviousness" is a bit of an illusion, depending on details I'm not going to get into here.
Anyway, the way I would write this is:
#! /bin/sh
summary_status=0 # assume all OK initially
while read lref lhash rref rhash; do
    case $rref in
    refs/for/*|refs/drafts/*) echo "push to $rref ok";;
    *) echo "push to $rref not ok"; summary_status=1;;
    esac
done
exit $summary_status
This variant does not require bash features (hence the /bin/sh rather than /bin/bash in the #! line); if you change it to one that does, be sure to change the #! line. It reads the input exactly once. It also verifies that you did not run, e.g.:
git push remote master:refs/for/master develop
which your existing script would allow (since one is OK and one is not).

Related

Husky pre commit hook and squashing commits

I am using "husky": "^7.0.4".
My team squashes their commits before opening a PR.
I have a pre-commit file to automate this workflow. Every other time I run the commit function, the pre-commit flow works perfectly. So the 1st, 3rd, 5th, etc. works. The 2nd, 4th, 6th, etc time prints this error
fatal: cannot lock ref 'HEAD': is at 766hdjoXXX but expected 766e11XXX
I thought it might be because I wasn't changing the file; however, when I tried changing something, that didn't work either (it succeeds and fails every other time regardless). Any idea what's wrong?
Here is the pre-commit file:
read -n1 -p "Do you want to squash commits? [n/Y]" SHOULD_SQUASH < /dev/tty
case $SHOULD_SQUASH in
  n|N)
    echo
    echo Skipping squash, now linting files...
    ;;
  y|Y)
    [ -z "$SQUASH_BRANCH" ] && SQUASH_BRANCH=develop
    branch=$(git symbolic-ref HEAD)
    echo
    echo Squashing all commits from $branch
    git reset $(git merge-base $SQUASH_BRANCH $branch)
    echo ------SUCCESS!------
    echo Commits successfully squashed.
    git add .
    echo Added all files successfully.
    ;;
  *)
    echo
    echo Skipping squash, now linting files...
    ;;
esac
npx pretty-quick --staged
npm run lint
The squash logic comes from a custom function we created that lives in .zshrc; on its own it works with no problem.
Pre-commit files in general should not use git reset and git add. It is possible to make this work sometimes, but you get odd effects, including the one you're seeing (and sometimes worse ones). A pre-commit script should limit itself to testing whether the commit is OK, and if so, exiting zero; if not, the script should exit nonzero, without attempting to make any changes.1
Instead of calling your script .git/pre-commit and invoking it with git commit, call it makecommit and invoke it as makecommit. Or, call it git-c and invoke it as git c. Have the script do its thing—including run npm lint—and if all looks good, have it run git commit. You won't need a pre-commit hook at all, but if you like, you can have one that reads as follows:
[ "$RUN_FROM_OUR_SCRIPT" = yes ] && exit 0
echo "don't run git commit directly, run our script instead"
exit 1
Then instead of just git commit, have your script do:
RUN_FROM_OUR_SCRIPT=yes git commit
which will set the variable that the pre-commit hook tests to make sure git commit was run from your script.
Note that you will no longer need to redirect the read from /dev/tty. (You should probably also consider using git reset --soft, and/or verifying that the index content matches the working-tree content.)
1If you like to live dangerously, and really want to have a script that can update files, make sure that $GIT_INDEX_FILE is unset or set to .git/index, to make sure you have not been invoked as git commit --only.

Gitlab: reset/roll back does not work in pre-receive hook

I was trying to roll back commit to the previous version when script "Check.py" returns 0 or 1 in pre-receive hook (please see the following code). My issue is even though "git reset --soft $i^1" did execute, I still see the newest commit on GitLab. What I want to achieve is when $ret equals 0 or 1, I'd like to roll back the commit to the previous one in the current branch.
Thank you all for your help!
read oldrev newrev branch
mapfile -t my_array < <(git rev-list $oldrev..$newrev)
for i in "${my_array[@]}"
do
    git show $i > /tmp/$$.temp
    python /script/Check.py /tmp/$$.temp
    ret=$?
    if [ $ret -ne 2 ]; then
        git reset --soft $i^1
    fi
done
A pre-receive hook runs before any ref names (e.g., refs/heads/foobranch) are updated. Its job is to:
read all the lines (you read only one, i.e., you assume just one name is proposed to be updated);
inspect those lines to see whether or not to allow the push to update all those names; and
exit 0 to allow the updates to begin, one at a time, or exit nonzero to reject the entire push.
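As a shape to build on, a minimal skeleton that reads every line and vetoes the whole push on any failure might look like this (the deletion check is only a stand-in for whatever policy Check.py actually enforces):

```shell
#!/bin/sh
# Minimal pre-receive skeleton: one "oldrev newrev refname" line per ref.
status=0
while read -r oldrev newrev refname; do
    case $newrev in
        *[!0]*)
            # Normal update: run your real per-commit check here, e.g.
            # iterate over git rev-list $oldrev..$newrev and test each one.
            ;;
        *)
            # An all-zero newrev means the ref is being deleted.
            echo "deleting $refname is not allowed" >&2
            status=1
            ;;
    esac
done
exit $status
```

Exiting nonzero rejects the entire push; the hook never needs to (and should not) run git reset itself.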
Your script never has any explicit exit so it would exit with the status of the last command run (but see below). Zero means all the updates are allowed.
Besides this, your script runs git reset --soft, which resets the current branch, whatever that is. The current branch is not related to the branches and/or tags being updated by this particular git push, except by happenstance.
The most immediate bug, however, is probably this:
I was trying to roll back commit to the previous version when script "Check.py" returns 0 or 1 ...
Your script checks for $ret -ne 2. The exit status from python /script/Check.py /tmp/$$.temp is in $?, not $ret. This produces a syntax error, after which bash simply exits (zero, in my test; I found this a bit surprising but it would explain your results). Edit: now that you have ret=$?, the script itself will run, but if HEAD names a branch that the git push updates, and if you allow the git push to proceed, the branch name will be updated and the effect of the reset eliminated. (If you exit nonzero, the reset may take effect, but no push has occurred and that's a bad idea.)
Fixing this will still leave you with the remaining issues. Fixing those requires knowing what you really wanted to achieve, which you have not mentioned here.

git hook pre-push bash script validate tag

I'm trying to validate a git tag using the pre-push git hook, but whenever I execute git push origin <tag-name>, it takes the previous tag as the latest from refs/tags.
To replicate the issue:
Step 1. git commit
Step 2. git tag V1.0.0-US123-major
Step 3. git push origin V1.0.0-US123-major
So when step 3 executes, the pre-push script should take the "V1.0.0-US123-major" tag and validate it against the regex below. If the tag matches the regex, it is a valid tag; otherwise, abort the git push.
#!/bin/sh
read_tag="$(git describe --abbrev=0 --tags)"
if [[ $read_tag =~ (^v[0-9]{1}.[0-9]{1}.[0-9]{1}-[a-zA-Z]+-[a-zA-Z]+$) ]]; then
    echo "matched"
else
    echo "not matched"
    exit 1
fi
The problem is that when I use git push origin 2.2.2, the pre-push script does not exit 1; instead it accepts the tag and pushes it to origin, which is not correct:
git push origin 2.2.2
latest tag: v5.5.5-abcd-tues
matched
Can someone help me with this, please?
Your pre-push hook is checking the current revision, not the tag you're pushing, because git describe describes HEAD if you don't specify otherwise.
When you use a pre-push hook, the references being pushed are passed in on standard input. Assuming the thing you want to check is the name of the remote reference (that is, the one that's going to end up on the server), then it could look something like this (using POSIX syntax):
#!/bin/sh
set -e
while read lref new rref old
do
    case $rref in
    refs/tags/*)
        if echo "$rref" | \
           grep -qsE '^refs/tags/v[0-9]{1}\.[0-9]{1}\.[0-9]{1}-[a-zA-Z]+-[a-zA-Z]+$'
        then
            echo "matched"
        else
            echo "not matched"
            exit 1
        fi;;
    *)
        ;;
    esac
done
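You can also exercise such a hook without actually pushing: Git invokes pre-push with the remote name and URL as arguments and one "<local ref> <local sha> <remote ref> <remote sha>" line per ref on stdin. A hypothetical dry run (assuming the script above is saved as .git/hooks/pre-push; the tag and URL are placeholders):

```shell
# Simulate the stdin Git would send for pushing a single tag.
printf 'refs/tags/v5.5.5-abcd-tues 1111111 refs/tags/v5.5.5-abcd-tues 0000000\n' |
    .git/hooks/pre-push origin ssh://example.com/repo.git
echo "hook exit status: $?"
```

A conforming tag should print "matched" and exit 0; a bare version such as refs/tags/2.2.2 should print "not matched" and exit 1.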
Do note that while a pre-push hook can help the developer make good choices and avoid mistakes, it's not an effective control, because it can be trivially bypassed. If you need to restrict what gets pushed to the server, you need to do that either with a pre-receive hook, your server implementation, or a CI system. See the relevant Git FAQ entry for more.

Bash Script Help "if (user input) = then (var) ="

I am writing a script to cherry-pick all open changes from Gerrit. I found one that sort of works, though I need to be able to change inputs so that I do not have a script for each repo hardcoded with that specific repo's information.
#! /bin/sh
REMOTE="${1-review}"
ssh -p 29418 user@gerrit.remote.com gerrit query --format=text --patch-sets status:open branch:XXX project:XXX | grep revision: | awk '{print $2;}' | while read ID
do
    git fetch "${REMOTE}" && git cherry-pick "${ID}"
done
Now I have been able to pick open changes successfully but I am trying to make it so I can pass input to change username, branch, project and remote. With the current method I need to enter my username, project, branch, and remote manually into the script. Then it is only good for that specific repo.
I have been having trouble with if/then statements. I know as it looks now none of the things I am asking for are coded, I wanted to provide someone with a working model though.
I did change the username and the particular details; it should be easy enough for someone to use this script themselves to cherry-pick by inserting the requisite information.
If I do something like this:
PROJECT="$1"
if [ "$1" = "XX" ]; then
"$PROJECT="project:name of project"
Then bash returns XX command not found. I am not trying to make it a command I want it to be input to be inserted into the ssh command later on. Also I am trying to not only use if but also else if so that PROJECT can be whatever is input.
I think I am almost there though completely stumped at this point.
Assume $1 is equal to "XX". Your code:
PROJECT="$1"
will assign PROJECT=XX. Next,
if [ "$1" = "XX" ]; then
is true, "then" clause will be executed. This clause is:
"$PROJECT="project:name of project"
that tries to execute command "XX=...", causing "command not found"
Suggestion: remove the stray "$ at the start of this line, as in:
PROJECT="project:name of project"
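If the goal is just "use the input if given, otherwise a default," you often don't need if/else at all; a hypothetical parameterized version (the host name and defaults are placeholders, not from the original script):

```shell
#!/bin/sh
# Positional parameters with defaults: ${N-default} expands to "default"
# only when parameter N was not supplied on the command line.
USER="${1-myuser}"
PROJECT="${2-myproject}"
BRANCH="${3-master}"
REMOTE="${4-review}"

ssh -p 29418 "$USER@gerrit.example.com" gerrit query --format=text \
    --patch-sets "status:open" "branch:$BRANCH" "project:$PROJECT" |
  awk '/revision:/ {print $2}' |
  while read -r ID
  do
      git fetch "$REMOTE" && git cherry-pick "$ID"
  done
```

Run it as ./cherry-pick-open.sh alice myrepo develop origin, or with no arguments to take all the defaults; no if/elif chain per repo is needed.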

Bash script to start Solr deltaimporthandler

I am after a bash script which I can use to trigger a delta import of XML files via CRON. After a bit of digging and modification I have this:
#!/bin/bash
# Bash to initiate Solr Delta Import Handler
# Setup Variables
urlCmd='http://localhost:8080/solr/dataimport?command=delta-import&clean=false'
statusCmd='http://localhost:8080/solr/dataimport?command=status'
outputDir=.
# Operations
wget -O $outputDir/check_status_update_index.txt ${statusCmd}
2>/dev/null
status=`fgrep idle $outputDir/check_status_update_index.txt`
if [[ ${status} == *idle* ]]
then
wget -O $outputDir/status_update_index.txt ${urlCmd}
2>/dev/null
fi
Can I get any feedback on this? Is there a better way of doing it? Any optimisations or improvements would be most welcome.
This certainly looks usable. Just to confirm: you intend to run this every X minutes from your crontab? That seems reasonable.
The only major quibble (IMHO) is discarding STDERR information with 2>/dev/null. Of course, it depends on your expectations for this system. If this is for a paying customer or employer, do you want to have to explain to the boss, "gosh, I didn't know I was getting the error message 'Can't connect to host X' for the last 3 months, because we redirect STDERR to /dev/null"? If this is for your own project and you're monitoring the work via other channels, then it's not so terrible. But why not capture STDERR to a file and check that there are no errors? As a general idea:
myStdErrLog=/tmp/myProject/myProg.stderr.$(/bin/date +%Y%m%d.%H%M)
wget -O $outputDir/check_status_update_index.txt ${statusCmd} 2> ${myStdErrLog}
if [[ -s ${myStdErrLog} ]] ; then
    mail -s "error on myProg" me@myself.org < ${myStdErrLog}
fi
rm ${myStdErrLog}
Depending on what wget includes in its STDERR output, you may need to filter the StdErrLog to see whether there are "real" error messages that need to be sent to you.
A medium quibble is your use of backticks for command substitution. If you're using double square brackets for evaluations, then why not embrace complete ksh93/bash semantics? The only reason to use backticks is if you think you need to be ultra-backwards compatible and will be running this script under the Bourne shell (or possibly one of the stripped-down shells like dash). Backticks have been deprecated in ksh since at least 1993. Try
status=$(fgrep idle $outputDir/check_status_update_index.txt)
The $( ... ) form of command substitution makes it very easy to nest multiple cmd-subtitutions, i.e. echo $(echo one $(echo two ) ). (Bad example, as the need to nest cmd-sub is pretty rare, I can't think of a better example right now).
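For what it's worth, one plausible small example of nesting (purely illustrative):

```shell
# Nested command substitution: take the basename of a path's parent
# directory in one expression, with no intermediate variable.
name=$(basename "$(dirname /srv/app/config/settings.ini)")
echo "$name"   # config
```

Doing the same with backticks would require awkward escaping (`basename \`dirname ...\``), which is exactly why $( ... ) nests so much more cleanly.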
Depending on your situation, but in a large production environement, where new software is installed to version numbered directories, you might want to construct your paths from variables, i.e.
hostName=localhost
portNum=8080
SOLRPATH=/solr
SOLRCMD='delta-import&clean=false'
urlCmd="http://${hostName}:${portNum}${SOLRPATH}/dataimport?command=${SOLRCMD}"
The final, minor quibble ;-). Are you sure ${status} == *idle* does what you want?
Try using something like
case "${status}" in
    *idle* ) .... ;;
    * ) echo "unknown status = ${status}" 1>&2 ;;   # or similar
esac
Yes, your if ... fi certainly works, but if you want to start doing more refined processing of the information that you put in your ${status} variable, then case ... esac is the way to go.
EDIT
I agree with @alinsoar that 2>/dev/null on a line by itself will be a no-op. I assumed that it was a formatting issue, but looking at your code in edit mode, I see that it does appear to be on its own line. If you really want to discard STDERR messages, then you need cmd ... 2>/dev/null all on one line; OR, as alinsoar advocates, the shell will accept redirections at the front of the line, but again, all on one line ;-)
IHTH