Gitlab: reset/roll back does not work in pre-receive hook - bash

I am trying to roll back a commit to the previous version when the script "Check.py" returns 0 or 1 in a pre-receive hook (please see the following code). My issue is that even though "git reset --soft $i^1" did execute, I still see the newest commit on GitLab. What I want to achieve: when $ret equals 0 or 1, roll the commit back to the previous one on the current branch.
Thank you all for your help!
read oldrev newrev branch
mapfile -t my_array < <(git rev-list $oldrev..$newrev)
for i in ${my_array[@]}
do
    git show $i > /tmp/$$.temp
    python /script/Check.py /tmp/$$.temp
    ret=$?
    if [ $ret -ne 2 ]; then
        git reset --soft $i^1
    fi
done

A pre-receive hook runs before one or more ref names (e.g., refs/heads/foobranch) are updated. Its job is to:
read all the lines (you read only one, i.e., you assume just one name is proposed to be updated);
inspect those lines to see whether or not to allow the push to update all those names; and
exit 0 to allow the updates to begin, one at a time, or exit nonzero to reject the entire push.
Your script never has any explicit exit so it would exit with the status of the last command run (but see below). Zero means all the updates are allowed.
Besides this, your script runs git reset --soft, which resets the current branch, whatever that is. The current branch is not related to the branches and/or tags being updated by this particular git push, except by happenstance.
The most immediate bug, however, is probably this:
I was trying to roll back commit to the previous version when script "Check.py" returns 0 or 1 ...
Your script originally checked $ret -ne 2 without ever setting ret: the exit status from python /script/Check.py /tmp/$$.temp is in $?, not $ret. That produced a syntax error, after which bash simply exited (with zero, in my test; I found this a bit surprising, but it would explain your results). Edit: now that you have ret=$?, the script itself will run, but if HEAD names a branch that the git push updates, and if you allow the push to proceed, that branch name will be updated and the effect of the reset eliminated. (If you exit nonzero, the reset may take effect, but then no push has occurred, and that is a bad idea anyway.)
Fixing this will still leave you with the remaining issues. Fixing those requires knowing what you really wanted to achieve, which you have not mentioned here.
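If the intent is simply to refuse pushes whose new commits fail Check.py, a reject-instead-of-reset sketch might look like the following. This is an illustration, not a drop-in fix: it assumes an exit code of 2 from Check.py means the commit is acceptable (as your -ne 2 test implies), and it does not handle branch creation or deletion (where oldrev or newrev is all zeros).
#!/bin/sh
status=0
# pre-receive gets one line per ref being updated: <oldrev> <newrev> <refname>
while read oldrev newrev refname
do
    for i in $(git rev-list "$oldrev..$newrev")
    do
        git show "$i" > /tmp/$$.temp
        python /script/Check.py /tmp/$$.temp
        if [ $? -ne 2 ]; then
            echo "rejecting push: Check.py flagged commit $i on $refname" >&2
            status=1
        fi
    done
done
exit $status
Because the hook exits nonzero when anything is flagged, GitLab refuses the whole push, so there is nothing to roll back afterward.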

Related

Husky pre commit hook and squashing commits

I am using "husky": "^7.0.4".
My team squashes their commits before opening a PR.
I have a pre-commit file to automate this workflow. Every other time I run the commit function, the pre-commit flow works perfectly. So the 1st, 3rd, 5th, etc. attempt works; the 2nd, 4th, 6th, etc. prints this error:
fatal: cannot lock ref 'HEAD': is at 766hdjoXXX but expected 766e11XXX
I thought it might be because I wasn't changing any files; however, when I tried changing something, that didn't work either (it succeeds and fails on alternating runs regardless). Any idea what's wrong?
Here is the pre-commit file:
read -n1 -p "Do you want to squash commits? [n/Y]" SHOULD_SQUASH < /dev/tty
case $SHOULD_SQUASH in
n|N)
    echo
    echo Skipping squash, now linting files...
    ;;
y|Y)
    [ -z "$SQUASH_BRANCH" ] && SQUASH_BRANCH=develop
    branch=$(git symbolic-ref HEAD)
    echo
    echo Squashing all commits from $branch
    git reset $(git merge-base $SQUASH_BRANCH $branch)
    echo ------SUCCESS!------
    echo Commits successfully squashed.
    git add .
    echo Added all files successfully.
    ;;
*)
    echo
    echo Skipping squash, now linting files...
    ;;
esac
npx pretty-quick --staged
npm run lint
The squash logic comes from a custom function we created that lives in .zshrc, and on its own it works with no problem.
Pre-commit files in general should not use git reset and git add. It is possible to make this work sometimes, but you get odd effects, including the one you're seeing (and sometimes worse ones). A pre-commit script should limit itself to testing whether the commit is OK, and if so, exiting zero; if not, the script should exit nonzero, without attempting to make any changes.[1]
Instead of making this script your pre-commit hook and invoking it with git commit, call it makecommit and invoke it as makecommit. Or call it git-c and invoke it as git c. Have the script do its thing, including npm run lint, and if all looks good, have it run git commit. You won't need a pre-commit hook at all, but if you like, you can have one that reads as follows:
[ "$RUN_FROM_OUR_SCRIPT" = yes ] && exit 0
echo "don't run git commit directly, run our script instead"
exit 1
Then instead of just git commit, have your script do:
RUN_FROM_OUR_SCRIPT=yes git commit
which will set the variable that the pre-commit hook tests to make sure git commit was run from your script.
Note that you will no longer need to redirect the read from /dev/tty. (You probably should also consider using git reset --soft, and/or verifying that the index content matches the working tree content.)
[1] If you like to live dangerously and really want a script that can update files, check that $GIT_INDEX_FILE is unset or set to .git/index, to be sure you have not been invoked as git commit --only.
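A minimal sketch of such a wrapper, using only the names that already appear above (makecommit, SQUASH_BRANCH, RUN_FROM_OUR_SCRIPT); adapt the squash and lint steps to your actual workflow:
#!/bin/bash
# makecommit: optionally squash, then lint, then commit.
set -e

: "${SQUASH_BRANCH:=develop}"
branch=$(git symbolic-ref --short HEAD)

read -n1 -p "Do you want to squash commits? [n/Y] " answer
echo
case $answer in
y|Y)
    # --soft keeps both the index and the working tree as they are
    git reset --soft "$(git merge-base "$SQUASH_BRANCH" "$branch")"
    ;;
esac

git add .
npx pretty-quick --staged
npm run lint

# The guard hook above only lets the commit through when this variable is set.
RUN_FROM_OUR_SCRIPT=yes git commit "$@"
Because this runs as an ordinary script rather than inside git commit, the "cannot lock ref 'HEAD'" problem goes away: nothing rewrites refs while a commit is in flight.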

git hook pre-push bash script validate tag

I'm trying to validate a git tag using the pre-push git hook, but whenever I execute git push origin <tag-name>, it takes the previous tag as the latest from refs/tags.
To replicate the issue:
Step 1. git commit
Step 2. git tag V1.0.0-US123-major
Step 3. git push origin V1.0.0-US123-major
So when step 3 executes, the pre-push script should take the "V1.0.0-US123-major" tag and validate it against the regex below. If the tag matches the regex, it is a valid tag; otherwise, the push should be aborted.
#!/bin/sh
read_tag="$(git describe --abbrev=0 --tags)"
if [[ $read_tag =~ (^v[0-9]{1}.[0-9]{1}.[0-9]{1}-[a-zA-Z]+-[a-zA-Z]+$) ]]; then
echo "matched"
else
echo "not matched"
exit 1
fi
My expectation is that an invalid tag should be rejected, but when I use git push origin 2.2.2, the pre-push script does not exit 1; instead it accepts the tag and pushes it to origin, which is not correct:
git push origin 2.2.2
latest tag: v5.5.5-abcd-tues
matched
Can someone help me with this, please?
Your pre-push hook is checking the current revision, not the tag you're pushing, because git describe describes HEAD if you don't specify otherwise.
When you use a pre-push hook, the references being pushed are passed in on standard input. Assuming the thing you want to check is the name of the remote reference (that is, the one that's going to end up on the server), then it could look something like this (using POSIX syntax):
#!/bin/sh
set -e
# stdin: <local ref> <new object id> <remote ref> <old object id>, one line per ref being pushed
while read lref new rref old
do
    case $rref in
    refs/tags/*)
        # dots escaped so "." matches a literal period rather than any character
        if echo "$rref" | \
            grep -qsE '(^refs/tags/v[0-9]{1}\.[0-9]{1}\.[0-9]{1}-[a-zA-Z]+-[a-zA-Z]+$)'
        then
            echo "matched"
        else
            echo "not matched"
            exit 1
        fi;;
    *)
        ;;
    esac
done
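For reference, when you push a tag, the line this hook reads from standard input has this shape (placeholders here instead of real object IDs):
refs/tags/<tag-name> <local-object-id> refs/tags/<tag-name> <old-object-id-on-the-remote>
For a tag that does not yet exist on the remote, the last field is all zeros. The remote reference name is the third field, which is why the loop checks rref rather than asking git describe about HEAD.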
Do note that while a pre-push hook can help the developer make good choices and avoid mistakes, it's not an effective control, because it can be trivially bypassed. If you need to restrict what gets pushed to the server, you need to do that either with a pre-receive hook, your server implementation, or a CI system. See the relevant Git FAQ entry for more.

Git pre-push to prevent merge

I made this bash script for my git hooks in order to avoid merging by mistake.
At work we use git-extension, which prefills the branch name; if you forget to prefix the branch name with refs/for/ or refs/drafts/, your commits are merged on the remote repo, bypassing code review.
This can only happen to people with merge rights, but sometimes someone with merge rights will make that mistake.
#!/bin/sh
if [[ `grep 'refs/for'` ]]; then
echo "push to refs/for/ authorized"
elif [[ `grep 'refs/drafts/'` ]]; then
echo "push to refs/drafts/ authorized"
else
echo "PUSH DENIED, MERGE DETECTED, please merge using gerrit 'submit' button only"
exit 1
fi
If I push to refs/for/v1.1.x, I get "push to refs/for/ authorized".
However, if I push to refs/drafts/v1.1.x, I get "PUSH DENIED, MERGE DETECTED, please merge using gerrit 'submit' button only".
How is this possible? The condition syntax is exactly the same.
Thanks.
As Anthony Sottile commented, your first grep command reads all the input, checking for refs/for. Your second grep reads the remaining input; since your first one read all of the input, none remains. The second one therefore never finds anything.
You can deal with this in many different ways. One is to copy stdin to a temporary file that you can read repeatedly. Another is to structure your hook so that you read the input only once, checking each line for each condition. This second method is obviously more efficient, though this "obviousness" is a bit of an illusion, depending on details I'm not going to get into here.
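For instance, a sketch of the temporary-file variant, keeping your original grep-based checks, might look like this:
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT
cat > "$tmp"    # save stdin so it can be read more than once
if grep -qs 'refs/for/' "$tmp"; then
    echo "push to refs/for/ authorized"
elif grep -qs 'refs/drafts/' "$tmp"; then
    echo "push to refs/drafts/ authorized"
else
    echo "PUSH DENIED, MERGE DETECTED, please merge using gerrit 'submit' button only"
    exit 1
fi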
Anyway, the way I would write this is:
#! /bin/sh
summary_status=0    # assume all OK initially
while read lref lhash rref rhash; do
    case $rref in
    refs/for/*|refs/drafts/*) echo "push to $rref ok";;
    *) echo "push to $rref not ok"; summary_status=1;;
    esac
done
exit $summary_status
This variant does not require bash features (hence the /bin/sh rather than /bin/bash in the #! line); if you change it to one that does, be sure to change the #! line. It reads the input exactly once. It also verifies that you did not run, e.g.:
git push remote master:refs/for/master develop
which your existing script would allow (since one is OK and one is not).
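For illustration, feeding the hook two such lines by hand (the <oid> placeholders stand in for the object IDs git would supply) shows the mixed result and the nonzero exit:
printf '%s\n' 'refs/heads/master <oid> refs/for/master <oid>' \
              'refs/heads/develop <oid> refs/heads/develop <oid>' | sh .git/hooks/pre-push
push to refs/for/master ok
push to refs/heads/develop not ok
Since summary_status ends up as 1, the whole push would be refused.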

How should one deal with Mercurial's "nothing changed" error ($? = 1) for scripted commits?

I'm cleaning up a client's tracking system that uses Mercurial (version 2.0.2) to capture state and automatically commits all changes every hour. I migrated it from cron to Rundeck so they will get status if/when things fail, which immediately caused the job to start filling Rundeck's failed-job list with "nothing changed" errors. I went straight to Google and, although I see this issue raised, I did not find answers.*
It seems like there should be a basic, clean sh or bash option (though their command-line environment supports Python and pip modules if necessary).
My go-to responses for this type of thing are:
issue the 'correct' check first, before the command that might fail even when things are OK, so that a failure of the command actually indicates an error
read the docs and use the error codes to distinguish what's happening, implementing responses appropriately**
do some variant of #1 or #2 where I issue the command and grep the output***
* I will concede that I struggle to search for hg material, in part because of the wealth of information, similarity to git, and my bad habit of thinking in git terms. That being said, I see the issue out there, including "[issue2341] hg commit returns error when nothing changed". I did not find any replies to this and I find no related discussion on StackOverflow.
** I see at https://www.selenic.com/mercurial/hg.1.html that hg commit "Returns 0 on success, 1 if nothing changed." Unfortunately, I know there are other ways for commit to fail. Just now, I created a dummy Mercurial (mq) patch and attempted a commit, getting "abort: cannot commit over an applied mq patch". In that instance, the return code was 255.
*** I suppose I can issue the initial commit and capture the error code, stdout, and stderr; then, if the return is 1, process the text and either error out or continue as appropriate.
If you want a command that only commits when something has changed, write a command that checks if something has changed. You can do it more "elegantly" by writing it as a simple bash conditional:
if [ -n "$(hg status -q)" ]
then
hg commit -m "Automatic commit $(date)"
fi
You can run hg status | wc -l (or grep its output) before the commit, and commit only if there are changes in the working directory.
My initial workaround (my least favorite of the options I mention) is to add a Rundeck 'error handler', a command that executes in response to an error. My command is:
hg commit -m "Automatic commit..." files 2>&1 | grep "nothing changed" && echo "Ignoring 'nothing changed' error"
This runs the commit a second time and so duplicates the 'nothing changed' error, but it suppresses the failure only when it really is the 'nothing changed' error. Ugly, but tolerable if nobody has better suggestions...
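For approach #2 (distinguishing return codes), a minimal sketch based on the documented values quoted above, where 0 is success and 1 means nothing changed, might look like this; any other code is treated as a real failure:
status=0
hg commit -m "Automatic commit $(date)" || status=$?
case $status in
    0) ;;                                          # committed
    1) echo "Nothing changed; skipping commit" ;;  # per the docs, 1 just means nothing to commit
    *) echo "hg commit failed with status $status" >&2
       exit "$status" ;;
esac
The || status=$? capture also keeps the step alive even if the job runs with errexit enabled, so Rundeck only sees a failure for the genuinely unexpected codes (such as the 255 from the mq-patch case).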

Interpret a Hudson job as successful even though one of the invoked programs fails

I have a Hudson job that periodically merges changes from an upstream bazaar repository.
Currently, when there are no changes upstream, Hudson reports this job as a failure because the bzr commit command returns with an error. My script looks something like this:
bzr branch lp:~lorinh/project/my-local-branch
cd my-local-branch
REV_UPSTREAM=`bzr version-info lp:project --custom --template="{revno}"`
bzr merge lp:project
bzr commit -m "merged upstream version ${REV_UPSTREAM}"
./run_tests.sh
bzr push lp:~lorinh/project/my-local-branch
If there are no changes to merge, the Hudson console output looks something like this:
+ bzr branch lp:~lorinh/project/my-local-branch
Branched 807 revision(s).
+ bzr merge lp:project
Nothing to do.
+ bzr commit -m merged upstream version 733
Committing to: /var/lib/hudson/jobs/merge-upstream/workspace/myproject/
aborting commit write group: PointlessCommit(No changes to commit)
bzr: ERROR: No changes to commit. Use --unchanged to commit anyhow.
Sending e-mails to: me@example.com
Finished: FAILURE
The problem is that I don't want Hudson to report this as a failure. How do I modify my commands so the script terminates at the failed commit, but it isn't interpreted by Hudson as an error? I tried changing the commit command to:
bzr commit -m "merged upstream version ${REV_UPSTREAM}" || exit
But that didn't work.
(Note: I realize I could use Hudson's "Poll SCM" instead of "Build periodically". However, with bazaar, if somebody does a push with local commits that were done before the most recent modifications, then Hudson won't detect a change to the repository.)
You were very close! Here's the corrected version of what you were trying:
bzr commit -m "merged upstream version ${REV_UPSTREAM}" || exit 0
This now does what you asked for, but isn't perfect. I'll get to that later.
Note the tiny important change from your version - we are now being explicit that we should exit with code 0 (success), if the bzr command does not do so. In your version, exit (with no argument) will terminate your script but return the exit code of the last command executed - in this case the bzr commit.
More about exit
How do we find out about this behaviour of exit? The exit command is a shell built-in - to find documentation on it we use the help command:
help exit
Which on my machine tells me:
exit: exit [n]
Exit the shell.
Exits the shell with a status of N. If N is omitted, the exit status
is that of the last command executed.
Here's a decent tutorial on exit and exit codes in the bash shell
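A quick way to see this behaviour for yourself (the subshell keeps the exit from ending your interactive shell):
( false || exit ); echo $?      # prints 1: the bare exit re-used false's status
( false || exit 0 ); echo $?    # prints 0: the explicit argument wins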
Hudson and exit codes
Hudson follows this common convention of interpreting exit code 0 as success, and any other code as failure. It will flag your build as a failure if the build script it executes exits with a non-zero code.
Why your script is stopping after the bzr commit
If, as you say, you have the following and your script is stopping after the bzr commit...
bzr commit -m "merged upstream version ${REV_UPSTREAM}"
./run_tests.sh
... I suspect your script has an instruction such as set -e or is being invoked with something like bash -e build_script.sh
Either of these tells the shell to exit immediately if a command exits with a non-zero status, and to pass along that same "failure" exit code. (There are subtleties - see footnote 1).
Disabling exit-on-error
While this behaviour of exiting on error is extremely useful, sometimes we'd like to disable it temporarily. You've found one way, in
bzr commit whatever || true
We can also disable the error-checking with set +e.
Here's a pattern you may find useful. In it we will:
Disable exit-on-error (with set +e)
Run the command which may error (bzr commit whatever)
Capture its exit code ($?) for later inspection
Re-enable exit-on-error (with set -e)
Test and act upon the exit code
Let's implement that. Again we'll exit 0 (success) if the bzr command failed.
set +e
bzr commit whatever
commit_status=$?
set -e
if [[ "$commit_status" != "0" ]]; then
    echo "bzr commit finds nothing to do. Build will stop, with success"
    exit 0
fi
echo "On we go with the rest of the build script..."
Note that we bracket as little as possible with set +e / set -e. If we have typos in our script in that section, they won't stop the script and there'll be chaos. Read the section "Avoiding set -e" in the post "Insufficiently known POSIX shell features" for more ideas.
What's wrong with foo || exit 0 ?
As I mentioned earlier, there's a problem with our first proposed solution. We've said that when bzr commit is non-zero (i.e. it doesn't commit normally) we'll always stop and indicate success. This will happen even if bzr commit fails for some other reason (and with some other non-zero exit code): perhaps you've made a typo in your command invocation, or bzr can't connect to the repo.
In at least some of these cases, you'd probably want the build to be flagged as a failure so you can do something about it.
Towards a better solution
We want to be specific about which non-zero exit codes we expect from bzr, and what we'll do about each.
If you look back at the set +e / set -e pattern above, it shouldn't be difficult to expand the conditional logic (if) above into something that can deal with a number of specific exit codes from bzr, and with a catch-all for unanticipated exit codes which (I suggest) fails the build and reports the offending code and command.
To find out the exit codes for any command, read the docs or run the command and then run echo $? as the next command. $? holds the exit code of the previous command.
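For example, a sketch of that expansion might look like the following. The NOTHING_TO_COMMIT value here is a placeholder, not a verified bzr exit code; confirm the real value with echo $? (or bzr's documentation) before relying on it:
NOTHING_TO_COMMIT=3    # placeholder: verify the actual code bzr uses for "No changes to commit"

set +e
bzr commit -m "merged upstream version ${REV_UPSTREAM}"
commit_status=$?
set -e

case $commit_status in
    0)
        echo "Committed merge of upstream revision ${REV_UPSTREAM}"
        ;;
    "$NOTHING_TO_COMMIT")
        echo "Nothing to commit. Build will stop, with success."
        exit 0
        ;;
    *)
        echo "bzr commit failed with unexpected exit code $commit_status" >&2
        exit "$commit_status"
        ;;
esac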
Footnote 1: The exit-on-error behaviour switched with set -e has some subtleties you'll need to read up on, concerning behaviour when commands are in pipelines, conditional statements and other constructs.
Given that bzr doesn't seem to emit a correct exit code (based on your bzr ... || exit example), one solution is to capture the output of bzr and then scan it for ERROR or other markers.
bzr commit -m "merged upstream version ${REV_UPSTREAM}" 2>&1 | tee /tmp/bzr_tmp.$$
case $( < /tmp/bzr_tmp.$$ ) in
    *ERROR* )
        printf "error running bzr, found error msg = %s\n" "$(< /tmp/bzr_tmp.$$)"
        exit 1
        ;;
    * )
        : # everything OK; this empty case target just documents that the default action is 'nothing' ;-)
        ;;
esac
A slightly easier case target regex, based on your sample output, would be *FAILURE ) ....
The $( < file ) construct is a newer shell feature that you can think of as $(cat file), but it is more efficient with process resources, as it doesn't need to launch a new process (cat) to dump the file.
I hope this helps.
