How can I play a sound whenever I commit to Git? (macOS)

Writing code alone at home can be an isolating experience. There you are, day in day out, quietly making magic with your mind (sarcasm, obv.) only to silently commit the fruits of your labor into the void of your source control repository, appreciated by no one. If only a crowd of children could be retained for the sole purpose of cheering you on every time you complete something.
How can I play a victory sound every time I commit to Git?

Amazingly, Brandon Keepers over at Collective Idea had the exact same thought. Anyway, here is what my version of his script looks like:
#!/bin/sh
# Find the repository root, then play the sound quietly in the background.
toplevel_path=$(git rev-parse --show-toplevel)
afplay -v 0.1 "$toplevel_path/.git/hooks/happykids.wav" > /dev/null 2>&1 &
I put this in a file called .git/hooks/post-commit.playsound. I then trigger this from the main .git/hooks/post-commit script as follows:
#!/bin/sh
# Run each post-commit hook in turn.
toplevel_path=$(git rev-parse --show-toplevel)
"$toplevel_path/.git/hooks/post-commit.tweet"
"$toplevel_path/.git/hooks/post-commit.playsound"
The post-commit.tweet script is the one from this StackOverflow answer. If you aren't also tweeting your commits, you'll want to delete that line.
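One detail worth checking: Git only runs hooks that have the executable bit set, so make sure both scripts do:
chmod +x .git/hooks/post-commit .git/hooks/post-commit.playsound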
If you want this to work for every single Git repository from now on, add these scripts to your git-core templates. You’ll have to figure out where these are (it’s different for every setup). For my Mac, they’re located here: /opt/local/share/git-core/templates/hooks/post-commit.
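If you go that route, something like the following should do it for the MacPorts layout above (you may need sudo to write there; new repositories pick up the template hooks when you run git init or git clone):
cp .git/hooks/post-commit .git/hooks/post-commit.playsound .git/hooks/happykids.wav /opt/local/share/git-core/templates/hooks/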

Related

More efficient way to parse git commands?

In short: I'd like the prompt to appear faster, although it's not sluggish.
I'm making a custom prompt for my bash terminal; the following list describes what's in my /etc/bash.bashrc.
I already use the "gitstatus" repo, which speeds up certain git commands. I think the slowdowns come from the sheer number of commands themselves. I want to know whether I can generally use FEWER git commands to do the same thing.
Here is a list of everything I do:
1. Obtain the branch (if the head is detached, commands requiring it are skipped)
2. Check for an upstream
3. git rev-list --left-right --count "$branch"..."$upstream" to check if ahead or behind
4. Check for stashes
5. Check for a dirty branch (done separately; I know #8 provides this info, but this command is called earlier on, and I like the symbol there)
EDIT: Disregard #5. I call command #8 first, obtain this information from it, and append #5's symbol to PS1 before #8's output.
6. Check for a remote
7. Check for untracked files (separately from #8, as they sit early in the prompt because I treat them as a higher-priority "problem")
8. All at once, check for modified, added, removed, or unmerged files by parsing git status -s
Each of these is run as a separate git command. I can provide an image as well if needed.
I'm on the Bash for Windows terminal.
The answer to my own extremely specific question:
In my case, I'm parsing git status -s in function foo, and calling function bar, which detects whether untracked files exist. The catch is that bar's output is appended to PS1 before foo's. That seems fine, but I'm trying to minimize the number of git commands called every time my bashrc builds the prompt. So, instead of parsing git status -s in foo and then separately checking for untracked files in bar, I can call foo, create an untracked_files_exist variable, and set it to true if git status -s reports any. Then I can call bar afterwards, use untracked_files_exist however I want, and append the functions' outputs to PS1 in whichever order I like once both have run.
If that doesn't make sense:
If you want a fast prompt, call one parseable git command that outputs as much information as possible. If you want the prompt to present that information in a different order than the git command outputs it, don't append to PS1 INSIDE the parsing functions. Do so AFTER, so you have control over the order of the prompt.
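To make that concrete, here is a rough sketch of the pattern (the function name, symbols, and PS1 layout are my own, not from the question). A single git status -sb call carries the branch name, the ahead/behind counts (in the "##" header, when an upstream is set), the untracked files, and the dirty state, and PS1 is only assembled after the parsing is done:
__git_prompt() {
    local status line branch='' untracked='' dirty=''
    status=$(git status -sb 2>/dev/null) || return   # not inside a repo
    while IFS= read -r line; do
        case $line in
            '## '*) branch=${line#'## '} ;;   # e.g. "main...origin/main [ahead 1]"
            '??'*)  untracked='?' ;;          # at least one untracked file
            *)      dirty='*' ;;              # modified/added/removed/unmerged entries
        esac
    done <<< "$status"
    printf ' (%s%s%s)' "$branch" "$untracked" "$dirty"
}
PS1='\u@\h \w$(__git_prompt)\$ '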

Adding other useful info to a git archive filename automagically

Stumbled across this gem: Export all commits into ZIP files or directories, whose initial answer met my needs for exporting commits from certain branches (develop, for example) into separate zip files - all done via a simple, yet clever, one-liner:
git rev-list --all --reverse | while read hash; do git archive --format zip --output ../myproject-commit$((i=i+1))-$hash.zip $hash; done
In my version I replaced the --all with --first-parent develop.
What I would like to do now is make the filenames more useful by including the commit date and commit author. I've Googled around a bit and grokked the git archive documentation, but I can't seem to find any other 'parameters' like $hash that are readily available.
I'm guessing I will need to expand the loop, call up the relevant bits individually, save them into bash variables, and pass them on to the output option with something like ${author}. Does anyone know a cleaner, simpler way to do this, or can you point me to documentation or examples showing where to pull the needed info from other parts of git? Thanks in advance for any insights.
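For what it's worth, here is a rough sketch of that expanded loop (untested; the date format and filename layout are my own choices). git log -1 --format='%an' and --format='%cd' pull the author and commit date for each hash:
i=0
git rev-list --first-parent --reverse develop | while read -r hash; do
    i=$((i + 1))
    author=$(git log -1 --format='%an' "$hash" | tr ' ' '-')   # author name, spaces dashed
    date=$(git log -1 --format='%cd' --date=format:'%Y%m%d' "$hash")
    git archive --format zip --output "../myproject-commit$i-$date-$author-$hash.zip" "$hash"
done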

Protecting scripts from errant clobbering

I spent some time building this handy bash script that accepts input via stdin. I got the idea from the top answer to this question: Pipe input into a script
However, I did something really dumb. I typed the following into the terminal:
echo '{"test": 1}' > ./myscript.sh
I meant to pipe it | to my script instead of redirecting > the output of echo.
Up until this point in my life, I never accidentally clobbered any file in this manner. I'm honestly surprised that it took me until today to make this mistake. :D
At any rate, now I've made myself paranoid that I'll do this again. Aside from marking the script as read-only or making backup copies of it, is there anything else I can do to protect myself? Is it a bad practice in the first place to write a script that accepts input from stdin?
Yes, there is one thing you can do -- check your scripts into a source-code-control repository (git, svn, etc).
bash scripts are code, and any non-trivial code you write should be checked into source control (with changes committed regularly) so that when something like this happens, you can just restore the most recently committed version of the file and continue onwards.
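Assuming the script lives in a Git repository and was committed before the accident, recovery is one command (the filename is just the one from the question):
git checkout HEAD -- myscript.sh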
This is a very open-ended question, but I usually put scripts in a global bin folder (~/.bin or so). This lets me invoke them as myscript rather than path/to/myscript.sh, so if I accidentally used > instead of |, it'd just create a file by that name in the current directory - which is virtually never ~/.bin.
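For example (the paths and names here are just my convention, nothing standard):
mkdir -p ~/.bin
cp ./myscript.sh ~/.bin/myscript
chmod +x ~/.bin/myscript
echo 'export PATH="$HOME/.bin:$PATH"' >> ~/.bashrc   # takes effect in new shells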

How to convert this script into a custom mercurial command?

I have the following script:
#!/bin/bash
if [ $# -ne 2 ]; then
    echo -n "$0 - a utility for applying uncommitted changes to a "
    echo "remote hg repository locally also"
    echo "Usage: $0 user@hostname path/to/repository"
    exit 1
fi
user_at_hostname="$1"
remote_path="$2"
ssh "$user_at_hostname" hg -R "$remote_path" diff | hg import --no-commit -
It's not the most glorious piece of code, and I would rather do something more "mercurial" than that, so to speak. Specifically, I was wondering whether I could achieve the same using a mercurial alias / custom command. Can I?
PS - I had also thought about maybe issuing some sort of shelve command on the remote repository instead of just getting a diff, but I don't want to make things too complicated.
If you just want to convert this script into an hg foo command without changing it, use a shell alias. Just copy the last line and replace the custom variables with $1 and $2.
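A sketch of what that could look like in your ~/.hgrc (the alias name rdiff is just something I picked; treat the line as untested):
[alias]
rdiff = !ssh "$1" hg -R "$2" diff | hg import --no-commit -
You would then invoke it as hg rdiff user@hostname path/to/repository.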
If you want to make this look more like a "normal" Mercurial workflow, you could start by committing the changes and then pulling them. I imagine that you are avoiding this workflow so that you can change your mind about these modifications without polluting your repository's history with "oops" commits. If that is the case, then you will likely be interested in the Evolve extension, which is intended to provide a safe and reasonably well-behaved system for sharing mutable history.

With Evolve, you can commit a change; share it with another repository; amend, rebase, squash, or otherwise modify the change; and then share the modified changeset with the same repository or a different one. You can also prune changesets from history and share the fact that you pruned them. If this sharing causes problems, such as amending a commit which has descendants, Mercurial will detect those problems and offer a fix (e.g. rebase the descendants onto the new version of the commit) which you can execute automatically with hg evolve. While the extension is still experimental, it does basically work for most simple use cases.
If experimental software isn't of interest to you, you can still flag the repository as non-publishing. This will allow you to use more traditional history-editing machinery such as hg rebase, hg histedit, and hg strip even after you have pushed to the repository. However, revisions which are destroyed in one repository will not automatically vanish from other repositories without the evolve extension. You will have to strip them by hand.
Finally, note that hg push --force does not destroy revisions. It creates new anonymous branches, typically resulting in a messy history, but without actually losing any data. Mercurial differs from Git in this respect.

Compiling historical information (esp. SLOCs) about a project

I am looking for a tool that will help me to compile a history of certain code metrics for a given project.
The project is stored inside a mercurial repository and has about a hundred revisions. I am looking for something that:
checks out each revision
computes the metrics and stores them somewhere with an identifier of the revision
does the same with the next revisions
For a start, counting SLOCs would be sufficient, but it would also be nice to analyze the number of tests, test coverage, etc.
I know such things are usually handled by a CI server, but I am solo on this project and thus haven't bothered to set one up (I'd like to use TeamCity, but I really didn't see the benefit of doing so in the beginning). If I set up my CI server now, could it handle that?
Following jitter's suggestion, I wrote a small bash script (run inside Cygwin) that uses sloccount to count the source lines. The output is simply dumped to a text file:
#!/bin/bash
COUNT=0       # start revision
STOPATREV=98  # last revision to measure
until [ $COUNT -gt $STOPATREV ]; do
    hg update -C -r $COUNT >> sloc.log   # update to the revision and log it
    echo "" >> sloc.log                  # echo a newline
    rm -r lib                            # don't count the lib folder
    sloccount /thisIsTheSourcePath | print_sum >> sloc.log
    let COUNT=COUNT+1
done
You could write e.g. a shell script which:
1. checks out the first version
2. runs sloccount on it
3. saves the output
4. checks out the next version
5. repeats steps 2-4
Or look into Ohloh, which seems to have Mercurial support by now.
Otherwise I don't know of any SCM statistics tool that supports Mercurial. As Mercurial is relatively young (around since 2005), it might take some time until such "secondary use cases" are supported. (HINT: maybe provide an hgstat library yourself, as there are for svn and CVS.)
If it were me writing software to do that kind of thing, I think I'd dump metrics results for the project into a single file, and revision control that. Then the "historical analysis" tool would have to pull out old versions of just that one file, rather than having to pull out every old copy of the entire repository and rerun all the tests every time.
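If you went that route, a minimal sketch might look like this (it assumes sloccount's usual "Total Physical Source Lines of Code" summary line; the metrics.csv name and CSV layout are just my choices):
#!/bin/bash
# Append the current revision's SLOC total to one tracked file and commit it.
total=$(sloccount . | grep 'Total Physical Source Lines of Code' | awk '{print $NF}' | tr -d ,)
echo "$(hg id -i),$(date +%Y-%m-%d),$total" >> metrics.csv
hg commit -m "record SLOC for $(hg id -i)" metrics.csv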

Resources